Fields: paper_id, summaries, abstractText, authors, references, sections, year, title.
SP:59c5d78884feb7f21eb812b69d71827770f6fe39
[ "This paper proposes an interesting framework for training interpretable CNNs, similar to distillation methods. The authors propose a probabilistic model to approximate CNN predictions (specifically the discriminatory part i.e. fully connected network, and a procedure for training CNN+ DCLM as a game. Results show interesting performance over benchmark datasets in comparison to existing distillation baselines. " ]
Recently, to find the inherent causality implied in a CNN, the black-box problem of its discrimination part, which is composed of all fully connected layers of the CNN, has been studied by different scientific communities. Many methods have been proposed that can extract various interpretable models from the optimal discrimination part, based on the inputs and outputs of that part, to find the inherent causality implied in it. However, the inherent causality cannot readily be found. We think the problem could be solved by shrinking an interpretation distance that evaluates the degree to which the discrimination part can be easily explained by an interpretable model. This paper proposes a lightweight interpretable model, the Deep Cognitive Learning Model (DCLM). A game method between the DCLM and the discrimination part is then implemented to shrink the interpretation distance. Finally, the proposed self-explanatory method was evaluated through comparative experiments against baseline methods on standard image processing benchmarks. These experiments indicate that the proposed method can effectively find the inherent causality implied in the discrimination part of the CNN without largely reducing its generalization performance. Moreover, the generalization performance of the DCLM can also be improved.
[]
[ { "authors": [ "A.B. Arrietaa", "N. Diaz-Rodriguezb", "J.D. Sera", "A. Bennetotb", "S. Tabikg", "A. Barbadoh", "S. Garciag", "S. Gil-Lopeza", "D. Molinag", "R. Benjaminsh", "R. Chatilaf", "F. Herrerag" ], "title": "Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai", "venue": "Information Fusion,", "year": 2020 }, { "authors": [ "M. Gethsiyal Augasta", "T. Kathirvalavakumar" ], "title": "Reverse engineering the neural networks for rule extraction in classification problems", "venue": "Neural Process. Lett,", "year": 2012 }, { "authors": [ "O. Boz" ], "title": "Extracting decision trees from trained neural networks", "venue": "pp. 456C461, the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2002 }, { "authors": [ "J. Brandon" ], "title": "An ai god will emerge by 2042 and write its own bible. will you worship", "venue": null, "year": 2017 }, { "authors": [ "Z.P. Che", "S. Purushotham", "R. Khemani", "Y. Liu" ], "title": "Interpretable deep models for icu outcome prediction", "venue": "Amia Annu Symp Proc,", "year": 2017 }, { "authors": [ "Mark W. Craven", "Jude W. Shavlik" ], "title": "Using sampling and queries to extract rules from trained neural networks", "venue": "Machine Learning Proceedings,", "year": 1994 }, { "authors": [ "Mark W. Craven", "Jude W. Shavlik" ], "title": "Extracting tree-structured representations of trained networks", "venue": "Advances in Neural Information Processing Systems,", "year": 1999 }, { "authors": [ "N. Frosst", "G. Hinton" ], "title": "Distilling a neural network into a soft decision tree, 2017", "venue": "arXiv preprint arXiv:1711.09784", "year": 2017 }, { "authors": [ "R. Giles" ], "title": "Lukasiewicz logic and fuzzy set theory", "venue": "International Journal of Man-Machine Studies,", "year": 1975 }, { "authors": [ "G. Hinton", "O. Vinyals", "J. Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "Computer Science,", "year": 2015 }, { "authors": [ "A. Holzinger", "G. Langs", "H. Denk", "K. Zatloukal", "H. Mller" ], "title": "Causability and explainability of artificial intelligence in medicine", "venue": "Causability and explainability of artificial intelligence in medicine,", "year": 2019 }, { "authors": [ "U. Johansson", "L. Niklasson" ], "title": "Rule extraction from trained neural networks using genetic programming", "venue": "Int.conf.neural Inform.processing,", "year": 2003 }, { "authors": [ "U. Johansson", "L. Niklasson" ], "title": "Evolving decision trees using oracle guides", "venue": "In Computational Intelligence and Data Mining,", "year": 2009 }, { "authors": [ "R. Krishnan", "G. Sivakumar", "P. Bhattacharya" ], "title": "Extracting decision trees from trained neural networks", "venue": "Pattern Recognition,", "year": 1999 }, { "authors": [ "A. Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical Report TR-2009,", "year": 2009 }, { "authors": [ "Y. Lecun", "L. Bottou", "Y. Bengio", "P. Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "M.C. 
Hughes" ], "title": "Beyond sparsity: Tree regularization of deep models for interpretability", "venue": null, "year": 2018 }, { "authors": [ "Grgoire Montavon", "Sebastian Lapuschkin", "Alexander Binder", "Wojciech Samek", "Klaus Robert Mller" ], "title": "Explaining nonlinear classification decisions with deep taylor decomposition", "venue": "Pattern Recognition,", "year": 2017 }, { "authors": [ "G. Riccardo", "M. Anna", "R. Salvatore", "T. Franco", "G. Fosca", "P. Dino" ], "title": "A survey of methods for explaining black box models", "venue": "ACM Computing Surveys,", "year": 2018 }, { "authors": [ "B. Sebastian", "B. Alexander", "M. Grgoire", "K. Frederick", "M. Klaus-Robert", "S. Wojciech", "S.O. Deniz" ], "title": "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation", "venue": "Plos One,", "year": 2015 }, { "authors": [ "A. Shrikumar", "P. Greenside", "A. Shcherbina", "A. Kundaje" ], "title": "Not just a black box: Learning important features through propagating activation differences, 2016", "venue": null, "year": 2016 }, { "authors": [ "J.J. Thiagarajan", "B. Kailkhura", "P. Sattigeri", "K.N. Ramamurthy" ], "title": "Treeview: Peeking into deep neural networks via feature-space partitioning, 2016", "venue": "arXiv preprint arXiv:1611.07429(2016)", "year": 2016 }, { "authors": [ "R. Traore", "H. Caselles-Dupre", "T. Lesort", "T. Sun", "G. Cai", "N.D. Rodriguez", "D. Filliat" ], "title": "Discorl:continual reinforcement learning via policy distillation, 2019", "venue": "arXiv preprint arXiv:1907.05855(2019)", "year": 2019 }, { "authors": [ "A. Wan", "L. Dunlap", "D. Ho", "J. Yin", "S. Lee", "H. Jin", "S. Petryk", "S.A. Bargal", "J.E. Gonzalez" ], "title": "Nbdt: Neural-backed decision trees, 2020", "venue": null, "year": 2020 }, { "authors": [ "J.R. Zilke", "E.L. Mencia", "F. Janssen" ], "title": "Deepred-rule extraction from deep neural networks", "venue": "In International Conference on Discovery Science, Springer,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Convolution neural network(CNN) has surpassed human abilities in some specific tasks such as computer game and computer vision etc. However, they are considered difficult to understand and explain(Brandon, 2017), which leads to many problems in aspects of privacy leaking, reliability and robustness. Explanation technology is of immense help for companies to create safer, more trustable products, and to better manage any possible liability of them (Riccardo et al., 2018). Recently, for finding inherent causality implied in the CNN, the unexplainable problem of CNN, especially concerning the discrimination part which is composed of the fully connected layers of the CNN, has been studied by different scientific communities. Many methods were proposed, which can extract various interpretable models from the optimal discrimination part based on inputs and outputs of the part for expressing the inherent causality implied in the part. However, because of data bias and noisy data in the training data set, the inherent causality cannot readily be found because the part is difficult to be approximated by any interpretable model. We think that the problem could be solved by the following procedure. Firstly, a lightweight interpretable model is designed which can be easily understood by human. And then, the model is initiatively extracted from the discrimination part by solving a Maximum Satisfiability(MAX-SAT) problem based on the activated states of the neurons in the first layer and the output layer of the part. An new distance is proposed which can evaluate the degree to which the discrimination part is easily explained, namely as interpretability performance or interpretable distance. For shrinking the interpretable distance, a game process between the interpretable model and the discrimination part is implemented. Finally, the optimal interpretable model can be obtained, which can express inherent causality implied in the discrimination part. Moreover, based on the procedure, it is also possible to monitor the evolution of the inherent causality implied in the part in the game process.\nMain contributions of this paper can be summarized as follows:\n• An interpretable model, Deep Cognitive Learning Model(DCLM), is proposed to express the inherent causality implied in the discrimination part, and a greedy method is given\nfor initiatively extracting the DCLM from the discrimination part by solving its Maximum Satisfiability(MAX-SAT) Problem.\n• A new game method is proposed to improve the interpretability performance of the discrimination part without largely reducing its generalization performance by iteratively shrinking the interpretable distance between DCLM and the discrimination part.\n• A new distance is proposed to evaluate the degree to which the discrimination part is easily explained, namely as interpretability performance or interpretable distance." }, { "heading": "2 RELATED WORK", "text": "There are usually two types of methods for the unexplainable problem of the discrimination part, such as post-hoc method and ante-hoc method (Holzinger et al., 2019). However, because ante-hoc method is a transparent modeling method(Arrietaa et al., 2020), it can not obtain an explanation about the discrimination part. So, the post-hoc method will be reviewed.\nEarly post-hoc method can obtain global explanations for a neural network by extracting an interpretable model. 
Some references(Craven & Shavlik, 1999; Krishnan et al., 1999; Boz, 2002; Johansson & Niklasson, 2009) proposed a few methods that can find a decision tree for explaining a neural network by maximizing the gain ratio and an estimation of the current model fidelity. Other references (Craven & Shavlik, 1994; Johansson & Niklasson, 2003; Augasta & Kathirvalavakumar, 2012; Sebastian et al., 2015; Zilke et al., 2016) proposed rule extraction methods for searching optimal interpretable rules from a neural network.\nRecently, some feature relevance methods have become progressively more popular. Montavon et al.(Montavon et al., 2017) proposed a decomposition method from a network classification decision into contributions of its input elements based on deep Taylor decomposition. Shrikumar et al.(Shrikumar et al., 2016) proposed DeepLIFT which can compute importance scores in a multilayer neural network by explaining the difference of the output from some reference output in terms of differences of the inputs from their reference inputs.\nSome other works make complex black box model simpler. Che et al.(Che et al., 2017) proposed a simple distillation method called Interpretable Mimic Learning for extracting an interpretable simple model by gradient boosting trees. Thiagarajan et al.(Thiagarajan et al., 2016) build a Treeview representation of the complex model by hierarchical partitioning of the feature space. In addition, some references (Hinton et al., 2015; Bucila et al., 2006; Frosst & Hinton, 2017; Traore et al., 2019) proposed the distillation method of knowledge from an ensemble of models into a single model. Wu et al.(M. Wu, 2018) proposed a tree regularization method via knowledge distillation to represent the output feature space of a RNN based on a Multilayered perception. However, these methods can only solve the unexplainable problem of trained neural network or trained deep neural networks with explicit input characteristics. Wan et al.(Wan et al., 2020) constructed a decision tree using the last fully connection layer of the discrimination part of a CNN based on a prior structure.\nIn the paper, our goal is to find the inherent causality implied in the discrimination part of CNN, which is composed of all fully connected layers of the CNN without hurting its generalization performance by initiatively extracting its logic relationships with no prior structure and finally obtain its explanation by these logic relationships." }, { "heading": "3 DEEP COGNITIVE LEARNING MODEL", "text": "For expressing the causal relationship between these neurons in the discrimination part, a new interpretable model is designed in the section. As we all known, a CNN includes a feature extractor and a discrimination part. The feature extractor composes of some convolution layers and some pooling layers. The outputs from the feature extractor are the inputs of the discrimination part of the CNN, namely feature maps, τ1, τ2, ..., τk where k is the number of feature maps. All these feature maps form a feature set Γ.\nWe suppose that the discrimination part should better be explained by the logic relationships of the activated states of the neurons in its first layer and its output layer. This is because the relationships\nare the inherent property of the part. In order to express the relationships, a deep cognitive learning model (DCLM)is proposed, shown in Fig.1(b).\nThe DCLM consists of three layers:feature predicate layer, disjunction layer, and decision layer. 
The top layer is the feature predicate layer, which consists of many nodes. Every node has a predicate Z_j(Γ) that expresses a positive or negative action of the features that the j-th neuron in the first fully connected layer of the discrimination part captures. The predicate Z_j(Γ) is defined as follows:\nZ_j(Γ) = 1, if ∑_{i=1}^{k} τ_i ∗ w_{i,j} > −b_j and τ_i ∈ Γ, (1)\nZ_j(Γ) = null, if ∑_{i=1}^{k} τ_i ∗ w_{i,j} = −b_j and τ_i ∈ Γ, (1′)\nZ_j(Γ) = 0, otherwise, (1′′)\nwhere j ∈ {1, 2, ..., N}, N is the number of input neurons of the first fully connected layer of the discrimination part of the CNN, w_{i,j} is the weight vector between the i-th feature map and the j-th neuron, b_j is the bias of the j-th neuron, and "∗" is the convolution operation. "1" and "0" denote a positively activated and a negatively activated state of the neuron, respectively, and "null" denotes an inactivated state.\nThe bottom layer is the decision layer, which includes all nodes used for decision. Every node has a predicate that expresses the activated state of an output neuron of the discrimination part:\nD(y_i) = 1, if y_i > 0, (2)\nD(y_i) = 0, otherwise, (2′)\nwhere i ∈ {1, 2, ..., C}, C is the number of output neurons of the CNN, and y_i is the output value of the i-th output neuron of the discrimination part. All nodes on the feature predicate layer and every node on the decision layer are connected to one or more nodes on the middle layer, namely the disjunction layer, with true or false edges. Every node of the disjunction layer represents a disjunction relation, expressed by a disjunctive normal form. It is worth mentioning that if a node is connected to a node on the disjunction layer by a false edge, its predicate follows a non-operator in the disjunctive normal form.\nThe potential function of a disjunctive normal form can be obtained by the Lukasiewicz method (Giles, 1975):\nφ_c(y_i) = min(1, T(Γ, y_i)), (3)\nwhere T(Γ, y_i) = ∑_{j=1}^{N} {a_j[1 − Z_j(Γ)] + (1 − a_j)Z_j(Γ)} + (a_N + 1)D(y_i) and N is the number of nodes on the feature predicate layer. If a_j = 1, the edge is a false edge; otherwise, it is a true edge.\nThe conditional probability distribution that a ground DCLM including all disjunctive normal forms is true is\np(y, Γ) = (1/Ξ) exp(∑_{i=1}^{G} λ_i φ_{c_i}(y_i) / ∑_{i=1}^{G} λ_i), (4)\nwhere G is the number of ground formulas, Ξ = ∑_{Γ∈F} exp(∑_{i=1}^{G} λ_i φ_{c_i}(y_i) / ∑_{i=1}^{G} λ_i) is a partition function, y = (y_1, y_2, ..., y_G), y_i is an output value of the CNN, and λ_i is the weight of the i-th ground formula.\nBy maximizing the likelihood function, the optimal a_i and λ_i in the DCLM can be obtained:\nC(Γ) = argmax_{a_i,λ_i} [log p(y, Γ)] = argmax_{a_i,λ_i} (∑_i λ_i φ_{c_i}(y_i) / ∑_i λ_i − log Ξ). (5)\nTo extract an optimal DCLM, a Maximum A Posteriori (MAP) algorithm on the Maximum Satisfiability (MAX-SAT) problem is designed. Using the disjunctive normal form with the greatest weight in the optimal DCLM, a prediction for an input image can be obtained." },
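To make the DCLM layers concrete, here is a minimal numpy sketch of Eqs. (1)-(3), assuming the convolution τ_i ∗ w_{i,j} is reduced to a dot product over flattened feature maps; the function and variable names are hypothetical illustrations, not taken from the authors' code.

```python
import numpy as np

def predicate_states(feature_maps, W, b):
    """Eq. (1): state Z_j of the j-th first-layer neuron.

    The convolution is simplified to a dot product over flattened feature
    maps; np.nan stands in for the 'null' (inactivated) state."""
    pre_act = feature_maps @ W + b      # one pre-activation per neuron
    Z = np.where(pre_act > 0, 1.0, 0.0)
    Z[pre_act == 0] = np.nan            # 'null': exactly on the boundary
    return Z

def decision_state(y):
    # Eq. (2): D(y_i) = 1 if y_i > 0, else 0.
    return 1.0 if y > 0 else 0.0

def lukasiewicz_potential(Z, a, D_y):
    """Eq. (3): phi_c(y_i) = min(1, T), where T sums the (possibly negated)
    predicate literals and adds the decision literal weighted by (a_N + 1),
    following the formula as printed in the paper."""
    T = np.nansum(a * (1.0 - Z) + (1.0 - a) * Z) + (a[-1] + 1.0) * D_y
    return min(1.0, T)

# Tiny usage example with hypothetical shapes: 3 flattened feature maps, 2 neurons.
phi = lukasiewicz_potential(
    predicate_states(np.array([0.5, -1.0, 2.0]), np.ones((3, 2)), np.zeros(2)),
    a=np.array([1.0, 0.0]),
    D_y=decision_state(0.7),
)
print(phi)  # 1.0 for this toy input
```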
{ "heading": "4 EVALUATION OF INTERPRETABILITY PERFORMANCE", "text": "We consider that if the discrimination part of a CNN has a function curve whose shape is similar to that of its optimal interpretable model, the former can be easily explained by the latter. Therefore, the interpretability performance of the discrimination part can be measured by the shape similarity between it and its optimal interpretable model. We posit that, given the same input data set, this similarity may be measured by the variance of the differences between the outputs of the two models, which we name the interpretation distance. It is easily proved that the smaller the interpretation distance is, the more similar their shapes are, and the better the interpretability performance of the discrimination part is.\nDefinition 1 If X is a compact metric space and ν is a Borel measure on X, such as the Lebesgue measure or marginal measures, then in L²_ν(X), the space of square integrable functions on X, the interpretation distance φ_d(P∗, f) between a discrimination part f(x) and its optimal DCLM P∗(x) is\nφ_d(P∗, f) = ∫_X (f(x) − P∗(x) − µ_{P∗}(f))² dν, (6)\nwhere\nµ_{P∗}(f) = ∫_X (f(x) − P∗(x)) dν. (7)" },
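As an illustration, a small sketch of Definition 1, replacing the integral over ν with an empirical average over sampled inputs; `interpretation_distance` is a hypothetical helper, not part of the paper's released code.

```python
import numpy as np

def interpretation_distance(f_out, p_out):
    # Empirical version of Eqs. (6)-(7): the variance of the output differences
    # f(x) - P*(x), with the integral over nu replaced by a sample average.
    diff = np.asarray(f_out) - np.asarray(p_out)
    return float(np.mean((diff - diff.mean()) ** 2))

# Two models whose outputs differ only by a constant shift have distance ~0,
# i.e. the distance compares the *shapes* of the two function curves.
x = np.linspace(-1.0, 1.0, 100)
print(interpretation_distance(2.0 * x + 5.0, 2.0 * x))  # ~0.0
```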
{ "heading": "5 GAME BETWEEN A DCLM AND THE DISCRIMINATION PART OF A CNN", "text": "As discussed above, when the shapes of the discrimination part of a CNN and its optimal interpretable model are sufficiently similar, the discrimination part has good interpretability performance. However, its generalization performance will tend to decrease. This is mainly attributed to the fact that, because of data bias and noisy data in the training data set, the sufficient and necessary condition for the consistent convergence of the two performances, φ_d(P∗, f∗) = 0 (where f∗ is the optimal prediction model), is difficult to guarantee. Therefore, for this tradeoff problem, extracting a DCLM from the discrimination part during training and then iteratively reducing the interpretation distance between the two models may be a feasible solution. A detailed discussion of the problem can be found in the appendix.\nTo avoid reducing the generalization performance, the maximum probability p(w | X, y_t) should be guaranteed, where X is a training sample, w is the parameter set of the CNN, and y_t is the target vector of X:\np(w | X, y_t) = p(w | X) p(y_t | w, X) / p(y_t | X) ∝ p(w) p(y_t | w, X), (8)\nwhere p(y_t | w, X) = ∫ p(y_t | f, w, X) ∫ p(f | y_dclm, w, X) p(y_dclm | w, X) dy_dclm df and y_dclm is a prediction of the DCLM.\nWhen the DCLM is known, y∗_dclm is its optimal prediction and p(y∗_dclm | w, X) = 1. Then\n∫ p(f | y_dclm, w, X) p(y_dclm | w, X) dy_dclm = p(f | y∗_dclm, w, X). (9)\nSimilarly, given the input X and w, f_nn is the optimal solution of the CNN:\np(y_t | w, X) = p(y_t | f_nn, w, X) p(f_nn | y∗_dclm, w, X). (10)\nIf w and X are given and the loss function is φ_r(y_t, f_nn) = −(1/2) ∑_l |y_t − f_nn|², the conditional probability distribution function is\np(y_t | f_nn, w, X) = exp(φ_r(y_t, f_nn)) / Ξ1. (11)\nMeanwhile,\np(f_nn | y∗_dclm, w, X) = exp(−φ_d(y∗_dclm, f_nn)) / Ξ2, (12)\nwhere Ξ1 and Ξ2 are partition functions. Then, by maximizing the likelihood function of p(w | X, y_t), the optimal w can be obtained. In particular, assuming that w follows a Gaussian distribution, we get\nC_w(X, y_t) = argmax_w [−(α/2) ‖w‖² + φ_r(y_t, f_nn) − log(Ξ1) − φ_d(y∗_dclm, f_nn) − log(Ξ2)], (13)\nwhere α is a meta-parameter determined by the variance of the selected Gaussian distribution. Turning this into a minimization problem:\nC_w(X, y_t) = argmin_w [(α/2) ‖w‖² − φ_r(y_t, f_nn) + log(Ξ1) + φ_d(y∗_dclm, f_nn) + log(Ξ2)]. (14)\nThe iterative optimization procedure is shown in Algorithm 1.\nAlgorithm 1 Game between the DCLM and the discrimination part of a CNN. (Its time complexity is O(N + M), where O(N) is the time complexity of training the CNN and O(M) is the time complexity of constructing the logic net.)\nInput: Data set. Output: DCLM.\nRepeat:\n  CNN = CNN_Train(Data set, Adam, loss = CrossEntropy)\n  for data, label in Data set do\n    Feature_map = CNN_feature_extractor(data)\n    disjunctive_normal_form = Disjunction(Feature_map, F_nn, W_{i,j}, Rule1 = Eq. 1, Rule2 = Eq. 2)\n    Update_DCLM(disjunctive_normal_form, update = Eq. 5)\n  end for\n  for i = 1 to n do\n    for data, label in Data set do\n      Feature_map = CNN_feature_extractor(data)\n      y_m = DCLM(Feature_map)\n      CNN_DCLM(data, y_m, loss = Eq. 14)\n    end for\n  end for\nUntil the interpretation distance and accuracy converge." },
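The inner update of Algorithm 1 could look roughly as follows in PyTorch. This is a schematic sketch, not the authors' implementation: `TinyCNN` and the `dclm_predict` callable are hypothetical stand-ins, cross-entropy plays the role of the data-fit term (as in the CNN_Train step), the empirical interpretation distance approximates φ_d in Eq. (14), and the (α/2)‖w‖² term is delegated to the optimizer's weight decay while the partition constants are dropped.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Hypothetical stand-in: a small feature extractor + discrimination part."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.discrim = nn.Sequential(nn.Flatten(), nn.Linear(4 * 16, n_classes))

    def forward(self, x):
        return self.discrim(self.features(x))

def game_step(cnn, dclm_predict, x, y_label, optimizer):
    """One inner iteration of Algorithm 1: fit the labels plus the empirical
    interpretation distance between the CNN output f_nn and the DCLM output."""
    optimizer.zero_grad()
    f_nn = cnn(x)
    y_m = dclm_predict(x).detach()              # DCLM held fixed this step
    diff = f_nn - y_m
    phi_d = ((diff - diff.mean()) ** 2).mean()  # empirical interpretation distance
    loss = F.cross_entropy(f_nn, y_label) + phi_d
    loss.backward()
    optimizer.step()
    return loss.item()

cnn = TinyCNN()
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3, weight_decay=1e-4)
x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
# A frozen dummy DCLM predictor, purely for illustration:
loss = game_step(cnn, lambda inp: torch.zeros(inp.size(0), 10), x, y, opt)
```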
{ "heading": "6 EXPERIMENTAL VERIFICATION", "text": "We designed two experiments to verify the effectiveness of the proposed method. The first experiment verified whether the self-explanatory method could improve the interpretability performance of the CNN without sacrificing its generalization performance. The second experiment verified whether the proposed method tends towards stability and convergence in the game process.\nIn the experiments, CNN3 (3 convolution layers, 3 max-pooling layers, 3 fully connected layers (FCLs), and 1 output layer), CNN5 (5 convolution layers, 5 max-pooling layers, 3 FCLs, and 1 output layer), and CNN8 (8 convolution layers, 8 max-pooling layers, 3 FCLs, and 1 output layer) were used. Traditional training methods on the three types of CNN are named CNN3-Trad, CNN5-Trad, and CNN8-Trad, respectively. By contrast, our proposed methods on these CNNs are named CNN3-DCLM, CNN5-DCLM, and CNN8-DCLM, respectively. All experiments used the Mnist (Lecun et al., 1998), FashionMnist (Zalando, 2017), and Cifar-10 (Krizhevsky, 2009) benchmark data sets. All algorithms were implemented in Python using the PyTorch library (Paszke et al., 2019). All experiments ran on a server with an Intel Xeon 4110 (2.1 GHz) Silver processor, 20 GB RAM, and an Nvidia Tesla T4.\nExperiment 1: Performance verification of the proposed method on CNNs. We replaced the discrimination part of three traditionally trained CNNs with a soft decision tree (SDT) (Frosst & Hinton, 2017) and designated these methods as CNN3-Trad-SDT, CNN5-Trad-SDT, and CNN8-Trad-SDT, respectively. The accuracy of the CNN, the accuracy of the SDT or DCLM, and the interpretation distance corresponding to each method are shown in Table 1. Some values are "—", which indicates that these results do not exist.\nIt is observed in Table 1 that the accuracies of all CNNs trained by the proposed method are higher than those of the two interpretable models, SDT and DCLM, on all benchmark data sets, and are around 1.4 percentage points lower than those of all CNNs trained by the traditional training method. But it is worth noticing that on most of the data sets the interpretation distances of the CNNs trained by the proposed method are lower than those of the CNNs obtained by the traditional method. This suggests that the self-explanatory method can improve the interpretability performance of the discrimination part of a CNN without largely reducing its generalization performance. The accuracies of the DCLMs are higher than those of the SDTs, except for CNN3-DCLM on the FashionMnist data set and CNN3-DCLM on Mnist. These results suggest that the proposed method can find a better interpretable model than the traditional method.\nExperiment 2: Convergence test of the proposed method. We designed experiments to demonstrate the convergence of the proposed method. CNN3-Trad, CNN5-Trad, and CNN8-Trad were compared with CNN3-DCLM, CNN5-DCLM, and CNN8-DCLM, respectively. Each training runs for 25 epochs. Experimental results were measured at every epoch and are shown in four figures, Fig. 2, Fig. 3, Fig. 4, and Fig. 5. Every figure includes nine subplots: the three subplots in the left column show the results on the Cifar-10 data set, those in the middle column are for the FashionMnist data set, and those in the right column are for the Mnist data set.\nIn Fig. 2, the accuracies of the DCLMs and the CNNs at every epoch of the game process are shown. From these figures, it is obvious that the accuracies of the DCLMs and the CNNs steadily increase in the early stage; in the next stage, they tend to be stable. This reflects that the game method does not hinder the improvement of the generalization performance of the DCLMs and the CNNs. We also find that the accuracy gap obtained by CNN3-DCLM on the FashionMnist data set is much larger than on the other two data sets. Meanwhile, the DCLM converges to a stable state. The main reason is that for CNN3, in the later stage of the game process, new inherent causality implied in the part cannot readily be found on the FashionMnist data set. Even so, the proposed method still improves the accuracies of the DCLMs and the CNN-DCLMs steadily. We also find that the gaps between the accuracies of the DCLMs and the CNNs obtained with CNN8 become very small at the later epochs. This reflects that CNN8 can extract more effective features, by which the DCLMs can find more accurate causality implied in the discrimination part of the CNNs.\nFig. 3 shows the interpretation distances of the CNNs trained by the traditional method and by the proposed method. As seen from these subplots, the interpretation distances of the CNNs trained by the traditional method are greater than those of the other CNNs at most epochs, especially by the end of the game. The results indicate that the game method can effectively improve the interpretability performance of CNNs. From the subplots in Fig. 2 and Fig. 3, we also see that after the 15th epoch, the interpretation distances of the CNNs from the proposed method tend to converge. This indicates that the discrimination part of the CNNs can be explained by its DCLM at every epoch after the fifteenth.\nFrom Fig. 4, it is evident that all DCLMs from CNN3-DCLM, CNN5-DCLM, and CNN8-DCLM have stable information entropies at the end of the game; these entropies measure the diversity of the disjunctive normal forms in the DCLMs obtained by the game method. On the Mnist data set, the entropies finally converge to 135, 113, and 51.3, respectively; on FashionMnist, to 144, 132, and 65; and on Cifar-10, to 65, 73, and 57. The more complicated the feature extractor of the CNN, the smaller the information entropy of the DCLM obtained by the proposed method. The results indicate that the game algorithm can ensure that the diversity of the disjunctive normal forms of the DCLMs converges to a stable state. The game with a CNN of complex structure obtains more robust DCLMs than with a CNN of simple structure. The main reason is that the features captured by the CNN with the complex structure are so sparse and robust that the disjunctive normal forms of the DCLMs are sparse and robust.\nIn Fig. 5, the accuracies of the CNNs trained by the traditional method and by the proposed method at every epoch are shown. From these subplots, it can be seen that the accuracies steadily increase in the early stage, but in the following stage they tend to be stable and consistent.
The main reason is that in the early stage, the tradeoff between the generalization performance and the interpretability performance of the discrimination part of a CNN inevitably reduces its generalization performance in order to increase its interpretability performance. Although the proposed game method can effectively reduce the gap between the two performances, it does not increase the gap between the accuracies of the CNNs trained by the traditional method and those trained by the proposed method. This reflects that the proposed method handles the tradeoff problem effectively." }, { "heading": "7 CONCLUSION", "text": "The performance of the proposed method was demonstrated by experiments on benchmark data sets. The proposed method showed prominent advantages over the traditional learning algorithm for CNNs in improving both the generalization performance and the interpretability performance of the discrimination part of the CNN.\nIn practical engineering, the proposed method may provide a new learning paradigm. The method can not only predict a relatively accurate result for new input data but also provide a reasonable causal interpretation for the prediction of the discrimination part of a CNN. We suppose that it can solve the black box problem in the discrimination part, and we believe that it provides a way to understand the discrimination part." }, { "heading": "A APPENDIX: SUFFICIENT AND NECESSARY CONDITION FOR CONSISTENT CONVERGENCE OF THE GENERALIZATION PERFORMANCE AND THE INTERPRETABILITY PERFORMANCE", "text": "If the minimization of the loss function of a CNN can guarantee the minimum of its interpretation distance, the learning algorithm of the CNN can improve its interpretability performance; if not, a tradeoff problem between the generalization performance and the interpretability performance will exist. To prove the existence of the problem, we focus on a single neuron of the CNN; from the foot we may judge of Hercules. If an input channel f(x) of the neuron is seen as a kernel function K(x, w) (where w is the weight vector including the bias of the neuron), it spans a kernel Hilbert space H_K = {f(x) ∈ L²_ν(X) | f(x) = K(x, w) = ∑_{k=1}^{∞} a_k φ_k(x)} for the neuron. H_K can be regarded as a linear function set on L²_ν(X); it is the solution space of the neuron. The necessary and sufficient condition for consistent convergence between the generalization performance and the interpretability performance is discussed below in L²_ν(X), based on the following lemmas. Lemma 1. The set of continuous linear functionals on a separable Hilbert space X is nowhere dense in the square integrable function space L²_ν(X). Lemma 2. The set of continuous nonlinear functionals on the separable Hilbert space X is everywhere dense in L²_ν(X). When the optimal input channel f∗(x) approximates a linear functional in H_K while the optimal interpretable model P∗(x) does not approximate any linear functional or does not exist in H_K, the traditional training process cannot guarantee that f(x) approximates P∗(x), according to Lemma 1. By Lemma 2, the approximation cannot converge until P∗(x) approximates f∗(x). Here, if we define the approximation as the similarity between the shapes of the function curves of f∗(x) and P∗(x), the sufficient and necessary condition for the consistent convergence of the two performances is φ_d(P∗, f∗) = 0. For the discrimination part of the CNN, the sufficient and necessary condition is still true.
However\nbecause of data bias and noisy data in training data set, the condition is difficult to to be ensured in the majority of engineering applications. The tradeoff problem always exists between the two performances of the discrimination part.\nAccording to the above conclusion, in order to completely solve the tradeoff problem, the φd(P ∗, f∗)(Here f∗(x) is the optimal discrimination part) should be reduced. However P ∗ and f∗(x) are unknown. Therefore, in training process, extracting a interpretable model P (x) from a discrimination part f(x) and then iteratively reducing φd(P, f) may be a convenient method." } ]
2020
A SELF-EXPLANATORY METHOD FOR THE BLACK BOX PROBLEM ON DISCRIMINATION PART OF CNN
SP:a8c2db9bf91b517ea4317c85cab34a53206f7090
[ "The paper proposes a framework for the analysis of causal inference. Its main contribution is to decompose the indirect effect by teasing out the causal contribution of a set of mediators. In a series of experiments with simulated data the authors show that the proposed method, ANOCE, outperforms other comparison partners. The manuscript also contains an analysis of real-world data that describes the causal effects of the lockdown of cities in the Hubei province (China) to reduce the spread of COVID-19. An extensive supplementary file is also part of the submission and it includes additional experiments and technical proofs." ]
In the era of the causal revolution, identifying the causal effect of an exposure on an outcome of interest is an important problem in many areas, such as epidemics, medicine, genetics, and economics. Under a general causal graph, the exposure may have a direct effect on the outcome and also an indirect effect regulated by a set of mediators. An analysis of causal effects that interprets the causal mechanism contributed through mediators is hence challenging but in demand. To the best of our knowledge, there are no feasible algorithms that give an exact decomposition of the indirect effect on the level of individual mediators, due to common interaction among mediators in a complex graph. In this paper, we establish a new statistical framework to comprehensively characterize causal effects with multiple mediators, namely, ANalysis Of Causal Effects (ANOCE), with a newly introduced definition of the mediator effect, under the linear structural equation model. We further propose a constrained causal structure learning method by incorporating a novel identification constraint that specifies the temporal causal relationship of variables. The proposed algorithm is applied to investigate the causal effects of the 2020 Hubei lockdowns on reducing the spread of the coronavirus in major Chinese cities outside Hubei.
[ { "affiliations": [], "name": "Hengrui Cai" }, { "affiliations": [], "name": "Rui Song" } ]
[ { "authors": [ "Chen Avin", "Ilya Shpitser", "Judea Pearl" ], "title": "Identifiability of path-specific effects", "venue": null, "year": 2005 }, { "authors": [ "Albert-László Barabási", "Réka Albert" ], "title": "Emergence of scaling in random", "venue": "networks. science,", "year": 1999 }, { "authors": [ "Peter Bühlmann", "Jonas Peters", "Jan Ernest" ], "title": "Cam: Causal additive models, high-dimensional order search and penalized regression", "venue": "The Annals of Statistics,", "year": 2014 }, { "authors": [ "David Card" ], "title": "The causal effect of education on earnings", "venue": "In Handbook of labor economics,", "year": 1999 }, { "authors": [ "Abhishek Chakrabortty", "Preetam Nandy", "Hongzhe Li" ], "title": "Inference for individual mediation effects and interventional effects in sparse high-dimensional causal graphical models", "venue": "arXiv preprint arXiv:1809.10652,", "year": 2018 }, { "authors": [ "Huijie Chen", "Ye Chen", "Baijun Sun", "Ping Wang", "Lihai Wen", "Zhiyong Lian", "Ying Lu", "Ying Qi", "Shuo Zhao", "Linlin Zhang" ], "title": "Correlation between the migration scale index and the number of new confirmed novel coronavirus pneumonia cases in china", "venue": null, "year": 2020 }, { "authors": [ "David Maxwell Chickering" ], "title": "Optimal structure identification with greedy search", "venue": "Journal of machine learning research,", "year": 2002 }, { "authors": [ "Miguel Angel Hernán" ], "title": "A definition of causal effect for epidemiological research", "venue": "Journal of Epidemiology & Community Health,", "year": 2004 }, { "authors": [ "Miguel Ángel Hernán", "Babette Brumback", "James M Robins" ], "title": "Marginal structural models to estimate the causal effect of zidovudine on the survival of hiv-positive men", "venue": "Epidemiology, pp", "year": 2000 }, { "authors": [ "Markus Kalisch", "Peter Bühlmann" ], "title": "Estimating high-dimensional directed acyclic graphs with the pc-algorithm", "venue": "Journal of Machine Learning Research,", "year": 2007 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Sébastien Lachapelle", "Philippe Brouillard", "Tristan Deleu", "Simon Lacoste-Julien" ], "title": "Gradient-based neural dag learning", "venue": "arXiv preprint arXiv:1906.02226,", "year": 2019 }, { "authors": [ "Stephen A Lauer", "Kyra H Grantz", "Qifang Bi", "Forrest K Jones", "Qulu Zheng", "Hannah R Meredith", "Andrew S Azman", "Nicholas G Reich", "Justin Lessler" ], "title": "The incubation period of coronavirus disease", "venue": "Annals of internal medicine,", "year": 2019 }, { "authors": [ "Marloes H Maathuis", "Markus Kalisch", "Peter Bühlmann" ], "title": "Estimating high-dimensional intervention effects from observational data", "venue": "The Annals of Statistics,", "year": 2009 }, { "authors": [ "Preetam Nandy", "Marloes H Maathuis", "Thomas S Richardson" ], "title": "Estimating the effect of joint interventions from observational data in sparse high-dimensional settings", "venue": "The Annals of Statistics,", "year": 2017 }, { "authors": [ "Ugo Panizza", "Andrea F Presbitero" ], "title": "Public debt and economic growth: is there a causal effect", "venue": "Journal of Macroeconomics,", "year": 2014 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic 
differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Judea Pearl" ], "title": "Causal inference in statistics: An overview", "venue": "Statistics surveys,", "year": 2009 }, { "authors": [ "Joseph Ramsey", "Madelyn Glymour", "Ruben Sanchez-Romero", "Clark Glymour" ], "title": "A million variables and more: the fast greedy equivalence search algorithm for learning high-dimensional graphical causal models, with an application to functional magnetic resonance", "venue": "images. International journal of data science and analytics,", "year": 2017 }, { "authors": [ "Paul R Rosenbaum", "Donald B Rubin" ], "title": "The central role of the propensity score in observational studies for causal effects", "venue": null, "year": 1983 }, { "authors": [ "Rajen D Shah", "Jonas Peters" ], "title": "The hardness of conditional independence testing and the generalised covariance measure", "venue": "arXiv preprint arXiv:1804.07203,", "year": 2018 }, { "authors": [ "Shohei Shimizu", "Patrik O Hoyer", "Aapo Hyvärinen", "Antti Kerminen" ], "title": "A linear non-gaussian acyclic model for causal discovery", "venue": "Journal of Machine Learning Research,", "year": 2003 }, { "authors": [ "Pater Spirtes", "Clark Glymour", "Richard Scheines", "Stuart Kauffman", "Valerio Aimale", "Frank Wimberly" ], "title": "Constructing bayesian network models of gene expression networks from microarray", "venue": null, "year": 2000 }, { "authors": [ "Tyler VanderWeele", "Stijn Vansteelandt" ], "title": "Mediation analysis with multiple mediators", "venue": "Epidemiologic methods,", "year": 2014 }, { "authors": [ "Stijn Vansteelandt", "Rhian M Daniel" ], "title": "Interventional effects for mediation analysis with multiple mediators", "venue": "Epidemiology (Cambridge, Mass.),", "year": 2017 }, { "authors": [ "Sewall Wright" ], "title": "Correlation and causation", "venue": "Journal of agricultural research,", "year": 1921 }, { "authors": [ "Yue Yu", "Jie Chen", "Tian Gao", "Mo Yu" ], "title": "Dag-gnn: Dag structure learning with graph neural networks", "venue": "arXiv preprint arXiv:1904.10098,", "year": 2019 }, { "authors": [ "Xun Zheng", "Bryon Aragam", "Pradeep K Ravikumar", "Eric P Xing" ], "title": "Dags with no tears: Continuous optimization for structure learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Fei Zhou", "Ting Yu", "Ronghui Du", "Guohui Fan", "Ying Liu", "Zhibo Liu", "Jie Xiang", "Yeming Wang", "Bin Song", "Xiaoying Gu" ], "title": "Clinical course and risk factors for mortality of adult inpatients with covid-19 in wuhan, china: a retrospective cohort study", "venue": null, "year": 2020 }, { "authors": [ "Shengyu Zhu", "Zhitang Chen" ], "title": "Causal discovery with reinforcement learning", "venue": "arXiv preprint arXiv:1906.04477,", "year": 2019 }, { "authors": [ "Yu" ], "title": "2019), we find that their tuned parameters ρ = 0.25 and ω", "venue": null, "year": 2019 }, { "authors": [ "Ba" ], "title": "2014) to minimize the loss function, and set the batch size as 25 for n = 50", "venue": null, "year": 2014 }, { "authors": [ "Nandy" ], "title": "Specifically, for each directed path from A to Y", "venue": null, "year": 2017 }, { "authors": [], "title": "2018) defined the individual mediation effect under the LSEM as follows", "venue": "Definition F.1 (Chakrabortty et al.,", "year": 2018 }, { "authors": [ "Chakrabortty" ], "title": "2018) may cancel out in some cases and their summation would equal to the IE by chance. 
Inspired by the proof of Theorem F.1, the mediator effect ηi can be decomposed into two parts, the natural direct and indirect effect", "venue": null, "year": 2018 }, { "authors": [ "Nandy" ], "title": "0p×p is a p× p zero matrix. Following the path method (the causal effect of Xi on Xj along a directed path from Xi → Xj in G can be calculated by multiplying all edge weights along the path", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "In the era of causal revolution, identifying the causal effect of an exposure on the outcome of interest is an important problem in many areas, such as epidemics (Hernán, 2004), medicine (Hernán et al., 2000), education (Card, 1999), and economics (Panizza & Presbitero, 2014). Under a general causal graph, the exposure may have a direct effect on the outcome and also an indirect effect regulated by a set of mediators (or intermediate variables). For instance, during the outbreak of Coronavirus disease 2019 (COVID-19), the Chinese government has taken extreme measures to stop the virus spreading such as locking Wuhan down on Jan 23rd, 2020, followed by 12 other cities in Hubei, known as the “2020 Hubei lockdowns”. This approach (viewed as the exposure), directly blocked infected people leaving from Hubei; and also stimulated various quarantine measures taken by cities outside of Hubei (as the mediators), which further decreased the migration countrywide in China, and thus indirectly control the spread of COVID-19. Quantifying the causal effects of 2020 Hubei lockdowns on reducing the COVID-19 spread regulated by different cities outside Hubei is challenging but of great interest for the current COVID-19 crisis. An analysis of causal effects that interprets the causal mechanism contributed via individual mediators is thus very important.\nMany recent efforts have been made on studying causal effects that are regulated by mediators. Chakrabortty et al. (2018) specified the individual mediation effect in a sparse high-dimensional causal graphical model. However, the sum of marginal individual mediation effect is not equal to the effect of all mediators considered jointly (i.e. the indirect effect) due to the common interaction among mediators (VanderWeele & Vansteelandt, 2014). Here, ‘interaction’ means that there exists at\nleast one mediator that is regulated by other mediator(s) (see Figure 1b for illustration), in contrast to the simple ‘parallel’ case (shown in Figure 1a). Vansteelandt & Daniel (2017) considered an exact decomposition of the indirect effect with a two-mediator setting based on the conditional densities of mediators, while there was no feasible algorithm provided to solve their proposed expressions yet. Therefore, a new framework with a computational friendly algorithm that gives an exact decomposition of the indirect effect on the level of individual mediators is desired under the complex causal network.\nTo estimate the underlying causal network, structure learning algorithms of the directed acyclic graph (DAG) are widely used. Popular methods such as the PC algorithm (Spirtes et al., 2000) that uses conditional independence tests to examine the existence of edges between each pair of variables, require strong assumptions and thus have no guarantee in the finite sample regime. Recently, Zheng et al. (2018) opened up another class of causal discovery methods by directly formulating a pure optimization problem over real metrics with a novel characteristic of the acyclicity. Yu et al. (2019) further extended Zheng et al. (2018)’s work with a deep generative model, and showed better performance on the structure learning with weaker assumptions on the noise. See more follow-up works in Lachapelle et al. (2019) and Zhu & Chen (2019). 
However, the current cutting-edge methods neglect the temporal causal relationship among variables and thus cannot appropriately represent a causal network with pre-specified exposure and outcome.\nIn this paper, we consider establishing a new statistical framework to comprehensively characterize causal effects with multiple mediators, namely, ANalysis Of Causal Effects (ANOCE), under the linear structural equation model (LSEM). Specifically, we propose two causal effects on the level of individual mediators, the natural direct effect and the natural indirect effect for a mediator, denoted as DM and IM, respectively. The proposed DM can be interpreted as the direct effect of a particular mediator on the outcome that is not regulated by other mediators, while the IM is the indirect effect of the mediator controlled by its descendant mediators. We prove that the DM is valid in the sense that it exactly decomposes the indirect effect of the exposure on the outcome, and we provide an ANOCE table to explain the different sources of causal effects. To bridge the cutting-edge graphical learning approaches with the temporal causal relationship of variables, we extend the variational auto-encoder (VAE) framework of Yu et al. (2019) with a novel identification constraint that specifies the topological order of the exposure and the outcome. The proposed constrained VAE algorithm is then used to estimate the causal effects defined in our ANOCE table, and is named ‘ANOCE-CVAE’.\nOur contributions can be summarized in the following three aspects: • 1). Conceptually, we define different sources of causal effects through mediators with a newly introduced definition of direct and indirect mediator effects, and give an exact decomposition of the indirect effect on the level of individual mediators, under the linear structural equation model. • 2). Methodologically, we incorporate background knowledge (the temporal causal relationship among variables) when using an optimization approach to causal discovery. Such prior knowledge can be generalized to any measured variable and its possible set of parents. Our proposed constrained structural learning can be easily extended to other score-based algorithms. • 3). Practically, extensive simulations are conducted to demonstrate the empirical validity of the proposed algorithm and its competitive performance among existing causal discovery algorithms. Our method is applied to investigate the causal effects of the 2020 Hubei lockdowns on reducing the COVID-19 spread in China, by quantifying the individual effect of each city." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "" }, { "heading": "2.1 GRAPH TERMINOLOGY", "text": "Consider a graph G = (X, E) with a node set X and an edge set E. There is at most one edge between any pair of nodes. If there is an edge between Xi and Xj, then Xi and Xj are adjacent. A node Xi is said to be a parent of Xj if there is a directed edge from Xi to Xj. Denote the set of all parents of node Xj in G as PA_{Xj}(G). A path from Xi to Xj in G is a sequence of distinct vertices π ≡ {a0, a1, · · · , aL} ⊂ X such that a0 = Xi and aL = Xj. A directed path from Xi to Xj is a path between Xi and Xj in which all edges are directed toward Xj. A directed cycle is formed by a directed path from Xi to Xj together with the directed edge Xj → Xi. A directed graph that does not contain directed cycles is called a directed acyclic graph (DAG). A directed graph is acyclic if and only if it has a topological ordering.
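For illustration, a short sketch of this equivalence via Kahn's algorithm on a weighted adjacency matrix; the function name and encoding (B[i, j] != 0 meaning an edge X_i → X_j) are assumptions made for this example.

```python
import numpy as np

def topological_order(B):
    """Return a topological ordering of the graph encoded by B, or None if the
    graph has a directed cycle -- a directed graph is a DAG iff such an
    ordering exists (Kahn's algorithm)."""
    adj = (np.asarray(B) != 0)
    in_deg = adj.sum(axis=0).astype(int)       # number of parents per node
    ready = [i for i in range(len(in_deg)) if in_deg[i] == 0]
    order = []
    while ready:
        i = ready.pop()
        order.append(i)
        for j in np.flatnonzero(adj[i]):       # remove edges out of node i
            in_deg[j] -= 1
            if in_deg[j] == 0:
                ready.append(int(j))
    return order if len(order) == len(in_deg) else None

# A -> M -> Y is acyclic; adding Y -> A creates a directed cycle.
B = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
print(topological_order(B))  # [0, 1, 2]
B[2, 0] = 1.0
print(topological_order(B))  # None (cycle)
```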
Suppose a DAG G = (X, E) characterizes the causal relationship among |X| = d nodes, where X = [X1, X2, · · · , Xd]^⊤ represents a random vector and an edge Xi → Xj means that Xi is a direct cause of Xj. Let B = {b_{i,j}}_{1≤i≤d, 1≤j≤d} be a d × d matrix, where b_{i,j} is the weight of the edge Xi → Xj ∈ E, and b_{i,j} = 0 otherwise. Then, we say that G = (X, B) is a weighted DAG with the node set X and the weighted adjacency matrix B (the edge set E is nested in B)." }, { "heading": "2.2 RELATED WORK", "text": "Our work connects to the literature on causal graphical models. Pearl et al. (2009) provided a comprehensive review of recent advances in the analysis of causes and counterfactuals using the ‘do-operator’ with graphical methods. Later, Maathuis et al. (2009) started to use an unknown DAG without hidden variables to estimate causal effects from high-dimensional observational data. Nandy et al. (2017) extended the work of Maathuis et al. (2009) with the linear structural equation model, followed by the individual mediation effect defined in Chakrabortty et al. (2018). All of these models rely on the PC algorithm to search the Markov equivalence class of the partial DAG and usually require strong assumptions due to the computational limit. Our ANOCE is established under the same causal structure as Chakrabortty et al. (2018) but without sparsity and normality assumptions.\nThe wide literature on causal discovery can be summarized in three classes. The first type focuses on local conditional independence tests to find a causal skeleton and then determine the orientation of edges, such as the well-known PC algorithm (Spirtes et al., 2000; Kalisch & Bühlmann, 2007). However, testing the conditional independence of continuous variables is not easy (Shah & Peters, 2018). The second class specifies properly functional causal models with additional assumptions on the data distribution, including ICA-LiNGAM (Shimizu et al., 2006) and the causal additive model (CAM) (Bühlmann et al., 2014). The last class, score-based methods, includes the greedy equivalence search (GES) (Chickering, 2002) and the fast GES (fGES) (Ramsey et al., 2017), which use, for example, Bayesian scores in searching a space of causal models. Recently, Zheng et al. (2018) opened up another track of score-based methods by constructing an optimization with an acyclicity constraint under the LSEM, i.e., NOTEARS. A follow-up work using a VAE parameterized by a graph neural network that generalizes the LSEM was proposed in Yu et al. (2019) with a more computationally friendly constraint, namely DAG-GNN. Also see Zhu & Chen (2019) and Lachapelle et al. (2019) for other cutting-edge structural learning methods.\nThe improvement of our ANOCE-CVAE over the state of the art is as follows. We consider a new constrained structural learning that incorporates background knowledge (the temporal causal relationship among variables) into score-based algorithms. We formulate such prior information as an identification constraint and add it as a penalty term in the objective function for causal discovery. In this paper, we extend DAG-GNN for illustration. Note that the proposed constraint is not limited to DAG-GNN and can be easily extended to other score-based algorithms."
}, { "heading": "3 ANALYSIS OF CAUSAL EFFECTS", "text": "" }, { "heading": "3.1 STATISTICAL FRAMEWORK AND ASSUMPTIONS", "text": "Let A be the exposure/treatment, M = [M1,M2, · · · ,Mp]> be mediators with dimension p, and Y be the outcome of interest. Suppose there exists a weighted DAG G = (X,B) that characterizes the causal relationship among X = [A,M>, Y ]>, where the dimension of X is d = p + 2. Let Y ∗(A = a,M = m) be the potential outcome that would be observed after receiving treatment a and setting mediators as m, and M∗(A = a) be the potential mediators that would be observed after receiving treatment a. As standard in the causal inference (Rosenbaum & Rubin, 1983), we assume that there is no unmeasured confounder: (A1) the effect of the treatment A on the outcome Y is unconfounded, i.e. Y ∗(A = a,M = m) ⊥ A,∀a,m; (A2) the effect of the treatment A on the mediators M is unconfounded, i.e. M∗(A = a) ⊥ A,∀a; (A3) the effect of the mediators M on the outcome Y is unconfounded given the treatment A, i.e. Y ∗(A = a,M = m) ⊥ M |A,∀a,m. In addition, as standard in the graphical causal discovery, we also make the Markov condition, the faithfulness condition, causal sufficiency assumption, and the linear structural equation model (LSEM) such that X = [A,M>, Y ]> characterized by the pair (G, ) is generated by\nX = B>X + , (1)\nwhere is a random vector of jointly independent error variables.\nDenote all directed paths in G that start with the exposure A and end with the outcome Y as set {πAY (G)}. If there exists at least one directed path π∗ ∈ {πAY (G)} such that the length of π∗ is larger than 2, we say there is an interaction among mediators, as shown in Figure 1b; otherwise, we call mediators are ‘parallel’ as shown in Figure 1a. In this paper, we consider all possible causal structures with multiple mediators under assumptions (A1-A3).\nWe next give the total effect (TE), the natural direct effect that is not mediated by mediators (DE), and the natural indirect effect that is regulated by mediators (IE) defined in Pearl et al. (2009).\nDefinition 3.1 (Pearl et al., 2009)\nTE = ∂E{Y |do(A = a)}/∂a = E{Y |do(A = a+ 1)} − E{Y |do(A = a)}, DE = E{Y |do(A = a+ 1,M = m(a))} − E{Y |do(A = a)}, IE = E{Y |do(A = a,M = m(a+1))} − E{Y |do(A = a)},\nwhere do(A = a) is a mathematical operator to simulate physical interventions that hold A constant as a while keeping the rest of the model unchanged, which corresponds to remove edges into A and replace A by the constant a in G. Here, m(a) is the value of M if setting do(A = a), and m(a+1) is the value of M if setting do(A = a+ 1). Refer to Pearl et al. (2009) for more details of ‘do-operator’.\nNote that in the assumed linear model, the slope of the line is the same everywhere; for convenience and simplicity, we use a and a+ 1 to present the change of the treatment of 1 in the definition." 
}, { "heading": "3.2 NATURAL DIRECT AND INDIRECT EFFECT FOR INDIVIDUAL MEDIATORS", "text": "We first give the definition of the natural direct effect for an individual mediator (DM ).\nDefinition 3.2 Natural direct effect for Mi: DMi = [ E{Mi|do(A = a+ 1)} − E{Mi|do(A = a)} ] × [ E{Y |do(A = a,Mi = m(a)i + 1,Ωi = o (a) i )} − E{Y |do(A = a)} ] ,\n(2)\nwhere m(a)i is the value of Mi when setting do(A = a), Ωi = M \\Mi is the set of mediators except Mi, and o (a) i is the value of Ωi when setting do(A = a).\nRemark 3.1 From Definition 3.2, the natural direct effect for Mi is the product of the total effect of the treatment A on the mediator Mi and the direct effect of the mediator Mi on the outcome Y . The\nsecond multiplier is in line with the classical meaning of ‘natural’ in the causal inference literature (Pearl et al., 2009). Thus, the DM can be interpreted as the causal effect through a particular mediator from the treatment on the outcome that is not regulated by its descendent mediators.\nThe natural indirect effect for an individual mediator (IM ) can be defined similarly.\nDefinition 3.3 Natural indirect effect for Mi: IMi = [ E{Mi|do(A = a+ 1)} − E{Mi|do(A = a)} ] × [ E{Y |do(A = a,Mi = m(a)i + 1)} − E{Y |do(A = a,Mi = m (a) i + 1,Ωi = o (a) i )} ] .\nRemark 3.2 The second multiplier in the IMi captures the indirect effect of a particular mediator on the outcome regulated by its descendent mediators. We show the individual mediation effect (η) in Chakrabortty et al. (2018) can be decomposed into the DM and the IM in Section F in the appendix when the LSEM assumption holds, i.e., ηi = DMi + IMi, for i-th mediator.\nNext, we give explicit expressions of defined causal effects under the LSEM. Specifically, we can write the linear structural model 1 under assumptions (A1-A3) as[\nA M Y\n] = B> [ A M Y ] + = 0 0p×1 0α B>M 0 γ β> 0 [AM Y ] + [ A Mp Y ] , (3)\nwhere γ is a scalar, α, β, and 0p×1 are p× 1 vectors, BM is a p× p matrix, and ≡ [ A, >M , Y ]>. Here, γ presents the weight of the edge A → Y , the i-th element of α corresponds to the weight of the edge A → Mi, and the i-th element of β is the weight of the edge Mi → Y . Note that by assumptions (A1-A3), we have the exposure A has no parents and the outcome Y has no descendants, so equivalently, the first row and the last column of B> are all zeros (i.e., the first column and the last row of B are all zeros). Notice the exposure can be presented by its own noise, i.e., A = A, since A has no parents, so any exposure (with arbitrary noise distribution) will satisfy the LSEM assumption.\nNext, we obtain expressions of causal effects under the LSEM in the following theorem. The proof can be found in Section G.1 of the appendix.\nTheorem 3.1 Under assumptions (A1-A3) and Model 1, we have: 1). the natural direct effect is DE = γ; 2). the natural indirect effect is IE = β>(Ip −B>M )−1α, where Ip is a p× p identity matrix; 3). the total effect of A on Y is TE = γ + β>(Ip −B>M )−1α; 4). the natural direct effect of Mi on Y is DMi = βi{(Ip − B>M )−1α}i, where βi is the i-th element of β corresponding to the weight of Mi → Y , and {(Ip −B>M )−1α}i is the i-th element of (Ip −B>M )−1α as the total effect of A on Mi, i.e. E{Mi|do(A = a+ 1)} − E{Mi|do(A = a)}.\nRemark 3.3 One may refer to section A in the appendix for the invertibility of Ip − B>M . Also, a toy example is provided in section E to illustrate how to manually compute the causal effects defined above. 
Note that there is no explicit expression of the IM due to the complex interaction among mediators, while we provide its theoretical form in Section G.2 with its numerical form in Section B.\nBased on the result 2) and 4) in Theorem 3.1, the IE can be presented as an additive form of DMs, as shown in Theorem 3.2. Thus, the proposed natural direct effect of individual mediators is valid in the sense that it exactly decomposes the indirect effect of the exposure on the outcome.\nTheorem 3.2 Under assumptions (A1-A3) and Model 1, the IE can be decomposed through DMs:\nIE = p∑ i=1 DMi." }, { "heading": "3.3 ANALYSIS OF CAUSAL EFFECTS TABLE", "text": "Based on the result TE = DE + IE in Pearl et al. (2009) and Theorem 3.2, we summarize the defined causal effects and their relationship in Table 1 for the analysis of causal effects (ANOCE).\nFirstly, the causal effect of A on Y has two sources, the direct effect from A and the indirect effect via p mediators M (M1, · · · ,Mp). Next, the direct source has the degree of freedom (d.f.) as 1, while the indirect source has d.f. as p from p mediators. Note the true d.f. of the indirect effect may be smaller than p, since A may not be regulated by all mediators. Then, the causal effect for the direct source is the DE and for the indirect source is the IE, where the IE can be further decomposed into p DMs and each component corresponds to the natural direct effect for a specific mediator. The last row in the table shows that the DE and the IE compose the total effect TE with d.f. as p+ 1." }, { "heading": "4 CONSTRAINED STRUCTURAL LEARNING FOR ANOCE", "text": "We next estimate the weighted adjacency matrix B with our causal framework under the LSEM to calculate causal effects. To better capture the sampling distribution faithful to the DAG, we consider a deep generative model that generalizes the LSEM instead of using a regression that heavily relies on assumptions of noise (see more discussion in Section 2.2). Specifically, the LSEM 1 can be rewritten as (Ip+2−B>)X = , where Ip+2 is a (p+2)× (p+2) identity matrix. Inversely, we have X = (Ip+2 −B>)−1 . Following the VAE architecture in Yu et al. (2019), we treat the random error as the independent latent variables to generate X , by two multilayer perceptrons as the encoder and the decoder, with weights denoted as θ. We adopt their acyclicity constraint on B as,\nh1(B) ≡ tr [ (Ip+2 + tB •B)p+2 ] − (p+ 2) = 0, (4)\nwhere tr(·) is the trace of a matrix, t is a hyperparameter that depends on an estimation of the largest eigenvalue of B, and • denotes for the element-wise square. Next, to incorporate the background knowledge of the temporal causal relationship among variables, we propose an identification constraint that indicates the topological order of the exposure and the outcome. As mentioned in Equation 3, under assumptions (A1-A3), the exposure A has no parents, i.e. PAA(G) = ∅, and the outcome Y has no descendants, i.e. Y 6∈ PAX(G). Or equivalently, we have the first column and the last row of B should equal to zero. Therefore, the matrix B must satisfy\nh2(B) ≡ p+2∑ i=1 |bi,1|+ p+2∑ j=2 |bp+2,j | = 0, (5)\nwhere bi,j is the element of the matrix B in i-th row and j-th column. The above constraint forces the topological order of the exposure as 1 while the outcome as p+ 2, under which the DAG is searched within a restricted regime. 
The prior knowledge in 5 can be generalized for any measured variable and on the possible set of their parents, by connecting the topological order to the weighted matrix B.\nFollowing Yu et al. (2019), the objective function is the evidence lower bound with two constraints:{ min B,θ f(B, θ) = 1p+2 ∑p+2 i=1 DKL{q( |Xi)||p( )} − Eq( |Xi){log p(Xi| )},\ns.t. h1(B) = 0 and h2(B) = 0, (6)\nwhere DKL(·||·) is the Kullback-Leibler divergence, p( ) is the prior distribution of , q( |Xi) is the reconstructed empirical posterior distribution of , and p(Xi| ) is the likelihood function. Then, we have the loss function based on the augmented Lagrangian as\nLc,d(B, θ, λ1, λ2) = f(B, θ) + λ1h1(B) + λ2h2(B) + c|h1(B)|2 + d|h2(B)|2, (7)\nwhere λ1 and λ2 are Lagrange multipliers, and c and d are penalty terms. To minimize the loss in 7 and satisfy both h1(B) = 0 and h2(B) = 0, we simultaneously update λ1 and λ2 and increase c and d to infinity, by modifying the basic technique in Yu et al. (2019). Here, the minimization can be solved using a blackbox stochastic optimization such as ‘Adam’ in Kingma & Ba (2014). Denote the estimated matrix as B̂ from the above constrained structural learning. Under Theorem 3.1, we can estimate causal effects in the ANOCE table based on the learned B̂. We name the above algorithm as ANOCE-CVAE, with a detail pseudocode provided in Section B.\nRemark 4.1 We incorporate the temporal causal relationship among variables when using an optimization approach to the causal discovery. Such constrained structural learning is not limited to the VAE framework and can be extended to any score-based algorithms. For instance, one can add constraint 5 into the objective function in Zheng et al. (2018) or the reward in Zhu & Chen (2019)." }, { "heading": "5 EXPERIMENTS", "text": "We conduct extensive simulation studies to investigate the proposed method on learning causal effects with multiple mediators, followed by a comparison to the popular structural learning algorithms. The dataset and the code are publicly available at https://github.com/anoce-cvae/ ANOCE-CVAE." }, { "heading": "5.1 SIMULATION STUDIES", "text": "Scenarios are generated as follows. In Scenario 1 to 3, we fix the dimension of M as p = 10 while increasing the complexity of the true graph to examine the sensitivity of our algorithm to sparsity. Specifically, Scenario 1 is the simplest causal graph with only one edge (A→ Y ) shown in Figure 2a; and Scenario 2 has a fully connected graph with independent mediators (corresponding to the parallel case, i.e. BM = 0p×p) illustrated in Figure 2d. In Scenario 3, we consider interacted mediators such that BM 6= 0p×p, as demonstrated in Figure 2g. For Scenario 4, we allow p = 30 with interacted mediators to examine the stability of our method under the high-dimensional setting. Here, the true DAGs in Scenarios 3 and 4 are generated from the Erdős-Reńyi (ER) model with an expected degree as 2 . Note that we consider fully identifiable models in the experiments so that it is meaningful to evaluate causal effects from the estimated graph. The synthetic datasets {A,M, Y } are generated from Model 1 with Gaussian errors in Scenario 1-4. We also set A ∈ {−1, 1} in Scenario 4 to show that our algorithm is capable to handle both discrete and continuous exposure, denoted as Scenario 4∗. The sample size n is chosen from {50, 500} to be consistent with the scale of our real data. 
See more details of the data generation in Section C.1 and the implementation in Section C.2 in the appendix.\nThe averaged estimated matrix B̂> over 100 replications under the proposed ANOCE-CVAE is illustrated in Figure 2. The numerical results are summarized in Table 2 (for Scenario 1 to 3) and\nTable 3 (for Scenario 4 and 4∗) in the appendix, including the bias of the estimated TE, DE, IE, DM and IM for each mediator with their standard error. It can be observed that our proposed method could correctly identify most of the edges in the causal graph when n = 500 in almost all cases. Based on Table 2 and 3, the estimated causal effects are close to the true values as the sample size increases, indicating the good performance of our proposed method on identifying the causal effects regardless of the sparsity, the distribution of the exposure, and the dimension of mediators." }, { "heading": "5.2 COMPARISON", "text": "We next compare our approach against the PC (Spirtes et al., 2000), the ICA-LiNGAM (Shimizu et al., 2006), the NOTEARS (Zheng et al., 2018), and the DAG-GNN (Yu et al., 2019). Random graphs are generated from both the ER and the Scale-Free (SF) networks with the expected degree as 1, 2, and 4, denoted as Cases ER1, ER2, ER4, SF1, SF2, and SF4, respectively. To be consistent with Section 5.1, we refer Scenario 3 (generated by the ER with the degree as 2) as ER2, and set d = 12 (i.e. p = 10) with Gaussian errors for other five cases, under n = 500. Details of the data generation and the implementation of each method are reported in Section C.1 and C.2. Here, we use a graph threshold as 0.3 (commonly used in other methods) and 0.4 to prune the noise edges for a fair comparison. The averaged estimated matrix B̂> over 100 replications under different methods is shown in Figure 3 with a graph threshold as 0.3 for Scenario 3 (i.e., Case ER2) as an illustration. See other cases in Figures 5 to 15 in the appendix. All the numerical results of six cases are reported in Tables 4 and 5 in the appendix, including the false discovery rate, the true positive rate, and the structural Hamming distance. It is shown our algorithm performs the best among five methods in most cases, followed by the NOTEARS and the DAG-GNN. The comparison studies not only support the choice of the extension on the score-based algorithm (by comparing the results of the NOTEARS and the DAG-GNN with other methods), but also validate the improvement of our method over the DAG-GNN by introducing the background knowledge in the causal discovery." }, { "heading": "6 REAL DATA ANALYSIS: COVID-19 OUTBREAK", "text": "From early Jan 2020 to late Feb 2020 (the Spring Festival period), COVID-19 spread to every province-level division of China, exacerbated by the Chinese new year migration and human to human transmission. The Chinese Government locked Hubei down on Jan 24th, which directly blocked infected people leaving from Hubei, and also indirectly control the spread of COVID-19. Here, for simplicity, we attribute the causal effects of the measures taken by cities outside of Hubei to the original and main action of interest, i.e. Hubei lockdowns. Thus, the COVID-19 example satisfies the considered causality framework for studying the causal mediator effects.\nWe collect the data from the National Health Commission (NHC) of China and Baidu Qianxi for analysis. Specifically, let the exposure A as if Hubei is on lockdown, 0 for unlocked (before and on Jan 23rd), and 1 for locked (on and after Jan 24th). 
We select 30 candidate cities outside Hubei that contain most potential infected people, as mediators M . The daily migration scale index (MSI) of each city is used as the value of each mediator, which is the migration magnitude of large groups of people from one geographical area to another (Chen et al., 2020) and is comparable among cities. Lastly, we use the daily increase rate of confirmed cases out of Hubei to characterize the severity of the virus spreading with a one-week delay (due to the diagnose and incubation period of COVID-19 (Lauer et al., 2020)): Yt =\nConfirmed cases out of Hubeit+8−Confirmed cases out of Hubeit+7 Confirmed cases out of Hubeit+7\n. Here, the time t starts from Jan 12th to Feb 20th, 2020, since Jan 19th, 2020 is the earliest date with an available number of\nconfirmed cases out of Hubei (to compute Yt=1 on Jan 12th), and after Feb 20th, 2020, the pandemic was under control outside Hubei with the evidence of the work resumption in China. The final dataset yields a total of 38 records. More details of data collection can be found in Section D of the appendix.\nThe proposed algorithm is applied to the COVID-19 data with 100 replications by setting different random seeds in the neural network. The estimated weighted adjacency matrix is shown in Figure 4a, with the detailed ANOCE table reported in Table 6 in the appendix. The total effect of 2020 Hubei lockdowns on the daily increase rate of confirmed cases outside Hubei as -0.497, where the direct effect is -0.078 and the indirect effect is -0.419. In other words, by locking Hubei down, China successfully reduced 49.7% of the daily new cases outside Hubei; 84% of which is the indirect effect contributed via the reduced migration of cities (the mediators) out of Hubei, and the rest 16% owes to the direct effect of Hubei lockdowns since infected people were constrained in Hubei after the lockdown. Thus, the lockdown is effective in reducing the COVID-19 spread in China.\nThe total indirect effect of the lockdown can be further broken down by cities’ direct effects (DMs, corresponds to the intensity of transmission within a particular city). We compare cities’ DMs with their associated indirect effects (IMs, describes the secondary migration from a particular city to other places) in Figure 4b, where a positive effect means spreading the virus while negative means control. Note that the selected 30 cities are ordered by their cumulative MSI during the data period. • 1). From Figure 4b, the majority of cities have a negative DM (colored in blue), which implies the infection within cities outside Hubei have been effectively controlled under the lockdown. • 2). There are more cities with a positive IMs (red), which is in line with the intuition that the secondary migration among cities may exacerbate the pandemic. • 3). The positive effects (red) are more likely located at the first 20 nodes, which corresponds to the cities with large MSI, while the last 10 cities with relatively small MSI are almost all blue. This accords with the migration peak among big cities during the Spring Festival period that aggravated the spread of the virus." }, { "heading": "7 CONCLUSION", "text": "We conclude our paper with the following discussions. First, the proposed DM can be extended beyond the LSEM assumption. A generalized definition of the DM from a graphical perspective is given in Section F.2 of the appendix without the LSEM. 
Second, due to possibly unmeasured confounders in our real data, such as cities’ features and periodic effect, we may consider extending our model with a new topological order that contains confounders for a wider utility, such as forcing the topological order of k confounders as 1 to k followed by the exposure as 1 + k. Third, our proposed identification constraint can be generalized to other background knowledge." }, { "heading": "8 ACKNOWLEDGMENTS", "text": "The authors are grateful to the anonymous reviewers for valuable comments and suggestions. Rui Song’s research is partially supported by a grant from the National Science Foundation DMS-1555244." }, { "heading": "A ADDITIONAL GRAPH TERMINOLOGY", "text": "Given the node set, the weighted DAG can be uniquely determined by its weighted adjacency matrix, i.e., there is a one-to-one transformation between G and B. Suppose the graph nodes X in G are sorted in its topological order (corresponding to elementary transformation of the matrix), then the matrix B is strictly upper triangular with the diagonal elements as 0. Therefore, for an identity matrix I with the same dimension as B, I −B> is invertible since all its diagonal elements are 1 (positive)." }, { "heading": "B ALGORITHM: ANOCE-CVAE", "text": "The first part of the ANOCE-CVAE algorithm is on learning causal DAG from the observational data in the constrained space, by minimizing the loss function in Equation 7 using blackbox stochastic optimization solvers. Here, to minimize the loss in 7 and satisfy both h1(B) = 0 and h2(B) = 0, we simultaneously update λ1 and λ2 and increase c and d to infinity, by modifying the basic technique in Yu et al. (2019), corresponding to Part One.II.A.b and Part One.II.B in Algorithm 1. The second part is to estimate causal effects in the ANOCE table from the learned causal structure, based on the results from Theorem 3.1. Here, we numerically calculate the natural indirect effect for mediator IM based on Corollary F.1 in step IV of the second part.\nAlgorithm 1 Analysis of Causal Effects via Constrained VAE (ANOCE-CVAE)\nGlobal: Dataset X = {A,M, Y }, sample size n, dimension of mediators p, max iteration K, number of epoch H , original learning rate r0, tolerance of constraint to zero δ, parameter update bound U , tuning parameters ρ and ω, and penalty terms c and d;\nLocal: mean and standard variance of µ and σ , mean and standard variance of X µX and σX , weights in multilayer perceptrons of encoder and decoder θ = {W (1),W (2),W (3),W (4)}, Lagrange multipliers λ1 and λ2, penalty terms c and d, (p+ 2)× (p+ 2) matrix B, Loss function L, old and new values of the first constraint hold1 and h new 1 ,\nold and new values of the second constraint hold2 and h new 2 , and learning rate r;\nOutput: estimated matrix B̂, total effect TE, natural direct and indirect effect DE and IE, and natural direct and indirect effect for mediator DM and IM .\nPart One: Generate matrix B̂ via Constrained Variational Auto-Encoder; I. Initialization: λ1 ← 0; λ2 ← 0; c← 1; d← 1; r ← r0; B = 0(p+2)×(p+2); hold1 ←∞; hold2 ←∞; II. For step k, k = 1, · · · ,K:\nA. While c× d < U : a). For epoch i, i = 1, · · · , H:\n1. Build Encoder (µ , σ )← (Ip+2 −B>)MLP{X,W (1),W (2)}; 2. Build Decoder (µX , σX)←MLP{(Ip+2 −B>)−1 ,W (3),W (4)}; 3. Calculate values of constraints hnew1 ← h1(B) and hnew2 ← h2(B),\nand the loss function L← Lc,d(B,W (1),W (2),W (3),W (4), λ1, λ2); 4. Use backward to update parameters {B,W (1),W (2),W (3),W (4)}; 5. Update learning rate r;\nb). 
If hnew1 > ρh old 1 and h new 2 > ρh old 2 : c← c× ω; d← d× ω;\nElseif hnew1 > ρh old 1 and h new 2 < ρh old 2 : c← c× ω; Elseif hnew1 < ρh old 1 and h new 2 > ρh old 2 : d← d× ω;\nElse: Break; B. hold1 ← hnew1 ; hold2 ← hnew2 ; λ1 ← λ1 × hnew1 ; λ2 ← λ2 × hnew2 ; C. If hnew1 < δ and h new 2 < δ: Break;\nIII. Output B̂ ← B;\nAlgorithm 2 ANOCE-CVAE (cont.)\nPart Two: Estimate causal effects in ANOCE based on matrix B̂;\nI. According to Equation 3: A. Get γ̂ as the direct effect DE; B. Get α̂ as the effect of A on M , β̂, and the inside matrix B̂M ; II. Get ζ̂ ≡ (Ip −B>M )−1α̂ that represents the causal effect of A on M ; III. Get β̂>ζ̂ that represents the total natural indirect effect IE;\nFor each mediator Mi, i = 1, · · · , p: Define the natural direct effect for Mi as DM [i] = α̂[i]ζ̂[i]; IV. Get the natural indirect effect for mediator:\nFor each mediator Mi, i = 1, · · · , p: A. Delete Mi from the matrix B̂ and get B̂′i; B. Repeat step II. with reduced matrix B̂′i and get β̂′ and ζ̂ ′; C. Calculate the effect difference as the total mediation effect β̂>ζ̂ − β̂′ > ζ̂ ′ D. Define the natural indirect effect for Mi as IM [i] = {β̂>ζ̂ − β̂′ > ζ̂ ′} −DM [i];\nV. Define the total effect TE= γ̂ + β̂>ζ̂." }, { "heading": "C ADDITIONAL SIMULATION STUDIES", "text": "In this section, we give more details on simulation studies to investigate the finite sample performance of the proposed method for learning causal effects with multiple mediators, in comparison to the popular causal discovery methods, including the PC, the ICA-LiNGAM, the NOTEARS, and the DAG-GNN. The computing infrastructure used is a virtual machine in the compute engine of Google Cloud Platform with 8 processor cores and 32GB memory. The average runtime for each result is around 1 to 2 hours." }, { "heading": "C.1 DATA GENERATION", "text": "We first generate a random DAG from the Erdős-Reńyi (ER) or the Scale-Free (SF) network (Barabási & Albert, 1999) with an expected node degree. Then, we remove all in-edges (from precedent nodes) of the first node as A and remove all out-edges (from descendent nodes) of the last node as Y , and thus, the remaining nodes are the mediators M . Edges in DAGs for all scenarios are randomly assigned with weights (w ∈ {−1, 1} with equal probability) to obtain the weighted adjacency matrix B. Specifically, the true DAGs in Scenarios 3 and 4 are generated from the Erdős-Reńyi (ER) model with an expected degree as 2, where we set number of nodes d = 12 (i.e. p = 10) in Scenario 3 and d = 32 (i.e. p = 30) in Scenario 4. Note that we consider fully identifiable models in Section 5.1 so that it is meaningful to evaluate causal effects from the estimated graph. In Section 5.2, we repeat the above generation procedure with d = 12 to generate the true graph from both the ER and the Scale-Free (SF) networks with the expected degree as 1, 2, and 4, denoted as Cases ER1, ER2, ER4, SF1, SF2, and SF4, respectively. Here, to be consistent with Section 5.1, we refer Scenario 3 (generated by the ER with the degree as 2) as Case ER2.\nThe synthetic datasets {A,M, Y } are generated from Model 1, where the error variables in ≡ [ A, > M , Y ]\n> independently follow a normal distribution with mean 0 and noise 0.5 except for the binary exposure in Scenario 4∗. Here, we add a baseline of 1.0 on the outcome Y . 
Note that the Gaussian exposure in Scenario 4 and the binary exposure in Scenario 4∗ have the same mean and noise and thus their results are comparable.\nC.2 IMPLEMENTATION DETAILS\nWe detail the implementation for the proposed ANOCE-CVAE and comparison partners as follows:\n• ANOCE-CVAE: The ANOCE-CVAE is implemented based on PyTorch (Paszke et al., 2017), using Adam (Kingma & Ba, 2014) to minimize the loss function in Equation 7. We set the batch\nsize as 25 for n = 50 and 100 for n = 500 with hidden nodes as p2, the initial learning rate as 0.003 with an update rule as r ← r/{log(c) + log(d) + 0.01} where c and d are penalty terms for two constraints, and the parameter update bound as U = 1020, for all settings. Following the recommendation of Yu et al. (2019), we find that their tuned parameters ρ = 0.25 and ω = 10 also work well in our settings, and we adopt the Huber-norm regularization of B for a better convergence. Here, the variational posterior and the likelihood are parameterized as Gaussian with unit noise to approximate the underlying true model. The code is publicly available at an anonymous repository at https://github.com/anoce-cvae/ANOCE-CVAE.\n• PC (Spirtes et al., 2000): We set the Fisher-z test for the PC algorithm with the p-value as 0.01 for all settings. The implementation is available through the py-causal package at https://github.com/bd2kccd/py-causal, written in highly optimized Java codes. Also see examples here https://github.com/bd2kccd/py-causal/blob/development/ example/py-causal%20-%20PC-ALL%20in%20Action.ipynb.\n• ICA-LiNGAM (Shimizu et al., 2006): The ICA-LiNGAM assumes linear non-Gaussian additive model to recover the weighted adjacency matrix. We implement the ICA-LiNGAM with default hyper-parameters through the lingam package for all settings. See their repository at https: //github.com/cdt15/lingam.\n• NOTEARS (Zheng et al., 2018): The NOTEARS estimates the weighted adjacency matrix by formulating the optimization with an acyclicity constraint. The implementation is available at their repository at https://github.com/xunzheng/notears. We set the loss function as the least square error with the L1 regularization. We find the NOTEARS is sensitive to the choice of the L1 regularization in our settings. For a fair comparison, we set the L1 penalty parameter as 0.03 (instead of the default 0.1) for all settings, which achieves an overall good performance in most cases. Note the author modified their acyclicity constraint in their codes to be the one used in Yu et al. (2019) (i.e. Equation 4). We also use the same acyclicity constraint for NOTEARS, DAGGNN, and our method for a fair comparison. Other hyper-parameters are set as default in their repository.\n• DAG-GNN (Yu et al., 2019): The DAG-GNN incorporates the variational auto-encoder into causal discovery with a modified smooth characterization on acyclicity in the evidence lower bound as the loss function. Codes are available at their repository at https://github.com/ fishmoon1234/DAG-GNN based on PyTorch (Paszke et al., 2017). We set the same hyperparameters used in our ANOCE-CVAE for a fair comparison. Specifically, we use Adam (Kingma & Ba, 2014) to minimize the loss function, and set the batch size as 25 for n = 50 and 100 for n = 500 with hidden nodes as p2. The initial learning rate is set as 0.003 with an update rule as r ← r/{log(c0) + 0.01} where c0 is penalty term for the acyclicity constraint. 
The rest settings are the same as the default in their codes.\nIn the comparison studies (see Section 5.2 and C.4), we use a uniform graph threshold as 0.3 (commonly used in current literature) for all algorithms to prune the noise edges for a fair comparison. In addition, we also provide the results under the graph threshold as 0.4 for additional comparison." }, { "heading": "C.3 ADDITIONAL RESULTS OF ANOCE-CVAE", "text": "In this section, we provide additional simulation results for the ANOCE-CVAE. Following Section 5.1, the numerical results are summarized in Table 2 (for Scenario 1 to 3) and Table 3 (for Scenario 4 and 4∗), including the bias of the estimated TE, DE, IE, DM and IM for each mediator with their standard error, over 100 replications. Note that due to limited space, we save the numerical results of the IM in Table 3.\nFrom the results in Table 2, it is clear that the estimated TE, DE, IE, DM and IM for each mediator are close to the true values as the sample size increases in Scenario 1 to 3, which indicates the good performance of our proposed method on identifying the causal effects regardless of the sparsity. With the expected node degree increasing, one can observe a slightly larger bias and standard error of the estimated causal effects as expected, as shown in Table 2. Based on Table 3, the results of Scenario 4 and Scenario 4∗ are merely identical under different sample sizes, indicating our proposed method can handle either discrete or continuous exposure. In addition, by comparing the results of Scenario 3 and 4 where we fix the expected node degree as 2, one can observe a slightly larger bias\nof the estimated causal effects but of a similar small scale, as the dimension of mediators p increases, which implies the stability of our method under the high-dimensional setting." }, { "heading": "C.4 ADDITIONAL COMPARISON STUDIES", "text": "This section provides more results on comparison studies against the existing methods. The data generation and implementation details are provided in Section C.1 and C.2. Following Section 5.2\nand C.2, we use a graph thresholds as 0.3 (commonly used in current literature) or 0.4 (for additional comparison) in all algorithms to prune the noise edges for a fair comparison.\nThe estimated graphs (after pruning) are evaluated by three metrics: the false discovery rate (FDR), the true positive rate (TPR), and the structural Hamming distance (SHD, the smallest number of edge additions, deletions, and reversals to convert the estimated graph into the true DAG). Here, the SHD takes into account both false positives and negatives and a lower SHD indicates a better estimate of the causal graph. 
The FDR, TPR, and SHD of the averaged estimated matrix B̂> with their standard deviation over 100 replications are reported in Table 4 for the graph threshold as 0.3 and in Table 5 for the graph threshold as 0.4, under different methods for all six cases with sample size n = 500.\nBesides Figure 3 for Case ER2 with a graph threshold as 0.3 shown in the main text, we also illustrate the averaged estimated matrix B̂> over 100 replications under different methods for Case ER1, ER4,\nSF1, SF2, and SF4 with a graph threshold as 0.3 in Figure 5, 6, 7, 8, 9, respectively, and for Case ER1, ER2, ER4, SF1, SF2, and SF4 with a graph threshold as 0.4 in Figure 10, 11, 12, 13, 14, 15, respectively, under n = 500.\nFrom Table 4 and 5, it is clear that our algorithm performs the best among the five methods in most cases, followed by the other two score-based methods, i.e. the NOTEARS and the DAG-GNN. While the traditional methods (the PC and the ICA-LiNGAM) perform the worst with large SHD and small TPR. This finding supports the choice of the extension on the score-based method. Moreover, by comparing our performance with the DAG-GNN, one can observe a substantial gain in terms of the SHD and the TPR in most cases with comparable FDR. This validates the improvement of our method over the DAG-GNN by introducing the background knowledge in the causal discovery. Another supports are illustrated in Figure 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, and 15 for different settings, where the averaged estimated matrix B̂> under the ANOCE-CVAE is approximately the same as the ground true graph B> when n = 500. However, the PC and the ICA-LiNGAM can hardly recognize the true causal pattern. In addition, methods have a slightly better performance in terms of FDR and SHD while a slightly worse performance in terms of TPR under the graph threshold as 0.4, in comparison to the results under the graph threshold as 0.3." }, { "heading": "D ADDITIONAL REAL DATA RESULTS", "text": "In this section, we provide additional real data analysis on the COVID-19." }, { "heading": "D.1 DATA COLLECTION", "text": "To better characterize the causality of the virus spreading under the Hubei lockdowns in China, we assume: 1) Hubei was the centre of the COVID-19 outbreak in China (Zhou et al., 2020); 2)\nthe decreased migration outside Hubei was largely stimulated by the lockdown; 3) individual who departed to one destination would not return to the original departure due to travel restrictions in China. Under the above assumptions, it is reasonable to use a temporal causal relationship to describe the spread of COVID-19 under 2020 Hubei lockdowns as in Figure 16.\nNext, we give more details on how to present components in Figure 16 with appropriate variables. First, we set the exposure A as if Hubei is on lockdown, 0 for unlocked (before and on Jan 23rd) and 1 for locked (on and after Jan 24th). To select the candidate cities that contain most potential infected people, we rank cities outside Hubei by their received Wuhan (identified over 60% cases in China reported by the NHC) migration between Jan 1st, 2020 and Jan 22nd, 2020 (before the lockdown), and choose the top 30 cities (account for 69.17% of total Wuhan migration) as mediators M , to control the noise. We use the daily migration scale index (MSI) of each city as the value of each mediator, defined in Baidu Qianxi to describe the migration magnitude. 
By noticing the following facts: 1) it took usually 2 days to diagnose the COVID-19; 2) the estimated median incubation period is 5 days (Lauer et al., 2020); the outcome of interest Y is defined as the daily increase rate of confirmed cases out of Hubei with a one-week (7=2+5) delay. We delete extreme data points (two outbreaks at jails), and the final dataset yields a total of 38 records.\nD.2 INITIAL DATA ANALYSIS\nFigure 17a demonstrates the location of selected cities on the Chinese map with the color representing its cumulative confirmed cases by March 1st, 2020. It can be observed that the selected mediators are either located around Hubei province or are big central cities such as Beijing that has a large population and migration scale. In addition, we provide the partial heat map of the correlation matrix between cities and the treatment A or the outcome Y as illustrated in Figure 17b. Here, one could observe that cities’ MSIs are highly positive correlated with the lockdown while are highly negative correlated with the daily increase rate of confirmed cases." }, { "heading": "D.3 ADDITIONAL RESULTS", "text": "Table 6 lists all the numerical facts of the selected cities, including the population (million), the cumulative migration scale index (MSI) during the data period (Jan 12th to Feb 20th, 2020), the ratio of received Wuhan migration between Jan 1st to Jan 22nd, 2020 (before the lockdown), the cumulative confirmed cases by March 1st, 2020, and cities’ direct (DM ) and indirect effect (IM ). Note the selected 30 cities in Table 6 are sorted according to their cumulative MSI, and its order is used as the order of the mediators in this paper. It can be seen from Table 6 that the population, the cumulative MSI, the ratio of received Wuhan migration, and the confirmed cases are highly correlated among selected cities as expected. Note that we list these factors to assist the interpretation of the results, while none of them is used to estimate the DAG of interest. Therefore, similar values of factors don’t necessarily imply similar causal effects.\nBesides the general causal pattern of cities’ estimated DMs and IMs stated in the main text, we provide more interpretation on the level of the individual city. Here, we compare three groups of cities that are of similar scale (population and migration) or geographic position, to specify our results. First, we compare the results between Beijing and Shanghai, where both cities have comparable population scale and are the center of politic or economic in China (the correlation between the MSI of Beijing and the outcome yields the same value as of Shanghai as 0.50). It can be seen from Table 6 that Beijing and Shanghai have similar positive indirect effects as 0.247 and 0.235, respectively, while Shanghai has a slightly higher direct effect on controlling the virus as -0.069, which is possibly due to the smaller MSI of Shanghai. Second, we compare three cities located at Guangdong province (see Figure 18a and 18b), including Shenzhen, Guangzhou, and Dongguan. All three cities have positive effects, among which Guangzhou yields the highest direct effect as 0.847, followed by Dongguan. The effect size of these cities agrees with their correlation coefficients with the outcome, where the correlation between Guangzhou and Y is 0.61, followed by Dongguan as 0.58 and Shenzhen as 0.50. 
The last comparison is among cities in southeastern China (see Figure 18a and 18b), including Suzhou, Hangzhou, and Wenzhou, all of which have a negative direct effect on the virus control and a positive indirect effect for the virus spread. Here, Wenzhou achieves the largest absolute value of the negative direct effect as -0.650, which conforms to its strict local shelter in home order after the Hubei lockdowns, where the correlation between Wenzhou and the outcome achieves the highest value as 0.83.\nWe summarize all different sources of causal effects in Table 6 as the ANOCE table of 2020 Hubei lockdowns on reducing the COVID-19 spread in China. Note that due to cities’ different levels of control measures on the coronavirus outside Hubei as well as other possible confounders, there are some inconsistency between cities’ DMs and their cumulative MSIs. We leave the extension with confounders for further investigation. Further interpretation of the real data analysis requires domain experts.\nOne can refer to Figure 18a and 18b for cities’ DMs and IMs on the Chinese map. To check the reasonability of our results, we plot the estimated weighted matrix in Figure 4a, where the first node (indexed 0) represents the Hubei lockdowns, the last node (indexed 31) is the daily increase rate out of Hubei, and the middle 30 nodes (indexed 1-30) correspond to 30 selected cities in Figure 4b. From Figure 4a, we can observe: • 1). The color of the first column is almost all blue, indicating that locking Hubei down can reduce the migration of selected cities; • 2). An approximate red upper triangular among first 20 nodes implies a migration trend with positive effects among central cities with large MSI, i.e. relatively smaller cities tend to have positive effects on other cities with relatively larger MSI; • 3). An approximate red lower triangular among last 10 nodes indicates a weaker migration trend with an opposite direction among non-central cities with small MSI; • 4). There is\nalso an almost all blue rectangle in the right top of the estimated matrix, showing non-central cities tend to have negative effects on central cities. Overall speaking, all the above finding accords with the migration trend during the Spring Festival period and intensive mutual communications among central cities in China, though there are also some noisy causal directions opposite with the main trend in each area due to possible confounders and identifiability issue in the linear Gaussian model.\nIn addition, we provide the estimated DAG for the coronavirus data in Figure 18c as the complete spreading network among major cities outside Hubei, where the exposure A, mediators M (cities’ corresponding index can be found in Table 6), and the outcome Y are colored in red, blue and green, respectively. It can be observed that the in-degree is larger than the out-degree for nodes with small index, while an opposite rule is applied for nodes with large index. This finding is consistent with the migration trend identified in our main text. Lastly, we give the spreading network among cities that received most Wuhan migration during the data period, including Beijing, Shanghai, Guangzhou, Shenzhen, Chengdu, Chongqing, Zhengzhou, Changsha, and Xinyang, plus Wuhan, in Figure 18d, to illustrate the partial interaction trend among cities. Each node refers to a city with the color of the node presenting the percentage of received Wuhan migration, ranging from 2.57% to 5.50%." 
}, { "heading": "E TOY EXAMPLE", "text": "Example E.1 Here, we use a toy example of a weighted DAG Gtoy under the LSEM given in Figure 19 to better demonstrate our definitions.\nIn Gtoy, p = 4 mediators are included, where M1 ← 0.2A + 1, M2 ← 0.2M1 + 0.5M4 + 2, M3 ← 0.4A+ 3, M4 ← 0.5M3 + 4, Y = 0.5A+ 1.1M2 + 0.7M3. There are 4 directed path from A to Y :\n1) ‘A→ Y ’ with length 1; 2) ‘A→M3 → Y ’ with length 2; 3) ‘A→M1 →M2 → Y ’ with length 3; 4) ‘A→M3 →M4 →M2 → Y ’ with length 4. Since there exists 3 directed path π∗ ∈ {πAY (Gtoy)} such that the length of π∗ is larger than 2, we have the mediators in Gtoy are interacted. From the weighted DAG Gtoy, we have the direct effect of A on Y is γtoy = 0.5, αtoy ≡ [0.5, 0, 0.4, 0]>, βtoy ≡ [0, 1.1, 0.7, 0]>, and\nB>M toy = 0 0 0 00.2 0 0 0.50 0 0 0 0 0 0 0.5 . Note one may recover Gtoy from the weight matrix Btoy as long as the order of the vertices in Btoy is given. Then, we have\n(Ip −B>M toy )−1 = 1 0 0 00.2 1 0.25 0.50 0 1 0 0 0 0.5 1 . Thus, the indirect effect of A on Y is\nβtoy > (Ip −B>M toy )−1αtoy\n=[0, 1.1, 0.7, 0] 1 0 0 00.2 1 0.25 0.50 0 1 0 0 0 0.5 1 [0.5, 0, 0.4, 0]> =[0, 1.1, 0.7, 0][0.5, 0.2, 0.4, 0.2]>\n=0 + 0.22 + 0.28 + 0 = 0.5.\n(E.1)\nFrom the Equation E.1 and Theorem 3.1, we have the DM of M2 and M3 is 0.22 and 0.28, respectively, while other DM are 0. Note that there is no explicit expression of the natural indirect effect through the directed path from A to Y (IM ) due to the complex interaction among mediators, while we provide its theoretical form in Equation G.6 based on the path method in Wright (1921) and Nandy et al. (2017). Specifically, for each directed path from A to Y , we have:\n1) the direct effect through ‘A→ Y ’ (DE): 0.5; 2) the effect of path ‘A→M3 → Y ’: 0.4× 0.7 = 0.28; 3) the effect of path ‘A→M1 →M2 → Y ’: 0.5× 0.2× 1.1 = 0.11; 4) the effect of path ‘A→M3 →M4 →M2 → Y ’: 0.4× 0.5× 0.5× 1.1 = 0.11; 5) so the indirect effect of A on Y (IE) is 0.11 + 0.11 + 0.28 = 0.5; 6) and the total effect of A on Y (TE) is 0.5 + 0.5 = 1.0; 7) the indirect effect for M1 (IM1) corresponds to the effect of path ‘A→M1 →M2 → Y ’, thus is 0.11; 8) the indirect effect for M2 (IM2) is zero since there is no path first goes through M2 followed by other mediators; 9) the indirect effect for M3 (IM3) corresponds to the effect of path ‘A→M3 →M4 →M2 → Y ’, thus is 0.11; 10) the indirect effect for M4 (IM4) corresponds to the effect of path ‘A→M3 →M4 →M2 → Y ’, thus is 0.11.\nWe can calculate the last edge-specific effect directly from its definition. Since there is no M1 → Y and M4 → Y in Gtoy, we have LEtoy1 = LE toy 4 = 0. By deleting M2 → Y in Gtoy, the total effect reduced by 0.11 + 0.11 = 0.22, so LEtoy2 = 0.22; similarly, after deleting M3 → Y , the total effect decreases 0.28, thus LEtoy3 = 0.28. One may notice that the last edge-specific effects are equal to the DMs, and the additive of the last edge-specific effects is exact the last step of calculation βtoy\n> (Ip −B>M toy )−1αtoy in Equation E.1." }, { "heading": "F CONNECTION TO LITERATURE", "text": "We establish the connection between our proposed method to the literature from three different angles. First, we show that the individual mediation effect defined in Chakrabortty et al. (2018) can be decomposed into our defined DM and IM when the LSEM assumption holds. Next, we give an equivalent definition of the DM through a type of special edge (last edge) in the causal graph. 
Lastly, we prove that the proposed DM is consistent with the interventional effect via a particular mediator defined in Vansteelandt & Daniel (2017) under the LSEM." }, { "heading": "F.1 FROM INDIVIDUAL MEDIATION VIEWPOINT", "text": "Chakrabortty et al. (2018) defined the individual mediation effect under the LSEM as follows.\nDefinition F.1 (Chakrabortty et al., 2018) Individual mediation effect for Mi: ηi = [ E{Mi|do(A = a+ 1)} − E{Mi|do(A = a)} ] × [ E{Y |do(Mi = mi + 1)} − E{Y |do(Mi = mi)} ] .\n(F.1)\nIn the following theorem, we show that the summation of ηi is strictly larger than the IE if the mediators are not parallel. The proof is given in Section G.2.\nTheorem F.1 If there exists at least one directed path π∗ ∈ {πAY (G)} such that the length of π∗ is larger than 2, and the element in B is nonnegative, then∑\nηi > IE. (F.2)\nRemark F.1 From the above theorem, it is clear that the mediator effect defined in Chakrabortty et al. (2018) is not appropriate for interpreting the decomposition of the indirect effect, when there exists interaction among mediators (a common situation as described in the introduction). Here, we keep the condition that the element in B is nonnegative, as the multiple count mediation effects in Chakrabortty et al. (2018) may cancel out in some cases and their summation would equal to the IE by chance.\nInspired by the proof of Theorem F.1, the mediator effect ηi can be decomposed into two parts, the natural direct and indirect effect for i-th mediator, as shown in the following corollary.\nCorollary F.1 Under assumptions (A1-A3) and Model 1, we have\nηi = DMi + IMi. (F.3)\nRemark F.2 Corollary F.1 together with the definition of ηi in Chakrabortty et al. (2018) provides a feasible way to numerically calculate the natural indirect effect IMi. Specifically, by deleting the mediator Mi in the causal graph, the reduced treatment effect corresponds to ηi, then IMi = ηi − DMi, where the explicit expression of the DMi is provided in Theorem 3.1. See more implementation details in Section B." }, { "heading": "F.2 FROM GRAPHICAL PERSPECTIVE", "text": "Next, we give the definition of the edge-specific effect following Avin et al. (2005). Suppose a directed edge of interest as Xi → Xj in a weighted DAG G. Define a new weighted DAG G′i,j by deleting the directed edge Xi → Xj in G, i.e. G′i,j ≡ G \\ (Xi → Xj).\nDefinition F.2 (Avin et al., 2005) Edge-specific effect:\nET (Xi, Xj) = TEG − TEG′i,j , (F.4)\nwhere TEG means the total effect in graph G.\nWe next give an equivalent definition of our proposed DM from a graphical perspective. Let the edge in G that starts with i-th mediator and ends with node Y , i.e. Mi → Y , as the i-th last edge. Denote the graph G deleting the i-th last edge (Mi → Y ) as G′i. We define the ith last edge-specific effect as\nDefinition F.3 Last edge-specific effect for Mi:\nLEi = { TEG − TEG′i , if there exists edge Mi → Y in G; 0, otherwise. (F.5)\nBy Theorem 3.1, we have (Ip −B>M )−1α is the causal effect of A on M . Let ζ ≡ (Ip −B>M )−1α, with its i-th element ζi ≡ {(Ip −B>M )−1α}i. Next, we show that the i-th last edge-specific effect can be presented as βiζi under the LSEM in the following theorem, where βi is the i-th element of the vector β and corresponds to the weight of the edge Mi → Y . The proof can be found in Section G.3.\nTheorem F.2 Under assumptions (A1-A3) and Model 1, we have\nLEi = βiζi. (F.6)\nBased on Theorem 3.1, the natural direct effect of Mi on Y can be expressed as DMi = βiζi. 
Thus, with the result of Theorem F.2, it is easy to show the following corollary.\nCorollary F.2 Under assumptions (A1-A3) and Model 1, the natural direct effect of Mi is equal to the i-th last edge-specific effect:\nLEi = DMi = βiζi. (F.7)\nRemark F.3 Here, both definitions describe the direct impact of one mediator Mi on the outcome. The natural direct effect of a particular mediator Mi can be understood as the influence when removing the direct edge between Mi and Y . Thus, we have the equivalence between two definitions.\nThen, we can decompose the total natural indirect effect into p last edge-specific effects or p DMs as the following additive form, based on Theorem 3.2 and Corollary F.2.\nCorollary F.3 Under assumptions (A1-A3) and Model 1, we have\nIE = β>ζ = p∑ i=1 βiζi = p∑ i=1 DMi = p∑ i=1 LEi. (F.8)\nIn fact, based on the uniqueness of each last edge, the natural indirect effect can be decomposed into p last edge-specific effect regardless of the LSEM setting through the graphical perspective. We give the following intuitive conclusion. The proof can be found in Section G.4.\nTheorem F.3 The IE can be decomposed through LEs as:\nIE = p∑ i=1 LEi. (F.9)\nRemark F.4 One can view the last edge-specific effect as the generalized definition of the natural direct effect for mediator without the LSEM assumption." }, { "heading": "F.3 FROM INTERVENTIONAL EFFECT LEVEL", "text": "Finally, we show the consistency of our defined DM to the interventional effect via a particular mediator defined in Vansteelandt & Daniel (2017) under the LSEM.\nDefinition F.4 (Vansteelandt & Daniel, 2017) Under assumptions (A1-A3), the interventional effect via Mi is\nξi = ∑\nm1∈M1\n· · · ∑\nmp∈Mp\n[ E(Y |A = a,Mi = mi,Ωi = oi)P (Ωi = oi|A = a)\n× { P (Mi = mi|A = a+ 1)− P (Mi = mi|A = a) }] ,\n(F.10)\nwhere Mi is the support of Mi, oi = [m1, · · · ,mi−1,mi+1, · · · ,mp], P (M = m|A = a) is the probability of M = m when setting A = a.\nTheorem F.4 Under assumptions (A1-A3) and Model 1, we have\nDMi = ξi,\nRemark F.5 The proof can be found in Section G.5. Based on Definition 3.2 and Equation F.10, both the proposed DM and the effect defined in Vansteelandt & Daniel (2017) contain the information of the causal effect of A on the mediator Mi, i.e. P (Mi = mi|A = a+ 1)− P (Mi = mi|A = a)." }, { "heading": "G TECHNICAL PROOFS", "text": "" }, { "heading": "G.1 PROOF OF THEOREM 3.1", "text": "Proof G.1 In this proof, we will give the explicit expressions of causal effects defined under the LSEM. First, Equation 3 is equivalent to A ≡ A,M = αA+B>MM + M ,Y = γA+ β>M + Y . (G.1) Based on M = αA+B>MM + M , by moving B > MM to the left-hand side, we have\n(Ip −B>M )M = αA+ M .\nSuppose the mediators are sorted in the topological order (a series of elementary transformation of the matrix), then the matrix B>M is strictly upper triangular with the diagonal element as 0. Thus, we have Ip − B>M is invertible, then Ip − B>M under its original order should be also invertible (any invertible matrix after elementary transformation is still invertible).\nTherefore, we can rewrite M as a purely function of A plus the error term as follows.\nM = (Ip −B>M )−1αA+ (Ip −B>M )−1 M . (G.2)\nThen we replace mediators in Equation G.1 with Equation G.2 and obtain A ≡ A, M = (Ip −B>M )−1αA+ (Ip −B>M )−1 M , Y = γA+ β>M + Y\n= γA+ {β>(Ip −B>M )−1α}A+ {β>(Ip −B>M )−1 M + Y }.\n(G.3)\nNext, we show how to get the explicit expressions of E{Y |do(A = a)} under the LSEM. 
Following the results in Rosenbaum & Rubin (1983), under the assumption (A2), we have P{M |do(A = a)} = P (M |A = a), and thus,\nE{M |do(A = a)} = E(M |A = a).\nSimilarly, we can get E{Y |do(A = a)} = E(Y |A = a) under the assumption (A1), and E{Y |do(A = a,M = m)} = E(Y |A = a,M = m) under the assumption (A3). Based on above results and Equation G.3, we have\nE{Y |do(A = a)} = E{Y |A = a} =E{γA+ β>M + Y |A = a} =γa+ β>E{M |A = a} =γa+ β>E{(Ip −B>M )−1αA+ (Ip −B>M )−1 M |A = a} =γa+ β>(Ip −B>M )−1αa,\n(G.4)\nwhere the first ‘=’ is held under the assumption (A1), the second and forth ‘=’ are given by Equation G.3 that Y = γA+ β>M + Y and M = (Ip −B>M )−1αA+ (Ip −B>M )−1 M . Following the same calculation procedure of E{Y |do(A = a)}, we next give the natural direct effect under assumptions (A1-A3) and Model 1 as\nDE = E{Y |do(A = a+ 1,M = m(a))} − E{Y |do(A = a)} = {γ(a+ 1) + β>m(a)} − {γa+ β>m(a)} = γ,\nwhere the first ‘=’ is given by the definition of the DE.\nSimilarly, the natural indirect effect is\nIE = E{Y |do(A = a,M = m(a+1))} − E{Y |do(A = a)} = {γa+ β>m(a+1)} − {γa+ β>m(a)} = β>(Ip −B>M )−1α(a+ 1)− β>(Ip −B>M )−1αa = β>(Ip −B>M )−1α.\nThus, the total effect of A on Y is\nTE = E{Y |do(A = a+ 1)} − E{Y |do(A = a)} = DE + IE = γ + β>(Ip −B>M )−1α.\nFinally, we give the expression for the natural direct effect of Mi on Y under the LSEM. Based on the assumption (A2) and Equation G.2, we have\nE{Mi|do(A = a+ 1)} − E{Mi|do(A = a)} =E{Mi|A = a+ 1} − E{Mi|A = a} ={(Ip −B>M )−1α}i(a+ 1)− {(Ip −B>M )−1α}ia ={(Ip −B>M )−1α}i,\n(G.5)\nwhere {(Ip −B>M )−1α}i is the i-th element of the vector (Ip −B>M )−1α.\nThen, based on Y = γA+ β>M + Y and the assumption (A3), we have,\nE{Y |do(A = a,Mi = m(a)i + 1,Ωi = o (a) i )} − E{Y |do(A = a)}\n=E{Y |A = a,Mi = m(a)i + 1,Ωi = o (a) i } − E{Y |A = a}\n=γa+ β> m (a) 1 ... m (a) i + 1\n... m (a) p\n − γa− β> m (a) 1 ... m (a) i ...\nm (a) p\n = β>1i = βi,\nwhere 1i is a p× 1 vector with the i-th element as 1 while others qual to 0, and βi is the i-th element of the vector β.\nThus, we have DMi = [ E{Mi|do(A = a+ 1)} − E{Mi|do(A = a)} ] × [ E{Y |do(A = a,Mi = m(a)i + 1,Ωi = o (a) i )} − E{Y |do(A = a)} ] ,\n={(Ip −B>M )−1α}i × βi =βi{(Ip −B>M )−1α}i." }, { "heading": "G.2 PROOF OF THEOREM F.1", "text": "Proof G.2 1. If there is no directed path π∗ ∈ {πAY (G)} such that the length of π∗ is larger than 2, i.e. the length of π∗ ∈ {πAY (G)} is either 1 or 2. Here, the path with length 1 corresponds to A → Y , and paths with length 2 are A → Mi → Y with possibly i = 1, · · · , p. Thus, there is no interaction among mediators.\nBy the definition of the LSEM, we have BM = 0p×p, where 0p×p is a p× p zero matrix. Following the path method (the causal effect of Xi on Xj along a directed path from Xi → Xj in G can be calculated by multiplying all edge weights along the path) illustrated in Wright (1921) and Nandy et al. (2017), we could obtain ∑ ηi = ∑ i βiαi = IE. (See a toy example provided in section E to illustrate how to use the path method to manually compute the causal effects.)\n2. If there exists at least one directed path π∗ ∈ {πAY (G)} such that the length of π∗ is larger than 2, and the element in B is nonnegative, we have BM 6= 0p×p. 
Without loss of generality, suppose there exists Mi ∈M with a set of directed path that starts with A, contains Mi, then goes through other mediators, and ends with Y , denoted each path in such set as πi,j = {A→ · · · →Mi · · · → · · · → Y } for j = 1, · · · , ni, where ni is the size of such path set for Mi, and the weights of edges in πi,j is positive. Note the set {πi,j} excludes the paths end with Mi → Y . Let eπi,j denote the causal effect of A on Y through directed path πi,j . Based on the path method in Wright (1921) and Nandy et al. (2017) with the definition of IMi, we have its theoretical form as\nIMi = ni∑ j=1 eπi,j . (G.6)\nBy Equation G.5 and the definition of ηi, we have its first multiplier as E{Mi|do(A = a+ 1)} − E{Mi|do(A = a)} = {(Ip −B>M )−1α}i, which is also the first multiplier in both DMi and IMi.\nAnd the second multiplier of ηi can be expressed as E{Y |do(Mi = mi + 1)} − E{Y |do(Mi = mi)}\n=E{Y |do(Mi = m(a)i + 1)} − E{Y |do(Mi = m (a) i )}\n=E{Y |do(A = a,Mi = m(a)i + 1)} − E{Y |do(A = a,Mi = m (a) i )},\n=E{Y |do(A = a,Mi = m(a)i + 1)} − E{Y |do(A = a)},\nwhere m(a)i is the value of Mi when setting do(A = a). Here, the first ‘=’ is valid since mi can be arbitrary number, and the second and third ‘=’ are based on the equivalent interventions.\nBased on the technique of plus and minus the same term, we decompose the second multiplier of ηi into two parts as follows\nE{Y |do(A = a,Mi = m(a)i + 1)} − E{Y |do(A = a)} = [ E{Y |do(A = a,Mi = m(a)i + 1,Ωi = o (a) i )} − E{Y |do(A = a)} ] ︸ ︷︷ ︸\nthe second multiplier ofDMi + [ E{Y |do(A = a,Mi = m(a)i + 1)} − E{Y |do(A = a,Mi = m (a) i + 1,Ωi = o (a) i )} ] ︸ ︷︷ ︸\nthe second multiplier of IMi (G.7)\nwhere Ωi = M \\Mi is the sets of mediators except Mi, and o(a)i is the value of Ωi when setting do(A = a). Here, the first term in the above equation corresponds to the second multiplier of DMi, while the second term is the second multiplier of IMi.\nThus, the summation of ηi is∑ ηi = ∑ i {[ E{Mi|do(A = a+ 1)} − E{Mi|do(A = a)} ] × [ E{Y |do(Mi = mi + 1)} − E{Y |do(Mi = mi)} ]}\n= ∑ i {DMi + IMi} = ∑ i DMi + ∑ i IMi = IE + ∑ i ni∑ j=1 eπi,j ,\nwhere the first ‘=’ is from Definition F.1, the second ‘=’ is given by Equation G.7 and Definition 3.2 and 3.3, and the last ‘=’ comes from Theorem 3.2 and the theoretical form of IM in Equation G.6.\nHere, we have eπi,j > 0 since the weights of edges in πi,j is positive based on the path method in Wright (1921) and Nandy et al. (2017). Then, ∑ i ∑ni j=1 eπi,j is also strictly larger than 0. Therefore,\nwe have ∑ ηi > IE." }, { "heading": "G.3 PROOF OF THEOREM F.2", "text": "Proof G.3 1. If there doesn’t exist edge Mi → Y in G, then by definition we have βi = 0. Thus, LEi = βiζi = 0.\n2. If there exists edge Mi → Y in G. Suppose there is a directed path set with size mi associated to the edge Mi → Y , where each directed path π̃i,j starts with node A and ends with Mi → Y , denoted as π̃i,j = {A→ · · · → · · · →Mi → Y } for j = 1, · · · ,mi.\nLet eπ̃i,j denote the causal effect of A on Y through directed path π̃i,j , e (A,Mi) π̃i,j\nbe the causal effect of A on Mi through directed path π̃i,j , and e(Mi,Y ) is the causal effect of Mi on Y through directed edge Mi → Y . Following the path method in Wright (1921) and Nandy et al. 
(2017), we have eπ̃i,j = e (A,Mi) π̃i,j e(Mi,Y ).\nThen the i-th last edge-specific effect is equal to the summation of the effect through each path π̃i,j , i.e.,\nLEi = ni∑ j=1 eπ̃i,j = ni∑ j=1 e (A,Mi) π̃i,j e(Mi,Y ) = e(Mi,Y ) ni∑ j=1 e (A,Mi) π̃i,j .\nHere, by the similar argument based on the path method, we have e(Mi,Y ) = βi and ∑ni j=1 e (A,Mi) π̃i,j as the total causal effect of A on Mi.\nRecall that ζi ≡ {(Ip − B>M )−1α}i is the causal effect of A on Mi. Therefore, the i-th LE is the product of the causal effect of A on Mi and the causal effect of Mi on Y , i.e.,\nLEi = βiζi." }, { "heading": "G.4 PROOF OF THEOREM F.3", "text": "Proof G.4 Given a general DAG G with nodes {A,M, Y }, let the union of all directed paths that contain the i-th last edge as τi = {π : A → · · · → Mi → Y }, i = 1, · · · p. Here, we have τi = {π̃i,j}1≤j≤mj established in Section G.3. It is clear that the union set of τi in G is equal to the set of all directed paths start with A and end with node Y (except A→ Y ) in G as⋃\ni\nτi = {πAY (G)} \\ {A→ Y }.\nAlso, based on the uniqueness of each last edge, τi is pairwise disjoint, i.e. τi ⋂ τj = ∅, ∀i 6= j.\nSince the IE is defined as the total causal effect of A on Y that goes through mediators, we have the IE equal to the causal effect that goes through the set {πAY (G)} \\ {A→ Y }, i.e. the IE equal to the causal effect that goes through set ⋃ i τi. Based on the mutual disjoint property of τi, we have the IE is exactly the summation of the causal effect through τi. Lastly, from the definition of LEi, we have\nIE = p∑ i=1 LEi." }, { "heading": "G.5 PROOF OF THEOREM F.4", "text": "Proof G.5 The proof of the consistency of our defined DM to the interventional effect ξi can be completed based on Equation 3 under assumptions (A1-A3) and Model 1.\nRecall the definition in Equation F.10, we have\nξi = ∑\nm1∈M1\n· · · ∑\nmp∈Mp\n[ E(Y |A = a,Mi = mi,Ωi = oi)P (Ωi = oi|A = a)\n× { P (Mi = mi|A = a+ 1)− P (Mi = mi|A = a) }] .\n= ∑\nm1∈M1\n· · · ∑\nmp∈Mp\n{ E(Y |A = a,Mi = mi,Ωi = oi)P (Ωi = oi|A = a)P (Mi = mi|A = a+ 1)\n− E(Y |A = a,Mi = mi,Ωi = oi)P (Ωi = oi|A = a)P (Mi = mi|A = a) } .\nGiven A = a, the value of Mi is m (a) i and Ωi takes o (a) i ; while when setting A = a+ 1, the value of Mi is m (i) a+1. Therefore, we have P (Mi = mi|A = a) = 1 if mi = m (a) i otherwise is 0, and P (Ωi = oi|A = a) = 1 if oi = o(a)i otherwise is 0. Under assumptions (A1-A3), we have\nξi = E(Y |A = a,Mi = m(a+1)i ,Ωi = o (a) i )− E(Y |A = a,Mi = m (a) i ,Ωi = o (a) i ).\nThen, based on the LSEM that Y = γA+ β>M + Y , we can further obtain that\nξi =γa+ β > m (a) 1 ... m (a+1) i ...\nm (a) p\n − γa− β> m (a) 1 ... m (a) i ...\nm (a) p\n = βi{m(a+1)i −m (a) i }.\nFrom Equation G.2, we have\nξi = βi [ {(Ip −B>M )−1α}i(a+ 1)− {(Ip −B>M )−1α}ia ] = βi{(Ip −B>M )−1α}i.\nThus, under assumptions (A1-A3) and Model 1, we have\nDMi = ξi." } ]
2021
null
SP:75ea5f45677f0daa8a50a6e74737cfd7afc9f817
[ "The paper presents an interesting analysis of MLP and convnets, where they show a gap between the number of required training examples to generalize well. They show that due to orthogonality invariance in MLP training, then more examples are required compare to convnet, where one example is needed. This approach, which relies on an older result, provides an intuition as to the success of resnet." ]
Convolutional neural networks often dominate fully-connected counterparts in generalization performance, especially on image classification tasks. This is often explained in terms of “better inductive bias.” However, this has not been made mathematically rigorous, and the hurdle is that a sufficiently wide fully-connected net can always simulate the convolutional net. Thus the training algorithm plays a role. The current work describes a natural task on which a provable sample complexity gap can be shown, for standard training algorithms. We construct a single natural distribution on R^d × {±1} on which any orthogonal-invariant algorithm (i.e. fully-connected networks trained with most gradient-based methods from gaussian initialization) requires Ω(d^2) samples to generalize while O(1) samples suffice for convolutional architectures. Furthermore, we demonstrate a single target function, learning which on all possible distributions leads to an O(1) vs Ω(d^2/ε) gap. The proof relies on the fact that SGD on fully-connected networks is orthogonal equivariant. Similar results are achieved for `2 regression and adaptive training algorithms, e.g. Adam and AdaGrad, which are only permutation equivariant.
[ { "affiliations": [], "name": "FULLY-CONNECTED NETS" }, { "affiliations": [], "name": "Zhiyuan Li" }, { "affiliations": [], "name": "Yi Zhang" }, { "affiliations": [], "name": "Sanjeev Arora" } ]
[ { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li" ], "title": "What can resnet learn efficiently, going beyond kernels", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yossi Arjevani", "Ohad Shamir" ], "title": "On the iteration complexity of oblivious first-order optimization algorithms", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Gyora M Benedek", "Alon Itai" ], "title": "Learnability with respect to fixed distributions", "venue": "Theoretical Computer Science,", "year": 1991 }, { "authors": [ "Anselm Blumer", "A. Ehrenfeucht", "David Haussler", "Manfred K. Warmuth" ], "title": "Learnability and the vapnik-chervonenkis dimension", "venue": "J. ACM,", "year": 1989 }, { "authors": [ "Simon S Du", "Yining Wang", "Xiyu Zhai", "Sivaraman Balakrishnan", "Russ R Salakhutdinov", "Aarti Singh" ], "title": "How many samples are needed to estimate a convolutional neural network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Philip M. Long" ], "title": "On the sample complexity of PAC learning half-spaces against the uniform distribution", "venue": "IEEE Transactions on Neural Networks,", "year": 1995 }, { "authors": [ "Zongming Ma", "Yihong Wu" ], "title": "Volume ratio, sparsity, and minimaxity under unitarily invariant norms", "venue": "IEEE Transactions on Information Theory,", "year": 2015 }, { "authors": [ "Andrew Y Ng" ], "title": "Feature selection, l 1 vs. l 2 regularization, and rotational invariance", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "Stanislaw J Szarek" ], "title": "Metric entropy of homogeneous spaces", "venue": "arXiv preprint math/9701213,", "year": 1997 }, { "authors": [ "Michel Talagrand" ], "title": "Upper and lower bounds for stochastic processes: modern methods and classical problems, volume 60", "venue": "Springer Science & Business Media,", "year": 2014 }, { "authors": [ "Roman Vershynin" ], "title": "High-Dimensional Probability: An Introduction with Applications in Data Science. 
Cambridge Series in Statistical and Probabilistic Mathematics", "venue": null, "year": 2018 }, { "authors": [ "Colin Wei", "Jason D Lee", "Qiang Liu", "Tengyu Ma" ], "title": "Regularization matters: Generalization and optimization of neural nets vs their induced kernel", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep convolutional nets (“ConvNets”) are at the center of the deep learning revolution (Krizhevsky et al., 2012; He et al., 2016; Huang et al., 2017). For many tasks, especially in vision, convolutional architectures perform significantly better their fully-connected (“FC”) counterparts, at least given the same amount of training data. Practitioners explain this phenomenon at an intuitive level by pointing out that convolutional architectures have better “inductive bias”, which intuitively means the following: (i) ConvNet is a better match to the underlying structure of image data, and thus are able to achieve low training loss with far fewer parameters (ii) models with fewer total number of parameters generalize better.\nSurprisingly, the above intuition about the better inductive bias of ConvNets over FC nets has never been made mathematically rigorous. The natural way to make it rigorous would be to show explicit learning tasks that require far more training samples on FC nets than for ConvNets. (Here “task”means, as usual in learning theory, a distribution on data points, and binary labels for them generated given using a fixed labeling function.) Surprisingly, the standard repertoire of lower bound techniques in ML theory does not seem capable of demonstrating such a separation. The reason is that any ConvNet can be simulated by an FC net of sufficient width, since a training algorithm can just zero out unneeded connections and do weight sharing as needed. Thus the key issue is not an expressiveness per se, but the combination of architecture plus the training algorithm. But if the training algorithm must be accounted for, the usual hurdle arises that we lack good mathematical understanding of the dynamics of deep net training (whether FC or ConvNet). How then can one establish the limitations of “FC nets + current training algorithms”? (Indeed, many lower bound techniques in PAC learning theory are information theoretic and ignore the training algorithm.)\nThe current paper makes significant progress on the above problem by exhibiting simple tasks that require Ω(d2) factor more training samples for FC nets than for ConvNets, where d is the data dimension. (In fact this is shown even for 1-dimensional ConvNets; the lowerbound easily extends to 2-D ConvNets.) The lower bound holds for FC nets trained with any of the popular algorithms\nlisted in Table 1. (The reader can concretely think of vanilla SGD with Gaussian initialization of network weights, though the proof allows use of momentum, `2 regularization, and various learning rate schedules.) Our proof relies on the fact that these popular algorithms lead to an orthogonalequivariance property on the trained FC nets, which says that at the end of training the FC net —no matter how deep or how wide — will make the same predictions even if we apply orthogonal transformation on all datapoints (i.e., both training and test). This notion is inspired by Ng (2004) (where it is named “orthogonal invariant”), which showed the power of logistic regression with `1 regularization versus other learners. For a variety of learners (including kernels and FC nets) that paper described explicit tasks where the learner has Ω(d) higher sample complexity than logistic regression with `1 regularization. The lower bound example and technique can also be extended to show a (weak) separation between FC nets and ConvNets. 
(See Section 4.2)\nOur separation is quantitatively stronger than the result one gets using Ng (2004) because the sample complexity gap is Ω(d2) vs O(1), and not Ω(d) vs O(1). But in a more subtle way our result is conceptually far stronger: the technique of Ng (2004) seems incapable of exhibiting a sample gap of more than O(1) between Convnets and FC nets in our framework. The reason is that the technique of Ng (2004) can exhibit a hard task for FC nets only after fixing the training algorithm. But there are infinitely many training algorithms once we account for hyperparameters associated in various epochs with LR schedules, `2 regularizer and momentum. Thus Ng (2004)’s technique cannot exclude the possibility that the hard task for “FC net + Algorithm 1” is easy for “FC net + Algorithm 2”. Note that we do not claim any issues with the results claimed in Ng (2004); merely that the technique cannot lead to a proper separation between ConvNets and FC nets, when the FC nets are allowed to be trained with any of the infinitely many training algorithms. (Section 4.2 spells out in more detail the technical difference between our technique and Ng’s idea.)\nThe reader may now be wondering what is the single task that is easy for ConvNets but hard for FC nets trained with any standard algorithm? A simple example is the following: data distribution in Rd\nis standard Gaussian, and target labeling function is the sign of ∑d/2 i=1 x 2 i − ∑d i=d/2+1 x 2 i . Figure 1 shows that this task is indeed much more difficult for FC nets. Furthermore, the task is also hard in practice for data distributions other than Gaussian; the figure shows that a sizeable performance gap exists even on CIFAR images with such a target label.\nExtension to broader class of algorithms. The orthogonal-equivariance property holds for many types of practical training algorithms, but not all. Notable exceptions are adaptive gradient methods (e.g. Adam and AdaGrad), `1 regularizer, and initialization methods that are not spherically symmetric. To prove a lower bound against FC nets with these algorithms, we identify a property, permutationinvariance, which is satisfied by nets trained using such algorithms. We then demonstrate a single\nand natural task on Rd × {±1} that resembles real-life image texture classification, on which we prove any permutation-invariant learning algorithm requires Ω(d) training examples to generalize, while Empirical Risk Minimization with O(1) examples can learn a convolutional net.\nPaper structure. In Section 2 we discuss about related works. In section 3, we define the notation and terminologies. In Section 4, we give two warmup examples and an overview for the proof technique for the main theorem. In Section 5, we present our main results on the lower bound of orthogonal and permutation equivariant algorithms." }, { "heading": "2 RELATED WORKS", "text": "Du et al. (2018) attempted to investigate the reason why convolutional nets are more sample efficient. Specifically they prove O(1) samples suffice for learning a convolutional filter and also proved a Ω(d) min-max lower bound for learning the class of linear classifiers. Their lower bound is against learning a class of distributions, and their work fails to serve as a sample complexity separation, because their upper and lower bounds are proved on different classes of tasks.\nArjevani & Shamir (2016) also considered the notion of distribution-specific hardness of learning neural nets. 
They focused on proving running time complexity lower bounds against so-called \"orthogonally invariant\" and \"linearly invariant\" algorithms. However, here we focus on sample complexity.\nRecently, there has been progress in showing lower bounds against learning with kernels. Wei et al. (2019) constructed a single task on which they proved a sample complexity separation between learning with neural networks vs. with neural tangent kernels. Notably the lower bound is specific to neural tangent kernels (Jacot et al., 2018). Relatedly, Allen-Zhu & Li (2019) showed a sample complexity lower bound against all kernels for a family of tasks, i.e., learning k-XOR on the hypercube." }, { "heading": "3 NOTATION AND PRELIMINARIES", "text": "We will use X = Rd, Y = {−1, 1} to denote the domain of the data and label and H = {h | h : X → Y} to denote the hypothesis class. Formally, given a joint distribution P , the error of a hypothesis h ∈ H is defined as errP (h) := Px,y∼P [h(x) 6= y]. If h is a random hypothesis, we define errP (h) := Px,y∼P,h [h(x) 6= y] for convenience. A class of joint distributions supported on X × Y is referred as a problem, P . We use ‖·‖2 to denote the spectrum norm and ‖·‖F to denote the Frobenius norm of a matrix. We use A ≤ B to denote that B − A is a semi-definite positive matrix. We also use O(d) and GL(d) to denote the d-dimensional orthogonal group and general linear group respectively. We use Bd 2\np to denote the unit Schatten-p norm ball in Rd×d.\nWe useN(µ,Σ) to denote Gaussian distribution with mean µ and covariance Σ. For random variables X and Y , we denote X is equal to Y in distribution by X d= Y . In this work, we also always use PX to denote the distributions on X and P to denote the distributions supported jointly on X × Y . Given an input distribution PX and a hypothesis h, we define PX h as the joint distribution on X × Y , such that (PX h)(S) = P ({x|(x, h(x)) ∈ S}), ∀S ⊂ X × Y . In other words, to sample (X,Y ) ∼ PX h means to first sample X ∼ PX , and then set Y = h(X). For a family of input distributions PX and a hypothesis class H, we define PX H = {PX h | PX ∈ PX , h ∈ H}. In this work all joint distribution P can be written as PX h for some h, i.e. PY|X is deterministic.\nFor set S ⊂ X and 1-1 map g : X → X , we define g(S) = {g(x)|x ∈ S}. We use ◦ to denote function composition. (f ◦ g)(x) is defined as f(g(x)), and for function classes F , G, F ◦ G = {f ◦ g | f ∈ F , g ∈ G}. For any distribution PX supported on X , we define PX ◦ g as the distribution such that (PX ◦ g)(S) = PX (g(S)). In other words, if X ∼ PX ⇐⇒ g−1(X) ∼ PX ◦ g, because\n∀S ⊆ X , P X∼PX\n[ g−1(X) ∈ S ] = P X∼PX [X ∈ g(S)] = [PX ◦ g](S).\nAlgorithm 1 Iterative algorithm A Require: Initial parameter distribution Pinit supported in W = Rm, total iterations T , training\ndataset {xi, yi}ni=1, parametric modelM :W → H, iterative update rule F (W,M, {xi, yi} n i=1)\nEnsure: Hypothesis h : X → Y . Sample W(0) ∼ Pinit. for t = 0 to T − 1 do W(t+1) = F (W(t),M, {xi, yi}ni=1).\nreturn h = sign [ M[W(T )] ] .\nFor any joint distribution P of form P = PX h, we define P ◦ g = (PX ◦ g) (h ◦ g). In other words, (X,Y ) ∼ P ⇐⇒ (g−1(X), Y ) ∼ P ◦ g. For any distribution class P and group G acting on X , we define P ◦ G as {P ◦ g | P ∈ P, g ∈ G}. Definition 3.1. A deterministic supervised Learning Algorithm A is a mapping from a sequence of training data, {(xi, yi)}ni=1 ∈ (X × Y)n, to a hypothesis A({(xi, yi)}ni=1) ∈ H ⊆ YX . 
The algorithmA could also be randomized, in which case the outputA({(xi, yi)}ni=1) is a distribution on hypotheses. Two randomized algorithms A and A′ are the same if for any input, their outputs have the same distribution in function space, which is denoted by A({xi, yi}ni=1) d = A′({xi, yi}ni=1). Definition 3.2 (Equivariant Algorithms). A learning algorithm is equivariant under group GX (or GX -equivariant) if and only if for any dataset {xi, yi}ni=1 ∈ (X × Y)n and ∀g ∈ GX ,x ∈ X , A({g(xi), yi}ni=1) ◦ g = A({xi, yi} n i=1), or A({g(xi), yi} n i=1)(g(x)) = [A({xi, yi} n i=1)](x). 1 Definition 3.3 (Sample Complexity). Given a problem P and a randomized learning algorithm A, δ, ε ∈ [0, 1], we define the (ε, δ)-sample complexity, denoted N (A,P, ε, δ), as the smallest number n ∈ N such that ∀P ∈ P , w.p. 1− δ over the randomness of {xi, yi}ni=1, errP (A({xi, yi} n i=1)) ≤ ε. We also define the ε-expected sample complexity for a problem P , denoted N ∗(A,P, ε), as the smallest number n ∈ N such that ∀P ∈ P , E\n(xi,yi)∼P [errP (A({xi, yi}ni=1))] ≤ ε. By definition, we\nhave N ∗(A,P, ε+ δ) ≤ N (A,P, ε, δ) ≤ N ∗(A,P, εδ), ∀ε, δ ∈ [0, 1]." }, { "heading": "3.1 PARAMETRIC MODELS AND ITERATIVE ALGORITHMS", "text": "A parametric model M : W → H is a functional mapping from weight W to a hypothesis M(·) : X → Y . Given a specific parametric modelM, a general iterative algorithm is defined as Algorithm 1. In this work, we will only use the two parametric models below, FC-NN and CNN.\nFC Nets: A L-layer Fully-connected Neural Network parameterized by its weights W = (W1,W2, . . . ,WL) is a function FC-NN[·] : Rd → R, where Wi ∈ Rdi−1×di , d0 = d, and dL = 1:\nFC-NN[W](x) = WLσ(WL−1 · · ·σ(W2σ(W1x))). Here, σ : R→ R can be any function, and we abuse the notation such that σ is also defined for vector inputs, in the sense that [σ(x)]i = σ(xi).\nConvNets (CNN): In this paper we will only use two layer Convolutional Neural Networks with one channel. Suppose d = d′r for some integer d′, r, a 2-layer CNN parameterized by its weights W = (w,a, b) ∈ Rk × Rr × R is a function CNN[·] : Rd → R:\nCNN[W](x) = r∑ i=1 arσ([w ∗ x]d′(i−1)+1:d′i) + b,\nwhere ∗ : Rk×Rd → Rd is the convolution operator, defined as [w∗x]i = ∑k j=1 wjx[i−j−1 mod d]+1, and σ : Rd′ → R is the composition of pooling and element-wise non-linearity." }, { "heading": "3.2 EQUIVARIANCE AND TRAINING ALGORITHMS", "text": "This section gives an informal sketch of why FC nets trained with standard algorithms have certain equivariance properties. The high level idea here is if update rule of the network, or more generally,\n1For randomized algorithms, the condition becomes A({g(xi), yi}ni=1) ◦ g d = A({xi, yi}ni=1), which is\nstronger than A({g(xi), yi}ni=1)(g(x)) d = [A({xi, yi}ni=1)](x), ∀x ∈ X .\nthe parametrized model, exhibits certain symmetry per step, i.e., property 2 in Theorem C.1, then by induction it will hold till the last iteration.\nTaking linear regression as an example, let xi ∈ Rd, i ∈ [n] be the data and y ∈ Rn be the labels, the GD update for L(w) = 12 ∑n i=1(x > i w − yi)2 = 12 ∥∥X>w − y∥∥2 2\nwould be wt+1 = F (wt,X,y) := wt−ηX(X>wt−y). Now suppose there’s another person trying to solve the same problem using GD with the same initial linear function, but he observes everything in a different basis, i.e., X′ = UX and w′0 = Uw0, for some orthogonal matrix U . Not surprisingly, he would get the same solution for GD, just in a different basis. Mathematically, this is because w′t = Uwt =⇒ w′t+1 = F (w ′ t, UX,y) = UF (wt,X,y) = Uwt+1. 
In other words, he would make the same prediction for unseen data. Thus, if the initial distribution of $w_0$ is the same in every orthonormal basis (i.e., invariant under rotations), e.g., Gaussian $N(0, I_d)$, then since $F^t(Uw_0, U\mathbf{X}, \mathbf{y}) = U F^t(w_0, \mathbf{X}, \mathbf{y})$ and $w_0 \overset{d}{=} U w_0$, we have $F^t(w_0, U\mathbf{X}, \mathbf{y}) \overset{d}{=} U F^t(w_0, \mathbf{X}, \mathbf{y})$ for any iteration $t$, which means GD for linear regression is orthogonal equivariant.

To show orthogonal equivariance of gradient descent on general deep FC nets, it suffices to apply the above argument to each neuron in the first layer of the FC net. Equivariance for the other training algorithms (see Table 1) can be derived in exactly the same way. The rigorous statements and proofs are deferred to Appendix C." }, { "heading": "4 WARM-UP EXAMPLES AND PROOF OVERVIEW", "text": "" }, { "heading": "4.1 EXAMPLE 1: Ω(d) LOWER BOUND AGAINST ORTHOGONAL EQUIVARIANT METHODS", "text": "We start with a simple but insightful example of how equivariance alone can suffice for a non-trivial lower bound.

We consider a task on $\mathbb{R}^d \times \{\pm 1\}$ given by the uniform distribution on the set $\{(e_i y, y) \mid i \in \{1, 2, \dots, d\},\ y = \pm 1\}$, denoted by $P$. Each sample from $P$ is a signed one-hot vector in $\mathbb{R}^d$, and the sign of its non-zero coordinate determines the label. Now imagine our goal is to learn this task using an algorithm $A$. After observing a training set of $n$ labeled points $S := \{(x_i, y_i)\}_{i=1}^n$, the algorithm is asked to make a prediction on an unseen test point $x$, i.e., $A(S)(x)$. Here we are concerned with orthogonal equivariant algorithms: the prediction of the algorithm on the test point remains the same even if we rotate every $x_i$ and the test point $x$ by any orthogonal matrix $R$, i.e.,
$$A(\{(R x_i, y_i)\}_{i=1}^n)(R x) \overset{d}{=} A(\{(x_i, y_i)\}_{i=1}^n)(x).$$

Now we show this algorithm fails to generalize on task $P$ if it observes only $d/2$ training examples. The main idea is that, for a fixed training set $S$, the prediction $A(\{(x_i, y_i)\}_{i=1}^n)(x)$ is determined solely by the inner products between $x$ and the $x_i$, due to orthogonal equivariance; i.e., there exists a random function $f$ (which may depend on $S$) such that²
$$A(\{(x_i, y_i)\}_{i=1}^n)(x) \overset{d}{=} f(x^\top x_1, \dots, x^\top x_n).$$

But the input distribution for this task is supported on one-hot vectors. Suppose $n < d/2$. Then at test time, with probability at least $1/2$, the new data point $(x, y) \sim P$ is such that $x$ has zero inner product with all $n$ points seen in the training set $S$. This fact alone fixes the prediction of $A$ to the value $f(0, \dots, 0)$, whereas $y$ is independently and uniformly chosen from $\pm 1$. We conclude that $A$ outputs the wrong answer with probability at least $1/4$.

²This can be made formal using the fact that the Gram matrix determines a set of vectors up to an orthogonal transformation." }, { "heading": "4.2 EXAMPLE 2: Ω(d2) LOWER BOUND IN THE WEAK SENSE", "text": "The warm-up example illustrates the main insight of Ng (2004), namely that when an orthogonal equivariant algorithm is used to learn a certain task, it is actually being forced to simultaneously learn all orthogonal transformations of that task. Intuitively, this should make the learning much more sample-hungry than, say, simple SGD on ConvNets, which is not orthogonal equivariant. We now sketch why the obvious way to make this intuition precise using VC dimension (Theorem B.1) does not give a proper separation between ConvNets and FC nets, as mentioned in the Introduction.

We first fix the ground truth labeling function $h^* = \mathrm{sign}\big[\sum_{i=1}^{d} x_i^2 - \sum_{i=d+1}^{2d} x_i^2\big]$; a numerical sketch of this target follows below.
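To make the target concrete, here is a minimal NumPy sketch (not from the paper; the sample sizes, step count, and learning rate are illustrative assumptions) of the task h∗ on Gaussian inputs, together with the two pooled-square features that the 2-layer ConvNet construction in the proof of Theorem 4.1 computes. Training only the 3-parameter second layer by logistic-loss gradient descent, mirroring the paper's convergence argument, typically reaches high test accuracy from a few dozen samples, independent of d:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50                                    # inputs live in R^{2d}

def sample_task(n):
    # x ~ N(0, I_{2d}); label is the sign of ||x_{1:d}||^2 - ||x_{d+1:2d}||^2,
    # i.e. the target h* fixed above.
    x = rng.standard_normal((n, 2 * d))
    y = np.sign((x[:, :d] ** 2).sum(axis=1) - (x[:, d:] ** 2).sum(axis=1))
    return x, y

def pooled_squares(x):
    # Square activation + average pooling over each half: the ConvNet's first
    # layer reduces every input to two features, and h* is a linear threshold
    # of those two features.
    return np.stack([(x[:, :d] ** 2).mean(axis=1),
                     (x[:, d:] ** 2).mean(axis=1)], axis=1)

def train_head(z, y, steps=3000, lr=0.5):
    # Logistic-loss gradient descent on the 3 second-layer parameters
    # (a1, a2, b) only, with the first layer frozen.
    A = np.hstack([z, np.ones((len(z), 1))])
    w = np.zeros(3)
    for _ in range(steps):
        margins = y * (A @ w)
        w += lr * (A * (y / (1.0 + np.exp(np.minimum(margins, 50))))[:, None]).mean(axis=0)
    return w

x_tr, y_tr = sample_task(20)              # a constant number of samples
w = train_head(pooled_squares(x_tr), y_tr)

x_te, y_te = sample_task(20_000)
A_te = np.hstack([pooled_squares(x_te), np.ones((len(x_te), 1))])
print("test accuracy:", (np.sign(A_te @ w) == y_te).mean())
```

An FC net sees no such decomposition of the input; by the results below, any orthogonal equivariant training of it needs Ω(d²) samples on this same distribution. With this target fixed, the argument proceeds as follows.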
That algorithm A is orthogonal equivariant (Definition 3.2) means that, for any task P = PX h∗, where PX is the input distribution and h∗ is the labeling function, A must have the same performance on P and on its rotated version P ◦ U = (PX ◦ U) (h∗ ◦ U), where U can be any orthogonal matrix. Therefore, if there is an orthogonal equivariant learning algorithm A that learns h∗ on all distributions, then A also learns every rotated copy h∗ ◦ U of h∗ on every distribution PX, simply because A learns h∗ on the distribution PX ◦ U−1. Thus A learns the class of labeling functions h∗ ◦ O(d) := {h(x) = h∗(U(x)) | U ∈ O(d)} on all distributions. (See the formal statement in Theorem 5.1.) By the standard VC-dimension lower bound (see Theorem B.1), it takes at least Ω(VCdim(h∗ ◦ O(d))/ε) samples for A to guarantee 1 − ε accuracy. Thus it suffices to show that VCdim(h∗ ◦ O(d)) = Ω(d²) to obtain an Ω(d²) sample complexity lower bound. (Ng (2004) picks a linear thresholding function as h∗, and thus VCdim(h∗ ◦ O(d)) is only O(d).)

Formally, we have the following theorem, whose proof is deferred to Appendix D.2:

Theorem 4.1 (All distributions, single hypothesis). Let P = {all distributions} {h∗}. For any orthogonal equivariant algorithm A, N(A, P, ε, δ) = Ω((d² + ln(1/δ))/ε), while there is a 2-layer ConvNet architecture such that N(ERMCNN, P, ε, δ) = O((1/ε)(log(1/ε) + log(1/δ))).

As noted in the Introduction, this does not imply that some single task is hard for every training algorithm for the FC net. The VC-dimension-based lower bound implies, for each algorithm A, the existence of a fixed distribution PX ∈ P and some orthogonal matrix UA such that the task (PX ◦ U−1_A) h∗ is hard for it. However, this does not preclude (PX ◦ U−1_A) h∗ being easy for some other algorithm A′." }, { "heading": "4.3 PROOF OVERVIEW FOR FIXED DISTRIBUTION LOWER BOUNDS", "text": "At first sight, the issue highlighted above (and in the Introduction) seems difficult to get around. One possible avenue would open if the hard input distribution PX in the task were invariant under all orthogonal transformations, i.e., PX = PX ◦ U for all orthogonal matrices U. Unfortunately, the distribution constructed in the proof of the VC-dimension lower bound is inherently discrete and cannot be made invariant to orthogonal transformations.

Our proof uses a fixed PX, the standard Gaussian distribution, which is indeed invariant under orthogonal transformations. The proof also uses Benedek-Itai's lower bound, Theorem 4.2, and the main technical part of our proof is the lower bound on the packing number D(H, ρ, ε) defined below (also see Equation (2)).

For a function class H, we use ΠH(n) to denote the growth function of H, i.e., ΠH(n) := sup_{x1,...,xn ∈ X} |{(h(x1), h(x2), . . . , h(xn)) | h ∈ H}|. Denote the VC dimension of H by VCdim(H); by the Sauer-Shelah lemma, we know ΠH(n) ≤ (en/VCdim(H))^{VCdim(H)} for n ≥ VCdim(H).

Let ρ be a metric on H. We define N(H, ρ, ε) as the ε-covering number of H w.r.t. ρ, and D(H, ρ, ε) as the ε-packing number of H w.r.t. ρ. For a distribution PX, we use ρX(h, h′) := P_{X∼PX}[h(X) ≠ h′(X)] to denote the discrepancy between hypotheses h and h′ w.r.t. PX.

Theorem 4.2 (Benedek-Itai's lower bound). For any algorithm A that (ε, δ)-learns H with n i.i.d. samples from a fixed distribution PX, it must hold that

ΠH(n) ≥ (1 − δ) D(H, ρX, 2ε).   (1)

Since ΠH(n) ≤ 2^n, we have N(A, PX H, ε, δ) ≥ log₂ D(H, ρX, 2ε) + log₂(1 − δ), which is the original bound from Benedek & Itai (1991).
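Long's refinement of Equation (1) is discussed next. First, as a quick empirical complement, the following sketch (an illustration under assumed dimensions and sample counts, not from the paper) Monte-Carlo-estimates the discrepancy ρX just defined for the rotated bilinear hypotheses hU(x) = sign(x_{1:d}ᵀ U x_{d+1:2d}) that appear later in the proof of Theorem 5.2. The estimates track ‖U − V‖F/√d up to a constant factor, which is the key inequality behind the packing bound of Lemma D.4:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_mc = 20, 200_000

def haar_orthogonal():
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))        # sign fix gives the Haar measure

def rho(U, V):
    # Monte-Carlo estimate of rho_X(h_U, h_V) = P[h_U(x) != h_V(x)]
    # under x ~ N(0, I_{2d}), with x split into halves a and b.
    a = rng.standard_normal((n_mc, d))
    b = rng.standard_normal((n_mc, d))
    return (np.sign(np.einsum('ni,ij,nj->n', a, U, b))
            != np.sign(np.einsum('ni,ij,nj->n', a, V, b))).mean()

def report(U, V):
    print(f"rho = {rho(U, V):.3f}   ||U-V||_F/sqrt(d) = "
          f"{np.linalg.norm(U - V, 'fro') / np.sqrt(d):.3f}")

# Far-apart pair: two independent Haar-random rotations.
report(haar_orthogonal(), haar_orthogonal())

# Nearby pair: compose U with a small Givens rotation by angle theta.
theta = 0.3
G = np.eye(d)
G[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]
U = haar_orthogonal()
report(U, U @ G)
```

Both printed quantities shrink together: well-separated matrices U give well-separated hypotheses, which is what makes the packing number, and hence the right-hand side of Equation (1), large.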
Later Long (1995) improved this bound for the regime\nn ≥ VCdim(H) using Sauer-Shelah lemma, i.e.,\nN (A, PX , ε, δ) ≥ VCdim(H)\ne ((1− δ)D(H, ρX , 2ε))\n1 VCdim(H) . (2)\nIntuition behind Benedek-Itai’s lower bound. We first fix the data distribution as PX . Suppose the 2ε-packing is labeled as {h1, . . . , hD(H,ρX ,2ε)} and ground truth is chosen from this 2ε-packing, (ε, δ)-learns the hypothesisH means the algorithm is able to recover the index of the ground truth w.p. 1− δ. Thus one can think this learning process as a noisy channel which delivers log2D(H, ρX , 2ε) bits of information. Since the data distribution is fixed, unlabeled data is independent of the ground truth, and the only information source is the labels. With some information-theoretic inequalities, we can show the number of labels, or samples (i.e., bits of information) N (A, PX H, ε, δ) ≥ log2D(H, ρX , 2ε)+log2(1−δ). A more closer look yields Equation (2), because when VCdim(H) < ∞, then only log2 ΠH(n) instead of n bits information can be delivered." }, { "heading": "5 LOWER BOUNDS", "text": "Below we first present a reduction from a special subclass of PAC learning to equivariant learning (Theorem 5.1), based on which we prove our main separation results, Theorem 4.1, 5.2, 5.3 and 5.4.\nTheorem 5.1. IfPX is a set of data distributions that is invariant under group GX , i.e.,PX ◦GX = PX , then the following inequality holds. (Furthermore it becomes an equality when GX is a compact group.)\ninf A∈AGX N ∗(A,PX H, ε) ≥ inf A∈A N ∗(A,PX (H ◦ GX ), ε) (3)\nRemark 5.1. The sample complexity in standard PAC learning is usually defined again hypothesis class H only, i.e., PX is the set of all the possible input distributions. In that case, PX is always invariant under group GX , and thus Theorem 5.1 says that GX -equivariant learning against hypothesis classH is as hard as learning against hypothesisH ◦ GX without equivariance constraint.\n5.1 Ω(d2) LOWER BOUND FOR ORTHOGONAL EQUIVARIANCE WITH A FIXED DISTRIBUTION\nIn this subsection we show Ω(d2) vs O(1) separation on a single task in our main theorem (Theorem 5.2). With the same proof technique, we further show we can get correct dependency on ε for the lower bound, i.e., Ω(d 2\nε ), by considering a slightly larger function class, which can be learnt by ConvNets with O(d) samples. We also generalize this Ω(d2) vs O(d) separation to the case of `2 regression with a different proof technique.\nTheorem 5.2. There’s a single task, PX h∗, where h∗ = sign [∑d i=1 x 2 i − ∑2d i=d+1 x 2 i ] and\nPX = N(0, I2d) and a constant ε0 > 0, independent of d, such that for any orthogonal equivariant algorithm A, we have\nN ∗(A, PX h∗, ε0) = Ω(d2), (4)\nwhile there’s a 2-layer ConvNet, such that N (ERMCNN, PX h∗, ε, δ) = O ( 1 ε ( log 1ε + log 1 δ )) . Moreover, ERMCNN could be realized by gradient descent (on the second layer only).\nProof of Theorem 5.2. Upper bound: implied by upper bound in Theorem 4.1. Lower bound: Note that the PX = N(0, I2d) is invariant under O(2d), by Theorem 5.1, it suffices to show that there’s a constant ε0 > 0 (independent of d), for any algorithm A, it takes Ω(d2) samples to learn the augmented function class h∗ ◦ O(2d) w.r.t. PX = N(0, I2d). Define hU = sign [ x>1:dU xd+1:2d ] , ∀U ∈ Rd×d, and by Lemma D.2, we haveH = {hU | U ∈ O(d)} ⊆ h∗ ◦ O(2d). Thus it suffices to a Ω(d2) sample complexity lower bound for the sub function classH, i.e.,\nN ∗(A, N(0, I2d) {sign [ x>1:dU xd+1:2d ] }, ε0) = Ω(d2). 
(5)\nBy Benedek&Itai’s lower bound, (Benedek & Itai, 1991) (Equation (1)), we know N (A,P, ε0, δ) ≥ log2 ((1− δ)D(H, ρX , 2ε0)) . (6) By Lemma D.4, there’s some constant C, such that D(H, ρX , ε) ≥ (Cε ) d(d−1) 2 , ∀ε > 0.\nThe high-level idea for Lemma D.4 is to first show that ρX (hU , hV ) ≥ Ω( ‖U−V ‖F√\nd ), and then we\nshow the packing number of orthogonal matrices in a small neighborhood of Id w.r.t. ‖·‖F√ d\nis roughly the same as that in the tangent space of orthogonal manifold at Id, i.e., the set of skew matrices, which is of dimension d(d−1)2 and has packing number ( C ε ) d(d−1) 2 . The advantage of working in the tangent space is that we can apply the standard volume argument.\nSetting δ = 12 , we have N ∗(A,P, ε0) ≥ N (A,P, 12 , 2ε0) ≥ d(d−1) 2 log2 C 4ε0 − 1 = Ω(d2).\nIndeed, we can improve the above lower bound by applying Equation (2), and get\nN (A,P, ε, 1 2 ) ≥ d 2 e\n( 1\n2\n) 1 d2 ( C\nε\n) 1 2− 1 2d\n= Ω(d2ε− 1 2+ 1 2d ). (7)\nNote that the dependency in ε in Equation (7) is ε− 1 2+ 1 2d is not optimal, as opposed to ε−1 in upper bounds and other lower bounds. A possible reason for this might be that Theorem 4.2 (Long’s improved version) is still not tight and it might require a tighter probabilistic upper bound for the growth number ΠH(n), at least taking PX into consideration, as opposed to the current upper bound using VC dimension only. We left it as an open problem to show a single task P with Ω(d 2\nε ) sample complexity for all orthogonal equivariant algorithms.\nHowever, if the hypothesis is of VC dimension O(d), using a similar idea, we can prove a Ω(d2/ε) sample complexity lower bound for equivariant algorithms, and O(d) upper bounds for ConvNets. Theorem 5.3 (Single distribution, multiple functions). There is a problem with single input distribution, P = {PX } H = {N(0, Id)} {sign [∑d i=1 αix 2 i ] | αi ∈ R}, such that for any orthogonal equivariant algorithms A and ε > 0, N ∗(A,P, ε) = Ω(d2/ε), while there’s a 2-layer ConvNets architecture, such that N (ERMCNN,P, ε, δ) = O( d log 1ε+log 1 δ ε ).\nInterestingly, we can show an analog of Theorem 5.3 for `2 regression, i.e., the algorithm not only observes the signs but also the values of labels yi. Here we define the `2 loss of function h : Rd → R as `P (h) = E\n(x,y)∼P\n[ (h(x)− y)2 ] and the sample complexity N ∗(A,P, ε) for `2 loss similarly as\nthe smallest number n ∈ N such that ∀P ∈ P , E (xi,yi)∼P [`P (A({xi, yi}ni=1))] ≤ ε E (x,y)∼P\n[ y2 ] . The\nlast term E (x,y)∼P\n[ y2 ] is added for normalization to avoid the scaling issue and thus any ε > 1 could\nbe achieved trivially by predicting 0 for all data. Theorem 5.4 (Single distribution, multiple functions, `2 regression). There is a problem with single input distribution, P = {PX } H = {N(0, Id)} { ∑d i=1 αix 2 i | αi ∈ R} , such that for any orthogonal equivariant algorithms A and ε > 0, N ∗(A,P, ε) ≥ d(d+3)2 (1− ε)− 1, while there’s a 2-layer ConvNet architecture, such that N ∗(ERMCNN,P, ε) ≤ d for any ε > 0.\n5.2 Ω(d) LOWER BOUND FOR PERMUTATION EQUIVARIANCE\nIn this subsection we will present Ω(d) lower bound for permutation equivariance via a different proof technique — direct coupling. The high-level idea of direct coupling is to show with constant probability over (Xn,x), we can find a g ∈ GX , such that g(Xn) = Xn, but x and g(x) has different labels, in which case no equivariant algorithm could make the correct prediction. Theorem 5.5. 
Let ti = ei + ei+1 and si = ei + ei+23 and P be the uniform distribution on {(si, 1)}ni=1∪{(ti,−1)}ni=1, which is the classification problem for local textures in a 1-dimensional image with d pixels. Then for any permutation equivariant algorithm A, N (A,P, 18 , 1 8 ) ≥ N ∗(A,P, 14 ) ≥ d 10 . Meanwhile, N (ERMCNN ,P, 0, δ) ≤ log2 1 δ + 2, where ERMCNN stands for ERMCNN for function class of 2-layer ConvNets. Remark 5.2. The task could be understood as detecting if there are two consecutive white pixels in the black background. For proof simplicity, we take texture of length 2 as an illustrative example. It\n3For vector x ∈ Rd, we define xi = x(i−1) mod d+1.\nis straightforward to extend the same proof to more sophisticated local pattern detection problem of any constant length and to 2-dimensional images." }, { "heading": "6 CONCLUSION", "text": "We rigorously justify the common intuition that ConvNets can have better inductive bias than FC nets, by constructing a single natural distribution on which any FC net requires Ω(d2) samples to generalize if trained with most gradient-based methods starting with gaussian initialization. On the same task, O(1) samples suffice for convolutional architectures. We further extend our results to permutation equivariant algorithms, including adaptive training algorithms like Adam and AdaGrad, `1 regularization, etc. The separation becomes Ω(d) vs O(1) in this case." }, { "heading": "A SOME BASIC INEQUALITIES", "text": "Lemma A.1. ∀x ∈ [−1, 1], arccosx√\n1− x ≥ √ 2.\nProof. Let x = cos(t), t ∈ [−π, π], we have arccos(x)√\n1− x = t√ 1− cos(t) = t√ 2 sin(t/2) ≥ √ 2.\nLemma A.2. ∃C > 0, ∀d ∈ N+,M ∈ Rd×d,\nC ‖M‖F / √ d ≤ E\nx∼Sd−1 [‖Mx‖2] ≤ ‖M‖F /\n√ d. (8)" }, { "heading": "Proof of Lemma A.2.", "text": "Upper Bound: By Cauchy-Schwarz inequality, we have\nE x∼Sd−1\n[‖Mx‖2] ≤ √\nE x∼Sd−1\n[ ‖Mx‖22 ] = √ tr [ M E\nx∼Sd−1 [xx>]M>\n] = √ tr[MM>]\nd = ‖M‖F√ d .\nLower Bound: Let M = UΣV > be the singular value decomposition of M , where U, V are orthogonal matrices and Σ is diagonal. Since ‖M‖F = ‖Σ‖F , and E\nx∼Sd−1 [‖Mx‖2] = E x∼Sd−1 [‖Σx‖2],\nw.l.o.g., we only need to prove the lower bound for all diagonal matrices.\nBy Proposition 2.5.1 in (Talagrand, 2014), there’s some constant C, such that\nC ‖Σ‖F = C √√√√ d∑ i=1 σ2i ≤ E x∼N(0,Id) √√√√ d∑ i=1 x2iσ 2 i = E x∼N(0,Id) [‖Mx‖]2 .\nBy Cauchy-Schwarz Inequality, we have E x∼N(0,Id)\n[‖x‖2] ≤ √\nE x∼N(0,Id)\n[ ‖x‖22 ] = √ d. Therefore,\nwe have C ‖Σ‖F ≤ E\nx∼N(0,Id) [‖Mx‖]2\n= E x̂∼Sd−1 [‖M x̂‖]2 E x∼N(0,Id) [‖x‖2]\n≤ E x̂∼Sd−1\n[‖M x̂‖]2 √ d,\n(9)\nwhich completes the proof.\nLemma A.1. For any z > 0, we have\nPr x∼N(0,σ) (|x| ≤ z) ≤ 2√ π z σ\nProof.\nPr x∼N(0,σ) (|x| ≤ z) = ∫ z −z 1√ 2π σ exp ( − x 2 2σ2 ) dx ≤ √ 2 π z σ" }, { "heading": "B UPPERAND LOWER BOUND FOR SAMPLE COMPLEXITY WITH VC DIMENSION", "text": "Theorem B.1. [Blumer et al. (1989)] If learning algorithm A is consistent and ranged in H, i.e. A({xi, yi}ni=1) ∈ H and A({xi, yi} n i=1)(xi) = yi, ∀i ∈ [n], then for any distribution PX and 0 < ε, δ < 1, we have\nN (A, PX H, ε, δ) = O( VCdim(H) ln 1ε + ln 1 δ\nε ). (10)\nMeanwhile, there’s a distribution PX supported on any subsets {x0, . . . , xd−1} which can be shattered byH, such that for any 0 < ε, δ < 1 and any algorithm A, it holds\nN (A, PX H, ε, δ) = Ω( VCdim(H) + ln 1δ\nε ). (11)" }, { "heading": "C EQUIVARIANCE IN ALGORITHMS", "text": "In this section, we give sufficient conditions for an iterative algorithm to be equivariant (as defined in Algorithm 1).\nTheorem C.1. 
Suppose GX is a group acting on X = Rd, the iterative algorithmA is GX -equivariant (as defined in Algorithm 1) if the following conditions are met: (proof in appendix)\n1. There’s a group GW acting on W and a group isomorphism τ : GX → GW , such that M[τ(g)(W)](g(x)) = M[W](x), ∀x ∈ X ,W ∈ W, g ∈ G. (One can think g as the rotation U applied on data x in linear regression and τ(U) as the rotation U applied on w.)\n2. Update rule F is invariant under any joint group action (g, τ(g)), ∀g ∈ G. In other words, [τ(g)](F (W,M, {xi, yi}ni=1)) = F ([τ(g)](W),M, {g(xi), yi} n i=1).\n3. The initialization Pinit is invariant under group GW , i.e. ∀g ∈ GW , Pinit = Pinit ◦ g−1.\nHere we want to address that the three conditions in Theorem C.1 are natural and almost necessary. Condition 1 is the minimal expressiveness requirement for modelM to allow equivariance. Condition 3 is required for equivariance at initialization. Condition 2 is necessary for induction.\nProof of Theorem C.1. ∀g ∈ GX , we sample W(0) ∼ Pinit, and W̃(0) = τ(g)(W(0)). By property (3), W̃(0) d= W(0) ∼ Pinit. Let W(t+1) = F ( W(t),M, {xi, yi}ni=1 ) and W̃(t+1) =\nF ( W̃(t),M, {g(xi), yi}ni=1 ) for 0 ≤ t ≤ T − 1, we can show W̃(t) = τ(g)W(t)) by induction using property (2). By definition of Algorithm 1, we have\nA{xi, yi}ni=1 d =M[W(T )],\nand M[W̃(T )] ◦ g d= A({g(xi), yi}ni=1) ◦ g.\nBy property (1), we have M[W̃(T )](g(x)) = M[τ(g)(W(T )](g(x)) = M[W(T )](x). Therefore, A({xi, yi}ni=1) d = M[W(T )] = M[W̃(T )] ◦ g d= A({g(xi), yi}ni=1) ◦ g, meaning A is GX -equivariant.\nRemark C.1. Theorem C.1 can be extended to the stochastic case and the adaptive case which allows the algorithm to use information of the whole trajectory, i.e., the update rule could be generalized as W(t+1) = Ft({W(s)}ts=1,M, {xi, yi} n i=1), as long as (the distribution of) each Ft is invariant under joint transformations.\nBelow are two example applications of Theorem C.1. Other results in Table 1 could be achieved in the same way.\nFor classification tasks, optimization algorithms often work with a differentiable surrogate loss ` : R→ R instead the 0-1 loss, such that `(yh(x)) ≥ 1 [yh(x) ≤ 0], and the total loss for hypothesis h and training, L(M(W); {xi, yi}ni=1) is defined as ∑n i=1 `(yi[M(W)](xi)). It’s also denoted by L(W) when there’s no confusion. Definition C.1 (Gradient Descent for FC nets). We call Algorithm 1 Gradient Descent if M = FC-NN and F = GDL , where GDL(W) = W − η∇L(W) is called the one-step Gradient Descent update and η > 0 is the learning rate.\nAlgorithm 2 Gradient Descent for FC-NN (FC networks)\nRequire: Initial parameter distribution Pinit , total iterations T , training dataset {xi, yi}ni=1, loss function ` Ensure: Hypothesis h : X → Y . Sample W(0) ∼ Pinit. for t = 0 to T − 1 do W(t+1) = W(t) − η\nn∑ i=1 ∇`(FC-NN(W(t))(xi), yi)\nreturn h = sign [ FC-NN[W(T )] ] .\nCorollary C.2. Fully-connected networks trained with (stochastic) gradient descent from i.i.d. Gaussian initialization is equivariant under the orthogonal group.\nProof of Corollary C.2. We will verify the three conditions required in Theorem C.1 one by one.\nThe only place we use the FC structure is for the first condition.\nLemma C.3. There’s a subgroup GW of O(m), and a group isomorphism τ : GX = O(d)→ GW , such that FC-NN[τ(R)(W)] ◦R = FC-NN[W], ∀W ∈ W, R ∈ GX .\nProof of Lemma C.3. 
By definition, FC-NN[W](x) could be written FC-NN[W2:L](σ(W1x)), which implies FC-NN[W](x) = FC-NN[W1R−1,W2:L](Rx), ∀R ∈ O(d), and thus we can pick τ(R) = O ∈ O(m), where O(W) = [W1R−1,W2:L], and GW = τ(O(d)).\nA notable property of Gradient Descent is that it is invariant under orthogonal re-parametrization. Formally, given loss function L : Rm → R and parameters W ∈ Rm, an orthogonal re-parametrization of the problem is to replace (L,W ) by (L ◦O−1, OW ), where O ∈ Rm×m is an orthogonal matrix. Lemma C.4 (Gradient Descent is invariant under orthogonal re-parametization). For any L,W and orthogonal matrix O ∈ Rm×m, we have OGDL(W ) = GDL◦O−1(OW ).\nProof of Lemma C.4. By definition, it suffices to show that for each i ∈ [n], and every W and W′ = OW,\nO∇W`(FC-NN(W)(xi), yi) = ∇W′`(FC-NN(O−1W′)(xi), yi),\nwhich holds by chain rule.\nFor any R ∈ O(d), and set O = τ(R) by Lemma C.3, [L ◦ O−1](W) =∑n i=1 `(yiFC-NN[O −1(W)](xi)) = ∑n i=1 `(yiFC-NN[W](Rxi)). The second condition in Theorem C.1 is satisfied by plugging above equality into Lemma C.4.\nThe third condition is also satisfied since the initialization distribution is i.i.d. Gaussian, which is known to be orthogonal invariant. In fact, from the proof, it suffices to have the initialization of the first layer invariant under GX .\nCorollary C.5. FC nets trained with newton’s method from zero initialization for the first layer and any initialization for the rest parameters is GL(d)-equivariant, or equivariant under the group of invertible linear transformations.\nHere, Netwon’s method means to use NT(W) = W − η(∇2L(W))−1∇L(W) as the update rule and we assume∇2L(W) is invertible. Proof is deferred into Appendix, .\nProof of Corollary C.5. The proof is almost the same as that of Corollary C.2, except the following modifications.\nCondition 1: If we replace theO(d),O(m) by GL(d),GL(m) in the statement and proof Lemma C.3, the lemma still holds.\nCondition 2:By chain rule, one can verify the update rule Newton’s method is invariant under invertible linear re-parametization, i.e. OGDL(W ) = NTL◦O−1(OW ), for all invertible matrix O.\nCondition 3: Since the first layer is initialized to be 0, it is invariant under any linear transformation.\nRemark C.2. The above results can be easily extended to the case of momentum and Lp regularization. For momentum, we only need to ensure that the following update rule, W(t+1) = GDM(W(t),W(t−1),M, {xi, yi}ni=1) = (1 + γ)W(t) − γW(t−1) − η∇L(W(t)), also satisfies the property in Lemma C.4. For Lp regularization, because ‖W‖p is independent of {xi, yi} n i=1, we only need to ensure ‖W‖p = ‖τ(R)(W)‖p , ∀R ∈ GX , which is easy to check when GX only contains permutation or sign-flip." }, { "heading": "C.1 EXAMPLES OF EQUIVARIANCE FOR NON-ITERATIVE ALGORITHMS", "text": "To demonstrate the wide application of our lower bounds, we give two more examples of algorithmic equivariance where the algorithm is not iterative. The proofs are folklore. Definition C.2. Given a positive semi-definite kernel K, the Kernel Regression algorithm REGK is defined as:\nREGK({xi, yi}ni=1)(x) := 1 [ K(x,XN ) ·K(XN ,XN )†y ≥ 0 ] where K(XN ,XN ) ∈ Rn×n, [K(XN ,XN )]i,j = K(xi,xj), y = [y1, y2, . . . , yN ]> and K(x,XN ) = [K(x,x1), . . . ,K(x,xN )].\nKernel Regression: If kernel K is GX -equivariant, i.e., ∀g ∈ GX ,x,y ∈ X , K(g(x), g(y)) = K(x,y), then algorithm REGK is GX -equivariant. ERM: If F = F ◦ GX , and argminh∈F ∑n i=1 1 [h(xi) 6= yi] is unique, then ERMF is GX - equivariant." 
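The coupling used in the proof of Theorem C.1 can also be checked numerically. The sketch below (illustrative sizes and learning rate; squared loss is used in place of a classification surrogate) trains a 2-layer ReLU net twice, once on data X and once on rotated data with the first layer transformed by τ(R) from Lemma C.3, and confirms that the two runs make identical predictions on correspondingly rotated test points. Averaging over the Gaussian initialization of the first layer then yields the distributional statement of Corollary C.2:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, n, T, lr = 8, 16, 12, 500, 0.05

X = rng.standard_normal((n, d))
y = np.sign(rng.standard_normal(n))                # arbitrary +-1 labels
R, _ = np.linalg.qr(rng.standard_normal((d, d)))   # a random rotation

relu = lambda z: np.maximum(z, 0.0)

def train(X, W1, w2):
    # Plain gradient descent on the squared loss of a 2-layer ReLU net.
    W1, w2 = W1.copy(), w2.copy()
    for _ in range(T):
        H = relu(X @ W1.T)                 # (n, h) hidden activations
        g = (H @ w2 - y) / n               # d(loss)/d(predictions)
        W1 -= lr * ((g[:, None] * (H > 0)) * w2).T @ X
        w2 -= lr * H.T @ g
    return W1, w2

W1 = rng.standard_normal((h, d))
w2 = rng.standard_normal(h)

# Coupled runs: the second sees rotated inputs x -> R x and starts from the
# transformed first layer W1 R^{-1} = W1 R^T, i.e. tau(R)(W) of Lemma C.3.
M1 = train(X, W1, w2)
M2 = train(X @ R.T, W1 @ R.T, w2)

x = rng.standard_normal(d)
predict = lambda M, x: relu(x @ M[0].T) @ M[1]
print(predict(M1, x), predict(M2, R @ x))  # equal up to float round-off
```

The same check goes through for the other update rules in Table 1; only condition 2 of Theorem C.1, invariance of the update under the joint group action, is used.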
}, { "heading": "D OMITTED PROOFS", "text": "" }, { "heading": "D.1 PROOFS OF SAMPLE COMPLEXITY REDUCTION FOR GENERAL EQUIVARIANCE", "text": "Given GX -equivariant algorithm A, by definition, N ∗(A,P, ε) = N ∗(A,P ◦ g−1, ε),∀g ∈ GX . Consequently, we have N ∗(A,P, ε) = N ∗(A,P ◦ GX , ε). (12) Lemma D.1. Let A be the set of all algorithms and AGX be the set of all GX -equivariant algorithms, the following inequality holds. The equality is attained when GX is a compact group.\ninf A∈AGX N ∗(A,P, ε) ≥ inf A∈A N ∗(A,P ◦ GX , ε) (13)\nProof of Lemma D.1. Take infimum over AGX over the both side of Equation 12, and note that AGX ⊂ A, Inequality 13 is immediate. Suppose the group GX is compact and let µ be the Haar measure on it, i.e. ∀S ⊂ GX , g ∈ GX , µ(S) = µ(g◦S). We claim for each algorithmA, the sample complexity of the following equivariant algorithm A′ is no higher than that of A on P GX :\nA′({xi, yi}ni=1) = A({g(xi), yi} n i=1) ◦ g, where g ∼ µ.\nBy the definition of Haar measure, A′ is GX -equivariant. Moreover, for any fixed n ≥ 0, we have inf P∈P E (xi,yi)∼P [errP (A′({xi, yi}ni=1))] = infP∈P Eg∼µ E(xi,yi)∼P◦g−1 [errP (A({xi, yi}ni=1))]\n≥ inf P∈P inf g∈GX E (xi,yi)∼P◦g−1 [errP (A({xi, yi}ni=1))] = infP∈P◦GX E(xi,yi)∼P [errP (A({xi, yi}ni=1))] ,\nwhich implies infA∈AGX N ∗(A,P, ε) ≤ infA∈AN ∗(A,P ◦ GX , ε).\nProof of Theorem 5.1. Simply note that (PX H)◦GX = ∪g∈GX (PX ◦g) (H◦g−1) = ∪g∈GXPX (H ◦ g−1) = PX (H ◦ GX ), the theorem is immediate from Lemma D.1." }, { "heading": "D.2 PROOF OF THEOREM 4.1", "text": "Lemma D.2. Define hU = sign [ x>1:dU xd+1:2d ] , ∀U ∈ Rd×d, we haveH = {hU | U ∈ O(d)} ⊆\nsign [∑d\ni=1 x 2 i − ∑2d i=d+1 x 2 i ] ◦ O(2d).\nProof. Note that\n[ 0 U U> 0 ] = [ Id 0 0 U> ] · [ 0 Id Id 0 ] · [ Id 0 0 U ] ,\nand\n[ 0 Id Id 0 ] = [√ 2 2 Id − √ 2 2 Id√\n2 2 Id\n√ 2 2 Id\n] · [ Id 0 0 −Id ] · [ √ 2 2 Id\n√ 2 2 Id\n− √ 2 2 Id √ 2 2 Id\n] ,\nthus for any U ∈ O(d), ∀x ∈ R2d, hU (x) = sign [ x>1:dU xd+1:2d ] = sign [ x> [ 0 U U> 0 ] x ] =sign [ gU (x) > [ Id 0 0 −Id ] gU (x) ] ∈ h∗ ◦ O(2d),\n(14)\nwhere gU (x) = [ Id 0 0 U ] · [√ 2 2 Id − √ 2 2 Id√\n2 2 Id\n√ 2 2 Id\n] · x is an orthogonal transformation on R2d.\nLemma D.3. Define hU = sign [ x>1:dU xd+1:2d ] , ∀U ∈ Rd×d, and H = {hU | U ∈ O(d)}, we have\nVCdim(H) ≥ d(d− 1) 2 .\nProof. Now we claim H shatters {ei + ed+j}1≤i<j≤d, i.e. O(d) can shatter {eie>j }1≤i<j≤d, or for any sign pattern {σij}1≤i<j≤d, there exists U ∈ O(d), such that sign [〈 U, eie > j 〉] = σij , which implies VCdim(H) ≥ d(d−1)2 .\nLet so(d) = {M |M = −M>,M ∈ Rd×d}, we know\nexp(u) = Id + u+ u2\n2 + · · · ∈ SO(d), ∀u ∈ so(d).\nThus for any sign pattern {σij}1≤i<j≤d, let u = ∑\n1≤i<j≤d σij(eie\n> j − eje>i ) and λ→ 0+,\nsign [〈\nexp(λu), eie > j\n〉] = sign [ 0 + λσij +O(λ 2) ] = sign [σij +O(λ)] = σij .\nTheorem 4.1 (All distributions, single hypothesis). Let P = {all distributions} {h∗}. For any orthogonal equivariant algorithms A, N (A,P, ε, δ) = Ω((d2 + ln 1δ )/ε), while there’s a 2-layer ConvNet architecture, such that N (ERMCNN,P, ε, δ) = O ( 1 ε ( log 1ε + log 1 δ )) .\nProof of Theorem 4.1. Lower bound: Suppose d = 2d′ for some integer d′, we construct P = PX H, where PX is the set of all possible distributions on X = R3k, and H = {sign [∑d′ i=1 x 2 i − ∑2d′ i=d′+1 x 2 i ] }. By Lemma D.2, H′ = {sign [ x>1:dU xd+1:2d ] | U ∈ O(d′)} ⊆ H ◦ O(d). By Theorem 5.1, we have\ninf A∈AGX N ∗(A,PX H, ε) ≥ inf A∈A N ∗(A,PX (H ◦ GX ), ε) ≥ inf A∈A N ∗(A,PX H′, ε) (15)\nBy the lower bound in Theorem B.1, we have infA∈AN ∗(A,PX H′, ε) ≥ VCdim(H′)+ln 1δ\nε . 
By\nLemma D.3 VCdim(H′) ≥ d ′(d′−1)\n2 = Ω(d 2).\nUpper Bound: Take CNN as defined in Section 3.1 with d = 2d′, r = 2, k = 1, σ : Rd′ → R, σ(x) = ∑d′ i=1 x 2 i (square activation + average pooling), we have\nFCNN = { sign [∑2 i=1 ai ∑d′ j=1 x 2 (i−1)d′+jw 2 1 + b ] |a1, a2, w1, b ∈ R } .\nNote that min h∈FCNN errP (h) = 0, ∀P ∈ P , and the VC dimension of F is 3, by Theorem B.1, we have ∀P ∈ P , w.p. 1− δ, errP (ERMFCNN({xi, yi} n i=1)) ≤ ε, if n = Ω ( 1 ε ( log 1ε + log 1 δ ) )) .\nConvergence guarantee for Gradient Descent: We initialize all the parameters by i.i.d. standard gaussian and train the second layer by gradient descent only, i.e. set the LR of w1 as 0. (Note training the second layer only is still a orthogonal-equivariant algorithm for FC nets, thus it’s a valid separation.)\nFor any convex non-increasing surrogate loss of 0-1 loss l satisfying l(0) ≥ 1, limx→∞ l(x) = 0 e.g. logistic loss, we define the loss of the weight W as (xk,i is the kth coordinate of xi)\nL(W) = n∑ i=1 l(FCNN[W](xi)yi) = n∑ i=1 l 2∑ k=1 ai d′∑ j=1 x2(k−1)d′+j,iw 2 1 + b yi ,\nwhich is convex in ai and b. Note w1 6= 0 with probability 1, which means the data are separable even with fixed first layer, i.e. mina,b L(W) = L(W) |a=a∗,b=0= 0, where a∗ is the ground truth. Thus with sufficiently small step size, GD converges to 0 loss solution. By the definition of surrogate loss, L(W) < 1 implies for xi, l(xiyi) < 1 and thus the training error is 0." }, { "heading": "D.3 PROOFS OF LEMMAS FOR THEOREM 5.2", "text": "Lemma D.4. Define hU = sign [ x>1:dU xd+1:2d ] , H = {hU | U ∈ O(d)}, and ρ(U, V ) := ρX (hU , hV ) = Px∼N(0,I2d) [hU (x) 6= hV (x)]. There exists a constant C, such that the packing\nnumber D(H, ρX , ε) = D(O(d), ρ, ε) ≥ ( C ε ) d(d−1) 2 .\nProof of Lemma D.4. The key idea here is to first lower bound ρX (U, V ) by ‖U − V ‖F / √ d and apply volume argument in the tangent space of Id in O(d). We have\nρ(hU , hV ) = P x∼N(0,I2d) [hU (x) 6= hV (x)]\n= P x∼N(0,I2d)\n[( x>1:dU xd+1:2d ) ( x>1:dV xd+1:2d ) < 0 ]\n= 1\nπ E\nx1:d∼N(0,Id)\n[ arccos ( x>1:dUV >x1:d\n‖x1:d‖2\n)]\n≥ 1 π E x1:d∼N(0,Id)\n[√ 2− 2 x>1:dUV >x1:d\n‖x1:d‖2\n] (by Lemma A.1)\n= 1\nπ E\nx∼Sd−1\n[√ 2− 2x>UV >x ] = 1\nπ E\nx∼Sd−1\n[∥∥(U> − V >)x∥∥ F ] ≥C1 ‖U − V ‖F / √ d (by Lemma A.2)\n(16)\nBelow we show it suffices to pack in the 0.4 `∞ neighborhood of Id. Let so(d) be the Lie algebra of SO(d), i.e., {M ∈ Rd×d | M = −M>}. We also define the matrix exponential mapping exp : Rd×d → Rd×d, where exp(A) = A+ A 2\n2! + A3\n3! + · · · . It holds that exp(so(d)) = SO(d) ⊆ O(d). The benefit of covering in such neighborhood is that it allows us to translate the problem into the tangent space of Id by the following lemma.\nLemma D.5 (Implication of Lemma 4 in (Szarek, 1997)). For any matrix A,B ∈ so(d), satisfying that ‖A‖∞ ≤ π 4 , ‖B‖∞ ≤ π 4 , we have\n0.4 ‖A−B‖F ≤ ‖exp(A)− exp(B)‖F ≤ ‖A−B‖F . (17)\nTherefore, we have\nD(H, ρX , ε) ≥ D(O(d), C1 ‖·‖F / √ d, ε) ≥ D(so(d) ∩ π\n4 Bd\n2 ∞ , C1 ‖·‖F / √ d, 2.5ε). (18)\nNote that so(d) is a d(d−1)2 -dimensional subspace of R d2 , by Inverse Santalo’s inequality (Lemma 3, (Ma & Wu, 2015)), we have\n( vol(so(d) ∩Bd2∞) vol(so(d) ∩Bd22 ) ) 2 d(d−1) ≥ C2 √ dim(so(d))\nE G∼N(0,Id2 )\n[∥∥Πso(d)(G)∥∥∞] .\nwhere vol(·) is the d(d−1)2 volume defined in the space of so(d) and Πso(d)(G) = G−G>\n2 is the projection operator onto the subspace so(d). 
We further have\nE G∼N(0,Id2 ) [∥∥Πso(d)(G)∥∥∞] = E G∼N(0,Id2 ) [∥∥∥∥G−G>2 ∥∥∥∥ ∞ ] ≤ E G∼N(0,Id2 ) [‖G‖∞] ≤ C3 √ d,\nwhere the last inequality is by Theorem 4.4.5, Vershynin (2018).\nFinally, we have\nD(so(d) ∩ π 4 Bd 2\n∞ , C1 ‖·‖F / √ d, 2.5ε)\n=D(so(d) ∩Bd 2 ∞ , ‖·‖F , 10 √ dε\nC1π )\n≥vol(so(d) ∩B d2 ∞) vol(so(d) ∩Bd22 ) × ( C1π 10 √ dε ) d(d−1) 2\n≥\nC1C2π √ d(d−1) 2\n10dε\n d(d−1) 2\n:=\n( C\nε\n) d(d−1) 2\n(19)" }, { "heading": "D.4 PROOF OF THEOREM 5.3", "text": "Theorem 5.3 (Single distribution, multiple functions). There is a problem with single input distribution, P = {PX } H = {N(0, Id)} {sign [∑d i=1 αix 2 i ] | αi ∈ R}, such that for any orthogonal equivariant algorithms A and ε > 0, N ∗(A,P, ε) = Ω(d2/ε), while there’s a 2-layer ConvNets architecture, such that N (ERMCNN,P, ε, δ) = O( d log 1ε+log 1 δ ε ). Proof of Theorem 5.3. Lower bound: Note P = {N(0, Id)} H, whereH = {sign [∑d i=1 αix 2 i ] | αi ∈ R}. Since N(0, Id) is invariant under all orthogonal transformations, by Theorem 5.1, inf\nequivariantA N ∗(A, N(0, Id) ◦ H, ε0) = inf A N ∗(A, N(0, Id) (H ◦ O(d)), ε0). Furthermore, it can be show that H ◦ O(d) = {sign [∑\ni,j βijxixj ] | βij ∈ R}, the sign functions of all quadratics in\nRd. Thus it suffices to show learning quadratic functions on Gaussian distribution needs Ω(d2/ε) samples for any algorithm (see Lemma D.6, where we assume the dimension d can be divided by 4).\nUpper bound:Take CNN as defined in Section 3.1 with d = d′, r = 1, k = 1, σ : R→ R, σ(x) = x2 (square activation + no pooling), we have FCNN = { sign [∑d i=1 aiw 2 1x 2 i + b ] |ai, w1, b ∈ R } ={\nsign [∑d\ni=1 aix 2 i + b ] |ai, b ∈ R } .\nNote that min h∈FCNN errP (h) = 0, ∀P ∈ P , and the VC dimension of F is d+ 1, by Theorem B.1, we have ∀P ∈ P , w.p. 1− δ, errP (ERMFCNN({xi, yi} n i=1)) ≤ ε, if n = Ω ( 1 ε ( d log 1ε + log 1 δ ) )) .\nConvergence guarantee for Gradient Descent: We initialize all the parameters by i.i.d. standard gaussian and train the second layer by gradient descent only, i.e. set the LR of w1 as 0. (Note training the second layer only is still a orthogonal-equivariant algorithm for FC nets, thus it’s a valid separation.)\nFor any convex non-increasing surrogate loss of 0-1 loss l satisfying l(0) ≥ 1, limx→∞ l(x) = 0 e.g. logistic loss, we define the loss of the weight W as (xk,i is the kth coordinate of xi)\nL(W) = n∑ i=1 l(FCNN[W](xi)yi) = n∑ i=1 l\n( (\nd∑ k=1 w21aix 2 k,i + b)yi\n) ,\nwhich is convex in ai and b. Note w1 6= 0 with probability 1, which means the data are separable even with fixed first layer, i.e. mina,b L(W) = L(W) |a=a∗,b=0= 0, where a∗ is the ground truth.\nThus with sufficiently small step size, GD converges to 0 loss solution. By the definition of surrogate loss, L(W) < 1 implies for xi, l(xiyi) < 1 and thus the training error is 0." }, { "heading": "D.5 PROOF OF LEMMA D.6", "text": "Lemma D.6. For A ∈ Rd×d, we define MA ∈ R2d×2d as MA = [ A 0 0 Id ] , and hA :\nR4d → {−1, 1} as hA(x) = sign [ x>1:2dMAx2d+1:4d ] . Then for H = {hA | ∀A ∈ Rd×d} ⊆\n{sign [ x>Ax]|∀A ∈ R4d×4d ] }, satisfies that it holds that for any d, algorithm A and ε > 0,\nN ∗(A, {N(0, I4d)} H, ε) = Ω( d2\nε ).\nProof of Lemma D.6. Below we will prove a Ω( ( 1 ε )d2 ) lower bound for packing number, i.e. D(H, ρX , 2ε0) = D(Rd×d, ρ, 2ε0), where ρ(U, V ) = ρX (hU , hV ). Then we can apply Long’s improved version Equation (2) of Benedek-Itai’s lower bound and get a Ω(d2/ε) sample complexity lower bound. 
The reason that we can get the correct rate of ε is that the VCdim(H) is exactly equal to the exponent of the packing number. (cf. the proof of Theorem 5.2)\nSimilar to the proof of Theorem 5.2, the key idea here is to first lower bound ρ(U, V ) by ‖U − V ‖F / √ d and apply volume argument. Recall for A ∈ Rd×d, we define MA ∈ R2d×2d\nas MA = [ A 0 0 Id ] , and hA : R4d → {−1, 1} as hA(x) = sign [ x>1:2dMAx2d+1:4d ] . Then for H = {hA | ∀A ∈ Rd×d} . Below we will see it suffices to lower bound the packing number of a subset of Rd×d, i.e. Id + 0.1Bd 2 ∞ , where B d2\n∞ is the unit spectral norm ball. Clearly ∀x, ‖x‖2 = 1,∀U ∈ Id + 0.1Bd 2 ∞ , 0.9 ≤ ‖Ux‖2 ≤ 1.1.\nThus ∀U, V ∈ Id + 0.1Bd 2\n∞ we have, ρX (hU , hV ) = P\nx∼N(0,I4d) [hU (x) 6= hV (x)]\n= P x∼N(0,I4d)\n[( x>1:2dMU x2d+1:4d ) ( x>1:2dMV x2d+1:4d ) < 0 ]\n= 1\nπ E\nx1:2d∼N(0,I2d)\n[ arccos ( x>1:2dMUM\n> V x1:2d∥∥M>U x1:2d∥∥2 ∥∥M>V x1:2d∥∥2\n)]\n≥ 1 π E x1:2d∼N(0,I2d)\n[√ 2− 2\nx>1:2dMUM > V x1:2d∥∥M>U x1:2d∥∥2 ∥∥M>V x1:2d∥∥2\n] (by Lemma A.1)\n≥ √ 2\n1.1π E\nx1:2d∼N(0,I2d) [√∥∥M>U x1:2d∥∥2 ∥∥M>V x1:2d∥∥2 − x>1:2dMUM>V x1:2d] = 1\n1.1π E\nx1:2d∼N(0,I2d) [√∥∥(M>U −M>V )x1:2d∥∥22 − (∥∥M>U x1:2d∥∥2 − ∥∥M>V x1:2d∥∥2)2] ≥ 1\n1.1π ( E x1:2d∼N(0,I2d) [∥∥(M>U −M>V )x1:2d∥∥2] − E\nx1:2d∼N(0,I2d) [∣∣∥∥M>U x1:2d∥∥2 − ∥∥M>V x1:2d∥∥2∣∣]) ≥ C0\n1.1π E\nx1:2d∼N(0,I2d) [∥∥(M>U −M>V )x1:2d∥∥2] (by Lemma D.7) ≥C1 ‖MU −MV ‖F / √ d (by Lemma A.2) =C1 ‖U − V ‖F / √ d\n(20)\nIt remains to lower bound the packing number. We have\nM(0.1Bd 2 ∞ , C1 ‖·‖F / √ d, ε)\n≥vol(B d2 ∞) vol(Bd22 ) × ( 0.1C1√ dε )d2\n≥ ( C\nε\n)d2 ,\n(21)\nfor some constant C. The proof is completed by plugging the above bound and VCdim(H) = d2 into Equation (2).\nLemma D.7. Suppose x,x ∼ N(0, Id), then ∀R,S ∈ Rd×d, we have\nE x [‖(R− S)x‖2]− E x,y [∣∣∣∣√‖Rx‖22 + ‖y‖22 −√‖Sx‖22 + ‖y‖22∣∣∣∣] ≥ C0 Ex [‖(R− S)x‖2] , (22) for some constants C0 independent of R,S and d.\nProof of Lemma D.7. Note that∣∣∣∣√‖Rx‖22 + ‖y‖22 −√‖Sx‖22 + ‖y‖22∣∣∣∣ = |‖Rx‖2 − ‖Sx‖2|\n‖Rx‖2 + ‖Sx‖2√ ‖Rx‖22 + ‖y‖ 2 2 + √ ‖Sx‖22 + ‖y‖ 2 2\n≤‖(R− S)x‖2 ‖Rx‖2 + ‖Sx‖2√\n‖Rx‖22 + ‖y‖ 2 2 + √ ‖Sx‖22 + ‖y‖ 2 2\nLet F (x, d) be the cdf of chi-square distribution, i.e. F (x, d) = Px [ ‖x‖22 ≤ x ] . Let z = xd , we\nhave F (zd, d) ≤ (ze1−z)d/2 ≤ (ze1−z)1/2. Thus Py [ ‖y‖22 ≤ d/2 ] < 1, which implies for any ‖x‖2 ≤ 10 √ d,\nE y\n[∣∣∣∣√‖Rx‖22 + ‖y‖22 −√‖Sx‖22 + ‖y‖22∣∣∣∣]\n≤‖(R− S)x‖2 E y ‖Rx‖2 + ‖Sx‖2√ ‖Rx‖22 + ‖y‖ 2 2 + √ ‖Sx‖22 + ‖y‖ 2 2 ≤(1− α1) ‖(R− S)x‖2 ,\nfor some 0 < α1.\nTherefore, we have\nE x [‖(R− S)x‖2]− E x,y [∣∣∣∣√‖Rx‖22 + ‖y‖22 −√‖Sx‖22 + ‖y‖22∣∣∣∣] ≥E\nx\n[ ‖(R− S)x‖2 1 [ ‖x‖ ≤ 10 √ d ]]\n− E x,y [∣∣∣∣√‖Rx‖22 + ‖y‖22 −√‖Sx‖22 + ‖y‖22∣∣∣∣1 [‖x‖2 ≤ 10√d]] ≥α1 E\nx\n[ ‖(R− S)x‖2 1 [ ‖x‖2 ≤ 10 √ d ]]\n≥α1α2 E x [‖(R− S)x‖2] ,\nfor some constant α2 > 0. Here we use the other side of the tail bound of cdf of chi-square, i.e. for z > 1, 1− F (zd, d) < (ze1−z)d/2 < (ze1−z)1/2." }, { "heading": "D.6 PROOFS OF THEOREM 5.4", "text": "Lemma D.8. Let M ∈ Rd×d, we have E x∼N(0,Id)\n[ (x>Mx)2 ] = ∥∥∥M+M>2 ∥∥∥2\nF + (tr[M ])2." }, { "heading": "Proof of Lemma D.8.", "text": "E x∼N(0,Id)\n[ (x>Mx)2 ] = E\nx∼N(0,Id) ∑ i,j,i′j′ xixjxi′xj′MijMi′j′ = ∑ i6=j (M2ij +MijMji +MiiMjj) ( E x∼N(0,1) [ x2 ])2 + ∑ i M2ii E x∼N(0,1) [ x4 ]\n= ∑ i6=j (M2ij +MijMji +MiiMjj) + 3 ∑ i M2ii\n= ∥∥∥∥M +M>2 ∥∥∥∥2 F + (tr[M ])2\nTheorem 5.4 (Single distribution, multiple functions, `2 regression). 
There is a problem with a single input distribution, $\mathcal{P}=\{P_{\mathcal{X}}\}\times\mathcal{H}=\{N(0,I_d)\}\times\{\sum_{i=1}^d\alpha_i x_i^2\mid\alpha_i\in\mathbb{R}\}$, such that for any orthogonal equivariant algorithm $\mathcal{A}$ and $\varepsilon>0$, $N^*(\mathcal{A},\mathcal{P},\varepsilon)\ge\frac{d(d+3)}{2}(1-\varepsilon)-1$, while there is a 2-layer ConvNet architecture such that $N^*(\mathrm{ERM}_{\mathrm{CNN}},\mathcal{P},\varepsilon)\le d$ for any $\varepsilon>0$.

Proof of Theorem 5.4. Lower bound: Similar to the proof of Theorem 5.3, it suffices to show that for any algorithm $\mathcal{A}$, $N^*(\mathcal{A},\mathcal{H}\circ O(d),\varepsilon)\ge\frac{d(d+3)}{2}(1-\varepsilon)-1$. Note that $\mathcal{H}\circ O(d)=\{\sum_{i,j}\beta_{ij}x_ix_j\mid\beta_{ij}\in\mathbb{R}\}$ is the set of all quadratic functions. For convenience we denote $h_M(x)=x^\top Mx$ for $M\in\mathbb{R}^{d\times d}$. We now claim that any learning algorithm $\mathcal{A}$ taking at most $n$ samples must suffer at least $\frac{d(d+1)}{2}-n$ loss if the ground-truth quadratic function is sampled from an i.i.d. Gaussian prior. Moreover, the loss is at most $\frac{d(d+3)}{2}$ for the trivial algorithm that always predicts 0. In other words, if the expected relative error satisfies $\varepsilon\le\frac{d(d+1)/2-n}{d(d+3)/2}$, we must have the expected sample complexity $N^*(\mathcal{A},\mathcal{P},\varepsilon)\ge n$. That is, $N^*(\mathcal{A},\mathcal{P},\varepsilon)\ge\frac{d(d+3)}{2}(1-\varepsilon)-1$.

(1) Upper bound for $\mathbb{E}[y^2]$. By Lemma D.8,
\[
\mathbb{E}_{M\sim N(0,I_{d^2})}\;\mathbb{E}_{x\sim P_{\mathcal{X}},\,y=x^\top Mx}\big[y^2\big]
= \mathbb{E}_{M\sim N(0,I_{d^2})}\Big[\Big\|\frac{M+M^\top}{2}\Big\|_F^2+(\mathrm{tr}[M])^2\Big]
= d+d+\frac{d(d-1)}{2}=\frac{d(d+3)}{2}.
\]

(2) Lower bound for the expected loss. The infimum of the test loss over all possible algorithms $\mathcal{A}$ is
\[
\begin{aligned}
&\inf_{\mathcal{A}}\;\mathbb{E}_{M\sim N(0,I_{d^2})}\Big[\mathbb{E}_{(x_i,y_i)\sim P_{\mathcal{X}}\circ h_M}\big[\ell_P\big(\mathcal{A}(\{x_i,y_i\}_{i=1}^n)\big)\big]\Big]\\
&=\inf_{\mathcal{A}}\;\mathbb{E}_{M\sim N(0,I_{d^2})}\Big[\mathbb{E}_{(x_i,y_i)\sim P_{\mathcal{X}}\circ h_M}\Big[\mathbb{E}_{x,y\sim P_{\mathcal{X}}\circ h_M}\Big[\big([\mathcal{A}(\{x_i,y_i\}_{i=1}^n)](x)-y\big)^2\Big]\Big]\Big]\\
&=\inf_{\mathcal{A}}\;\mathbb{E}_{M\sim N(0,I_{d^2})}\Big[\mathbb{E}_{x_i\sim P_{\mathcal{X}}}\Big[\mathbb{E}_{x\sim P_{\mathcal{X}}}\Big[\big([\mathcal{A}(\{x_i,h_M(x_i)\}_{i=1}^n)](x)-h_M(x)\big)^2\Big]\Big]\Big]\\
&\ge\mathbb{E}_{\substack{x_i,x\sim P_{\mathcal{X}}\\ M\sim N(0,I_{d^2})}}\Big[\mathrm{Var}_{x,x_i,M}\big[h_M(x)\mid\{x_i,h_M(x_i)\}_{i=1}^n,\,x\big]\Big]\\
&=\mathbb{E}_{\substack{x_i,x\sim P_{\mathcal{X}}\\ M\sim N(0,I_{d^2})}}\Big[\mathrm{Var}_M\big[h_M(x)\mid\{h_M(x_i)\}_{i=1}^n\big]\Big],
\end{aligned}
\]
where the inequality is achieved when $[\mathcal{A}(\{x_i,y_i\}_{i=1}^n)](x)=\mathbb{E}_M\big[h_M(x)\mid\{x_i,y_i\}_{i=1}^n\big]$.

Thus it suffices to lower bound $\mathrm{Var}_M[h_M(x)\mid\{h_M(x_i)\}_{i=1}^n]$ for fixed $\{x_i\}_{i=1}^n$ and $x$. For convenience we define $\mathbb{S}_d=\{A\in\mathbb{R}^{d\times d}\mid A=A^\top\}$ to be the linear space of all $d\times d$ symmetric matrices, with inner product $\langle A,B\rangle:=\mathrm{tr}[A^\top B]$, and $\Pi_n:\mathbb{R}^{d\times d}\to\mathbb{R}^{d\times d}$ as the projection operator onto the orthogonal complement in $\mathbb{S}_d$ of the $n$-dimensional space spanned by the $x_ix_i^\top$. By definition, we can expand
\[
xx^\top=\sum_{i=1}^n\alpha_i\, x_ix_i^\top+\Pi_n(xx^\top).
\]
Thus, even conditioned on $\{x_i,y_i\}_{i=1}^n$ and $x$,
\[
h_M(x)=\mathrm{tr}[xx^\top M]=\sum_{i=1}^n\alpha_i\,\mathrm{tr}[x_ix_i^\top M]+\mathrm{tr}[\Pi_n(xx^\top)M]
\]
still follows a Gaussian distribution, $N\big(0,\|\Pi_n(xx^\top)\|_F^2\big)$.

Note that we can always find symmetric matrices $E_i$ with $\|E_i\|_F=1$ and $\mathrm{tr}[E_i^\top E_j]=0$ for $i\neq j$ such that $\Pi_n(A)=\sum_{i=1}^k E_i\,\mathrm{tr}[E_i^\top A]$, where $k$, the rank of $\Pi_n$, is at least $\frac{d(d+1)}{2}-n$. Thus we have
\[
\begin{aligned}
\mathbb{E}_x\big[\|\Pi_n(xx^\top)\|_F^2\big] &= \mathbb{E}_x\Bigg\|\sum_{i=1}^k E_i\,\mathrm{tr}[E_i^\top xx^\top]\Bigg\|_F^2 = \sum_{i=1}^k\mathbb{E}_x\Big[\big\|E_i\,\mathrm{tr}[E_i^\top xx^\top]\big\|_F^2\Big]\\
&= \sum_{i=1}^k\mathbb{E}_x\big[(x^\top E_i^\top x)^2\big] \ge \sum_{i=1}^k\|E_i\|_F^2 \qquad\text{(by Lemma D.8)}\\
&\ge k \ge \frac{d(d+1)}{2}-n.
\end{aligned}
\]
Thus the infimum of the expected test loss is
\[
\begin{aligned}
\inf_{\mathcal{A}}\;\mathbb{E}_{M\sim N(0,I_{d^2})}\Big[\mathbb{E}_{(x_i,y_i)\sim P_{\mathcal{X}}\circ h_M}\big[\ell_P\big(\mathcal{A}(\{x_i,y_i\}_{i=1}^n)\big)\big]\Big]
&\ge \mathbb{E}_{\substack{x_i,x\sim P_{\mathcal{X}}\\ M\sim N(0,I_{d^2})}}\Big[\mathrm{Var}_M\big[h_M(x)\mid\{h_M(x_i)\}_{i=1}^n\big]\Big]\\
&= \mathbb{E}_{x_i\sim P_{\mathcal{X}}}\Big[\mathbb{E}_x\big[\|\Pi_n(xx^\top)\|_F^2\big]\Big]
\ge \frac{d(d+1)}{2}-n.
\end{aligned}
\]

Upper bound: We use the same CNN construction as in the proof of Theorem 5.3, i.e., the function class is $\mathcal{F}_{\mathrm{CNN}}=\{\sum_{i=1}^d a_iw_1^2x_i^2+b\mid a_i,w_1,b\in\mathbb{R}\}=\{\sum_{i=1}^d a_ix_i^2+b\mid a_i,b\in\mathbb{R}\}$. Thus, given $d+1$ samples, w.p. 1 the feature vectors $(x_1^2,x_2^2,\dots,x_d^2,1)$ are linearly independent, which means $\mathrm{ERM}_{\mathrm{CNN}}$ recovers the ground truth and thus has 0 loss." }, { "heading": "D.7 PROOF OF THEOREM 5.5", "text": "Theorem 5.5.
Let $t_i=e_i+e_{i+1}$ and $s_i=e_i+e_{i+2}$,⁴ and let $P$ be the uniform distribution on $\{(s_i,1)\}_{i=1}^{d}\cup\{(t_i,-1)\}_{i=1}^{d}$, which is the classification problem for local textures in a 1-dimensional image with $d$ pixels. Then for any permutation equivariant algorithm $\mathcal{A}$, $N(\mathcal{A},P,\frac{1}{8},\frac{1}{8})\ge N^*(\mathcal{A},P,\frac{1}{4})\ge\frac{d}{10}$. Meanwhile, $N(\mathrm{ERM}_{\mathrm{CNN}},P,0,\delta)\le\log_2\frac{1}{\delta}+2$, where $\mathrm{ERM}_{\mathrm{CNN}}$ stands for ERM over the function class of 2-layer ConvNets.

Proof of Theorem 5.5. Lower bound: We further define the permutation $g_i$ as $g_i(x)=x-(e_{i+1}-e_{i+2})(e_{i+1}-e_{i+2})^\top x$ for $i\in[d]$. Clearly, $g_i(t_i)=s_i$ and $g_i(s_i)=t_i$. For $i,j\in\{1,2,\dots,d\}$, we define $d(i,j)=\min\{(i-j)\bmod d,\,(j-i)\bmod d\}$. It can be verified that if $d(i,j)\ge 3$, then $g_i(s_j)=s_j$ and $g_i(t_j)=t_j$. For $x=s_i$ or $t_i$ and $x'=s_j$ or $t_j$, we define $d(x,x')=d(i,j)$.

Given $X_n,\mathbf{y}_n$, we define the event $B:=\{d(x,x_k)\ge 3,\ \forall k\in[n]\}$, and we have $\Pr[B]=\Pr_x[d(x,x_k)\ge 3,\ \forall k\in[n]]\ge\frac{d-\frac{d}{10}\cdot 5}{d}=\frac{1}{2}$. Therefore, we have
\[
\begin{aligned}
\mathrm{err}_P\big(\mathcal{A}(X_n,\mathbf{y}_n)\big) &= \Pr_{x,y,\mathcal{A}}\big[\mathcal{A}(X_n,\mathbf{y}_n)(x)\neq y\big] \ge \Pr_{x,y,\mathcal{A}}\big[\mathcal{A}(X_n,\mathbf{y}_n)(x)\neq y\mid B\big]\Pr[B]\\
&\ge \frac{1}{2}\Pr_{x,y,\mathcal{A}}\big[\mathcal{A}(X_n,\mathbf{y}_n)(x)\neq y\mid B\big]\\
&= \frac{1}{4}\Pr_{i,\mathcal{A}}\big[\mathcal{A}(X_n,\mathbf{y}_n)(s_i)\neq 1\mid B\big] + \frac{1}{4}\Pr_{i,\mathcal{A}}\big[\mathcal{A}(X_n,\mathbf{y}_n)(t_i)\neq -1\mid B\big]\\
&\overset{(3.2)}{=} \frac{1}{4}\Pr_{i,\mathcal{A}}\big[\mathcal{A}(g_i(X_n),\mathbf{y}_n)(g_i(s_i))\neq 1\mid B\big] + \frac{1}{4}\Pr_{i,\mathcal{A}}\big[\mathcal{A}(X_n,\mathbf{y}_n)(t_i)\neq -1\mid B\big]\\
&= \frac{1}{4}\Pr_{i,\mathcal{A}}\big[\mathcal{A}(X_n,\mathbf{y}_n)(t_i)\neq 1\mid B\big] + \frac{1}{4}\Pr_{i,\mathcal{A}}\big[\mathcal{A}(X_n,\mathbf{y}_n)(t_i)\neq -1\mid B\big] = \frac{1}{4}.
\end{aligned}
\]
Thus, for any permutation equivariant algorithm $\mathcal{A}$, $N^*(\mathcal{A},\{P\},\frac{1}{4})\ge\frac{d}{10}$.

Upper bound: Take the CNN as defined in Section 3.1 with $d'=d$, $r=1$, $k=2$, $\sigma:\mathbb{R}^d\to\mathbb{R}$, $\sigma(x)=\sum_{i=1}^d x_i^2$; we have $\mathcal{F}_{\mathrm{CNN}}=\big\{\mathrm{sign}\big[a_1\sum_{i=1}^d(w_1x_i+w_2x_{i-1})^2+b\big]\mid a_1,w_1,w_2,b\in\mathbb{R}\big\}$.

Note that for every $h\in\mathcal{F}_{\mathrm{CNN}}$ and every $1\le i\le d$, the pre-sign output is $a_1(2w_1^2+2w_2^2)+b$ on $s_i$ and $a_1(w_1^2+w_2^2+(w_1+w_2)^2)+b$ on $t_i$; thus the probability of $\mathrm{ERM}_{\mathcal{F}_{\mathrm{CNN}}}$ not achieving 0 error is at most the probability that all data in the training dataset are $t_i$'s or all are $s_i$'s (note that the training error of $\mathrm{ERM}_{\mathcal{F}_{\mathrm{CNN}}}$ is 0):
\[
\Pr\big[x_i\in\{s_j\}_{j=1}^d,\ \forall i\in[n]\big]+\Pr\big[x_i\in\{t_j\}_{j=1}^d,\ \forall i\in[n]\big]=2^{-n}\times 2=2^{-n+1}.
\]
⁴For a vector $x\in\mathbb{R}^d$, we define $x_i=x_{(i-1)\bmod d+1}$.

Convergence guarantee for gradient descent: We initialize all the parameters by i.i.d. standard Gaussians and train the second layer by gradient descent only, i.e., we set the learning rate of $w_1,w_2$ to 0. (Note that training the second layer only is still a permutation-equivariant algorithm for FC nets, thus it is a valid separation.)

For any convex, non-increasing surrogate loss $l$ of the 0-1 loss satisfying $l(0)\ge 1$ and $\lim_{x\to\infty}l(x)=0$, e.g., the logistic loss, we define the loss of the weights $\mathbf{W}$ as
\[
L(\mathbf{W})=\sum_{i=1}^n l\big(\mathcal{F}_{\mathrm{CNN}}[\mathbf{W}](x_i)\,y_i\big)
= N_s\, l\big(a_1(2w_1^2+2w_2^2)+b\big)+N_t\, l\big(-a_1(w_1^2+w_2^2+(w_1+w_2)^2)-b\big).
\]
Note that $w_1w_2\neq 0$ with probability 1, which means the data are separable even with the fixed first layer, i.e., $\inf_{a_1,b}L(\mathbf{W})=0$. Further note that $L(\mathbf{W})$ is convex in $a_1$ and $b$, which implies that with a sufficiently small step size, GD converges to a 0-loss solution. By the definition of the surrogate loss, $L(\mathbf{W})<1$ implies $l(\mathcal{F}_{\mathrm{CNN}}[\mathbf{W}](x_i)\,y_i)<1$ for every $i$, and thus the training error is 0." } ]
2021
null
SP:9fd718d9cc2318a1d6306c22a45b4e90ace9fd80
[ "The paper proposes a regularizer enforcing a novel form of sparsity that authors call \"layer sparsity\". Under certain conditions on layer weights, two consecutive layers in a deep neural network (with certain nonlinear activation functions) can be represented exactly as a single layer. The authors proposed a regularizer that can lead to such layer collapse thus resulting in shallower and more compact models." ]
Sparsity has become popular in machine learning, because it can save computational resources, facilitate interpretations, and prevent overfitting. In this paper, we discuss sparsity in the framework of neural networks. In particular, we formulate a new notion of sparsity that concerns the networks’ layers and, therefore, aligns particularly well with the current trend toward deep networks. We call this notion layer sparsity. We then introduce corresponding regularization and refitting schemes that can complement standard deep-learning pipelines to generate more compact and accurate networks.
[]
[ { "authors": [ "J. Alvarez", "M. Salzmann" ], "title": "Learning the number of neurons in deep networks", "venue": "In Adv. Neural Inf. Process Syst.,", "year": 2016 }, { "authors": [ "T. Ash" ], "title": "Dynamic node creation in backpropagation networks", "venue": "Connect. Sci.,", "year": 1989 }, { "authors": [ "S. Bakin" ], "title": "Adaptive regression and model selection in data mining problems", "venue": "PhD thesis, The Australian National University,", "year": 1999 }, { "authors": [ "A. Barron", "J. Klusowski" ], "title": "Approximation and estimation for high-dimensional deep learning networks", "venue": null, "year": 2018 }, { "authors": [ "A. Barron", "J. Klusowski" ], "title": "Complexity, statistical risk, and metric entropy of deep nets using total path variation", "venue": null, "year": 1902 }, { "authors": [ "M. Bello" ], "title": "Enhanced training algorithms, and integrated training/architecture selection for multilayer perceptron networks", "venue": "IEEE Trans. Neural Netw.,", "year": 1992 }, { "authors": [ "J. Bien", "I. Gaynanova", "J. Lederer", "C. Müller" ], "title": "Prediction error bounds for linear regression with the TREX", "venue": null, "year": 2019 }, { "authors": [ "S. Changpinyo", "M. Sandler", "A. Zhmoginov" ], "title": "The power of sparsity in convolutional neural networks", "venue": null, "year": 2017 }, { "authors": [ "M. Chichignoud", "J. Lederer", "M. Wainwright" ], "title": "A practical scheme and fast algorithm to tune the lasso with optimality guarantees", "venue": "J. Mach. Learn. Res.,", "year": 2016 }, { "authors": [ "T. Clanuwat", "M. Bober-Irizar", "A. Kitamoto", "A. Lamb", "K. Yamamoto", "D. Ha" ], "title": "Deep learning for classical Japanese literature", "venue": null, "year": 2018 }, { "authors": [ "J. Feng", "N. Simon" ], "title": "Sparse-input neural networks for high-dimensional nonparametric regression and classification", "venue": null, "year": 2017 }, { "authors": [ "J. Frankle", "M. Carbin" ], "title": "The lottery ticket hypothesis: finding sparse, trainable neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "X. Glorot", "A. Bordes", "Y. Bengio" ], "title": "Deep sparse rectifier neural networks", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2011 }, { "authors": [ "N. Golowich", "A. Rakhlin", "O. Shamir" ], "title": "Size-independent sample complexity of neural networks", "venue": null, "year": 2017 }, { "authors": [ "R. Hahnloser" ], "title": "On the piecewise analysis of networks of linear threshold neurons", "venue": "Neural Networks,", "year": 1998 }, { "authors": [ "R. Hahnloser", "R. Sarpeshkar", "M. Mahowald", "R. Douglas", "H. Seung" ], "title": "Digital selection and analogue amplification coexist in a cortex-inspired silicon", "venue": "circuit. Nature,", "year": 2000 }, { "authors": [ "S. Han", "H. Mao", "W. Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "T. Hastie", "R. Tibshirani", "M. Wainwright" ], "title": "Statistical learning with sparsity: The lasso and generalizations", "venue": "CRC press,", "year": 2015 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Int. Conf. Comput. Vis. Pattern Recognit.,", "year": 2016 }, { "authors": [ "J. 
Kim", "V. Calhoun", "E. Shim", "J.-H. Lee" ], "title": "Deep neural network with weight sparsity control and pre-training extracts hierarchical features and enhances classification performance: Evidence from whole-brain resting-state functional connectivity patterns of schizophrenia", "venue": null, "year": 2016 }, { "authors": [ "M. Kohler", "S. Langer" ], "title": "On the rate of convergence of fully connected very deep neural network regression estimates", "venue": null, "year": 1908 }, { "authors": [ "Y. LeCun", "L. Bottou", "Y. Bengio", "P. Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proc. IEEE,", "year": 1998 }, { "authors": [ "J. Lederer" ], "title": "Trust, but verify: benefits and pitfalls of least-squares refitting in high dimensions", "venue": null, "year": 2013 }, { "authors": [ "J. Lederer", "M. Vogt" ], "title": "Estimating the lasso’s effective noise", "venue": null, "year": 2004 }, { "authors": [ "S. Liang", "R. Srikant" ], "title": "Why deep neural networks for function approximation", "venue": null, "year": 2016 }, { "authors": [ "B. Liu", "M. Wang", "H. Foroosh", "M. Tappen", "M. Pensky" ], "title": "Sparse convolutional neural networks", "venue": "In IEEE Int. Conf. Comput. Vis. Pattern Recognit.,", "year": 2015 }, { "authors": [ "E. Salinas", "L. Abbott" ], "title": "A model of multiplicative neural responses in parietal cortex", "venue": "Proc. Natl. Acad. Sci. USA,", "year": 1996 }, { "authors": [ "S. Scardapane", "D. Comminiello", "A. Hussain", "A. Uncini" ], "title": "Group sparse regularization for deep neural networks", "venue": null, "year": 2017 }, { "authors": [ "J. Schmidhuber" ], "title": "Deep learning in neural networks: An overview", "venue": "Neural Networks,", "year": 2015 }, { "authors": [ "K. Simonyan", "A. Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "M. Taheri", "N. Lim", "J. Lederer" ], "title": "Balancing statistical and computational precision and applications to penalized linear regression with group sparsity", "venue": null, "year": 2016 }, { "authors": [ "M. Taheri", "F. Xie", "J. Lederer" ], "title": "Statistical guarantees for regularized networks", "venue": null, "year": 2006 }, { "authors": [ "M. Telgarsky" ], "title": "Benefits of depth in neural networks", "venue": "In Annual Conference on Learning Theory, volume 49 of Proc. Mach. Learn. Res.,", "year": 2016 }, { "authors": [ "R. Tibshirani" ], "title": "Regression shrinkage and selection via the lasso", "venue": "J. R. Stat. Soc. Ser. B. Stat. Methodol.,", "year": 1996 }, { "authors": [ "W. Wen", "C. Wu", "Y. Wang", "Y. Chen", "H. Li" ], "title": "Learning structured sparsity in deep neural networks", "venue": "In Adv. Neural Inf. Process Syst.,", "year": 2016 }, { "authors": [ "H. Xiao", "K. Rasul", "R. Vollgraf" ], "title": "Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms", "venue": null, "year": 2017 }, { "authors": [ "D. Yarotsky" ], "title": "Error bounds for approximations with deep ReLU networks", "venue": "Neural Networks,", "year": 2017 } ]
[ { "heading": null, "text": "Sparsity has become popular in machine learning, because it can save computational resources, facilitate interpretations, and prevent overfitting. In this paper, we discuss sparsity in the framework of neural networks. In particular, we formulate a new notion of sparsity that concerns the networks’ layers and, therefore, aligns particularly well with the current trend toward deep networks. We call this notion layer sparsity. We then introduce corresponding regularization and refitting schemes that can complement standard deep-learning pipelines to generate more compact and accurate networks." }, { "heading": "1 INTRODUCTION", "text": "The number of layers and the number of nodes in each layer are arguably among the most fundamental parameters of neural networks. But specifying these parameters can be challenging: deep and wide networks, that is, networks with many layers and nodes, can describe data in astounding detail, but they are also prone to overfitting and require large memory, CPU, energy, and so forth. The resource requirements can be particularly problematic for real-time applications or applications on fitness trackers and other wearables, whose popularity has surged in recent years. A promising approach to meet these challenges is to fit networks sizes adaptively, that is, to allow for many layers and nodes in principle, but to ensure that the final network is “simple” in that it has a small number of connections, nodes, or layers (Changpinyo et al., 2017; Han et al., 2016; Kim et al., 2016; Liu et al., 2015; Wen et al., 2016).\nPopular ways to fit such simple and compact networks include successively augmenting small networks (Ash, 1989; Bello, 1992), pruning large networks (Simonyan & Zisserman, 2015), or explicit sparsity-inducing regularization of the weight matrices, which we focus on here. An example is the `1-norm, which can reduce the number of connections. Another example is the `1-norm grouped over the rows of the weight matrices, which can reduce the number of nodes. It has been shown that such regularizers can indeed produce networks that are both accurate and yet have a small number of nodes and connections either in the first layer (Feng & Simon, 2017) or overall (Alvarez & Salzmann, 2016; Liu et al., 2015; Scardapane et al., 2017). Such sparsity-inducing regularizers also have a long-standing tradition and thorough theoretical underpinning in statistics (Hastie et al., 2015).\nBut while sparsity on the level of connections and nodes has been studied in some detail, sparsity on the level of layers is much less understood. This lack of understanding contrasts the current trend to deep network architectures, which is supported by state-of-the-art performances of deep networks (LeCun et al., 2015; Schmidhuber, 2015), recent approximation theory for ReLU activation networks (Liang & Srikant, 2016; Telgarsky, 2016; Yarotsky, 2017), and recent statistical theory (Golowich et al., 2017; Kohler & Langer, 2019; Taheri et al., 2020). Hence, a better understanding of sparsity on the level of layers seems to be in order.\nTherefore, we discuss in this paper sparsity with a special emphasis on the networks’ layers. Our key observation is that for typical activation functions such as ReLU, a layer can be removed if all its parameter values are non-negative. 
We leverage this observation in the development of a new regularizer that specifically targets sparsity on the level of layers, and we show that this regularizer can lead to more compact and more accurate networks.

Our three main contributions are:

1. We introduce a new notion of sparsity that we call layer sparsity.
2. We introduce a corresponding regularizer that can reduce network sizes.
3. We introduce an additional refitting step that can further improve prediction accuracies.

In Section 2, we specify our framework, discuss different notions of sparsity, and introduce our refitting scheme. In Section 3, we establish a numerical proof of concept. In Section 4, we conclude with a discussion." }, { "heading": "2 SPARSITY IN NEURAL NETWORKS", "text": "We first state our framework, then discuss different notions of sparsity, and finally introduce a refitting scheme." }, { "heading": "2.1 MATHEMATICAL FRAMEWORK", "text": "To fix ideas, we first consider fully-connected neural networks that model data according to
\[
y_i = f^1\big[W^1f^2\big[\cdots f^l[W^lx_i]\big]\big] + u_i, \quad (1)
\]
where $i\in\{1,\dots,n\}$ indexes the $n$ different samples, $y_i\in\mathbb{R}$ is the output, $x_i\in\mathbb{R}^d$ is the corresponding input with $d$ the input dimension, $l$ is the number of layers, $W^j\in\mathbb{R}^{p_j\times p_{j+1}}$ for $j\in\{1,\dots,l\}$ are the weight matrices with $p_1=1$ and $p_{l+1}=d$, $f^j:\mathbb{R}^{p_j}\to\mathbb{R}^{p_j}$ for $j\in\{1,\dots,l\}$ are the activation functions, and $u_i\in\mathbb{R}$ is the random noise. Extensions beyond fully-connected networks are straightforward—see Section 2.5.

We summarize the parameters in $W:=(W^1,\dots,W^l)\in\mathcal{V}:=\{V=(V^1,\dots,V^l):V^j\in\mathbb{R}^{p_j\times p_{j+1}}\}$, and we write for ease of notation
\[
f_V[x_i] := f^1\big[V^1f^2\big[\cdots f^l[V^lx_i]\big]\big] \quad (2)
\]
for $V\in\mathcal{V}$. Neural networks are usually fitted based on regularized estimators in Lagrange form
\[
\widehat{W}\in\operatorname*{argmin}_{V\in\mathcal{V}}\Big\{\mathrm{DataFit}[y_1,\dots,y_n,x_1,\dots,x_n]+h[V]\Big\} \quad (3)
\]
or constraint form
\[
\widehat{W}\in\operatorname*{argmin}_{\substack{V\in\mathcal{V}\\ h[V]\le 1}}\Big\{\mathrm{DataFit}[y_1,\dots,y_n,x_1,\dots,x_n]\Big\}, \quad (4)
\]
where $\mathrm{DataFit}:\mathbb{R}^n\times\mathbb{R}^{n\times d}\to\mathbb{R}$ is a data-fitting function such as least-squares $\sum_{i=1}^n(y_i-f_V[x_i])^2$, and $h:\mathcal{V}\to[0,\infty)$ is a regularizer such as the elementwise $\ell_1$-norm $\sum_{j,k,l}|(V^j)_{kl}|$. We are particularly interested in regularizers that induce sparsity." }, { "heading": "2.2 STANDARD NOTIONS OF SPARSITY", "text": "We first state two regularizers that are known in deep learning and the corresponding notions of sparsity.

[Figure 1: schematic comparison of a dense network with its connection-sparse, node-sparse, layer-sparse, and combined-sparse counterparts.]

Connection sparsity Consider the vanilla $\ell_1$-regularizer
\[
h^{\mathrm{C}}[V]:=\sum_{j=1}^l (r_{\mathrm{C}})_j\,|||V^j|||_1:=\sum_{j=1}^l (r_{\mathrm{C}})_j\sum_{v=1}^{p_j}\sum_{w=1}^{p_{j+1}}\big|(V^j)_{vw}\big|,
\]
where $r_{\mathrm{C}}\in[0,\infty)^l$ is a vector of tuning parameters. This regularizer is the deep learning equivalent of the lasso regularizer in linear regression (Tibshirani, 1996) and has received considerable attention recently (Barron & Klusowski, 2018; 2019; Kim et al., 2016). The regularizer acts on each individual connection, pruning a full network (first network from the left in Figure 1) to a more sparsely connected network (second network in Figure 1). We, therefore, propose to speak of connection sparsity.

Node sparsity Consider a grouped version of the above regularizer
\[
h^{\mathrm{N}}[V]:=\sum_{j=1}^l (r_{\mathrm{N}})_j\,|||V^j|||_{2,1}:=\sum_{j=1}^l (r_{\mathrm{N}})_j\sum_{v=1}^{p_j}\sqrt{\sum_{w=1}^{p_{j+1}}\big|(V^j)_{vw}\big|^2},
\]
where $r_{\mathrm{N}}\in[0,\infty)^l$ is again a vector of tuning parameters.
This regularizer is the deep learning equivalent of the group lasso regularizer in linear regression (Bakin, 1999) and has received some attention recently (Alvarez & Salzmann, 2016; Feng & Simon, 2017; Scardapane et al., 2017). The regularizer acts on all connections that go into a node simultaneously, rendering entire nodes inactive (third network in Figure 1). We, therefore, propose to speak of node sparsity." }, { "heading": "2.3 LAYER SPARSITY", "text": "We now complement the two existing regularizers and notions of sparsity with a new, third notion.

Layer sparsity Consider the regularizer
\[
h^{\mathrm{L}}[V]:=\sum_{j=1}^{l-1}(r_{\mathrm{L}})_j\,|||V^j|||_{2,+}:=\sum_{j=1}^{l-1}(r_{\mathrm{L}})_j\sqrt{\sum_{v=1}^{p_j}\sum_{w=1}^{p_{j+1}}\big(\mathrm{neg}[(V^j)_{vw}]\big)^2}, \quad (5)
\]
where $r_{\mathrm{L}}\in[0,\infty)^{l-1}$ is a vector of tuning parameters, and $\mathrm{neg}[a]:=\min\{a,0\}$ is the negative part of a real value $a\in\mathbb{R}$. This regularizer does not have an equivalent in linear regression, and it is also new in deep learning.

We argue that the regularizer can give rise to a new type of sparsity. The regularizer can be disentangled along the layers according to
\[
h^{\mathrm{L}}[V]=\sum_{j=1}^{l-1}(r_{\mathrm{L}})_j\,h^{\mathrm{L},j}[V^j]
\]
with
\[
h^{\mathrm{L},j}[V^j]:=\sqrt{\sum_{v=1}^{p_j}\sum_{w=1}^{p_{j+1}}\big(\mathrm{neg}[(V^j)_{vw}]\big)^2}\quad\text{for }j\in\{1,\dots,l-1\}.
\]
We then focus on an individual inner layer, that is, a layer that corresponds to an index $j\in\{2,\dots,l-1\}$. To fix ideas, we consider the popular ReLU activation (Glorot et al., 2011; Hahnloser, 1998; Hahnloser et al., 2000; Salinas & Abbott, 1996); in other words, we consider $(f^j)_q[t]:=f_{\mathrm{ReLU}}[t]:=\max\{t,0\}$ for all $j\in\{2,\dots,l\}$, $q\in\{1,\dots,p_j\}$, and $t\in\mathbb{R}$ (the activation of the output layer can be arbitrary). It is now easy to show that the regularizer $h^{\mathrm{L}}$ indeed induces sparsity on the level of layers.

Theorem 1 (Layer Sparsity). Consider $j\in\{2,\dots,l-1\}$, and define a merged weight matrix as $V^{j-1,j}:=V^{j-1}V^j\in\mathbb{R}^{p_{j-1}\times p_{j+1}}$. It holds that
\[
h^{\mathrm{L},j}[V^j]=0 \ \Rightarrow\ f^{j-1}\big[V^{j-1}f^j[V^jz]\big]=f^{j-1}\big[V^{j-1,j}z\big]\quad\text{for all }z\in[0,\infty)^{p_{j+1}}.
\]
Proof of Theorem 1. If $h^{\mathrm{L},j}[V^j]=0$, then $(V^j)_{qm}\ge 0$ for all $q\in\{1,\dots,p_j\}$, $m\in\{1,\dots,p_{j+1}\}$. Hence, it holds for all $q\in\{1,\dots,p_j\}$ that $(V^jz)_q\ge 0$ and, therefore, that $f^j[V^jz]=V^jz$. The theorem then follows from the fact that $V^{j-1,j}=V^{j-1}V^j$.

A key property of the ReLU function is positive homogeneity. That positive homogeneity can allow for moving weights between layers had been observed in Barron & Klusowski (2018); here, we use the positive homogeneity to merge layers. The idea is as follows: $h^{\mathrm{L},j}[V^j]=0$ means, in view of the stated theorem, that we can redefine the network of depth $l$ as a network of depth $l-1$ by removing the function $f^j$, replacing the weights $V^{j-1}$ by $V^{j-1,j}$, and then removing the $j$th layer altogether.

Theorem 1 can be applied sequentially to neighboring layers; hence, the regularization can merge not only one but many layers into one. In conclusion, our new regularizer $h^{\mathrm{L}}$ acts on all nodes and connections of each layer simultaneously, rendering entire layers inactive (fourth network in Figure 1). We, therefore, propose to speak of layer sparsity.

The concept of layer sparsity, and Theorem 1 in particular, does not hinge on the exact choice of the regularizer in (5): one can take any function $h^{\mathrm{L}}$ that can be disentangled along the layers as described and that ensures that $h^{\mathrm{L},j}[V^j]=0$ implies $\min_{k,l}(V^j)_{kl}\ge 0$. We illustrate layer sparsity with two examples.

Example 1 (Identity Activation).
We first highlight the meaning of layer sparsity in a simplistic setting that does not rely on Theorem 1. We consider identity activation, that is, $(f^j)_q[t]=t$ for all $j\in\{2,\dots,l\}$, $q\in\{1,\dots,p_j\}$, and $t\in\mathbb{R}$. The networks in (2) can then be written as
\[
f_V[x_i]=f^1\big[V^1\cdots V^lx_i\big].
\]
In other words, the initial $l$-layer network can be compressed into a one-layer network with activation function $f^1$ and parameter matrix $V^1\cdots V^l\in\mathbb{R}^{1\times d}$. This setting with identity activation is, of course, purely academic, but it motivates an important question: can parts of networks be compressed similarly in the case of ReLU?

Theorem 1 gives an answer to this question: if $h^{\mathrm{L},j}[V^j]=0$, then the $j$th and $(j-1)$th layers can be combined. In the extreme case $h^{\mathrm{L},2}[V^2]=\cdots=h^{\mathrm{L},l}[V^l]=0$ and non-negative input, the network can be condensed into a one-layer network just as in the linear case. In this sense, one can understand our layer regularizer as a measure for the networks' "distance to linearity." We detail this further in the following example.

Example 2 (ReLU Activation). We now illustrate how layer sparsity compresses and, therefore, simplifies networks in the case of ReLU activation. We fix an initial network $f_N$ parameterized by $N\in\mathcal{V}$. We identify the active layers of the network by
\[
S\equiv S[N]:=\big\{j\in\{2,\dots,l-1\}:h^{\mathrm{L},j}[N^j]\neq 0\big\}\cup\{1,l\}. \quad (6)
\]
Thus, $S$ and $\{1,\dots,l\}\setminus S$ contain the indexes of the relevant and irrelevant layers, respectively. (We always consider the input and output layers as active.) The level of sparsity, that is, the number of active layers, is $s:=|S|\le l$. Observe first that the theorem's restriction to $z$'s that have non-negative elements makes sense: by the definition of $f_{\mathrm{ReLU}}$, the outputs of every ReLU layer are non-negative. We now denote the indexes in $S$ in an orderly fashion: $j_1,\dots,j_s\in S$ such that $j_1<\cdots<j_s=l$. We then define scaled versions of the corresponding merged matrices: if $j_i-1\in S$ or $j_i\in\{1,l\}$, we do the "trivial merge" $M^{j_i}:=N^{j_i}\in\mathbb{R}^{p_{j_i}\times p_{j_i+1}}$; otherwise, we do the "non-trivial merge"
\[
M^{j_i}:=N^{j_{i-1}+1}\cdots N^{j_i}\in\mathbb{R}^{p_{j_{i-1}+1}\times p_{j_i+1}}.
\]
In other words, we merge all irrelevant layers between the $j_{i-1}$th and $j_i$th layers into the $j_i$th layer.

We can then compress the data-generating model in (1) into
\[
y_i=f^{j_1}\big[M^{j_1}f^{j_2}\big[\cdots f^{j_s}[M^{j_s}x_i]\big]\big]+u_i
\]
with $M:=(M^{j_1},\dots,M^{j_s})\in\mathcal{V}_S:=\{V=(V^1,\dots,V^s):V^i\in\mathbb{R}^{p_{j_{i-1}+1}\times p_{j_i+1}}\}$. Formulated differently, we can condense the original network according to
\[
f_N[x_i]=f_M[x_i]=f^{j_1}\big[M^{j_1}f^{j_2}\big[\cdots f^{j_s}[M^{j_s}x_i]\big]\big],
\]
that is, we can formulate the initial ReLU activation network with $l$ layers as a new ReLU activation network with $s$ layers.

The new network is still a ReLU activation network but has a smaller number of layers if $s<l$ and, consequently, a smaller number of parameters in total: the total number of parameters in the initial network is $\sum_{j=1}^l(p_j\times p_{j+1})$, while the total number of parameters in the transformed network is only $\sum_{i=1}^s(p_{j_{i-1}+1}\times p_{j_i+1})$.

Our concept for regularizing layers is substantially different from existing ones: our layer-wise regularizer induces weights to be non-negative, whereas existing layer-wise regularizers induce weights to be zero (Wen et al., 2016, Section 3.3). The two main advantages of our approach are that it (i) does not require shortcuts to avoid trivial networks and (ii) does not implicitly enforce connection or node sparsity.
We thus argue that our layer sparsity is a much more natural and appropriate way to capture and regularize network depths.

Layer sparsity more closely relates to ResNets (He et al., 2016). The recent popularity of ResNets is motivated by two observations: 1. Solvers seem to struggle with finding good minima of deep networks; even training accuracies can deteriorate when increasing the number of layers. 2. Allowing for linear mappings that short-circuit parts of the network seems to help solvers in finding better minima. From our viewpoint here, one can argue that ResNets use these linear mappings to regulate network depths adaptively and, therefore, are related to layer sparsity. But importantly, ResNets are even more complex than the networks they are based on, while our notion simplifies networks.

Since, as one can again verify readily, all three regularizers are convex, any combination of them is also convex. Such combinations can be used to obtain networks that are sparse in two or all three aspects (last network in Figure 1). In this sense, the different notions of sparsity are not competing but rather complementing each other." }, { "heading": "2.4 REFITTING", "text": "Sparse networks can be used directly, but they can also be a basis for further optimization: one can adjust the network architecture according to the non-zero pattern of the sparse network and then re-estimate the parameters of this smaller network. Such strategies are well-known in statistics and machine learning under the name refitting (Lederer, 2013; Chzhen et al., 2019). The theoretical underpinning of these strategies is the insight that regularization creates a bias that—in certain cases—can be alleviated by a subsequent "unbiasing" step. In deep learning, refitting has been studied recently under the name lottery-ticket hypothesis (Frankle & Carbin, 2019).

We now formulate a version of refitting for layer sparsity. We stay in the framework of Example 2 to keep the notation light.

Example 3 (ReLU Activation Cont.). Consider the model in (1) with the specifications of Example 2, and consider a corresponding layer-sparse estimator $\widehat{W}$ of the parameters, such as (3) with $h=h^{\mathrm{L}}$. In line with (6), we denote the set of the active layers by
\[
S=S[\widehat{W}]=\big\{j\in\{1,\dots,l-1\}:h^{\mathrm{L},j}[\widehat{W}^j]\neq 0\big\}\cup\{1,l\}
\]
and the corresponding parameter space of the condensed network by $\mathcal{V}_S\equiv\mathcal{V}_S[\widehat{W}]=\{V=(V^1,\dots,V^s):V^i\in\mathbb{R}^{p_{j_{i-1}+1}\times p_{j_i+1}}\}$, where $s:=|S|$. The least-squares refitted estimator for the parameters in the condensed network is then
\[
\widehat{W}_S\in\operatorname*{argmin}_{V\in\mathcal{V}_S}\Big\{\sum_{i=1}^n\big(y_i-f_V[x_i]\big)^2\Big\}. \quad (7)
\]
Hence, the estimator $\widehat{W}$ complemented with least-squares refitting yields the network
\[
f_{\widehat{W}_S}[x_i]=f^{j_1}\big[\widehat{W}_S^{j_1}f^{j_2}\big[\cdots f^{j_s}[\widehat{W}_S^{j_s}x_i]\big]\big]. \quad (8)
\]
This strategy corresponds to the "one-shot approach" in Frankle & Carbin (2019); one could extend it along the lines of their iterative approach, but the numerical results indicate that this is not necessary. Also, in contrast to the results in Frankle & Carbin (2019), our results indicate that keeping the initialization is not necessary either." }, { "heading": "2.5 EXTENSIONS BEYOND FULLY-CONNECTED NETWORKS", "text": "We have illustrated our ideas with feedforward networks that have fully connected layers, but the principles of layer sparsity apply much more generally. Consider a fixed hidden layer with index $j\in\{2,\dots,l-1\}$. In the fully-connected networks (2), this layer corresponds to a function $z\mapsto f^j[V^jz]$ with weights $V^j\in\mathbb{R}^{p_j\times p_{j+1}}$.
We now generalize these functions to
\[
z\mapsto f^j\Big[\sum_{k=1}^m V^{j,k}z+b^j\Big]
\]
with weights $V^j:=(V^{j,1},\dots,V^{j,m})\in\mathcal{M}_{j,1}\times\cdots\times\mathcal{M}_{j,m}$ and bias $b^j\in\mathcal{B}_j$, and with arbitrary nonempty subsets $\mathcal{M}_{j,1},\dots,\mathcal{M}_{j,m}\subset\mathbb{R}^{p_j\times p_{j+1}}$ and $\mathcal{B}_j\subset\mathbb{R}^{p_j}$. The corresponding layer-sparse regularizer is then
\[
h^{\mathrm{L},j}[V^j]:=\sqrt{\sum_{v=1}^{p_j}\sum_{w=1}^{p_{j+1}}\sum_{k=1}^m\big(\mathrm{neg}[(V^{j,k})_{vw}]\big)^2+\sum_{u=1}^{p_j}\big(\mathrm{neg}[(b^j)_u]\big)^2}.
\]
We can confirm immediately that this regularizer has the same effect as its analog in the fully-connected case. Indeed, under the assumptions of Theorem 1, it holds that
\[
h^{\mathrm{L},j}[V^j]=0\ \Rightarrow\ f^{j-1}\Big[V^{j-1}f^j\Big[\sum_{k=1}^m V^{j,k}z+b^j\Big]\Big]=f^{j-1}\big[V^{j-1,j}z+b^{j-1,j}\big]\quad\text{for all }z\in[0,\infty)^{p_{j+1}},
\]
where $V^{j-1,j}:=\sum_{k=1}^m V^{j-1}V^{j,k}\in\mathbb{R}^{p_{j-1}\times p_{j+1}}$ and $b^{j-1,j}:=V^{j-1}b^j\in\mathbb{R}^{p_{j-1}}$. In other words, if the value of the regularizer is zero, the $j$th layer can be merged into the $(j-1)$th layer as before. These observations highlight the fact that layer sparsity applies very generally: it only requires the properties of ReLU-type activations and the linearities that exist within most types of layers. As a concrete application, layer sparsity can compress networks that have convolutional layers, where $m$ specifies the number of feature maps and the $\mathcal{M}_{j,k}$ the non-zero patterns of the filters." }, { "heading": "3 EMPIRICAL STUDY", "text": "We now confirm in a brief empirical study the fact that layer regularization can 1. improve prediction accuracies and 2. reduce the number of active layers." }, { "heading": "3.1 ARTIFICIAL DATA", "text": "We start with artificial data." }, { "heading": "3.1.1 SIMULATION FRAMEWORK", "text": "We generate data according to the model in (2). The most outside activation function $f^1$ is the identity function, and the coordinates of all other activation functions $f^2,\dots,f^l$ are ReLU functions. The input vectors $x_1,\dots,x_n$ are jointly independent and standard normally distributed in $d$ dimensions; the noise random variables $u_1,\dots,u_n$ are independent of the input, jointly independent, and standard normally distributed in one dimension. For a given sparsity level $s_W\in[0,1]$, a vector $s\in\{0,1\}^{l-1}$ with independent Bernoulli distributed entries that have success parameter $s_W$ is generated. The entries of the parameter matrix $W^1$ are sampled independently from the uniform distribution on $(-2,2)$, and the entries of the parameter matrices $W^2,\dots,W^l$ are sampled independently from the uniform distribution on $(0,2)$ if $s_j=0$ and on $(-2,2)$ otherwise. Hence, the parameter $s_W$ controls the level of the layer sparsity: the smaller $s_W$, the higher the network's layer sparsity.

In concrete numbers, the input dimension is $d=2$, the network widths are $p_2=\cdots=p_l=5$, the number of hidden layers is $l-1\in\{10,25\}$, and the sparsity level is $s_W\in\{0.1,0.3,0.9\}$. Our settings and values represent, of course, only a very small part of possible networks in practice, but given the generality of our concepts, any attempt at an exhaustive simulation study must fail, and the simulations at least allow us (i) to corroborate our theoretical insights and (ii) to indicate that our concepts can be very useful in practice.

Datasets of 150 samples are generated; $n=100$ of the samples are assigned to training and the rest to testing. The relevant measures for an estimate $\widehat{W}$ of the network's parameters are the empirical mean squared error
\[
\widehat{\mathrm{mse}}\equiv\widehat{\mathrm{mse}}[\widehat{W}]:=\frac{1}{|\mathcal{T}|}\sum_{(y,x)\in\mathcal{T}}\big(y-f_{\widehat{W}}[x]\big)^2,
\]
over the test set $\mathcal{T}$ with cardinality $|\mathcal{T}|=50$, and the level of sparsity among the hidden layers
\[
\widehat{s}\equiv\widehat{s}[\widehat{W}]:=\big|\big\{j\in\{1,\dots,l-1\}:h^{\mathrm{L},j}[\widehat{W}^j]\neq 0\big\}\big|.
\]
Reported are the medians (and third quantiles in parentheses) over 30 simulation runs for each setting." }, { "heading": "3.1.2 METHODS", "text": "Our first method (SLS) is a standard least-squares complemented with the layer regularizer (5) in Lagrange form (3). The baseline for this estimator is vanilla least-squares (LS). Since our estimator—in contrast to least-squares—allows for merging layers, we can also complement it with our refitting scheme of Section 2.4 (FLS). The baseline for our refitted estimator is the least-squares estimator that "knows" the relevant layers beforehand (ILS), that is, a least-squares on the relevant layers $\mathcal{V}_S[W]$ with $W$ the true parameter—see Example 2. The latter estimator cannot be used in practice, but it can serve as a benchmark here in the simulations.

The objective functions are optimized by using mini-batch gradient descent with batch size 10, learning rate $10^{-2}$, and number of epochs 200 (for $l=10$, $s_W=0.1,0.3$), 300 (for $l=10$, $s_W=0.9$), 400 (for $l=25$, $s_W=0.1$), and 500 (otherwise). The tuning parameters $(r_{\mathrm{L}})_j$ are 0.2 (for $l=10$, $s_W=0.1$), 0.12 (for $l=10$, $s_W=0.3$), 0.07 (for $l=10$, $s_W=0.9$), and 0.05 (for $l=25$)." }, { "heading": "3.1.3 RESULTS", "text": "The numerical results show that our layer-regularized version SLS can improve on the prediction accuracy of the standard least-squares LS considerably ($\widehat{\mathrm{mse}}$-columns of the first and second rows in Table 1). The results also show that the refitting in FLS can improve the prediction accuracy further, and that the refitted estimator can rival the infeasible ILS in terms of prediction ($\widehat{\mathrm{mse}}$-columns of the third and fourth rows). The results finally show that the layer regularization can detect the correct number of layers ($\widehat{s}$-columns of the third and fourth rows). In summary, our layer-regularized estimator outmatches the standard least-squares, and the refitted version of our estimator rivals the infeasible least-squares that knows which are the relevant layers beforehand—both in terms of prediction accuracy and sparsity. Hence, layer regularization can condense networks effectively.

The results also reveal that the prediction accuracies of our estimators increase as $s_W$ decreases ($\widehat{\mathrm{mse}}$-columns across different $s_W$). This trend is expected: the higher the layer sparsity, the more layer-regularization can condense the networks. This behavior is confirmed in the sparsities ($\widehat{s}$-columns across different $s_W$). In other words, the layer regularization is adaptive to the true layer sparsity.

The tuning parameters $(r_{\mathrm{L}})_j$ have been calibrated very roughly by hand. We expect that a more careful calibration of $r_{\mathrm{L}}$, based on cross-validation for example, would accentuate the positive effects of layer sparsity even further. But since our goal in this section is a general proof of concept for layer sparsity rather than the optimization of a specific deep learning pipeline, we do not pursue this further here.

The tuning parameters of the descent algorithm, such as the batch size, number of epochs, learning rate, and so forth, have also been calibrated very roughly by hand. One observation is that all methods can sometimes provide accurate prediction if the number of epochs is extremely large, but our layer-regularized methods SLS and FLS generally lead to accurate prediction after many fewer epochs than their unregularized counterpart LS. This observation indicates that layer regularization also impacts the algorithmic aspects of deep learning beneficially."
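For concreteness, the following is a minimal PyTorch sketch (ours, not the authors' code) of the layer-sparsity penalty of Eq. (5) and of the condensation step of Theorem 1/Example 2 used by the SLS and FLS estimators above. The list convention $V=[V^1,\dots,V^l]$ (output layer first, as in Section 2.1) and the small stabilizing constant are our implementation assumptions.

```python
import torch

def layer_penalty(Vj, eps=1e-12):
    # h^{L,j}[V^j] = sqrt(sum of squared negative parts of V^j); see Eq. (5).
    neg = torch.clamp(Vj, max=0.0)
    return torch.sqrt((neg ** 2).sum() + eps)  # eps keeps the gradient finite at 0

def layer_sparsity_penalty(V, r):
    # h^L[V] = sum_{j=1}^{l-1} (r_L)_j * h^{L,j}[V^j], added to the data-fitting loss.
    return sum(r_j * layer_penalty(Vj) for r_j, Vj in zip(r, V[:-1]))

def condense(V):
    # Theorem 1 / Example 2: an inner layer V^j (2 <= j <= l-1) whose entries
    # are all non-negative can be merged into its predecessor: V^{j-1} @ V^j.
    out = [V[0]]
    for j, Vj in enumerate(V[1:], start=2):
        if j < len(V) and (Vj >= 0).all():  # inner layer with h^{L,j} = 0
            out[-1] = out[-1] @ Vj
        else:
            out.append(Vj)
    return out
```

In training, `layer_sparsity_penalty` is simply added to the least-squares or cross-entropy objective, and `condense` is applied once to the fitted weights before the refitting step of Section 2.4.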
}, { "heading": "3.2 REAL DATA", "text": "We now turn to real data. Specifically, we consider subsamples of MNIST (LeCun et al., 1998), FashionMNIST (Xiao et al., 2017), and KMNIST (Clanuwat et al., 2018)." }, { "heading": "3.2.1 SETUP", "text": "For each of the three data examples, the training data consists of $n=10\,000$ images sampled uniformly at random; the test data also consists of 10 000 images. The network consists of $l=10$ fully-connected layers; the hidden layers have width $p_2=\cdots=p_l=50$. While fully-connected networks have been outperformed in image classification by other, more intricate pipelines, they still provide decent results (even on the subsetted data) and are perfectly suited for illustrating our ideas.

The baseline is cross-entropy (CE), which is LS but with the least-squares loss replaced by the cross-entropy loss. Our layer-sparse method is refitted cross-entropy (FCE), which is the pendant of FLS, that is, cross-entropy with additional layer-sparse regularization and refitting.

The objective functions are optimized by using mini-batch gradient descent with batch size 100, learning rate $10^{-3}$, and number of epochs 100. In line with theoretical considerations (Lederer & Vogt, 2020), the tuning parameters are set to $(r_{\mathrm{L}})_j=\ln[p]/\sqrt{n}$ with $p$ the total number of network parameters. The performances are measured in terms of average classification accuracies (denoted by $\widehat{\mathrm{AC}}$) and the level of sparsity among the hidden layers $\widehat{s}$." }, { "heading": "3.2.2 RESULTS", "text": "The results are summarized in Table 2. Similarly as before, we find that layer-sparse regularization can reduce the number of layers while retaining the classification accuracy or even improving it.

To highlight the features of layer sparsity more, we also look at the training losses and testing accuracies over the course of the optimization; Figure 2 contains these data averaged over 20 draws of the MNIST data. We find that both the initial estimator as well as the refitted version eventually
Statistical theory for layer sparsity, therefore, seems a feasible goal for further research.\nIn summary, layer sparsity complements other notions of sparsity that concern individual connections or nodes. All of these concepts can help to fit networks that are efficient in terms of memory and computations and easy to interpret." }, { "heading": "A TUNING-PARAMETER CALIBRATION", "text": "Regularizing layer sparsity involves the tuning parameters (rL)j . Such tuning parameters are integral to regularization in deep learning and in machine learning more generally. In sparse linear regression, there has been some progress for developing theories for calibrating these parameters (Bien et al., 2019; Chichignoud et al., 2016; Taheri et al., 2016). In sparse deep learning, however, theories for tuning-parameter calibration are missing completely.\nIn the real-data analysis of Section 3.2, we have used a theory-inspired tuning parameter. The goal of this section here is to give more insights into the calibration of the tuning parameter. Figure 3 shows the accuracies of refitting with different number of hidden layers and locates the tuning parameters selected by our approach and by cross-validation, that is, training/validation based on 10 000 training and validation samples. The results are averaged over 20 draws of MNIST data as described earlier but only over 5 epochs each for illustration. The plot shows the expected upside-down-U-shape of the accuracies, which reflects the trade-off between variance (many layers/small tuning parameters) and bias (few layers/large tuning parameters). The plot shows that cross-validation can even improve the impact of layer sparsity further for a small number of epochs. (As illustrated in the right panel of Figure 2, tuning parameters become—as far as accuracy is concerned—less important for large number of epochs.)\nTuning-parameter calibration remains a challenge not only here but in deep learning much more generally. But our observations in this section and the main body of the paper demonstrate that layer-sparse regularization can improve deep-learning pipelines substantially with data-adaptive schemes such as cross-validation as well as with a simple, theory-inspired choice of the tuning parameter." } ]
2020
null
SP:c0072c347d78252701da4d55192f607131d97adf
[ "This paper tries to investigate and understand if and how adversarial training helps the models trained on the source domain transfer easier and faster to target domains. With extensive different configurations (such as fine-tuning strategies) in experiments, the authors show that robust models transfer better than natural models with less training data from the target domain. Also they demonstrate the intuition behind through experiments, such as capturing shapes than textures or using influence functions. " ]
Transfer learning has emerged as a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains. This process consists of taking a neural network pre-trained on a large feature-rich source dataset, freezing the early layers that encode essential generic image properties, and then fine-tuning the last few layers in order to capture specific information related to the target situation. This approach is particularly useful when only limited or weakly labeled data are available for the new task. In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models, especially if only limited data are available for the new domain task. Further, we observe that adversarial training biases the learnt representations to retaining shapes, as opposed to textures, which impacts the transferability of the source models. Finally, through the lens of influence functions, we discover that transferred adversarially-trained models contain more human-identifiable semantic information, which explains – at least partly – why adversarially-trained models transfer better.
[ { "affiliations": [], "name": "Francisco Utrera" }, { "affiliations": [], "name": "Evan Kravitz" }, { "affiliations": [], "name": "Michael W. Mahoney" } ]
[ { "authors": [ "David Bau", "Bolei Zhou", "Aditya Khosla", "Aude Oliva", "Antonio Torralba" ], "title": "Network dissection: Quantifying interpretability of deep visual representations", "venue": "In Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Yoshua Bengio" ], "title": "Deep learning of representations for unsupervised and transfer learning", "venue": "In International Conference of Machine learning (ICML),", "year": 2012 }, { "authors": [ "Rich Caruana" ], "title": "Learning many related tasks at the same time with backpropagation", "venue": "In Neural Information Processing Systems (NeurIPS)", "year": 1995 }, { "authors": [ "Marvin S. Cohen", "Jared T. Freeman", "Steve Wolf" ], "title": "Metarecognition in time-stressed decision making: recognizing, critiquing, and correcting", "venue": "Human Factors,", "year": 1996 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: a large-scale hierarchical image database", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2009 }, { "authors": [ "Logan Engstrom", "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial robustness as a prior for learned representations, 2019", "venue": null, "year": 2019 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A. Wichmann", "Wieland Brendel" ], "title": "Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Muhammad Ghifary", "W. Bastiaan Kleijn", "Mengjie Zhang" ], "title": "Domain adaptive neural networks for object recognition", "venue": "In Pacific Rim International Conferences on Artificial Intelligence (PRICAI),", "year": 2014 }, { "authors": [ "Justin Gilmer", "Nicolas Ford", "Nicholas Carlini", "Ekin Cubuk" ], "title": "Adversarial examples are a natural consequence of test error in noise", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Xavier Glorot", "Antoine Bordes", "Yoshua Bengio" ], "title": "Domain adaptation for large-scale sentiment classification: A deep learning approach", "venue": "In International Conference on Machine Learning (ICML),", "year": 2011 }, { "authors": [ "Boqing Gong", "Yuan Shi", "Fei Sha", "Kristen Grauman" ], "title": "Geodesic flow kernel for unsupervised domain adaptation", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2012 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Harini Kannan", "Alexey Kurakin", "Ian Goodfellow" ], "title": "Adversarial logit pairing, 2018", "venue": null, "year": 2018 }, { "authors": [ "Rajiv Khanna", "Been Kim", "Joydeep Ghosh", "Oluwasanmi Koyejo" ], "title": "Interpreting black box predictions using fisher kernels", "venue": "In Artificial Intelligence and Statistics (AISTATS),", "year": 2019 }, { "authors": [ "Been Kim", "Rajiv 
Khanna", "Oluwasanmi Koyejo" ], "title": "Examples are not enough, learn to criticize! criticism for interpretability", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Pang Wei Koh", "Percy Liang" ], "title": "Understanding black-box predictions via influence functions", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Pang Wei W Koh", "Kai-Siang Ang", "Hubert Teo", "Percy S Liang" ], "title": "On the accuracy of influence functions for measuring group effects", "venue": "In Neural Information Processing Systems (NeurIPS)", "year": 2019 }, { "authors": [ "Simon Kornblith", "Jonathon Shlens", "Quoc V. Le" ], "title": "Do better imagenet models transfer better", "venue": "In Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Barbara Landau", "Linda B Smith", "Susan S Jones" ], "title": "The importance of shape in early lexical learning", "venue": "Cognitive development,", "year": 1988 }, { "authors": [ "Li Fei-Fei", "R. Fergus", "P. Perona" ], "title": "Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories", "venue": "In 2004 Conference on Computer Vision and Pattern Recognition Workshop (CVPR),", "year": 2004 }, { "authors": [ "E. Ceolini" ], "title": "A unified view of gradient-based attribution methods for deep neural networks", "venue": "In Neural Information Processing Systems (NeurIPS) workshops,", "year": 2017 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Allen Newell" ], "title": "Human problem solving", "venue": "Prentice-Hall, USA,", "year": 1972 }, { "authors": [ "Sinno Jialin Pan", "Qiang Yang" ], "title": "A survey on transfer learning", "venue": "Transactions on knowledge and data engineering (TPAMI),", "year": 2009 }, { "authors": [ "Benjamin Recht", "Rebecca Roelofs", "Ludwig Schmidt", "Vaishaal Shankar" ], "title": "Do ImageNet classifiers generalize to ImageNet", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Hadi Salman", "Andrew Ilyas", "Logan Engstrom", "Ashish Kapoor", "Aleksander Madry" ], "title": "Do adversarially robust imagenet models transfer better", "venue": "arXiv preprint arXiv:2007.08489,", "year": 2020 }, { "authors": [ "Shibani Santurkar", "Dimitris Tsipras", "Aleksander Madry" ], "title": "Breeds: Benchmarks for subpopulation shift", "venue": "arXiv preprint arXiv:2008.04859,", "year": 2020 }, { "authors": [ "Ali Shafahi", "Mahyar Najibi", "Mohammad Amin Ghiasi", "Zheng Xu", "John Dickerson", "Christoph Studer", "Larry S Davis", "Gavin Taylor", "Tom Goldstein" ], "title": "Adversarial training for free", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Ali Shafahi", "Parsa Saadatpanah", "Chen Zhu", "Amin Ghiasi", "Christoph Studer", "David Jacobs", "Tom Goldstein" ], "title": "Adversarially robust transfer learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "In International Conference on Learning 
Representations (ICLR),", "year": 2019 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Andrew Ilyas", "Aleksander Madry" ], "title": "From imagenet to image classification: Contextualizing progress on benchmarks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Tsui-Wei Weng", "Huan Zhang", "Pin-Yu Chen", "Jinfeng Yi", "Dong Su", "Yupeng Gao", "Cho-Jui Hsieh", "Luca Daniel" ], "title": "Evaluating the robustness of neural networks: An extreme value theory approach", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Eric Wong", "Leslie Rice", "J. Zico Kolter" ], "title": "Fast is better than free: Revisiting adversarial training", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Baoyuan Wu", "Weidong Chen", "Yanbo Fan", "Yong Zhang", "Jinlong Hou", "Jie Liu", "Tong Zhang" ], "title": "Tencent ml-images: A large-scale multi-label image database for visual representation learning", "venue": "IEEE Access,", "year": 2019 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Jason Yosinski", "Jeff Clune", "Yoshua Bengio", "Hod Lipson" ], "title": "How transferable are features in deep neural networks", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "While deep neural networks (DNNs) achieve state-of-the-art performance in many fields, they are known to require large quantities of reasonably high-quality labeled data, which can often be expensive to obtain. As such, transfer learning has emerged as a powerful methodology that can significantly ease this burden by enabling the user to adapt a pre-trained DNN to a range of new situations and domains (Bengio, 2012; Yosinski et al., 2014). Models that are pre-trained on ImageNet (Deng et al., 2009) have excellent transfer learning capabilities after fine-tuning only a few of the last layers (Kornblith et al., 2019) on the target domain.\nEarly work in transfer learning was motivated by the observation that humans apply previously learned knowledge to solve new problems with ease (Caruana, 1995). With this motivation, learning aims to extract knowledge from one or more source tasks and apply the knowledge to a target task (Pan & Yang, 2009). The main benefits include a reduction in the number of required labeled data points in the target domain (Gong et al., 2012; Pan & Yang, 2009) and a reduction in training costs as compared to training a model from scratch. However, in practice, transfer learning remains an “art” that requires domain expertise to tune the many knobs of the transfer process. An important consideration, for example, is which concepts or features are transferable from the source domain to the target domain. The features which are unique to a domain cannot be transferred, and so an important goal of transfer learning is to hunt for features shared across domains.\nIt has recently been shown that adversarially-trained models (henceforth denoted as robust models) capture more robust features that are more aligned with human perception, compared to the seemingly patternless features (to humans, at least) of standard models (Ilyas et al., 2019). Unfortunately,\n∗Equal contribution\nthese models typically have a lower generalization performance on the source domain, as compared to non-adversarially-trained (henceforth denoted as natural, as in previous works (Tsipras et al., 2019; Shafahi et al., 2019; Salman et al., 2020)) model. Hence, Ilyas et al. (2019) hypothesize that non-robust features that are lost during adversarially training may have a significant positive impact on generalization within a given dataset or domain. This inherently different feature representation between models constructed with adversarial training and models trained with standard methods would also explain why accuracy and robustness are at odds (Tsipras et al., 2019). This leads to the question of whether models that use robust representations generalize better across domains. This is the main question we address.\nIn this work, we demonstrate that robust models transfer better to new domains than natural models. To demonstrate this, we conduct an extensive number of transfer learning experiments across multiple domains (i.e., datasets), with various numbers of fine-tuned convolutional blocks and random subset sizes from the target dataset, where the critical variable is the constraint used to adversarially train the source model. (Described in detail in Sections 3 and Appendix A.3) Importantly, note that we do not use an adversarial training procedure for the actual transfer learning process. 
Our findings indicate that robust models have outstanding transfer learning characteristics across all configurations, where we measure the performance in terms of model accuracy on target datasets for varying numbers of training images and epochs. Figure 1 provides a summary of our approach.
Our focus in this work is to show that robust source models learn representations that transfer better to new datasets on image recognition tasks. While adversarial training was proposed to combat adversarial attacks, our experiments discover an unintended but useful application. Adversarial training retains the robust features that are independent of the idiosyncrasies present in the source training data. Thus, these models exhibit worse generalization performance on the source domain, but better performance when transferred. This observation is novel, and we undertake extensive empirical studies to make the following contributions:
• We discover that adversarially-trained source models obtain higher test accuracy than natural source models after fine-tuning with fewer training examples on the target datasets and over fewer training epochs.
• We notice that the similarity between the source and target datasets affects the optimal number of fine-tuned blocks and the robustness constraint.
• We show that adversarial training biases the learned representations to retain shapes instead of textures, impacting the source models’ transferability.
• We interpret robust representations using influence functions and observe that adversarially-trained source models better capture class-level semantic properties of the images, consistent with human concept learning and understanding." }, { "heading": "2 RELATED WORKS", "text": "ImageNet transfers. Our focus is on studying the transfer of all but the last few layers of trained DNNs and fine-tuning the last non-transferred layers. For ease of exposition, we restrict our attention to ImageNet models (Deng et al., 2009). Kornblith et al. (2019) study the transfer of natural models to various datasets, and their work is thus a prequel to ours. Yosinski et al. (2014) also study transferring natural models but focus on the importance of individual neurons on transfer learning. Recht et al. (2019) study the generalization of natural and robust models to additional data generated using a process similar to that of generating ImageNet. They conclude that models trained on ImageNet overfit the data. However, they study the models’ generalization as-is without fine-tuning.
Covariate shift. A significant challenge in transfer learning is handling the data distribution change across different domains, also called covariate shift. It is widely recognized in successful domain adaptations (Yosinski et al., 2014; Glorot et al., 2011) that the representations in earlier layers are more “generic” and hence more transferable than the ones in later layers. This hierarchical disentanglement is attributed to the properties of the data itself, so that the later layers are more closely associated with the data and do not transfer as well. This motivated studies for shallow transfer learning (Yosinski et al., 2014; Ghifary et al., 2014) and more general studies to extract features that remain invariant across different data distributions (Arjovsky et al., 2019). In Section 5 we see that adversarial training biases the learned representations to retain shapes instead of textures, which may be a more desirable invariant across the datasets.
Transferring adversarially-trained models. 
There are mainly two works directly associated with ours. First, subsequent to this paper’s initial posting (in a non-anonymized form in a public forum), Salman et al. (2020) posted a related paper. They arrived at broadly similar conclusions, confirming our main results that robust models transfer better; and they do so by focusing on somewhat different experiments, e.g., they focus on the effects of network architecture width, fixed feature transfer, and seeing if models without texture bias transfer better than robust models. Second, Shafahi et al. (2020) mainly find that models lose robustness as more layers are fine-tuned. It might seem to contradict our thesis that they also notice that an ImageNet robust model with a ‖δ‖∞ ≤ 5 constraint has lower accuracy on the target datasets, CIFAR-10 and CIFAR-100, compared to a natural ImageNet model. However, we show that the robust model transfers better than the natural one when we use a ‖δ‖2 ≤ 3 constraint to adversarially train the source model.
Example-based interpretability. There has been significant interest in interpreting black-box models using salient examples from the data. A line of research focuses on using influence functions (Koh & Liang, 2017; Koh et al., 2019; Khanna et al., 2019) to choose the most indicative data points for a given prediction. In particular, Khanna et al. (2019) discuss the connection of influence functions with Fisher kernels; and Kim et al. (2016) propose using criticisms in addition to representative examples. Complementary lines of research focus on interpretability based on human-understandable concepts (Bau et al., 2017) and feature saliency metrics (M. Ancona, 2017)." }, { "heading": "3 BRIEF OVERVIEW OF THE ADVERSARIAL TRAINING PROCESS", "text": "Adversarial training modifies the objective of minimizing the average loss across all data points by first maximizing the loss produced by each image with a perturbation (i.e., a mask) that may not exceed a specified magnitude. Here, we describe this process, similar to Madry et al. (2018).
Let (xi, yi) be m data points for i ∈ [m], where xi ∈ Rd is the ith feature vector, and yi ∈ Y is the corresponding response value. Typically, we model the response as a parametric model hθ : Rd → Y with a corresponding loss function ℓ : Y × Y → R≥0. The objective is to minimize the loss ℓ(ŷ, y), where ŷ = hθ(x) is the predicted response. Adversarial training replaces the above minimization problem of training the model by a minimax optimization problem to make the model resilient to arbitrary perturbations of inputs. The goal of adversarial training is to solve a problem of the form
$$\min_{\theta} \frac{1}{m} \sum_{i=1}^{m} \max_{\|\delta_i\|_p \le \epsilon} \ell(h_{\theta}(x_i + \delta_i), y_i). \quad (1)$$
That is, the goal is to find the parameters θ of the model hθ that minimize the average maximum loss obtained by perturbing every input xi with a δi constrained such that its ℓp norm does not exceed some non-negative ε. If ε = 0, then δi = 0, in which case there is no perturbation to the input, which is what we call natural training. As ε increases, the magnitude of the perturbation also increases. For more details on how we solve this problem, and a few examples, see Appendix A.2." }, { "heading": "4 TRANSFERRING ADVERSARIALLY-TRAINED MODELS", "text": "In this study, we train four ResNet50 source models on ImageNet. We train one of them naturally (non-adversarially), and train each of the remaining three adversarially with one of the following constraints: (i) ‖δ‖2 ≤ 3, (ii) ‖δ‖∞ ≤ 4/255, (iii) ‖δ‖∞ ≤ 8/255. 
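To make the adversarial training of these source models concrete, the following is a minimal PyTorch sketch of PGD(k) under the ℓ2 constraint of Eq. (1). It is an illustrative implementation under our own naming, not the authors' released code; the step-size heuristic and the assumption of 4-D image batches are ours.

```python
import torch

def pgd_l2(model, loss_fn, x, y, eps=3.0, steps=20):
    """Inner maximization of Eq. (1): find delta with ||delta||_2 <= eps
    that approximately maximizes the loss (PGD with `steps` updates)."""
    alpha = 2.5 * eps / steps                    # step size, c * eps / steps
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            g = delta.grad
            # Normalized gradient-ascent step (cf. Eq. (5) in Appendix A.2).
            g_norm = g.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta += alpha * g / g_norm
            # Project back onto the eps-ball if the perturbation escaped it.
            d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta *= (eps / d_norm).clamp(max=1.0)
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_step(model, loss_fn, optimizer, x, y):
    """Outer minimization of Eq. (1): one SGD step on the worst-case loss."""
    delta = pgd_l2(model, loss_fn, x, y)
    optimizer.zero_grad()                        # clear grads from the inner loop
    loss = loss_fn(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Natural training corresponds to skipping the inner loop entirely (ε = 0, so δ = 0).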
Next, we fine-tune some convolutional blocks in the source models to each of the six target datasets separately using a subset of the training data. We repeat each of these trials for various seed values and report the mean and 95% confidence interval. Altogether, we have a comprehensive and replicable experimental setup that considers four ImageNet source models, four fine-tuning configurations, six target datasets, ten random subset sizes, and an average of fifteen random seeds for a total of 14,400 fine-tuned models. For more details, see Appendices A.3 and A.4.
Adversarially-trained models transfer better and faster. For ease of comparison, we select the robust and natural models that transfer with the highest test accuracy across all datasets (fine-tuning three convolutional blocks and the robust model using the ‖δ‖2 ≤ 3 constraint), as shown in Figures 2 and 3. See Appendix A.5 for additional results. Figure 2(b) shows that the test accuracy delta between robust and natural models is above zero for all six target datasets. Thus, robust models obtain higher test accuracy on the target dataset than the natural model, especially with less training data in the target domain. Robust models also learn faster, as shown by the positive test accuracy delta in Figure 3(b) for all target datasets after only 11 and 21 fine-tuning epochs. See Appendix A.6 for additional information on different random subset sizes. Fine-tuning cost is the same for both robust and natural models, but training the source model is considerably more expensive. For more detail on computational complexity, see Appendix A.8. Also, our code is available at https://github.com/utrerf/robust_transfer_learning.git
Best results achieved with ℓ2 constraint and fine-tuning one to three convolutional blocks. Robust models achieve the highest test accuracy on the target datasets when an optimal number of convolutional blocks is fine-tuned, and when these models are trained with an appropriate constraint type. In particular, fine-tuning zero (only the fully-connected layer) or nine convolutional blocks leads to lower test accuracy than fine-tuning one or three blocks, as shown in Figure 4(a) for all six target datasets. The natural model and the other two robust models exhibit the same behavior, as shown in Appendix A.7. To analyze the best constraint type, we select the fine-tuning configuration that yields the highest test accuracy on the target datasets (fine-tuning three convolutional blocks). We see that the ℓ2 constraint outperforms the ℓ∞ constraint, as shown by the positive accuracy delta between the ℓ2 and ℓ∞ models in Figures 5(d) and (e), respectively.
Similarity effect on transfer learning configurations. Besides noticing that robust models achieved better performance on the target dataset than natural models, we also observe trends in how well they transfer to different datasets. When transferring from ImageNet, we find that CIFAR-10 and CIFAR-100 have interesting transfer properties, compared to the other datasets. In particular, even though all other datasets transfer better when fine-tuning one or three blocks, it seems that models transfer better to CIFAR-10 and CIFAR-100 when fewer blocks are fine-tuned, as shown in Figure 4(b). This suggests that because these datasets are close to ImageNet, fine-tuning of early blocks is unnecessary (Yosinski et al., 2014). 
Along similar lines, it is better to use a smaller ε for the CIFAR-10 and CIFAR-100 datasets than for the other datasets when transferring from ImageNet, as seen from Figure 5(c). This is because a larger perturbation would destroy low-level features, learned from ImageNet, which are useful to discriminate between labels in CIFAR-10 and CIFAR-100. Finally, for datasets that are most distinct from ImageNet (SVHN and KMNIST), we find that robustness yields the largest benefit to classification accuracy and learning speed, as seen in Figure 2(b) and Figure 3(b), respectively. These discrepancies are even more noticeable when smaller fractions of the target dataset are used." }, { "heading": "5 BIAS TOWARDS RECOGNIZING SHAPES AS OPPOSED TO TEXTURES", "text": "In this section, we explore the effect of texture and shape bias, as described by Geirhos et al. (2019), on the robust models’ transferability. As pointed out by Geirhos et al. (2019), natural models are more biased towards recognizing textures than shapes. This is in stark contrast to the human bias of recognizing shapes over textures (Landau et al., 1988). However, Engstrom et al. (2019) showed that robust models encode humanly-aligned representations, and we observe (e.g., see Figure 1(b)) that these representations persist even after fine-tuning on CIFAR-10.
Adversarially-trained models are less sensitive to texture variations. Table 1a shows that the robust model outperforms the natural one when only tested on Stylized ImageNet (SIN) and also after fine-tuning only the last fully-connected layer to SIN. Both models are ResNet50s pre-trained on ImageNet (IN), and the robust model uses a ‖δ‖2 ≤ 3 constraint.
Models trained on standard and stylized ImageNet are less sensitive to adversarial attacks. Table 1b shows that the ResNet50 model trained on both IN and SIN (IN+SIN) outperforms the models trained on just IN in PGD(3) adversarial test accuracy on IN for various ε levels.
Adversarially-trained models are biased towards low resolution and low frequencies. We observe that the transferability of robust models is also affected by two input perturbations that destroy, or at least damage, textures: namely, lowering the resolution of images and applying low-pass filters. To demonstrate this, we use the Caltech101 dataset (Li Fei-Fei et al., 2004). This dataset has 101 labels with 30 high-resolution (224x224 pixels or more) images per label. The results in Table 2 support our conjecture that robust models use shapes more than textures for classification by showing that the robust model obtains a higher test accuracy, in both the low-resolution and low-pass versions of Caltech101, than the natural one." }, { "heading": "6 INTERPRETING REPRESENTATIONS USING INFLUENCE FUNCTIONS", "text": "In this section, we use influence functions (Koh & Liang, 2017) to show that robust representations hold semantic information, i.e., robust DNNs classify images like a human would, through similar-looking examples. Engstrom et al. (2019) observed that moving the image in carefully chosen directions in the latent space allows for high-level human-understandable feature manipulation in the pixel space. They suggest that the bias introduced by adversarial training can be viewed as a human prior on the representations, so that these representations are extractors of high-level human-interpretable features. It has long been established that humans learn new concepts through concept-representative or similar-looking examples (Cohen et al., 1996; Newell, 1972). 
Our focus in the present work is to study whether these representations aid the neural network in learning new concepts (namely image labels) akin to how humans learn concepts.
To study this, we use influence functions as described by Koh & Liang (2017) (see Appendix A.9 for an overview). For each test image in the CIFAR-10 dataset, influence functions allow us to answer the following: What is the influence of each training image on the model prediction for a given test image? We ask this question for both the robust and natural models, and compare the results. In our experiments, we fine-tune the last three blocks with the same 3,200 randomly selected training images. Also, the robust model uses ‖δ‖2 ≤ 3 as the constraint.
Adversarially-trained models have more similar-looking influential images. Figure 6 shows that the robust model’s most influential image is often more perceptibly similar to the test image than the natural model’s most influential image. Consider, for example, the test image of the blue car (on the second column). The robust model’s corresponding top influential training image is a similar-looking blue car, while the natural model has a red truck. As a second example, the robust model’s top influential training image for the orange truck (on the far right) is a similar-looking orange truck, while the natural model has a blue and white truck.
Influential image labels match test image labels more often in adversarially-trained models. To quantify the visual similarity described above, we show the influence values (standardized by their matrix norm) of each training image on each test image, sorted by their label as in Figure 6, for both the natural (left) and robust (right) models in Figure 7(a). Darker and better-defined blocks across the diagonal signal that the influence values are more consistent with the test image label index on the y-axis, because darker colors represent higher influence values. The robust model (right) has a slight advantage over the natural model.
Figure 7(b) further accentuates the difference between the robust model and the natural model. It displays the percentage of times that the label of the top-k influential image in the training set matches the label of the test image evaluated. To better understand this figure, consider the leftmost point in Figure 7(b) for both models. This point represents the proportion of the training images corresponding to the darkest dots in each horizontal line (i.e., top-1 influential training image) in (a) that match the label of the given test image, for robust and natural models separately. 78.6% of the robust model’s top-1 influential images match the label of the given test image vs. 55.1% for the natural counterpart. We also consider the case when the category of at least three of the top-5 influential training images matches that of the test image. This happens in 77.3% of the cases for the robust model, but only in 53.8% of the cases for the natural model. This vast gap is not explainable solely by the ∼5% difference in target test accuracy shown in Table 7 in Appendix A.5. From the qualitative and quantitative analysis, we see that the robust model has learned representations with more human-identifiable semantic information than the natural model, while the latter relies on less interpretable representations. 
In other words, the robust neural network has learned the image labels by creating strong associations to semantically-similar examples (akin to example-based concept learning in human beings) in its internal representations. This reinforces the human prior bias hypothesis in robust representations observed by Engstrom et al. (2019)." }, { "heading": "7 DO OTHER ADVERSARIAL ATTACKS IMPROVE TRANSFERABILITY?", "text": "Prior works show that there is a connection between the sensitivity of a neural network to Gaussian noise and its robustness to adversarial perturbations (Weng et al., 2018; Gilmer et al., 2019). It has also been suggested that Gaussian perturbations can improve or even replace adversarial training (Kannan et al., 2018). Further, it has been shown that often only a few PGD iterations are sufficient to obtain a robust model (Madry et al., 2018; Shafahi et al., 2019; Wong et al., 2020).
To better understand these trade-offs, in this section we further explore the transferability of models trained on ImageNet with random Gaussian noise and one step of PGD (i.e., PGD(1)) using the same methodology as described in Section 4. For all models, including the Gaussian one, we constrain the perturbation δ to satisfy ‖δ‖2 ≤ 3 in order to make a fair comparison across models. See Appendix A.10 for more details on our experimental setup.
In the following we discuss our experimental results, summarized in Figure 8, which shows the test accuracy delta of each of the three adversarially-trained models versus the natural one.
Training with more steps is marginally better. Figure 8 (a) and (b) show that more PGD iterations (i.e., PGD(20) vs. PGD(1)) slightly improve transferability. This is evidenced by the slightly higher test accuracy delta in (a) relative to (b) across all target datasets. Our empirical result agrees with previous works showing that more attacker steps typically only improve adversarial robustness slightly (Madry et al., 2018; Shafahi et al., 2019; Wong et al., 2020).
A targeted adversary is better than a random one. Comparing Figure 8 (a) and (b) to (c) shows that a targeted adversarial attack (i.e., PGD vs. Gaussian) significantly improves transferability relative to a random perturbation. This is evidenced by the higher test accuracy delta in (a) and (b) relative to (c) across all target datasets.
A random adversary is better than no adversary. Figure 8 (c) shows that a random adversarial attack can improve transferability. This is evidenced by the significantly positive accuracy delta in (c) across all target datasets. Our results agree with prior works showing that training a model by perturbing inputs with Gaussian noise can improve adversarial robustness (Kannan et al., 2018)." }, { "heading": "8 CONCLUSION AND FUTURE WORKS", "text": "We show that robust models transfer very well to new domains, even outperforming natural models. This may be surprising since robust models generalize worse than natural models within the source domain, and since they were originally designed to protect against adversarial attacks. We show that robust DNNs can be transferred both faster and with higher accuracy, while also requiring fewer images to achieve suitable performance on the target domain. We observe that adversarial training biases the learned representations to retain shapes instead of textures, which impacts the source models’ transferability. 
We also show that the improved classification accuracy is due to the fact that robust models have an implicit bias that enables them to comprehend human-aligned features. Given the widespread use of DNNs, there is great potential for robust networks to be applied to a variety of high-tech areas such as facial recognition, self-driving cars, and healthcare, but understanding the issues we have addressed is crucial to deliver upon that potential. Please see Appendix A.12 for details on future works." }, { "heading": "ACKNOWLEDGMENTS", "text": "We are grateful to the generous support from Amazon AWS and Google Cloud. NBE and MWM would like to acknowledge IARPA (contract W911NF20C0035), NSF, ONR and CLTC for providing partial support of this work. Our conclusions do not necessarily reflect the position or the policy of our sponsors, and no official endorsement should be inferred." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DETAILS ON FIGURE 1(A)", "text": "We generated Figure 1(a) by maximizing the activations in the penultimate layer on out-of-distribution images (i.e., images that are not part of the training and test data) as our starting seeds, following the “feature visualization” technique as implemented in Engstrom et al. (2019). The natural and robust models are pre-trained ResNet-50 models on ImageNet after fine-tuning only the last fully-connected layer on CIFAR-10. The robust model used the ‖δ‖2 ≤ 3 constraint." }, { "heading": "A.2 MORE DETAILS ON ADVERSARIAL TRAINING", "text": "In practice, we solve equation (1) using stochastic gradient descent (SGD) over θ. More concretely, we include a random sample of training examples in a set B, specify a non-negative learning rate α, calculate the gradient with respect to the parameters of the model, ∇θ, and update θ as follows:
$$\theta := \theta - \frac{\alpha}{|B|} \sum_{(x_i, y_i) \in B} \nabla_{\theta} \max_{\|\delta_i\|_p \le \epsilon} \ell(h_{\theta}(x_i + \delta_i), y_i). \quad (2)$$
This training process has a sequential nature: it first finds the worst possible perturbation δ*i for each training example (xi, yi) in B before updating the parameters of the model θ, where
$$\delta_i^* = \operatorname*{argmax}_{\|\delta_i\|_p \le \epsilon} \ell(h_{\theta}(x_i + \delta_i), y_i). \quad (3)$$
Problem (3) is typically solved using projected gradient descent with k update steps, which is what we call PGD(k). In this work, we use PGD(20), which means that we take 20 update steps to solve (3). Each step iteratively updates δi by projecting the update onto the ℓp ball of interest:
$$\delta_i := \mathcal{P}\big(\delta_i + \alpha \nabla_{\delta_i} \ell(h_{\theta}(x_i + \delta_i), y_i)\big). \quad (4)$$
As an example, consider the case of the ℓ2 norm and let f(δi) = ℓ(hθ(xi + δi), yi). If we want to meet the restriction that ‖δi‖2 ≤ ε, we can pick an update value for δi whose ℓ2 norm will be at most the learning rate. This yields the problem:
$$\operatorname*{argmax}_{\|v\|_2 \le \alpha} v^{\top} \nabla_{\delta_i} f(\delta_i) = \alpha \, \frac{\nabla_{\delta_i} f(\delta_i)}{\|\nabla_{\delta_i} f(\delta_i)\|_2}. \quad (5)$$
And in the case of the ℓ∞ norm we have
$$\operatorname*{argmax}_{\|v\|_\infty \le \alpha} v^{\top} \nabla_{\delta_i} f(\delta_i) = \alpha \cdot \operatorname{sign}(\nabla_{\delta_i} f(\delta_i)).$$
Thus, we set the learning rate to be equal to α = c · ε / (num. of steps) for 1.5 < c < 4 in order to ensure that we reach the boundary condition for δi. Also, we must clip δi according to the ℓp norm in case it exceeds the boundary condition." }, { "heading": "A.3 TRANSFER LEARNING EXPERIMENTAL SETUP DETAILS", "text": "Source models. 
For all of our experiments, we use four residual networks (ResNet-50) (He et al., 2016) pre-trained on the ImageNet dataset (Deng et al., 2009): one naturally trained (without an adversarial constraint), and the others adversarially trained with PGD(20) under the following adversarial constraints: (1) ‖δ‖2 ≤ 3, (2) ‖δ‖∞ ≤ 4/255, (3) ‖δ‖∞ ≤ 8/255, where δ is a matrix that represents the perturbation applied to the input image as described in Section 3, equation (1). In addition to the natural ResNet-50, considered as the baseline, we also use three robust networks with various constraints. For speed, transparency, and reproducibility, we do not re-train the source models ourselves.1
Fine-tuning procedure. To transfer our models we copy the entire source model to the target model, freeze all but the last k convolutional blocks, re-initialize the last fully-connected (FC) layer for the appropriate number of labels, and only fine-tune (re-train) the last FC layer plus 0, 1, 3, or 9 convolutional blocks. Freezing layers in the neural networks entails permitting forward propagation, but disabling the back-propagation of gradients used during SGD training. We have four different fine-tuning configurations, one for each number of fine-tuned convolutional blocks. Note that the ResNet model that we consider has residual blocks that are composed of three convolutional layers, i.e., we fine-tune 27 layers plus the fully connected layer when the number of fine-tuned blocks is equal to 9. (See Section A.4 for a visualization of our fine-tuning process.)
Random subsets. One of the most interesting parts of our experimental setup is that we also explore the test accuracy of our fine-tuned models using randomly chosen subsets of 100, 200, 400, ..., and 25,600 images from the target dataset. These subsets are constructed using random sampling without replacement, with a minor constraint: all labels must have at least one training image. For each run of model training, we fix the training data to be a randomized subset of the entire training data. As the number of images in a random subset decreases, the variance in the validation accuracy of the transferred models increases. Thus, we repeat the fine-tuning procedure using 20 seeds for every subset with at most 1,600 images, and using 10 seeds for all larger subsets. We reduce the number of seeds for larger subsets because the inherently lower variance in the validation accuracy does not justify paying the computational cost associated with fine-tuning more seeds.
Target datasets. We transfer our models to a broad set of target datasets, including (1) CIFAR-100, (2) CIFAR-10, (3) SVHN, (4) Fashion MNIST (Xiao et al., 2017), (5) KMNIST, and (6) MNIST. Since all of these datasets have images at a lower resolution than ImageNet, we up-scale our images with bi-linear interpolation. In addition, we use common data transform techniques such as random cropping and rotation that are well-known to produce high-quality results with certain datasets." }, { "heading": "A.4 FINE-TUNING DETAILS", "text": "Figure 9 illustrates all four fine-tuning configurations in our experiments. 
Notice how in Subfigure 9(d) we unfreeze more than half of the ResNet-50 architecture, thereby testing what occurs as we fine-tune a large number of blocks.
All source models are fine-tuned to all datasets using stochastic gradient descent with momentum using the hyperparameters described in Table 3.
The learning rate decays to a tenth of its current value every 33 or 50 epochs, which corresponds to 1/3 of the total fine-tuning epochs, as shown in Table 4. Also, the test accuracy frequency refers to how often the test accuracy is computed, in epochs. So, for example, if the test accuracy frequency is 20, then we check the test accuracy after epochs 1, 21, 41, ..., 81, and 100.
With regards to the random seeds, we have the following formula to define the set of seeds used, Sk, as a function of the total number of random seeds used, k:
$$S_k = \{\, 2 \times 10^7 + 10^5 \cdot i \;:\; i = 0, 1, \ldots, k-1 \,\}. \quad (6)$$
Thus, when we use 20 seeds, as is the case for the subset of 100 images, we use seeds 20000000, 20100000, ..., 21900000. Large numbers were used to avoid numerical instability issues that arise with small numbers where their binary representation has too many zeroes.
1The models that we use are provided as part of the following repository: https://github.com/MadryLab/robustness.
See Table 5 for additional detail with regards to the source models. Notice that although adversarially-trained models do worse on the source dataset, they outperform naturally-trained models on the target datasets, as shown in Table 7.
See Table 6 for a high-level overview of all datasets used. This should serve as a reminder that our source dataset is ImageNet: with an extensive 1.2 million training images and 1,000 labels, it serves as a great starting point in our experiments. All other target datasets have a considerably lower number of training and test images." }, { "heading": "A.5 ADDITIONAL RESULTS", "text": "Table 7 reports the test accuracy of all of our source models after fine-tuning three blocks using different numbers of training images on each of the six target datasets. The rightmost column shows the non-transferred model trained only on the target dataset and trained on the entire network. The average test accuracy is reported for all cases where the model is fine-tuned with less than the entire training set. The bolded numbers represent the highest test accuracy among source models. From this table, we can see that the robust models consistently outperform the natural models." }, { "heading": "A.6 ADDITIONAL DETAIL ON LEARNING FASTER", "text": "This subsection contains the additional charts that were omitted from Figure 3 in Section 4. Figure 11 shows that the same behavior is observed in all three figures: it is sub-optimal to fine-tune either zero or nine convolutional blocks, as opposed to one or three. Consistent with our methodology for Figure 4, we adversarially train models on ImageNet and then fine-tune various numbers of convolutional blocks using a random sample of images in the target dataset." }, { "heading": "A.7 ADDITIONAL DETAIL ON THE EFFECT OF THE NUMBER OF FINE-TUNED BLOCKS", "text": "This subsection contains the additional charts that were omitted from Figure 4 in Section 4. Figure 11 shows that the same behavior is observed in all three figures: it is sub-optimal to fine-tune either zero or nine convolutional blocks, as opposed to one or three. Consistent with our methodology for Figure 4, we adversarially train models on ImageNet and then fine-tune various numbers of convolutional blocks using a random sample of images in the target dataset." 
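To make the block-freezing configurations of Appendix A.3 concrete, the following is a minimal PyTorch sketch of the fine-tuning setup: freeze everything, unfreeze the last k residual blocks, and re-initialize the FC layer. This is an illustration under our own naming, not the released codebase; loading the actual (robust or natural) source weights is assumed to happen elsewhere.

```python
import torch.nn as nn
from torchvision.models import resnet50

def build_finetune_model(num_classes, k_blocks=3):
    """Unfreeze the last k residual blocks plus a re-initialized FC layer."""
    model = resnet50(pretrained=True)   # stand-in for a (robust) source model
    for p in model.parameters():
        p.requires_grad = False         # frozen: forward propagation only

    # ResNet-50 has 3 + 4 + 6 + 3 = 16 residual blocks, each composed of
    # three convolutional layers (so k = 9 fine-tunes 27 conv layers).
    blocks = [b for layer in (model.layer1, model.layer2,
                              model.layer3, model.layer4) for b in layer]
    for block in blocks[len(blocks) - k_blocks:]:
        for p in block.parameters():
            p.requires_grad = True      # unfreeze the last k blocks

    # Re-initialize the final fully-connected layer for the target labels.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Example: the configuration that transferred best in these experiments.
model = build_finetune_model(num_classes=10, k_blocks=3)
```

With k_blocks=0 the slice is empty, leaving only the new FC layer trainable, which matches the zero-block configuration above.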
}, { "heading": "A.8 COMPUTATIONAL COST", "text": "In general, a PGD(k) adversarial training process as described by Madry, is k orders of magnitude more expensive than natural training. This can be seen directly from the fact that there are k more iterations in the inner maximization loop of the risk minimization procedure. However, since only the source model must be trained adversarially, and these source models can be downloaded from publicly available repositories, the marginal computational cost of fine-tuning to a target dataset is\nperhaps more important. Fortunately, the computational cost of fine-tuning both robust or natural models is the same." }, { "heading": "A.9 ADDITIONAL DETAILS ON INFLUENCE FUNCTIONS", "text": "Suppose that ` : Rm × Rd → R is a smooth loss function and x1, . . . , xn ∈ Rm are our given data. The empirical risk minimization (ERM) problem takes the form\nmin θ∈Rd\nf(θ) = 1\nn n∑ j=1 `(xj , θ). (7)\nSay θ? is the argmin solution of the above optimization problem. Let’s now consider upweighing a data point xtrain with ∈ R. This modifies the learning problem as:\nmin θ∈Rd\n1\nn n∑ j=1 `(xj , w) + `(xtrain, θ). (8)\nLet θ? be the solution of the upweighted problem above. The influence of a training data point xtrain on a test data point xtest approximates the change in the function value `(xtest, θ?) → `(xtest, θ? ) with respect to an infinitesimally small , i.e., when → 0 when xtrain is upweighed by . This can be calculated in closed form Koh & Liang (2017) as:\n−g1H−1g2, (9)\nwhere g1 = ∇`(xtrain, θ?)>, g2 = ∇`(xtest, θ?) and H is the Hessian of the loss function ∇2f(θ?). In particular, we first compute the Hessian of the source model fine-tuned with 3,200 CIFAR-10 images as the sum of the Hessians of the loss of batches of five images. Note that we only use the 3,200 images that were used in the fine-tuning process, since it accurately reflects the Hessian of the model. Then we get H−1 using the (Moore-Penrose) pseudo-inverse, using its singular-value decomposition and including all singular values larger than 1e-20.\nKoh et. al. Koh & Liang (2017) discuss optimization speedup techniques to determine the most influential xtrain for a given xtest at scale. However, finding top-k influential images is a combinatorial problem for k > 1. So, typically a greedy selection of the next top influential image is made iteratively k times. Further, selecting multiple images also requires consideration of interaction and group effects. As such, the top-5 influential images are likely to be less representative of actual influence being asserted than one would expect.\nFisher kernels and influence functions Khanna et. al. Khanna et al. (2019) recently discovered an interesting relationship between Fisher Kernels and Influence functions: if the loss function `(·) can be written as a negative log-likelihood, then at the optimum w?, the Fisher dot product between two points is exactly the same as the influence of those points on each other (note that the influence is a symmetric function). In other words, finding the most influential data point to a given data point is equivalent to finding the nearest neighbor of the point in the space induced by the Fisher kernel. As observed in Section 6 for robust training, most influential points for a data point tend to be largely the ones belonging to the same label. This implies that the in the Fisher space, the points with the same label tend to be grouped together." 
}, { "heading": "A.10 ADDITIONAL DETAILS FOR ADVERSARIAL ATTACKS COMPARISON SECTION", "text": "Both the PGD(1) and Gaussian models are ResNet-50’s trained on ImageNet-1K using stochastic gradient descent with momentum and the following hyperparameters: 0.1 learning rate, 128 batch size, 0.9 momentum, 10x learning rate decay, and an equally-spaced (i.e. linear) learning rate decay schedule. The test accuracy of each of these models is 60.29% and 74.02% for the PGD(1) and Gaussian model, respectively. Both models use the ‖δ‖2 ≤ 3 adversarial constraint. The PGD(1) model uses one attacker step, with a step size of 6 and the perturbation is initialized at zero. The Gaussian model adds a perturbation for each pixel drawn from a standard Normal distribution N (µ = 0, σ2 = 1)." }, { "heading": "A.11 CODEBASE OVERVIEW", "text": "The starting point requires downloading the source ImageNet models, and installing the appropriate libraries. Next, the user can decide how to fine-tune the source models: individually or in batches.\nThe train.py file will allow individual training, while the tools/batch.py file allows training in batches.\nThe train.py file contains 9 parameters that are explained by running the following command: python train.py --help. Also, the helpers.py and delete big files.py files under the tools folder contain the logic that supports the train.py file. This includes the random subset generator, the fine-tuning procedure, and the data transforms.\nSeparately, note that when running the batch.py file, the fine-tuned models won’t be saved into the results/logs directory. This is due to the fact that models can occupy a significant amount of memory and we do not plan to use these fine-tuned models in the future. However, if the user wants to save the fine-tuned models, then he or she can do so by commenting our line 60 in the batch.py file: deleteBigFilesFor1000experiment().\nLastly, all results are stored into the results/logs folder by default and can be compiled easily into a csv file using the log extractor.py script." }, { "heading": "A.12 DETAILED FUTURE WORKS", "text": "Even though we support our main thesis with extensive empirical evidence and analyze this phenomenon through the lens of texture bias and influence functions, why, when, and how robust models transfer better deserves further investigation. In this section we’d like to provide some ideas that we hope will spark research interest.\nDifferent adversarial training constraint type. Prior work only considers `2 and `∞ adversarial constraint types. However, different adversarial constraints could allow models to retain more transferable features from the source dataset. Two possibilities are (i) constraining on the Fischer Kernel with influence functions, and (ii) constraining on the Fourier space instead of the pixel space. For (i), as shown by Koh & Liang (2017) for the ith image xi we can compute the perturbation δi at each step of PGD by starting with δi = 0, and then δi = P(δi + αsignIpert,loss(xi + δi, xi)). For (ii), we could instead calculate the gradient w.r.t. each one of the frequencies and constrain the model to only use a subset of all of its frequencies to represent the input image.\nDifferent source datasets. ImageNet might not be the best source dataset for two reasons. First, we as shown by Tsipras et al. (2020) there are many labels that overlap with each other, such as rifle and assault rifle. 
Second, there are many training images containing objects from more than one label, referred to in Tsipras et al. (2020) as multi-object images. Thus, we think it would be worthwhile to use Tencent’s Large-Scale Multi-Label Image Database from Wu et al. (2019) as a source dataset instead of ImageNet.
Decision-boundary bias. In line with Section 5, it might be worth looking at the transferability of robust models as a function of how closely related the labels in the target dataset are. Our hypothesis is that if the labels in the target dataset are closely related, then the robust model might transfer slightly worse than if the labels were further apart from each other. Although measuring the closeness of labels within a dataset is challenging, this could be an interesting extension to Santurkar et al. (2020).
New use-cases. As shown in Section 5, robust models are biased towards low resolutions and low frequencies. Thus, it is possible that robust models have a lower facial recognition bias than naturally trained models." } ]
2021
ADVERSARIALLY-TRAINED DEEP NETS TRANSFER BETTER: ILLUSTRATION ON IMAGE CLASSIFICATION
SP:ffc8e46a5dbbcd0906458c0e302190997dfe8b5e
[ "This paper proposed a method, called TaylorGLO, to learn the loss functions, for training deep neural network, by meta-learning. Specifically, the authors proposed to parameterize the loss function with multivariate Taylor polynomial, and then learn the parameters in the polynomial using evolutionary algorithm within the meta-learning framework. The experiments showed improved performance of the TaylorGLO over cross-entropy baseline on several datasets and with different network architectures." ]
Metalearning of deep neural network (DNN) architectures and hyperparameters has become an increasingly important area of research. Loss functions are a type of metaknowledge that is crucial to effective training of DNNs; however, their potential role in metalearning has not yet been fully explored. Whereas early work focused on genetic programming (GP) on tree representations, this paper proposes continuous CMA-ES optimization of multivariate Taylor polynomial parameterizations. This approach, TaylorGLO, makes it possible to represent and search useful loss functions more effectively. In MNIST, CIFAR-10, and SVHN benchmark tasks, TaylorGLO finds new loss functions that outperform the standard cross-entropy loss as well as novel loss functions previously discovered through GP, in fewer generations. These functions serve to regularize the learning task by discouraging overfitting to the labels, which is particularly useful in tasks where limited training data is available. The results thus demonstrate that loss function optimization is a productive new avenue for metalearning.
[]
[ { "authors": [ "M. Abadi", "P. Barham", "J. Chen", "Z. Chen", "A. Davis", "J. Dean", "M. Devin", "S. Ghemawat", "G. Irving", "M. Isard", "M. Kudlur", "J. Levenberg", "R. Monga", "S. Moore", "D.G. Murray", "B. Steiner", "P. Tucker", "V. Vasudevan", "P. Warden", "M. Wicke", "Y. Yu", "X. Zheng" ], "title": "TensorFlow: A system for large-scale machine learning", "venue": "In 12th USENIX Symposium on Operating Systems Design and Implementation", "year": 2016 }, { "authors": [ "W. Banzhaf", "P. Nordin", "R.E. Keller", "F.D. Francone" ], "title": "Genetic programming: An introduction, volume 1", "venue": null, "year": 1998 }, { "authors": [ "G. Bingham", "W. Macke", "R. Miikkulainen" ], "title": "Evolutionary optimization of deep learning activation functions", "venue": "In Proceedings of the Genetic and Evolutionary Computation Conference,", "year": 2020 }, { "authors": [ "J. Chisholm" ], "title": "Rational approximants defined from double power series", "venue": "Mathematics of Computation,", "year": 1973 }, { "authors": [ "T. DeVries", "G.W. Taylor" ], "title": "Improved regularization of convolutional neural networks with cutout", "venue": "arXiv preprint arXiv:1708.04552,", "year": 2017 }, { "authors": [ "H. Dong", "S. Yu", "C. Wu", "Y. Guo" ], "title": "Semantic image synthesis via adversarial learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "T. Elsken", "J.H. Metzen", "F. Hutter" ], "title": "Neural architecture search: A survey", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "R. Gao", "K. Grauman" ], "title": "2.5D visual sound", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "A.S. Golatkar", "A. Achille", "S. Soatto" ], "title": "Time matters in regularizing deep networks: Weight decay and data augmentation affect early learning dynamics, matter little near convergence", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "S. Gonzalez", "R. Miikkulainen" ], "title": "Improved training speed, accuracy, and data utilization through loss function optimization", "venue": "In Proceedings of the IEEE Congress on Evolutionary Computation (CEC),", "year": 2020 }, { "authors": [ "S. Gonzalez", "J. Landgraf", "R. Miikkulainen" ], "title": "Faster training by selecting samples using embeddings", "venue": "In 2019 International Joint Conference on Neural Networks (IJCNN),", "year": 2019 }, { "authors": [ "P. Graves-Morris" ], "title": "The numerical calculation of Padé approximants", "venue": "In Padé approximation and its applications,", "year": 1979 }, { "authors": [ "P. Graves-Morris", "D. Roberts" ], "title": "Calculation of Canterbury approximants", "venue": "Computer Physics Communications,", "year": 1975 }, { "authors": [ "J.J. Grefenstette", "J.M. Fitzpatrick" ], "title": "Genetic search with approximate function evaluations", "venue": "In Proceedings of an International Conference on Genetic Algorithms and Their Applications,", "year": 1985 }, { "authors": [ "D. Han", "J. Kim" ], "title": "Deep pyramidal residual networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "N. Hansen", "S. 
Kern" ], "title": "Evaluating the CMA evolution strategy on multimodal test functions", "venue": "In International Conference on Parallel Problem Solving from Nature,", "year": 2004 }, { "authors": [ "N. Hansen", "A. Ostermeier" ], "title": "Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation", "venue": "In Proceedings of IEEE international conference on evolutionary computation,", "year": 1996 }, { "authors": [ "N. Hansen", "A. Ostermeier" ], "title": "Completely derandomized self-adaptation in evolution strategies", "venue": "Evolutionary computation,", "year": 2001 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "G.E. Hinton", "N. Srivastava", "A. Krizhevsky", "I. Sutskever", "R.R. Salakhutdinov" ], "title": "Improving neural networks by preventing co-adaptation of feature detectors", "venue": "arXiv preprint arXiv:1207.0580,", "year": 2012 }, { "authors": [ "P.J. Huber" ], "title": "Robust estimation of a location parameter", "venue": "The Annals of Mathematical Statistics,", "year": 1964 }, { "authors": [ "Y. Jin" ], "title": "Surrogate-assisted evolutionary computation: Recent advances and future challenges", "venue": "Swarm and Evolutionary Computation, 1:61–70,", "year": 2011 }, { "authors": [ "N.S. Keskar", "D. Mudigere", "J. Nocedal", "M. Smelyanskiy", "P.T.P. Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "In Proceedings of the Fifth International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "D. Kingma", "M. Welling" ], "title": "Auto-encoding variational Bayes", "venue": "In Proceedings of the Second International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "A. Krizhevsky", "G. Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "A. Krizhevsky", "I. Sutskever", "G.E. Hinton" ], "title": "ImageNet classification with deep convolutional neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "A. Lacoste", "A. Luccioni", "V. Schmidt", "T. Dandres" ], "title": "Quantifying the carbon emissions of machine learning", "venue": "arXiv preprint arXiv:1910.09700,", "year": 2019 }, { "authors": [ "C. Lemke", "M. Budka", "B. Gabrys" ], "title": "Metalearning: a survey of trends and technologies", "venue": "Artificial Intelligence Review,", "year": 2015 }, { "authors": [ "H. Li", "Z. Xu", "G. Taylor", "C. Studer", "T. Goldstein" ], "title": "Visualizing the loss landscape of neural nets", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "L. v. d. Maaten", "G. Hinton" ], "title": "Visualizing data using t-SNE", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "R. Miikkulainen", "J. Liang", "E. Meyerson", "A. Rawal", "D. Fink", "O. Francon", "B. Raju", "H. Shahrzad", "A. Navruzyan", "N. 
Duffy" ], "title": "Evolving deep neural networks", "venue": "In Artificial Intelligence in the Age of Neural Networks and Brain Computing,", "year": 2019 }, { "authors": [ "Y. Netzer", "T. Wang", "A. Coates", "A. Bissacco", "B. Wu", "A.Y. Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "Neural Information Processing Systems, Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "E. Real", "C. Liang", "D.R. So", "Q.V. Le" ], "title": "Automl-zero: Evolving machine learning algorithms from scratch", "venue": null, "year": 2003 }, { "authors": [ "D.E. Rumelhart", "G.E. Hinton", "R.J. Williams" ], "title": "Learning internal representations by error propagation", "venue": "Technical report, California Univ San Diego La Jolla Inst for Cognitive Science,", "year": 1985 }, { "authors": [ "G.D. Ruxton" ], "title": "The unequal variance t-test is an underused alternative to Student’s t-test and the Mann–Whitney U test", "venue": "Behavioral Ecology,", "year": 2006 }, { "authors": [ "L. Sagun", "U. Evci", "V.U. Guney", "Y. Dauphin", "L. Bottou" ], "title": "Empirical analysis of the Hessian of over-parametrized neural networks", "venue": "arXiv preprint arXiv:1706.04454,", "year": 2017 }, { "authors": [ "J. Schmidhuber" ], "title": "Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-.", "venue": "hook. PhD thesis, Technische Universität München,", "year": 1987 }, { "authors": [ "L.N. Smith" ], "title": "Cyclical learning rates for training neural networks", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2017 }, { "authors": [ "J.T. Springenberg", "A. Dosovitskiy", "T. Brox", "M.A. Riedmiller" ], "title": "Striving for simplicity: The all convolutional net", "venue": "CoRR, abs/1412.6806,", "year": 2015 }, { "authors": [ "C. Szegedy", "W. Liu", "Y. Jia", "P. Sermanet", "S. Reed", "D. Anguelov", "D. Erhan", "V. Vanhoucke", "A. Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "A.N. Tikhonov" ], "title": "Solution of incorrectly formulated problems and the regularization method", "venue": "In Proceedings of the USSR Academy of Sciences,", "year": 1963 }, { "authors": [ "B.L. Welch" ], "title": "The generalization of Student’s problem when several different population variances are involved", "venue": null, "year": 1947 }, { "authors": [ "S. Yun", "D. Han", "S.J. Oh", "S. Chun", "J. Choe", "Y. Yoo" ], "title": "CutMix: Regularization strategy to train strong classifiers with localizable features", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "S. Zagoruyko", "N. Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Y. Zhou", "C. Liu", "Y. Pan" ], "title": "Modelling sentence pairs with tree-structured attentive encoder", "venue": "In Proceedings of the 26th International Conference on Computational Linguistics (COLING), Technical Papers,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "As deep learning systems have become more complex, their architectures and hyperparameters have become increasingly difficult and time-consuming to optimize by hand. In fact, many good designs may be overlooked by humans with prior biases. Therefore, automating this process, known as metalearning, has become an essential part of the modern machine learning toolbox. Metalearning aims to solve this problem through a variety of approaches, including optimizing different aspects of the architecture from hyperparameters to topologies, and by using different methods from Bayesian optimization to evolutionary computation (Schmidhuber, 1987; Elsken et al., 2019; Miikkulainen et al., 2019; Lemke et al., 2015).\nRecently, loss-function discovery and optimization has emerged as a new type of metalearning. Focusing on neural network’s root training goal it aims to discover better ways to define what is being optimized. However, loss functions can be challenging to optimize because they have a discrete nested structure as well as continuous coefficients. The first system to do so, Genetic Loss Optimization (GLO; Gonzalez & Miikkulainen, 2020) tackled this problem by discovering and optimizing loss functions in two separate steps: (1) representing the structure as trees, and evolving them with Genetic Programming (GP; Banzhaf et al., 1998); and (2) optimizing the coefficients using Covariance-Matrix Adaptation Evolutionary Strategy (CMA-ES; Hansen & Ostermeier, 1996). While the approach was successful, such separate processes make it challenging to find a mutually optimal structure and coefficients. Furthermore, small changes in the tree-based search space do not always result in small changes in the phenotype, and can easily make a function invalid, making the search process ineffective.\nIn an ideal case, loss functions would be mapped into fixed-length vectors in a Hilbert space. This mapping should be smooth, well-behaved, well-defined, incorporate both a function’s structure and coefficients, and should by its very nature exclude large classes of infeasible loss functions. This paper introduces such an approach: Multivariate Taylor expansion-based genetic loss-function optimization (TaylorGLO). With a novel parameterization for loss functions, the key pieces of information that affect a loss function’s behavior are compactly represented in a vector. Such vectors are then optimized for a specific task using CMA-ES. Special techniques can be developed to narrow down the search space and speed up evolution.\nLoss functions discovered by TaylorGLO outperform the standard cross-entropy loss (or log loss) on the MNIST, CIFAR-10, CIFAR-100, and SVHN datasets with several different network architectures. They also outperform the Baikal loss, discovered by the original GLO technique, and do it with significantly fewer function evaluations. The reason for the improved performance is that evolved functions discourage overfitting to the class labels, thereby resulting in automatic regularization. These improvements are particularly pronounced with reduced datasets where such regularization matters the most. TaylorGLO thus further establishes loss-function optimization as a promising new direction for metalearning." }, { "heading": "2 RELATED WORK", "text": "Applying deep neural networks to new tasks often involves significant manual tuning of the network design. 
The field of metalearning has recently emerged to tackle this issue algorithmically (Schmidhuber, 1987; Lemke et al., 2015; Elsken et al., 2019; Miikkulainen et al., 2019). While much of the work has focused on hyperparameter optimization and architecture search, recently other aspects, such as activation functions and learning algorithms, have been found to be useful targets for optimization (Bingham et al., 2020; Real et al., 2020). Since loss functions are at the core of machine learning, it is compelling to apply metalearning to their design as well.
Deep neural networks are trained iteratively, by updating model parameters (i.e., weights and biases) using gradients propagated backward through the network (Rumelhart et al., 1985). The process starts from an error given by a loss function, which represents the primary training objective of the network. In many tasks, such as classification and language modeling, the cross-entropy loss (also known as the log loss) has been used almost exclusively. While in some approaches a regularization term (e.g., L2 weight regularization; Tikhonov, 1963) is added to the loss function definition, the core component is still the cross-entropy loss. This loss function is motivated by information theory: It aims to minimize the number of bits needed to identify a message from the true distribution, using a code from the predicted distribution.
In other types of tasks that do not fit neatly into a single-label classification framework, different loss functions have been used successfully (Gonzalez et al., 2019; Gao & Grauman, 2019; Kingma & Welling, 2014; Zhou et al., 2016; Dong et al., 2017). Indeed, different functions have different properties; for instance, the Huber loss (Huber, 1964) is more resilient to outliers than other loss functions. Still, most of the time one of the standard loss functions is used without a justification; therefore, there is an opportunity to improve through metalearning.
Genetic Loss Optimization (GLO; Gonzalez & Miikkulainen, 2020) provided an initial approach to metalearning of loss functions. As described above, GLO is based on tree-based representations with coefficients. Such representations have been dominant in genetic programming because they are flexible and can be applied to a variety of function evolution domains. GLO was able to discover Baikal, a new loss function that outperformed the cross-entropy loss in image classification tasks. However, because the structure and coefficients are optimized separately in GLO, it cannot easily optimize their interactions. Many of the functions created through tree-based search are not useful because they have discontinuities, and mutations can have disproportionate effects on the functions. GLO’s search is thus inefficient, requiring large populations that are evolved for many generations. Thus, GLO does not scale to the large models and datasets that are typical in modern deep learning.
The technique presented in this paper, TaylorGLO, aims to solve these problems through a novel loss function parameterization based on multivariate Taylor expansions. Furthermore, since such representations are continuous, the approach can take advantage of CMA-ES (Hansen & Ostermeier, 1996) as the search method, resulting in faster search." 
}, { "heading": "3 LOSS FUNCTIONS AS MULTIVARIATE TAYLOR EXPANSIONS", "text": "Taylor expansions (Taylor, 1715) are a well-known function approximator that can represent differentiable functions within the neighborhood of a point using a polynomial series. Below, the common univariate Taylor expansion formulation is presented, followed by a natural extension to arbitrarily-multivariate functions.\nGiven a Ckmax smooth (i.e., first through kmax derivatives are continuous), real-valued function, f(x) : R→ R, a kth-order Taylor approximation at point a ∈ R, f̂k(x, a), where 0 ≤ k ≤ kmax, can be constructed as\nf̂k(x, a) = k∑ n=0 1 n! f (n)(a)(x− a)n. (1)\nConventional, univariate Taylor expansions have a natural extension to arbitrarily high-dimensional inputs of f . Given a Ckmax+1 smooth, real-valued function, f(x) : Rn → R, a kth-order Taylor approximation at point a ∈ Rn, f̂k(x,a), where 0 ≤ k ≤ kmax, can be constructed. The stricter smoothness constraint compared to the univariate case allows for the application of Schwarz’s theorem on equality of mixed partials, obviating the need to take the order of partial differentiation into account.\nLet us define an nth-degree multi-index, α = (α1, α2, . . . , αn), where αi ∈ N0, |α| = ∑n i=1 αi,\nα! = ∏n i=1 αi!. x α = ∏n i=1 x αi i , and x ∈ Rn. Multivariate partial derivatives can be concisely written using a multi-index\n∂αf = ∂α11 ∂ α2 2 · · · ∂αnn f =\n∂|α|\n∂xα11 ∂x α2 2 · · · ∂x αn n . (2)\nThus, discounting the remainder term, the multivariate Taylor expansion for f(x) at a is f̂k(x,a) = ∑\n∀α,|α|≤k\n1\nα! ∂αf(a)(x− a)α. (3)\nThe unique partial derivatives in f̂k and a are parameters for a kth order Taylor expansion. Thus, a kth order Taylor expansion of a function in n variables requires n parameters to define the center, a, and one parameter for each unique multi-index α, where |α| ≤ k. That is: #parameters(n, k) = n+ ( n+k k ) = n+ (n+k)!n! k! .\nThe multivariate Taylor expansion can be leveraged for a novel loss-function parameterization. Let an n-class classification loss function be defined as LLog = − 1n ∑n i=1 f(xi, yi). The function f(xi, yi) can be replaced by its kth-order, bivariate Taylor expansion, f̂k(x, y, ax, ay). More sophisticated loss functions can be supported by having more input variables beyond xi and yi, such as a time variable or unscaled logits. This approach can be useful, for example, to evolve loss functions that change as training progresses.\nFor example, a loss function in x and y has the following third-order parameterization with parameters θ (where a = 〈θ0, θ1〉):\nL(x,y) = − 1 n n∑ i=1 [ θ2 + θ3(yi − θ1) + 12θ4(yi − θ1) 2 + 16θ5(yi − θ1) 3 + θ6(xi − θ0)\n+θ7(xi − θ0)(yi − θ1) + 12θ8(xi − θ0)(yi − θ1) 2 + 12θ9(xi − θ0) 2\n+ 12θ10(xi − θ0) 2(yi − θ1) + 16θ11(xi − θ0)\n3 ] (4)\nNotably, the reciprocal-factorial coefficients can be integrated to be a part of the parameter set by direct multiplication if desired.\nAs will be shown in this paper, the technique makes it possible to train neural networks that are more accurate and learn faster than those with tree-based loss function representations. 
Representing loss functions in this manner confers several useful properties:\n• It guarantees smooth functions; • Functions do not have poles (i.e., discontinuities going to infinity or negative infinity) within\ntheir relevant domain; • They can be implemented purely as compositions of addition and multiplication operations; • They can be trivially differentiated; • Nearby points in the search space yield similar results (i.e., the search space is locally\nsmooth), making the fitness landscape easier to search; • Valid loss functions can be found in fewer generations and with higher frequency; • Loss function discovery is consistent and not dependent on a specific initial population; and • The search space has a tunable complexity parameter (i.e., the order of the expansion).\nThese properties are not necessarily held by alternative function approximators. For instance:\nFourier series are well suited for approximating periodic functions (Fourier, 1829). Consequently, they are not as well suited for loss functions, whose local behavior within a narrow domain is important. Being a composition of waves, Fourier series tend to have many critical points within the domain of interest. Gradients fluctuate around such points, making gradient descent infeasible. Additionally, close approximations require a large number of terms, which in itself can be injurious, causing large, high-frequency fluctuations known as “ringing”, due to Gibb’s phenomenon (Wilbraham, 1848). Padé approximants can be more accurate approximations than Taylor expansions; indeed, Taylor expansions are a special case of Padé approximants where M = 0 (Graves-Morris, 1979). However, unfortunately Padé approximants can model functions with one or more poles, which valid loss functions typically should not have. These problems still exist, and are exacerbated, for Chisholm approximants (a bivariate extension; Chisholm, 1973) and Canterbury approximants (a multivariate generalization; Graves-Morris & Roberts, 1975). Laurent polynomials can represent functions with discontinuities, the simplest being x−1. While Laurent polynomials provide a generalization of Taylor expansions into negative exponents, the extension is not useful because it results in the same issues as Padé approximants. Polyharmonic splines can represent continuous functions within a finite domain, however, the number of parameters is prohibitive in multivariate cases.\nThe multivariate Taylor expansion is therefore a better choice than the alternatives. It makes it possible to optimize loss functions efficiently in TaylorGLO, as will be described next.\n4 THE TAYLORGLO METHOD Candidate Evaluation0 0 00 0 0[ ]0 0 Build TaylorGLO Loss FunctionCMA-ES\nMean Vector Covariance\nMatrix Sampler\nPartial Model Training (Few Epochs)\nℒ = − 1 n n\n∑ i=1\nf(xi, yi)\n1.1 0.8 1.41.2 1 1.2[ ]1.4 0.8\nBuild TaylorGLO Loss Function\nInitial Solution Mean Vector\nBest Solution Validation Set Evaluation\nFigure 1: The TaylorGLO method. Starting with a population of initially unbiased loss functions, CMA-ES optimizes their Taylor expansion parameters in order to maximize validation accuracy after partial training. The candidate with the highest accuracy is chosen as the final, best solution.\nTaylorGLO (Figure 1) aims to find the optimal parameters for a loss function represented as a multivariate Taylor expansion. The parameters for a Taylor approximation (i.e., the center point and partial derivatives) are referred to as θf̂ : θf̂ ∈ Θ, Θ = R\n#parameters . 
TaylorGLO strives to find the vector θ∗\nf̂ that parameter-\nizes the optimal loss function for a task. Because the values are continuous, as opposed to discrete graphs of the original GLO, it is possible to use continuous optimization methods.\nIn particular, Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES Hansen & Ostermeier, 1996) is a popular population-based, black-box optimization technique for rugged, continuous spaces. CMA-ES functions by maintaining a covariance matrix around a mean point that represents a distribution of solutions. At each generation, CMA-ES adapts the distribution to better fit evaluated objective values from sampled\nindividuals. In this manner, the area in the search space that is being sampled at each step grows, shrinks, and moves dynamically as needed to maximize sampled candidates’ fitnesses. TaylorGLO uses the (µ/µ, λ) variant of CMA-ES (Hansen & Ostermeier, 2001), which incorporates weighted rank-µ updates (Hansen & Kern, 2004) to reduce the number of objective function evaluations needed.\nIn order to find θ∗ f̂ , at each generation CMA-ES samples points in Θ. Their fitness is determined by training a model with the corresponding loss function and evaluating the model on a validation dataset. Fitness evaluations may be distributed across multiple machines in parallel and retried a limited number of times upon failure. An initial vector of θf̂ = 0 is chosen as a starting point in the search space to avoid bias.\nFully training a model can be prohibitively expensive in many problems. However, performance near the beginning of training is usually correlated with performance at the end of training, and therefore it is enough to train the models only partially to identify the most promising candidates. This type of\napproximate evaluation is common in metalearning (Grefenstette & Fitzpatrick, 1985; Jin, 2011). An additional positive effect is that evaluation then favors loss functions that learn more quickly.\nFor a loss function to be useful, it must have a derivative that depends on the prediction. Therefore, internal terms that do not contribute to ∂∂yLf (x,y) can be trimmed away. This step implies that any term t within f(xi, yi) with ∂∂yi t = 0 can be replaced with 0. For example, this refinement simplifies Equation 4, providing a reduction in the number of parameters from twelve to eight:\nL(x,y) = − 1 n n∑ i=1 [ θ2(yi − θ1) + 12θ3(yi − θ1) 2 + 16θ4(yi − θ1) 3 + θ5(xi − θ0)(yi − θ1)\n+ 12θ6(xi − θ0)(yi − θ1) 2 + 12θ7(xi − θ0) 2(yi − θ1) ] . (5)" }, { "heading": "5 EXPERIMENTAL SETUP", "text": "This section presents the experimental setup that was used to evaluate the TaylorGLO technique.\nDomains: MNIST (LeCun et al., 1998) was included as simple domain to illustrate the method and to provide a backward comparison with GLO; CIFAR-10 (Krizhevsky & Hinton, 2009), CIFAR-100 (Krizhevsky & Hinton, 2009), and SVHN (Netzer et al., 2011) were included as more modern benchmarks. 
Improvements were measured in comparison to the standard cross-entropy loss function LLog = − 1n ∑n i=1 xi log(yi), where x is sampled from the true distribution, y is from the predicted distribution, and n is the number of classes.\nEvaluated architectures: A variety of architectures were used to evaluate TaylorGLO: the basic CNN architecture evaluated in the GLO study (Gonzalez & Miikkulainen, 2020), AlexNet (Krizhevsky et al., 2012), AllCNN-C (Springenberg et al., 2015), Preactivation ResNet-20 (He et al., 2016a), which is an improved variant of the ubiquitous ResNet architecture (He et al., 2016b), and Wide ResNets of different morphologies (Zagoruyko & Komodakis, 2016). Networks with Cutout (DeVries & Taylor, 2017) and CutMix (Yun et al., 2019) were also evaluated, to show that TaylorGLO provides a different, complementary approach to regularization.\nTaylorGLO setup: CMA-ES was instantiated with population size λ = 28 on MNIST and λ = 20 on all other datasets, and an initial step size σ = 1.2. These values were found to work well in preliminary experiments. The candidates were third-order (i.e., k = 3) TaylorGLO loss functions (Equation 5). Such functions were found experimentally to have a better trade-off between evolution time and performance compared to second- and fourth-order TaylorGLO loss functions, although the differences were relatively small.\nFurther experimental setup and implementation details are provided in Appendix A.\n6 RESULTS\nThis section illustrates the TaylorGLO process and demonstrates how the evolved loss functions can improve performance over the standard cross-entropy loss function, especially on reduced datasets. A summary of results on three datasets across a variety of models are shown in Table 1." }, { "heading": "6.1 THE TAYLORGLO DISCOVERY PROCESS", "text": "Figure 2 illustrates the evolution process over 60 generations, which is sufficient to reach convergence on the MNIST dataset. TaylorGLO is able to discover highly-performing loss functions quickly, i.e. within 20 generations. Generations’ average validation accuracy approaches generations’ best accuracy as evolution progresses, indicating that population as a whole is improving. Whereas\nGLO’s unbounded search space often results in pathological functions, every TaylorGLO training session completed successfully without any instabilities.\nFigure 3 shows the shapes and parameters of each generation’s highest-scoring loss function. In Figure 3a the functions are plotted as if they were being used for binary classification, i.e. the loss for an incorrect label on the left and for a correct one on the right (Gonzalez & Miikkulainen, 2020). The functions have a distinct pattern through the evolution process. Early generations include a wider variety of shapes, but they later converge towards curves with a shallow minimum around y0 = 0.8. In other words, the loss increases near the correct output—which is counterintuitive. This shape is also strikingly different from the cross-entropy loss, which decreases monotonically from left to right, as one might expect all loss functions to do. The evolved shape is effective most likely because can provide an implicit regularization effect: it discourages the model from outputting unnecessarily extreme values for the correct class, and therefore makes overfitting less likely (Gonzalez & Miikkulainen, 2020). This is a surprising finding, and demonstrates the power of machine learning to create innovations beyond human design." 
}, { "heading": "6.2 PERFORMANCE COMPARISONS", "text": "Over 10 fully-trained models, the best TaylorGLO loss function achieved a mean testing accuracy of 0.9951 (stddev 0.0005) in MNIST. In comparison, the cross-entropy loss only reached 0.9899 (stddev 0.0003), and the \"BaikalCMA\" loss function discovered by GLO, 0.9947 (stddev 0.0003) (Gonzalez & Miikkulainen, 2020); both differences are statistically significant (Figure 5). Notably, TaylorGLO achieved this result with significantly fewer generations. GLO required 11,120 partial evaluations (i.e., 100 individuals over 100 GP generations plus 32 individuals over 35 CMA-ES generations),\nwhile the top TaylorGLO loss function only required 448 partial evaluations, i.e. 4.03% as many. Thus, TaylorGLO achieves improved results with significantly fewer evaluations than GLO.\nDue to the very large number evaluations required by GLO, TaylorGLO is only compared to GLO on MNIST. GLO is not practically applicable to deeper models with longer training times. For example, even a relatively small deep network, PreResNet-20 He et al. (2016a), would require over 171 GPU days of computation, assuming the same number of evaluations as above on MNIST.\nThe large reduction in evaluations during evolution compared to GLO allows TaylorGLO to tackle harder problems, including models that have millions of parameters. On CIFAR-10, CIFAR-100, and SVHN, TaylorGLO was able to outperform cross-entropy baselines consistently on a variety models, as shown in Table 1. These increases in accuracy are greater than what is possible through implicit learning rate adjustment alone (detailed in Appendix E). TaylorGLO also provides further improvement on architectures that use Cutout (DeVries & Taylor, 2017), suggesting that its mechanism of avoiding overfitting is different from other regularization techniques.\nIn addition, TaylorGLO loss functions result in more robust trained models. In Figure 4, accuracy basins for two AllCNN-C models, one trained with the TaylorGLO loss function and another with the cross-entropy loss, are plotted along a two-dimensional slice [−1, 1] of the weight space (a technique due to Li et al., 2018). The TaylorGLO loss function results in a flatter, lower basin. This result suggests that the model is more robust, i.e. its performance is less sensitive to small perturbations in the weight space, and it also generalizes better (Keskar et al., 2017)." }, { "heading": "6.3 PERFORMANCE ON REDUCED DATASETS", "text": "The performance improvements that TaylorGLO provides are especially pronounced with reduced datasets. For example, Figure 6 compares accuracies of models trained for 20,000 steps on different portions of the MNIST dataset (similar results were obtained with other datasets and architectures). Overall, TaylorGLO significantly outperforms the cross-entropy loss. When evolving a TaylorGLO loss function and training against 10% of the training dataset, with 225 epoch evaluations, TaylorGLO reached an average accuracy across ten models of 0.7595 (stddev 0.0062). In contrast, only four out of ten cross-entropy loss models trained successfully, with those reaching a lower average accuracy of 0.6521. Thus, customized loss functions can be especially useful in applications where only limited data is available to train the models, presumably because they are less likely to overfit to the small number of examples." 
}, { "heading": "7 DISCUSSION AND FUTURE WORK", "text": "TaylorGLO was applied to the benchmark tasks using various standard architectures with standard hyperparameters. These setups have been heavily engineered and manually tuned by the research community, yet TaylorGLO was able to improve them. Interestingly, the improvements were more substantial with wide architectures and smaller with narrow and deep architectures such as the Preactivation ResNet. While it may be possible to further improve upon this result, it is also possible that loss function optimization is more effective with architectures where the gradient information travels through fewer connections, or is otherwise better preserved throughout the network. An important direction of future work is therefore to evolve both loss functions and architectures together, taking advantage of possible synergies between them.\nAs illustrated in Figure 3a, the most significant effect of evolved loss functions is to discourage extreme output values, thereby avoiding overfitting. It is interesting that this mechanism is apparently different from other regularization techniques such as dropout (as shown by Gonzalez & Miikkulainen, 2020) and data augmentation with Cutout (as seen in Table 1). Dropout and Cutout improve performance over the baseline, and loss function optimization improves it further. This result suggests that regularization is a multifaceted process, and further work is necessary to understand how to best take advantage of it.\nAnother important direction is to incorporate state information into TaylorGLO loss functions, such as the percentage of training steps completed. TaylorGLO may then find loss functions that are best suited for different points in training, where, for example, different kinds of regularization work best (Golatkar et al., 2019). Unintuitive changes to the training process, such as cycling learning rates\n(Smith, 2017), have been found to improve performance; evolution could be used to find other such opportunities automatically. Batch statistics could help evolve loss functions that are more well-tuned to each batch; intermediate network activations could expose information that may help tune the function for deeper networks like ResNet. Deeper information about the characteristics of a model’s weights and gradients, such as that from spectral decomposition of the Hessian matrix (Sagun et al., 2017), could assist the evolution of loss functions that adapt to the current fitness landscape. The technique could also be adapted to models with auxiliary classifiers (Szegedy et al., 2015) as a means to touch deeper parts of the network." }, { "heading": "8 CONCLUSION", "text": "This paper proposes TaylorGLO as a promising new technique for loss-function metalearning. TaylorGLO leverages a novel parameterization for loss functions, allowing the use of continuous optimization rather than genetic programming for the search, thus making it more efficient and more reliable. TaylorGLO loss functions serve to regularize the learning task, outperforming the standard cross-entropy loss significantly on MNIST, CIFAR-10, CIFAR-100, and SVHN benchmark tasks with a variety of network architectures. They also outperform previously loss functions discovered in prior work, while requiring many fewer candidates to be evaluated during search. Thus, TaylorGLO results in higher testing accuracies, better data utilization, and more robust models, and is a promising new avenue for metalearning." 
}, { "heading": "A EXPERIMENTAL SETUP", "text": "The following subsections cover specific experimental setup details. The three evaluated datasets are detailed in how they were used, along with implementation details.\nA.1 MNIST\nThe first domain was MNIST Handwritten Digits, a widely used dataset where the goal is to classify 28 × 28 pixel images as one of ten digits. MNIST has 55,000 training samples, 5,000 validation samples, and 10,000 testing samples. The dataset is well understood and relatively quick to train, and forms a good foundation for understanding how TaylorGLO evolves loss functions.\nThe basic CNN architecture evaluated in the GLO study (Gonzalez & Miikkulainen, 2020) can also be used to provide a direct point of comparison with prior work on MNIST. Importantly, this architecture includes a dropout layer (Hinton et al., 2012) for explicit regularization. As in GLO, training is based on stochastic gradient descent (SGD) with a batch size of 100, a learning rate of 0.01, and, unless otherwise specified, occurred over 20,000 steps.\nA.2 CIFAR-10 AND CIFAR-100\nTo validate TaylorGLO in a more challenging context, the CIFAR-10 (Krizhevsky & Hinton, 2009) dataset was used. It consists of small 32 × 32 pixel color photographs of objects in ten classes. CIFAR-10 traditionally consists of 50,000 training samples, and 10,000 testing samples; however 5,000 samples from the training dataset were used for validation of candidates, resulting in 45,000 training samples.\nModels were trained with their respective hyperparameters from the literature. Inputs were normalized by subtracting their mean pixel value and dividing by their pixel standard deviation. Standard data augmentation techniques consisting of random horizontal flips and croppings with two pixel padding were applied during training.\nCIFAR-100 is a similar, though significantly more challenging, dataset where a different set of 60,000 images is divided into 100 classes, instead of 10. The same splits for training, validation, and testing were used for CIFAR-100 as for CIFAR-10, and evaluate TaylorGLO further.\nA.3 SVHN\nThe Street View House Numbers (SVHN; Netzer et al., 2011) dataset is another image classification domain that was used to evaluate TaylorGLO, consisting of 32 × 32 pixel images of numerical digits from Google Street View. SVHN consists of 73,257 training samples, 26,032 testing samples, and 531,131 supplementary, easier training samples. To reduce computation costs, supplementary examples were not used during training; this fact explains why presented baselines may be lower than other SVHN baselines in the literature. Since a validation set is not in the standard splits, 26,032 samples from the training dataset were used for validation of candidates, resulting in 47,225 training samples.\nAs with CIFAR-10, models were trained with their respective hyperparameters from the literature and with the same data augmentation pipeline.\nA.4 CANDIDATE EVALUATION DETAILS\nDuring candidate evaluation, models were trained for 10% of a full training run on MNIST, equal to 2,000 steps (i.e., four epochs). An in-depth analysis on the technique’s sensitivity to training steps during candidate evaluation is provided in Appendix D—overall, the technique is robust even with few training steps. 
However, on more complex models with abrupt learning rate decay schedules, greater numbers of steps provide better fitness estimates.\nA.5 STATISTICAL TESTING\nStatistical significance tests define a null hypothesis and reject it if a p-value is below a predefined significance level, typically 0.05. A p-value is the probability of obtaining extreme results at the same level or greater than the results observed given that the null hypothesis is true.\nWhen comparing results in this paper, a one-tailed null hypothesis is typically used:\nH0 : ¬ (µ1 < µ2) , (6)\nwhere µ1 and µ2 are mean values from two separate sets of training sessions. The rejection of this null hypothesis implies that µ2 is statistically significantly larger than µ1. Thus, the change between the two sets of training sessions is robust to training stochasticity, such as from varying weight initializations.\nThroughout this paper, Welch’s t-Test Welch (1947) is used to determine statistical significance when comparing sets of results which may not have equal variances. It is also a better fit than Student’s t-Test due to its higher robustness and statistical power Ruxton (2006).\nA.6 IMPLEMENTATION DETAILS\nDue to the number of partial training sessions that are needed to evaluate TaylorGLO loss function candidates, training was distributed across the network to a cluster, composed of dedicated machines with NVIDIA GeForce GTX 1080Ti GPUs. Training itself was implemented with TensorFlow (Abadi et al., 2016) in Python. The primary components of TaylorGLO (i.e., the genetic algorithm and CMAES) were implemented in the Swift programming language which allows for easy parallelization. These components run centrally on one machine and asynchronously dispatch work to the cluster.\nTraining for each candidate was aborted and retried up to two additional times if validation accuracy was below 0.15 at the tenth epoch. This method helped reduce computation costs.\nB ILLUSTRATING THE EVOLUTIONARY PROCESS\nThe TaylorGLO search process can be illustrated with t-SNE dimensionality reduction (Maaten & Hinton, 2008) on every candidate loss function within a run (Figure 7). The initial points (i.e. loss functions) are initially widespread on the left side, but quickly migrate and spread to the right as CMA-ES explores the parameter space, and eventually concentrate in a smaller region of dark red points. This pattern is consistent with the convergence and settling in Figure 3." }, { "heading": "C TOP MNIST LOSS FUNCTION", "text": "The best loss function obtained from running TaylorGLO on MNIST was found in generation 74. This function, with parameters θ = 〈11.9039,−4.0240, 6.9796, 8.5834,−1.6677, 11.6064, 12.6684, −3.4674〉 (rounded to four decimal-places), achieved a 2k-step validation accuracy of 0.9950 on its\nsingle evaluation, higher than 0.9903 for the cross entropy loss. This loss function was a modest improvement over the previous best loss function from generation 16, which had a validation accuracy of 0.9958." }, { "heading": "D MNIST EVALUATION LENGTH SENSITIVITY", "text": "200-step TaylorGLO is surprisingly resilient when evaluations during evolution are shortened to 200 steps (i.e., 0.4 epochs) of training. With so little training, returned accuracies are noisy and dependent on each individual network’s particular random initialization. On a 60-generation run with 200-step evaluations, the best evolved loss function had a mean testing accuracy of 0.9946 across ten samples, with a standard deviation of 0.0016. 
While slightly lower, and significantly more variable, than the accuracy for the best loss function that was found on the main 2,000-step run, the accuracy is still significantly higher than that of the cross-entropy baseline, with a p-value of 6.3 × 10−6. This loss function was discovered in generation 31, requiring 1,388.8 2,000-step-equivalent partial evaluations. That is, evolution with 200-step partial evaluations is over three-times less sample efficient than evolution with 2,000-step partial evaluations.\n20,000-step On the other extreme, where evaluations consist of the same number of steps as a full training session, one would expect better loss functions to be discovered, and more reliably, because the fitness estimates are less noisy. Surprisingly, that is not the case: The best loss function had a mean testing accuracy of 0.9945 across ten samples, with a standard deviation of 0.0015. While also slightly lower, and also significantly more variable, than the accuracy for the best loss function that was found on the main 2,000-step run, the accuracy is significantly higher than the cross-entropy baseline, with a p-value of 5.1× 10−6. This loss function was discovered in generation 45, requiring 12,600 2,000-step-equivalent partial evaluations. That is, evolution with 20,000-step full evaluations is over 28-times less sample efficient than evolution with 2,000-step partial evaluations.\nThese results thus suggest that there is an optimal way to evaluate candidates during evolution, resulting in lower computational cost and better loss functions. Notably, the best evolved loss functions from all three runs (i.e., 200-, 2,000-, and 20,000-step) have similar shapes, reinforcing the idea that partial-evaluations can provide useful performance estimates." }, { "heading": "E LEARNING RATE SENSITIVITY", "text": "Loss functions can embody different learning rates implicitly. This section shows that TaylorGLO loss functions’ benefits come from more than just metalearning such learning rates. Increases in performance that result from altering the base learning rate with cross-entropy loss are significantly smaller than those that TaylorGLO provides.\nMore specifically, Figure 8 quantifies the effect of varying learning rates on the final testing accuracy of AllCNN-C models trained on CIFAR-10. AllCNN-C was chosen for this analysis since it exhibits the largest variations in performance, making this effect more clear. While learning rates larger than 0.01 (the standard learning rate for AllCNN-C) reach slightly higher accuracies, this effect comes at the cost of less stable training. The majority of models trained with these higher learning rates failed to train. Thus, the standard choice of learning rate for AllCNN-C is appropriate for the cross-entropy loss, and TaylorGLO loss functions are able to improve upon it." }, { "heading": "F TAYLOR APPROXIMATIONS OF THE CROSS-ENTROPY LOSS", "text": "While TaylorGLO’s performance originates primarily from discovering better loss functions, it is informative to analyze what role the accuracy of the Taylor approximation plays in it. One way to characterize this effect is to analyze the performance of various Taylor approximations of the cross-entropy loss.\nTable 2 provides results from such a study. Bivariate approximations to the cross-entropy loss, centered at a = 〈0.5, 0.5〉, with different orders k were used to train AllCNN-C models on CIFAR-10. Third-order approximations and above are trainable. 
Approximations’ performance is within a few\npercentage points of the cross-entropy loss, with higher-order approximations yielding progressively better accuracies, as expected.\nThe results thus show that third-order TaylorGLO loss functions cannot represent the cross-entropy baseline loss accurately. One possibility for improving TaylorGLO is thus to utilize higher order approximations. However, it is remarkable that TaylorGLO can still find loss functions that outperform the cross-entropy loss. Also, the increase in the number of parameters—and the corresponding increase in computational requirements—may in practice outweigh the benefits from a finer-grained representation. This effect was seen in preliminary experiments, and the third-order approximations (used in this paper) deemed to strike a good balance." }, { "heading": "G TAYLORGLO EXPERIMENT DURATIONS AND ENVIRONMENTAL IMPACT", "text": "The infrastructure that ran the experiments in this paper is located in California, which is estimated to have had an estimated carbon dioxide equivalent total output emission rate of 226.21 kgCO2eq/kWh in 2018 (epa, 2020). This quantity can be used to calculate the climate impact of compute-intensive experiments.\nTable 3 provides estimates of durations and total emissions for various TaylorGLO experiments. Emissions were calculated using the Machine Learning Impact calculator (Lacoste et al., 2019), assuming that no candidates failed evaluation (which would result in slightly lower estimates). Presented values can thus be thought of as being an upper bound.\nOverall, experiment durations are short enough that TaylorGLO can be practically applied to different tasks to find customized loss functions." } ]
2,020
OPTIMIZING LOSS FUNCTIONS THROUGH MULTI- VARIATE TAYLOR POLYNOMIAL PARAMETERIZATION
SP:6a0a4a33a8023f2bed39d64f92a054e494ecdb74
[ "The paper proposes an efficient long-range convolution method for point clouds by using the non-uniform Fourier transform. The long-range convolutional (LRC)-layer mollifies the point cloud to an adequately sized regular grid, computes its Fourier transform, multiplies the results by a set of trainable Fourier multipliers, computes the inverse Fourier transform, and finally interpolates the result back to the point cloud. The method is demonstrated to be effective by a N-body problem." ]
The efficient treatment of long-range interactions for point clouds is a challenging problem in many scientific machine learning applications. To extract global information, one usually needs a large window size, a large number of layers, and/or a large number of channels. This can often significantly increase the computational cost. In this work, we present a novel neural network layer that directly incorporates long-range information for a point cloud. This layer, dubbed the long-range convolutional (LRC)-layer, leverages the convolutional theorem coupled with the non-uniform Fourier transform. In a nutshell, the LRC-layer mollifies the point cloud to an adequately sized regular grid, computes its Fourier transform, multiplies the result by a set of trainable Fourier multipliers, computes the inverse Fourier transform, and finally interpolates the result back to the point cloud. The resulting global all-to-all convolution operation can be performed in nearly-linear time asymptotically with respect to the number of input points. The LRC-layer is a particularly powerful tool when combined with local convolution as together they offer efficient and seamless treatment of both short and long range interactions. We showcase this framework by introducing a neural network architecture that combines LRC-layers with short-range convolutional layers to accurately learn the energy and force associated with a N -body potential. We also exploit the induced two-level decomposition and propose an efficient strategy to train the combined architecture with a reduced number of samples.
[]
[ { "authors": [ "M. Aubry", "U. Schlickewei", "D. Cremers" ], "title": "The wave kernel signature: A quantum mechanical approach to shape analysis", "venue": "IEEE International Conference on Computer Vision Workshops (ICCV),", "year": 2011 }, { "authors": [ "A.H. Barnett", "J. Magland", "L. af Klinteberg" ], "title": "A parallel nonuniform fast Fourier transform library based on an “exponential of semicircle\" kernel", "venue": "SIAM J. Sci. Comput.,", "year": 2019 }, { "authors": [ "J. Behler", "M. Parrinello" ], "title": "Generalized neural-network representation of high-dimensional potentialenergy surfaces", "venue": "Phys. Rev. Lett.,", "year": 2007 }, { "authors": [ "T. Bereau", "R.A. DiStasio", "A. Tkatchenko", "O.A. von Lilienfeld" ], "title": "Non-covalent interactions across organic and biological subsets of chemical space: Physics-based potentials parametrized from machine learning", "venue": "J. Chem. Phys.,", "year": 2018 }, { "authors": [ "D. Chen", "X. Tian", "Y. Shen", "O. Ming" ], "title": "On visual similarity based 3D model retrieval", "venue": "Computer Graphics Forum,", "year": 2003 }, { "authors": [ "X. Chen", "H. Ma", "J. Wan", "B. Li", "T. Xia" ], "title": "Multi-view 3D object detection network for autonomous driving", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "J.W. Cooley", "J.W. Tukey" ], "title": "An algorithm for the machine calculation of complex", "venue": "Fourier series. Math. Comput.,", "year": 1965 }, { "authors": [ "M. Defferrard", "X. Bresson", "P. Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "H. Deng", "T. Birdal", "S. Ilic" ], "title": "PPF-FoldNet: Unsupervised learning of rotation invariant 3D local descriptors", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Z. Deng", "C. Chen", "X.G. Li", "S.P. Ong" ], "title": "An electrostatic spectral neighbor analysis potential for lithium nitride", "venue": "NPJ Comput. Mater.,", "year": 2019 }, { "authors": [ "A. Dutt", "V. Rokhlin" ], "title": "Fast fourier transforms for nonequispaced data", "venue": "SIAM J. Sci. Comput.,", "year": 1993 }, { "authors": [ "Y Fan", "L. Ying" ], "title": "Solving optical tomography with deep learning", "venue": null, "year": 2019 }, { "authors": [ "Y. Fan", "J. Feliu-Fabà", "L. Lin", "L. Ying", "L. Zepeda-Núñez" ], "title": "A multiscale neural network based on hierarchical nested bases", "venue": "Res. Math. Sci.,", "year": 2019 }, { "authors": [ "Y. Fan", "L. Lin", "L. Ying", "L. Zepeda-Núñez" ], "title": "A multiscale neural network based on hierarchical matrices", "venue": "Multiscale Model. & Sim.,", "year": 2019 }, { "authors": [ "M. Gadelha", "R. Wang", "S. Maji" ], "title": "Multiresolution tree networks for 3d point cloud processing", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "L. Greengard", "J. Lee" ], "title": "Accelerating the nonuniform fast fourier transform", "venue": "SIAM Review,", "year": 2004 }, { "authors": [ "L. Greengard", "V. Rokhlin" ], "title": "A fast algorithm for particle simulations", "venue": "J. Comput. Phys.,", "year": 1987 }, { "authors": [ "A. Grisafi", "M. Ceriotti" ], "title": "Incorporating long-range physics in atomic-scale machine learning", "venue": "J. Chem. 
Phys.,", "year": 2019 }, { "authors": [ "A. Grisafi", "J. Nigam", "M. Ceriotti" ], "title": "Multi-scale approach for the prediction of atomic scale properties. arXiv:2008.12122", "venue": null, "year": 2008 }, { "authors": [ "M. Hirn", "S. Mallat", "N. Poilvert" ], "title": "Wavelet scattering regression of quantum chemical energies", "venue": "Multiscale Model Simul.,", "year": 2017 }, { "authors": [ "V. Jampani", "M. Kiefel", "P.V. Gehler" ], "title": "Learning sparse high dimensional filters: Image filtering, dense CRFs and bilateral neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Y. Khoo", "L. Ying" ], "title": "SwitchNet: A neural network model for forward and inverse scattering problems", "venue": "SIAM J. Sci. Comput.,", "year": 2019 }, { "authors": [ "D. Kingma", "J. Ba" ], "title": "Adam: a method for stochastic optimization", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "R. Klokov", "V. Lempitsky" ], "title": "Escape from cells: Deep Kd-networks for the recognition of 3D point cloud models", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "T.W. Ko", "J.A. Finkler", "S. Goedecker", "J. Behler" ], "title": "A fourth-generation high-dimensional neural network potential with accurate electrostatics including non-local charge transfer", "venue": null, "year": 2009 }, { "authors": [ "R. Kondor", "N. Teneva", "V. Garg" ], "title": "Multiresolution matrix factorization", "venue": "Proceedings of Machine Learning Research,", "year": 2014 }, { "authors": [ "Y. Li", "R. Bu", "M. Sun", "W. Wu", "X. Di", "B. Chen" ], "title": "PointCNN: Convolution on x-transformed points", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Z. Li", "N. Kovachki", "K. Azizzadenesheli", "B. Liu", "K. Bhattacharya", "A. Stuart", "A. Anandkumar" ], "title": "Multipole graph neural operator for parametric partial differential equations", "venue": null, "year": 2006 }, { "authors": [ "Y. Liu", "B. Fan", "S. Xiang", "C. Pan" ], "title": "Relation-shape convolutional neural network for point cloud analysis", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "D. Maturana", "S. Scherer" ], "title": "Voxnet: A 3D convolutional neural network for real-time object recognition", "venue": "In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2015 }, { "authors": [ "J. Nigam", "S. Pozdnyakov", "M. Ceriotti" ], "title": "Recursive evaluation and iterative contraction of N-body equivariant features", "venue": "J. Chem. Phys.,", "year": 2020 }, { "authors": [ "Y.J. Oh", "Y. Watanabe" ], "title": "Development of small robot for home floor cleaning", "venue": "In Proceedings of the 41st SICE Annual Conference,", "year": 2002 }, { "authors": [ "Y. Park", "V. Lepetit", "W. Woo" ], "title": "Multiple 3D object tracking for augmented reality", "venue": "In Proceedings of the 7th IEEE/ACM International Symposium on Mixed and Augmented Reality,", "year": 2008 }, { "authors": [ "C.R. Qi", "H. Su", "K. Mo", "L.J. Guibas" ], "title": "Pointnet: Deep learning on point sets for 3D classification and segmentation", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "C.R. Qi", "L. Yi", "H. Su", "L.J. 
Guibas" ], "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "G. Riegler", "A.O. Ulusoy", "A. Geiger" ], "title": "Octnet: Learning deep 3D representations at high resolutions", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "O. Ronneberger", "P. Fischer", "T. Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention", "venue": "MICCAI", "year": 2015 }, { "authors": [ "K. Rossi", "V. Jurásková", "R. Wischert", "L. Garel", "C. Corminboeuf", "M. Ceriotti" ], "title": "Simulating solvation and acidity in complex mixtures with first-principles accuracy: the case of CH3SO3H and H2O2 in phenol", "venue": "J. Chem. Theory Comput.,", "year": 2020 }, { "authors": [ "M. Rupp", "A. Tkatchenko", "K. Müller", "OA" ], "title": "Von Lilienfeld. Fast and accurate modeling of molecular atomization energies with machine learning", "venue": "Phys. Rev. Lett.,", "year": 2012 }, { "authors": [ "R.B. Rusu", "N. Blodow", "Z.C. Marton", "M. Beetz" ], "title": "Aligning point cloud views using persistent feature histograms", "venue": "In 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2008 }, { "authors": [ "R.B. Rusu", "N. Blodow", "M. Beetz" ], "title": "Fast point feature histograms (FPFH) for 3D registration", "venue": "In Proceedings of the 2009 IEEE International Conference on Robotics and Automation,", "year": 2009 }, { "authors": [ "M. Savva", "F. Yu", "H. Su", "A. Kanezaki", "T. Furuya", "R. Ohbuchi", "Z. Zhou", "R. Yu", "S. Bai", "X. Bai", "M. Aono", "A. Tatsuma", "S. Thermos", "A. Axenopoulos", "G. Th. Papadopoulos", "P. Daras", "X. Deng", "Z. Lian", "B. Li", "H. Johan", "Y. Lu", "S. Mk" ], "title": "Large-scale 3D shape retrieval from shapenet core55", "venue": "In Eurographics Workshop on 3D Object Retrieval,", "year": 2016 }, { "authors": [ "H. Su", "V. Jampani", "D. Sun", "S. Maji", "E. Kalogerakis", "M. Yang", "J. Kautz" ], "title": "SPLATNet: Sparse lattice networks for point cloud processing", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "L. Wang", "Y. Huang", "Y. Hou", "S. Zhang", "J. Shan" ], "title": "Graph attention convolution for point cloud semantic segmentation", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Z. Wu", "S. Song", "A. Khosla", "F. Yu", "L. Zhang", "X.Tang", "J. Xiao" ], "title": "3D shapenets: A deep representation for volumetric shapes", "venue": "In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "M. Xu", "W. Dai", "Y. Shen", "H. Xiong" ], "title": "MSGCNN: Multi-scale graph convolutional neural network for point cloud segmentation", "venue": "IEEE Fifth International Conference on Multimedia Big Data (BigMM),", "year": 2019 }, { "authors": [ "Y. Xu", "T. Fan", "M. Xu", "L. Zeng", "Y. Qiao" ], "title": "SpideCNN: Deep learning on point sets with parameterized convolutional filters", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Y. Yang", "C. Feng", "Y. Shen", "D. 
Tian" ], "title": "FoldingNet: Point cloud auto-encoder via deep grid deformation", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "K. Yao", "J.E. Herr", "D.W. Toth", "R. Mckintyre", "J. Parkhill" ], "title": "The tensorMol-0.1 model chemistry: a neural network augmented with long-range physics", "venue": "Chem. Sci.,", "year": 2018 }, { "authors": [ "X. Ye", "J. Li", "H. Huang", "L. Du", "X. Zhang" ], "title": "3D recurrent neural networks with context fusion for point cloud semantic segmentation", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "W. Zeng", "T. Gevers" ], "title": "3DContextNet: K-d tree guided hierarchical learning of point clouds using local and global contextual cues", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV) Workshops,", "year": 2018 }, { "authors": [ "Z. Zhai", "X. Zhang", "L. Yao" ], "title": "Multi-scale dynamic graph convolution network for point clouds classification", "venue": "IEEE Access,", "year": 2020 }, { "authors": [ "L. Zhang", "J. Han", "H. Wang", "R. Car", "W. E" ], "title": "Deep potential molecular dynamics: A scalable model with the accuracy of quantum mechanics", "venue": null, "year": 2018 }, { "authors": [ "L. Zhang", "J. Han", "Ha. Wang", "W. Saidi", "R. Car", "W. E" ], "title": "End-to-end symmetry preserving interatomic potential energy model for finite and extended systems", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "L. Zhang", "M. Chen", "X. Wu", "H. Wang", "W. E", "R. Car" ], "title": "Deep neural network for the dielectric response of insulators", "venue": null, "year": 1906 }, { "authors": [ "Y. Zhou", "O. Tuzel" ], "title": "Voxelnet: End-to-End learning for point cloud based 3D object detection", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Point-cloud representations provide detailed information of objects and environments. The development of novel acquisition techniques, such as laser scanning, digital photogrammetry, light detection and ranging (LIDAR), 3D scanners, structure-from-motion (SFM), among others, has increased the interest of using point cloud representation in various applications such as digital preservation, surveying, autonomous driving (Chen et al., 2017), 3D gaming, robotics (Oh & Watanabe, 2002), and virtual reality (Park et al., 2008). In return, this new interest has fueled the development of machine learning frameworks that use point clouds as input. Historically, early methods used a preprocessing stage that extracted meticulously hand-crafted features from the point cloud, which were subsequently fed to a neural network (Chen et al., 2003; Rusu et al., 2008; Rusu et al., 2009; Aubry et al., 2011), or they relied on voxelization of the geometry (Savva et al., 2016; Wu et al., 2015; Riegler et al., 2017; Maturana & Scherer, 2015). The PointNet architecture (Qi et al., 2017) was the first to handle raw point cloud data directly and learn features on the fly. This work has spawned several related approaches, aiming to attenuate drawbacks from the original methodology, such as PointNet++ (Qi et al., 2017), or to increase the accuracy and range of application (Wang et al., 2019; Zhai et al., 2020; Li et al., 2018; Liu et al., 2019).\nEven though such methods have been quite successful for machine learning problems, they rely on an assumption of locality, which may produce large errors when the underlying task at hand exhibits long-range interactions (LRIs). To capture such interactions using standard convolutional layers, one can use wider window sizes, deeper networks, and/or a large number of features, which may increase the computational cost significantly. Several approaches have been proposed to efficiently capture such interactions in tasks such as semantic segmentation, of which the ideas we briefly summarize below. In the multi-scale type of approaches, features are progressively processed and merged. Within this family, there exist several variants, where the underlying neural networks can\nbe either recursive neural networks (Ye et al., 2018), convolutional layers (Xu et al., 2019; Xu et al., 2018) or autoencoders (Yang et al., 2018; Deng et al., 2018). Some works have proposed skip connections, following an U-net (Ronneberger et al., 2015) type architecture (Zhou & Tuzel, 2018; Qi et al., 2017), while others have focused on using a tree structure for the clustering of the points (Klokov & Lempitsky, 2017; Zeng & Gevers, 2018; Gadelha et al., 2018), or using an reference permutohedral lattices to compute convolutions (Jampani et al., 2016) whose results are interpolated back to the point cloud (Su et al., 2018). Although these methods have been shown to be successful in a range of applications, when the task at hand presents symmetries, such as rotation, translation, and permutation invariance, there is no systematic framework to embed those symmetries into the algorithmic pipelines. 
Another line of work, relies on interpreting the point cloud as a graph and use spectral convolutions (Bruna et al.; Defferrard et al., 2016), whose cost can scale super-linearly when dealing with LRIs.\nIn applications of machine learning to scientific computing, several classical multilevel matrix factorizations have been rewritten in the context of machine learning (Kondor et al., 2014), which have been adapted to handle long-range interactions in the context of end-to-end maps using voxelized geometries in (Fan et al., 2019b;a; Khoo & Ying, 2019; Fan & Ying, 2019) resulting in architectures similar to U-nets (Ronneberger et al., 2015), which have been extended to point clouds in (Li et al., 2020). Due to underlying voxelization of the geometry, it may be difficult for these networks to generalize when the resolution of the voxelization changes.\nThe efficient treatment of LRI for point clouds is also a prominent problem in many physical applications such as molecular modeling and molecular dynamics simulation. While long-range electrostatic interactions are omnipresent, it has been found that effectively short-ranged models can already describe the N -body potential and the associated force field (Behler & Parrinello, 2007; Zhang et al., 2018a;b) for a wide range of physical systems. There have also been a number of recent works aiming at more general systems beyond this regime of effective short-range interactions, such as the work of Ceriotti and co-workers (Grisafi & Ceriotti, 2019; Grisafi et al.; Nigam et al., 2020; Rossi et al., 2020), as well as the works of (Yao et al., 2018; Ko et al., 2009; Hirn et al., 2017; Rupp et al., 2012; Huo & Rupp; Deng et al., 2019; Bereau et al., 2018; Zhang et al., 2019). The general strategy is to build parameterized long-range interactions into the kernel methods or neural network models, so that the resulting model can characterize both short-range, as well as long-range electrostatic interactions. In the neural network context, the computational cost of treating the LRIs using these methods can grow superlinearly with the system size.\nThe idea of this work is aligned with the approaches in the molecular modeling community, which constructs a neural network layer to directly describe the LRI. In particular, we present a new longrange convolutional (LRC)-layer, which performs a global convolutional operation in nearly-linear time with respect to number of units in the layer. By leveraging the non-uniform Fourier transform (NUFFT) (Dutt & Rokhlin, 1993; Greengard & Lee, 2004; Barnett et al., 2019) technique, the LRC-layer implements a convolution with a point-wise multiplication in the frequency domain with trainable weights known as Fourier multipliers. The NUFFT is based on the regular fast Fourier transform (FFT) (Cooley & Tukey, 1965) with a fast gridding algorithms, to allow for fast convolution on unstructured data. This new LRC-layer provides a new set of descriptors that can seamlessly satisfy relevant symmetries. For instance, when the kernel of the LRI is rotationally invariant, such symmetry can be directly built into the parameterization of the Fourier kernel. Such descriptors can be used in tandem with the descriptors provided by short-range convolutional layers to improve the performance of the neural network.\nEfficient training of a neural network with the LRC-layer for capturing the information of LRIs is another challenging problem. 
Short-range models can often be trained with data generated with a relatively small computational box (called the small-scale data), and they can be seamlessly deployed in large-scale systems without significantly increasing the generalization error. On the other hand, long-range models need to be trained directly with data generated in a large computational box (called the large-scale data), and the generation process of such large-scale data can be very expensive. For instance, in molecular modeling, the training data is often generated with highly accurate quantum mechanical methods, of which the cost can scale steeply as O(Nα), where N is the system size and α ≥ 3. Therefore it is desirable to minimize the number of samples with a large system size. In many applications, the error of the effective short-range model is already modestly small. This motivates us to propose a two-scale training strategy as follows. We first generate many small-scale data (cheaply\nand possibly in parallel), and train the network without the LRC-layer. Then we use a small number of large-scale data, and perform training with both the short- and long-range convolutional layers.\nIn order to demonstrate the effectiveness of the LRC-layer and the two-scale training procedure, we apply our method to evaluate the energy and force associated with a model N -body potential that exhibit tunable short- and long-range interactions in one, two and three dimensions. The input point cloud consists of the atomic positions, and the output data include the N -body potential, local potential, and the force (derivative of the N -body potential with respect to atomic positions). In particular, the local potential and the force can be viewed as point clouds associated with the atomic positions. The evaluation of the N -body potential is a foundational component in molecular modeling, and LRI plays an important role in the description of ionic systems, macroscopically polarized interfaces, electrode surfaces, and many other problems in nanosciences (French et al., 2010). Our result verifies that the computational cost of the long-range layer can be reduced from O(N2) using a direct implementation, to O(N) (up to logarithmic factors) using NUFFT. Furthermore, we demonstrate that the force, i.e. the derivatives of the potential with respect to all inputs can be evaluated with O(N) cost (up to logarithmic factors). In terms of sample efficiency, we find that for the model problem under study here, the two-scale training strategy can effectively reduce the number of large-scale samples by over an order of magnitude to reach the target accuracy. This can be particularly valuable in the context of molecular modeling, where accurate data are often obtained from first principle electronic structure calculations. Such calculations are often very expensive for large scale systems, and the number of large-scale samples is thus limited." }, { "heading": "2 LONG-RANGE CONVOLUTIONAL LAYER", "text": "Convolutional layers are perhaps the most important building-block in machine learning, due to their great success in image processing and computer vision. A convolutional layer convolves the input, usually an array, with a rectangular mask containing the trainable parameters. When the mask can be kept small (for example while extracting localized features), the convolution layer is highly efficient and effective. 
A different way of computing a convolution is to use the convolution theorem as follows: (1) compute the Fourier transform of the input, (2) multiply by the Fourier transform of the mask, i.e., the Fourier multiplier, and (3) inverse Fourier transform back. In this case, the trainable parameters are the DOFs of the Fourier multipliers, and the Fourier transforms are computed using the fast Fourier transform (FFT). This alternative approach is particularly attractive for smooth kernels with large support (i.e., smooth long-range interactions) because the computational cost does not increase with the size of the mask. To the best of our knowledge, this direction has not been explored for LRIs, and below we detail how to apply it to point clouds.\nGiven a point cloud $\{x_i\}_{i=1}^N \subset \mathbb{R}^d$ and scalar weights $\{f_i\}_{i=1}^N$, we consider the problem of computing the quantity $u_i := \sum_{j=1}^N \phi_\theta(x_i - x_j) f_j$ at each $x_i$. Here the function $\phi_\theta(\cdot)$ is the kernel with a generic trainable parameter $\theta$. At first glance the cost of this operation scales as $O(N^2)$: we need to evaluate $u_i$ for each point $x_i$, which requires $O(N)$ work per evaluation. By introducing a generalized function $f(y) = \sum_i f_i\,\delta(y - x_i)$ and defining a function $u(x) = \int \phi_\theta(x - y) f(y)\,dy$, one notices that $u_i$ is the value of $u(x)$ at $x = x_i$. The advantage of this viewpoint is that one can now invoke the connection between convolution and Fourier transform\n$\hat{u}(k) = \hat{\phi}_\theta(k) \cdot \hat{f}(k)$, (1)\nwhere $\hat{\phi}_\theta(k)$ is a trainable Fourier multiplier. This approach is suitable for point clouds since the trainable parameters are decoupled from the geometry of the point cloud. To make this approach practical, one needs to address two issues: (1) the non-uniform distribution of the point cloud and (2) how to represent the multiplier $\hat{\phi}_\theta(k)$.\nNon-uniform distribution of the point cloud Equation 1 suggests that one can compute the convolution directly using the convolution theorem, which typically relies on the FFT to obtain a low-complexity algorithm. Unfortunately, $\{x_i\}_{i=1}^N$ do not form a regular grid, so the FFT cannot be used directly. We overcome this difficulty by invoking the NUFFT1 (Dutt & Rokhlin, 1993), which serves as the corner-stone of our instance of the LRC-layer2.\n1See Appendix C.2 for further details. 2We point out that one could in practice use a fast summation algorithm, such as the fast multipole method (FMM) introduced by Greengard & Rokhlin (1987), to evaluate $u_i$. This would result in the same complexity if the kernel is fixed.\nAlgorithm 1 Long-range convolutional layer\nInput: $\{x_i\}_{i=1}^N$, $\{f_i\}_{i=1}^N$. Output: $\{x_i\}_{i=1}^N$, $\{u_i\}_{i=1}^N$, where $u_i = \sum_{j=1}^N f_j \phi_\theta(x_i - x_j)$.\n1: Define the generalized function: $f(x) = \sum_{j=1}^N f_j \delta(x - x_j)$\n2: Mollify the Dirac deltas: $f_\tau(x) = \sum_{j=1}^N f_j g_\tau(x - x_j)$, where $g_\tau$ is defined in Appendix C.2\n3: Sample on a regular grid: $f_\tau(x_\ell) = \sum_{j=1}^N f_j g_\tau(x_\ell - x_j)$ for $x_\ell$ in a grid of size $L_{FFT}$ in each dimension\n4: Compute the FFT: $F_\tau(k) = \mathrm{FFT}(f_\tau)(k)$\n5: Re-scale the signal: $F(k) = \sqrt{\pi/\tau}\, e^{k^2\tau} F_\tau(k)$\n6: Multiply by the Fourier multipliers: $\hat{v}(k) = \hat{\phi}_\theta(k) \cdot F(k)$\n7: Re-scale the signal: $\hat{v}_{-\tau}(k) = \sqrt{\pi/\tau}\, e^{k^2\tau} \hat{v}(k)$\n8: Compute the IFFT: $u_{-\tau}(x_\ell) = \mathrm{IFFT}(\hat{v}_{-\tau})(x_\ell)$ for $x_\ell$ on the regular grid\n9: Interpolate to the point cloud: $u_i = u(x_i) = u_{-\tau} * g_\tau(x_i)$\nThe LRC-layer is summarized in Alg. 1, where $\tau$ is chosen following Dutt & Rokhlin (1993). The inputs of this layer are the point cloud $\{x_i\}_{i=1}^N$ and the corresponding weights $\{f_i\}_{i=1}^N$. The outputs are $u_i \equiv u(x_i)$ for $i = 1, \ldots, N$.
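Before handling non-uniform points, the convolution-theorem route is easy to verify on a regular grid; a minimal NumPy check (all names ours):

import numpy as np

L, n = 5.0, 128
x = np.arange(n) * L / n
phi = np.exp(-2.0 * np.minimum(x, L - x))        # samples of a smooth periodic kernel
f = np.random.default_rng(1).normal(size=n)      # input weights on the grid

# direct periodic convolution u_i = sum_j phi((i-j) mod n) f_j, O(n^2)
u_direct = np.array([np.sum(phi[(i - np.arange(n)) % n] * f) for i in range(n)])

# same result via point-wise multiplication by the Fourier multiplier, O(n log n)
u_fft = np.real(np.fft.ifft(np.fft.fft(phi) * np.fft.fft(f)))

assert np.allclose(u_direct, u_fft)              # agree to machine precision

In the LRC-layer the multiplier fft(phi) becomes the trainable object, and the NUFFT replaces the plain FFT so that the same nearly-linear route applies to scattered points.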
The number of elements in the underlying grid, $N_{FFT} = L_{FFT}^d$, is chosen such that the kernel is adequately sampled and the complexity remains low. As shown in Appendix C.5, one only needs a relatively small $L_{FFT}$. Even though the precise number is problem-specific, given that the goal is to approximate LRIs that are supposedly smooth, they can be captured with a relatively small number of Fourier modes.\nThe LRC-layer is composed of three steps: (1) It computes the Fourier transform from the point cloud to a regular grid using the NUFFT algorithm (lines 2-5 in Alg. 1, showcased in Fig. 2). (2) It multiplies the result by a set of trainable Fourier multipliers (line 6 in Alg. 1). (3) It computes the inverse Fourier transform from the regular grid back to the point cloud (lines 7-9 in Alg. 1). Within the LRC-layer in Alg. 1, the only trainable component is the parameter $\theta$ of the Fourier multiplier $\hat{\phi}_\theta(k)$. The remaining components, including the mollifier $g_\tau(\cdot)$ and the Cartesian grid size, are taken to be fixed. One can, in principle, train them as well, but this comes at a much higher cost. Among the steps of Alg. 1, the sampling operator, the rescaling operator, the interpolation operator, and the Fourier transforms are all linear and non-trainable. Therefore, derivative computations of backpropagation just go through them directly.\nAlg. 1 is presented in terms of a single channel or feature dimension, i.e., $f_j \in \mathbb{R}$ and $u_i \in \mathbb{R}$. However, it can be easily generalized to multiple channels, for example $f_j \in \mathbb{R}^{d_1}$ and $u_i \in \mathbb{R}^{d_2}$. In this case, the Fourier multiplier $\hat{\phi}_\theta(k)$ at each point $k$ is a $d_2 \times d_1$ matrix, and all Fourier transforms are applied component-wise.\nRepresentation of the Fourier multiplier A useful feature of the LRC-layer is that it is quite easy to impose symmetries on the Fourier multipliers. For example, if the convolution kernel $\phi_\theta(\cdot)$ is constrained to have parity symmetry, rotational symmetry, smoothness or decay properties, these constraints can be imposed accordingly on the coefficients of the Fourier multipliers $\hat{\phi}_\theta(k)$. When the size of the training data is limited, it is often necessary to reduce the number of trainable parameters in order to regularize the kernel. For example, we may parameterize the Fourier multiplier as a linear combination of several predetermined functions on the Fourier grid. This is the procedure used in molecular modeling (Grisafi & Ceriotti, 2019; Yao et al., 2018; Ko et al., 2009), and also in our numerical examples in equation 7. We also remark that the LRC-layer described here can be applied to point clouds in a way similar to a standard convolution layer applied to images, and multiple LRC-layers can be composed on top of each other.
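A minimal single-channel 1D sketch of Alg. 1 can be written with the Python bindings of the FINUFFT library of Barnett et al. (2019) (an assumption on our part: the paper does not state which NUFFT implementation it uses, and the exact finufft call signatures and default mode ordering should be checked against the library's documentation):

import numpy as np
import finufft  # fast non-uniform FFT (Barnett et al., 2019)

def lrc_1d(x, f, phi_hat, L):
    """u_i = sum_j phi(x_i - x_j) f_j for an L-periodic, band-limited kernel phi.
    phi_hat holds the Fourier coefficients of phi on modes k = -K, ..., K."""
    t = 2.0 * np.pi * x / L                                   # rescale points to [0, 2*pi)
    # (1) point cloud -> regular Fourier modes (type-1 NUFFT; lines 2-5 of Alg. 1)
    f_hat = finufft.nufft1d1(t, f.astype(complex), len(phi_hat), isign=-1)
    # (2) point-wise multiplication by the Fourier multiplier (line 6)
    v_hat = phi_hat * f_hat
    # (3) Fourier modes -> point cloud (type-2 NUFFT; lines 7-9)
    return finufft.nufft1d2(t, v_hat, isign=+1).real

The gridding, rescaling and deconvolution steps of Alg. 1 are hidden inside the two NUFFT calls; multiple channels amount to looping this over the rows of a matrix-valued phi_hat.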
However, in order for the kernel to be trainable, this would require a different algorithm for each iteration, including the computation of the derivatives, thus increasing the computational cost and rendering the implementation significantly more cumbersome.
sha1_base64=\"0Pdhr03PRt6u7uFQmsGCy911yOY=\">AAACFXicbVDLSgMxFM3UV62vqks3wSK0IGVGKnYjFITiskJf0JYhk2ba0MyD5I5YhvkJN/6KGxeKuBXc+Tdm2i609UDC4ZxzSe5xQsEVmOa3kVlb39jcym7ndnb39g/yh0dtFUSSshYNRCC7DlFMcJ+1gINg3VAy4jmCdZzJTep37plUPPCbMA3ZwCMjn7ucEtCSnT+v230gUXFSwte4D+wB4tkNENfrzSQpunacBpKSjtj5glk2Z8CrxFqQAlqgYee/+sOARh7zgQqiVM8yQxjERAKngiW5fqRYSOiEjFhPU594TA3i2VYJPtPKELuB1McHPFN/T8TEU2rqOTrpERirZS8V//N6EbjVQcz9MALm0/lDbiQwBDitCA+5ZBTEVBNCJdd/xXRMJKGgi8zpEqzllVdJ+6JsVcqXd5VCrbqoI4tO0CkqIgtdoRq6RQ3UQhQ9omf0it6MJ+PFeDc+5tGMsZg5Rn9gfP4AXkueTw==</latexit>\n<latexit sha1_base64=\"s7++NogcTuVwzYAFvVpPjIhc0y4=\">AAACIHicbVDLSgMxFM34rPVVdekmWIS6KTNF0Y1QFIrLCrYVOrVk0kwbmnmY3BFKmE9x46+4caGI7vRrTNtZaOuBwMk595Dc48WCK7DtL2thcWl5ZTW3ll/f2NzaLuzsNlWUSMoaNBKRvPWIYoKHrAEcBLuNJSOBJ1jLG16O/dYDk4pH4Q2MYtYJSD/kPqcEjNQtnNZKw6NzV91L0K4vCdVuzFPtAknSFLM7PbzTlRRP7rjWnRom0i0U7bI9AZ4nTkaKKEO9W/h0exFNAhYCFUSptmPH0NFEAqeCpXk3USwmdEj6rG1oSAKmOnqyYIoPjdLDfiTNCQFP1N8JTQKlRoFnJgMCAzXrjcX/vHYC/llH8zBOgIV0+pCfCAwRHreFe1wyCmJkCKGSm79iOiCmJjCd5k0JzuzK86RZKTvH5ZPr42L1Iqsjh/bRASohB52iKrpCddRAFD2iZ/SK3qwn68V6tz6mowtWltlDf2B9/wBJ3KOs</latexit>" }, { "heading": "3 LEARNING THE N -BODY POTENTIAL", "text": "To demonstrate the effectiveness of the LRC-layer, we consider the problem of learning the energy and force associated with a model N -body potential in the context of molecular modeling. As mentioned in Section 1, the potential evaluation often invokes expensive ab-initio calculations that one would like to bypass for efficiency reasons.\nThe setup of this learning problem is as follows. First, we assume access to a black-box model potential, which consists of both short- and long-range interactions. However, internal parameters of the potential are inaccessible to the training architecture and algorithm. A set of training samples are generated by the model, where each sample consists of a configuration of the points {xi} along with the potential and force. Second, we set up a deep neural network that includes (among other components) the LRC-layer for addressing the long-range interaction. This network is trained with stochastic gradient type of algorithms using the collected dataset and the trained network can be used for predicting the potential and forces for new point cloud configurations. These two components are described in the following two subsections in detail." }, { "heading": "3.1 MODEL PROBLEM AND DATA GENERATION", "text": "Model We suppose that Ω = [0, L]d, and we denote the point cloud by x = {xi}Ni=1 ⊂ Ω ⊂ Rd, for d = 1, 2, or 3. We define the total energy, the local potential and the forces acting on particle j by\nU = ∑\n1≤i<j≤N\nψ(xi − xj), Uj(x) = ∑\ni 6=j\nψ(xi − x), and Fj = −∂xUj(x)|x=xj , (2)\nrespectively, where the interaction kernel ψ(r) is a smooth function, besides a possible singularity at the origin and decreases as ‖r‖ → ∞.\nSampling We define a snapshot as one configuration3 of particles, x` = {x[`]j }Nj=1, together with the global energy U [`] and the forces F [`], where ` is the index representing the number in the training/testing set. We sample the configuration of particles x` randomly, with the restriction that two particles can not be closer than a predetermined value δmin in order to avoid the singularity. After an admissible configuration is computed we generate the energy and forces following Appendix B. This process is repeated until obtaining Nsample snapshots.\n3For the sake of clarity, we suppose that the number of particles at each configuration is the same." 
}, { "heading": "3.2 ARCHITECTURE", "text": "Our network architecture consists of separate descriptors for the short- interactions and long-range interactions, respectively. To capture the short-range interaction, we compute a local convolution using for each point only its neighboring points within a ball of predetermined radius. For the long-range interactions, we compute an all-to-all convolution using the LRC-layer introduced in Section 2, whose output is distributed to each particle and then fed to a sequence of subsequent layers.\nShort-range descriptor For a given particle xi, and an interaction radius R, we define Ii, the interaction list of xi, as the indices j such that ‖xi−xj‖ < R, i.e., the indices of the particles that are inside a ball of radius R centered at xi. Thus for each particle xi we build the generalized coordinates si,j = xi − xj , and the short-range descriptor\nDisr = ∑\nj∈Ii\nfθ(si,j), (3)\nwhere fθ : Rd → Rmsr is a function represented by a neural network specified in Appendix C.1, where msr is the number of short-range features. By construction fθ(s) is smooth and it satisfies fθ(s) = 0 for ‖s‖ > R. Long-range descriptor We feed the LRC-layer with the raw point cloud represented by {xi}Ni=1 with weights {fi}Ni=1, which for simplicity can be assumed to be equal to one here, i.e., fi = 1 for i = 1, ..., N . The output of the layer is a two-dimensional tensor uk(xi) with i = 1, . . . , N and k = 1, . . . ,Kchnls. Then for each xi, its corresponding slice given by the vector [u1(xi), u2(xi), · · · , uKchnls(xi)], is fed to a function gθ : RKchnls → Rmlr , which is represented by a neural network with non-linear activation functions. Here θ is a generic set of trainable parameters and mlr is the number of long-range features. The descriptor for particle xi, which depends on all the other particles thanks to the LRC-layer, is defined by\nDilr = gθ(u1(xi), u2(xi), · · · , uKchnls(xi)) (4)\nShort-range network When only the short-range interaction is present, the short-range descriptor for each particle is fed particle-wise to a fitting network Fsr : Rmsr → R. In this case Fsr(Disr) only depends on particle xi and its neighbors. Finally, the contributions from each particle are accumulated so the short-range neural network (NN) energy and forces are given by\nUNNsr =\nN∑\ni=1\nFsr(Disr) and (F NNsr )j = −∂xjUNNsr (5)\nrespectively (see Fig. 2(left)). The derivatives are computed using Tensorflow (Abadi et al., 2015) directly. This network as shown by (Zhang et al., 2018b) is rotation, translation, and permutation invariant (Zaheer et al., 2017).\nWe point out that this architecture can be understood as a non-linear local convolution: for each particle i one applies the same function fθ to each of its neighbors. The result is then pooled into the descriptor Disr, then processed locally by Fsr (akin to a non-linear convolution with a filter of width one), and finally pooled globally into UNNsr .\nFull-range network When both the short-range and long-range interactions are present, the long range descriptor and the local descriptor are combined and fed particle-wise to a fitting network F : Rmsr+mlr → R to produce the overall neural network (NN) energy and forces\nUNN =\nN∑\ni=1\nF(Disr,Dilr), and (F NN)j = −∂xjUNN (6)\nrespectively (see Fig. 2(right)). Following Section 2, the long-range descriptor is translation invariant by design and can be easily made rotation invariant. 
Furthermore, it is well known (Zaheer et al., 2017) that this construction is permutation invariant. Further details on the implementation of the network can be found in Appendix C.3. From the structures shown in Fig. 2,4 it is clear that we can recover the first architecture from the second by zeroing some entries in the fitting network and removing the LRC-layer.\n4We provide more detailed schematics in Fig. 6 and Fig. 7 in Appendix C.1.\nFinally, let us comment on the inference complexity of the proposed network, where for simplicity we assume that $O(K_{chnls}) = O(m_{sr}) = O(m_{lr}) = O(1)$, and that the depth of the neural networks is $O(1)$. The cost for computing $U^{NN}_{sr}$ is $O(N)$, provided that each particle has a bounded number of neighbors. The complexity for computing the forces also scales linearly in $N$, albeit with higher constants. The complexity of computing both $U^{NN}$ and the associated forces5 is $O(N + N_{FFT}\log N_{FFT})$." }, { "heading": "4 NUMERICAL RESULTS", "text": "The loss function is the mean squared error of the forces, $\frac{1}{N_{sample}} \sum_{\ell=1}^{N_{sample}} \sum_{i=1}^N \|F^{NN}_\theta(x^{[\ell]}_i) - F^{[\ell]}_i\|^2$, where the index $i$ runs over the points of each snapshot and $\ell$ runs over the snapshots. We also generate 100 snapshots of data to test the performance of the network. This particular loss could leave the potential energy shifted by a global constant, which can be subsequently fixed by including the error of the energy in the loss (Zhang et al., 2018b). For the testing stage, we use the relative $\ell_2$ error of the forces as the metric, defined as $\epsilon_{rel} := \sqrt{\sum_{\ell,i} \|F^{[\ell]}_i - F^{NN}_\theta(x^{[\ell]}_i)\|^2 / \sum_{\ell,i} \|F^{[\ell]}_i\|^2}$. The standard training parameters are listed in Appendix C.4.\nThe experiments shown in the sequel are designed to provide a fair comparison with state-of-the-art methods for localized interactions. They showcase that, by adding a single LRC-layer, one can outperform these methods significantly.\nThe kernels $\psi$ used in the experiments typically exhibit two interaction lengths: $\psi(\cdot) \equiv \alpha_1 \psi^{\mu_1}(\cdot) + \alpha_2 \psi^{\mu_2}(\cdot)$, where each of $\psi^{\mu_1}$ and $\psi^{\mu_2}$ is either a simple exponential kernel or a screened-Coulomb kernel (also known as the Yukawa kernel). For each of $\psi^{\mu_1}$ and $\psi^{\mu_2}$, the superscripts denote the reciprocal of the interaction length, i.e., length scale $\sim \mu_1^{-1}$ or $\sim \mu_2^{-1}$. Without loss of generality, $\mu_1 > \mu_2$, so that $\mu_1$ corresponds to the short-range scale and $\mu_2$ to the long-range scale. We also assume that $0 \le \alpha_2 \le \alpha_1$ and $\alpha_1 + \alpha_2 = 1$, so that the effect of the long-range interaction can be smaller in magnitude compared to that of the short-range interaction. In the special case $\alpha_2 = 0$, the kernel exhibits only a single scale $\sim \mu_1^{-1}$. The precise definition of the kernel depends on the spatial dimension and boundary conditions, which are explained in Appendix B.\nFor a fixed set of kernel parameters $(\mu_1, \mu_2, \alpha_1, \alpha_2)$, we consider two types of data: large- and small-scale data, generated in the domains $\Omega_{lr}$ and $\Omega_{sr}$, respectively (details to be defined in each experiment).\nThe Fourier multiplier within the LRC-layer is parameterized as\n$\hat{\phi}_{\beta,\lambda}(k) = \frac{4\pi\beta}{|k|^2 + \lambda^2}$, (7)\nwhere $\beta$ and $\lambda$ are trainable parameters. This is a simple parameterization, and a more complex model can be used as well with minimal changes to the procedure.
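Evaluated on a discrete frequency grid, the multiplier of equation 7 with two channels reads as below (identifying k with the physical frequency 2*pi*m/L is our assumption about the paper's convention; the initial beta, lambda values are arbitrary):

import numpy as np

def screened_coulomb_multiplier(beta, lam, L, n_modes):
    # eq. (7) on modes m = -K, ..., K, with k = 2*pi*m/L
    K = n_modes // 2
    k = 2.0 * np.pi * np.arange(-K, K + 1) / L
    return 4.0 * np.pi * beta / (k**2 + lam**2)

# two channels -> four scalar trainable parameters (beta_1, lambda_1, beta_2, lambda_2)
phi_hat = np.stack([screened_coulomb_multiplier(1.0, 5.0, L=50.0, n_modes=501),
                    screened_coulomb_multiplier(1.0, 0.5, L=50.0, n_modes=501)])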
For all experiments shown below, two kernel channels are used, and as a result there are only four trainable parameters in the LRC-layer.\nThe numerical results aim to show two properties, namely: i) the LRC-layer is able to efficiently capture LRIs, and ii) the two-scale training strategy can reduce the amount of large-scale data significantly. To demonstrate the first property, we gradually increase the interaction length of the kernel. The accuracy of the short-range network with a fixed interaction radius is supposed to decrease rapidly, while using the LRC-layer improves the accuracy significantly. To show the second property, we generate data with two interaction lengths and train the full-range network using the one- and two-scale strategies. Finally, we also aim to demonstrate that the LRC-layer is competitive against a direct convolution in which the all-to-all computation is performed explicitly.\n5See Appendix C.2 and C.3 for further details.\nTable 1: Relative testing error for trained screened-Coulomb type 1D models with $\alpha_1 = 1$, $\alpha_2 = 0$, and varying $\mu_1$. Notice that $\mu_2$ can be arbitrary here given that $\alpha_2 = 0$.\n$\mu_1$: 0.5 | 1.0 | 2.0 | 5.0 | 10.0\nshort-range network: 0.05119 | 0.02919 | 0.00597 | 0.00079 | 0.00032\nfull-range network: 0.00828 | 0.00602 | 0.00336 | 0.00077 | 0.00054\nFigure 3: (left) Testing error of the trained 1D model with respect to the number of snapshots, using the one- and two-scale training strategies, for data generated with the screened-Coulomb potential and parameters $\mu_1 = 5.0$, $\mu_2 = 0.5$; (right) normalized wall time for the LRC-layer and the direct all-to-all computation.\n1D In the first set of experiments, the domain $\Omega = [0, 5]$, $N = 20$ and $N_{sample} = 1000$, where $N_{sample}$ is the number of snapshots and $N$ is the total number of points in each snapshot. For the kernel, we set $\alpha_2 = 0$ and vary $\mu_1$ to generate datasets at different interaction lengths. For each dataset we train both short-range and full-range networks using the one-scale data. The results are summarized in Table 1, where we can observe that as the characteristic interaction length increases, the accuracy of the short-range network decreases, while using the full-range network restores the accuracy. This experiment shows that local networks are often highly accurate when the interactions are localized, but the accuracy quickly deteriorates as the interaction length increases (i.e., as $\mu_1$ decreases).\nFor the second set of experiments we used two sets of kernel parameters: one heavily biased towards a localized interaction length, and another in which both interaction lengths are equally weighted. For each set of kernel parameters, we generate 10,000 small-scale snapshots using $\Omega_{sr} = [0, 5]$ and $N = 20$, and a large number of large-scale snapshots using $\Omega_{lr} = [0, 50]$ and $N = 200$ particles. The interaction radius $R = 1.5$, $\delta_{min} = 0.05$, and $N_{FFT} = 501$. We train the network with the one- and two-scale training strategies described in the prequel. Fig. 3 (left) depicts the advantage of using the two-scale training strategy: we obtain roughly the same accuracy at a fraction of the number of large-scale training samples. We observe that when the number of large-scale training samples is sufficiently large, the resulting test accuracy is independent of the training strategy. We also observe that the training dynamics are stable with respect to different random seeds.\nWe compare the LRC-layer with a direct all-to-all computation. We benchmark the wall time of both layers with an increasing number of particles.
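The timing comparison behind Fig. 3 (right) can be reproduced along the following lines, assuming the lrc_1d and phi_hat sketches given above (the two kernels are not numerically identical since the direct one is not periodized, so only wall times, not values, are compared):

import time
import numpy as np

def direct_conv(x, f, mu=5.0):
    # O(N^2) all-to-all reference: u_i = sum_j exp(-mu |x_i - x_j|) f_j
    return np.array([np.sum(np.exp(-mu * np.abs(xi - x)) * f) for xi in x])

rng = np.random.default_rng(0)
for N in (200, 1000, 5000):
    x, f = rng.uniform(0.0, 50.0, N), np.ones(N)
    t0 = time.perf_counter(); direct_conv(x, f);              t1 = time.perf_counter()
    t2 = time.perf_counter(); lrc_1d(x, f, phi_hat[0], 50.0); t3 = time.perf_counter()
    print(N, t1 - t0, t3 - t2)   # direct grows ~quadratically, the LRC-layer ~linearly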
To account for implementation effects, we normalize the wall times in Fig. 3 (right), and the results corroborate the complexity claims made in Section 2.\n2D We perform the same experiments as in the one-dimensional case. We fix $\Omega = [0, 15]^2$, $N = 450$ and $N_{sample} = 10000$. The results are summarized in Table 2, which shows that as $\mu_1$ decreases, the full-range network increasingly outperforms the short-range one.\nFor the second set of experiments, $R = 1.5$, $\delta_{min} = 0.05$, and $N_{FFT} = 31^2$. For the small-scale data, $\Omega_{sr} = [0, 3]^2$, $N = 18$, and $N_{sample} = 10,000$. For the large-scale data, $\Omega_{lr} = [0, 15]^2$, $N = 450$. Similarly to the 1D case, we train the networks with both strategies using different amounts of large-scale data. The results summarized in Fig. 4 show that the two-scale strategy efficiently captures the long-range interactions with only a small number of the long-range training samples.\nTable 2: Relative testing error for trained screened-Coulomb type 2D models with $\alpha_1 = 1$, $\alpha_2 = 0$, and varying $\mu_1$. Again $\mu_2$ can be arbitrary given that $\alpha_2 = 0$.\n$\mu_1$: 1.0 | 2.0 | 5.0 | 10.0\nshort-range network: 0.07847 | 0.02332 | 0.00433 | 0.00242\nfull-range network: 0.00785 | 0.00526 | 0.00363 | 0.00181\nFigure 4: Testing error of the trained 2D model with respect to the number of snapshots, using the one- and two-scale training strategies with both screened-Coulomb and exponential potentials with $\mu_1 = 10$, $\mu_2 = 1$: (left) $\alpha_1 = 0.9$ and $\alpha_2 = 0.1$; (right) $\alpha_1 = 0.5$ and $\alpha_2 = 0.5$.\nAnalogously to the 1D case, we observe that for sufficiently many large-scale training samples the resulting test accuracy is identical regardless of the training strategy used. Also similar to Fig. 3 (left), we find that the lowest achievable test error is larger in Fig. 4 (right, with a larger $\alpha_2$) than in Fig. 4 (left, with a smaller $\alpha_2$). Nonetheless, we observe that the test error of the two-scale training strategy becomes less sensitive to the number of training samples when $\alpha_2$ becomes larger, i.e., when the LRI becomes more prominent.\n3D The domain $\Omega$ is $[0, 3]^3$ with 2 points in each of the 27 unit cells. The other parameters are the interaction radius $R = 1.0$, $\delta_{min} = 0.1$, and $N_{sample} = 1000$. The Fourier domain used is of size $N_{FFT} = 25^3$. The results in Table 3 demonstrate that the full-range network is capable of maintaining good accuracy for a wide range of characteristic interaction lengths." }, { "heading": "5 CONCLUSION", "text": "We have presented an efficient long-range convolutional (LRC) layer, which leverages the non-uniform fast Fourier transform (NUFFT) to reduce the cost from quadratic to nearly-linear with respect to the number of degrees of freedom. We have also introduced a two-scale training strategy to effectively reduce the number of large-scale samples. This can be particularly important when the generation of these large-scale samples dominates the computational cost. While this paper demonstrates the effectiveness of the LRC-layer for computing the energy and force associated with a model $N$-body potential, we expect the LRC-layer to become a useful component in designing neural networks for modeling real chemical and materials systems, where the LRI cannot be accurately captured using short-ranged models. We also expect that the LRC-layer can be a useful tool for a wide range of machine learning (such as regression and classification) tasks." }, { "heading": "A NOTATION", "text": "A table of notations is summarized in Table 4."
}, { "heading": "B DATA GENERATION", "text": "We provide further details about the data generation process and how the parameter µ dictates the characteristic interaction length.\nExponential kernel: Suppose Ω be the torus [0, L]d and that x = {xi}Ni=1 ⊂ Ω ⊂ Rd for d = 1, 2, or 3. The exponential kernel is defined as\nψµ(x− y) = e−µ‖x−y‖, (8) where ‖ · ‖ is the Euclidean norm over the torus. Following Section 3.1 we define the total energy and the potential as\nU =\nN∑\ni<j\ne−µ‖xi−xj‖ and Uj(x) = N∑\ni6=j\ne−µ‖xi−x‖, (9)\nrespectively. The forces are given by\nFj = −∂xjUj(xj) = − N∑\ni 6=j\nxi − xj ‖xi − xj‖ µe−µ‖xi−xj‖. (10)\nScreened-Coulomb kernel: In 3D, the screened-Coulomb potential with free space boundary condition is given by\nψµ(x− y) = 1 4π‖x− y‖e −µ‖x−y‖. (11)\nOver the torus [0, L]d, the kernel ψµ(x− y) is the Green’s function Gµ(x, y) defined via ∆Gµ(x, y)− µ2Gµ(x, y) = −δy(x), (12)\nwith the periodic boundary condition. In order to compute the screened-Coulomb potential numerically, a spectral method is used: in particular,\nψµ(x− y) = Gµ(x, y) = F−1 ( eik·y ‖k‖2 + µ2χ (k) ) , (13)\nwhere F−1 stands for the inverse Fourier transform and χ (k) is a smoothing factor, usually Gaussian, to numerically avoid the Gibbs phenomenon. Similar to the exponential case, the parameter µ controls the localization of the potential. In addition, the derivatives are taken numerically in the Fourier domain.\nVisualization: To visualize the relation between µ and the characteristic interaction length in 1D, consider a given particle, e.g., x100 and compute the force contribution from the other particles. Fig. 5 shows that force contribution is extremely small outside a small interaction region for µ = 5.0 while the interaction region for µ = 0.5 is much larger." }, { "heading": "C DETAILS OF ARCHITECTURE AND TRAINING", "text": "C.1 SHORT-RANGE DESCRIPTOR\nHere we specify the structure of Di introduced in Section 3.2. For a given particle xi, and an interaction radiusR, define the interaction list Ii of xi as the set of indices j such that ‖xi−xj‖ < R, where ‖ · ‖ stands for the distance over the torus [0, L]d. To simplify the discussion, we assume that there exists a maximal number of neighbors NmaxNeigh for each xi. We stack the neighbors in a tensor whose dimensions are constant across different particles. This value is chosen to be sufficiently\nlarge to cover the number of elements in the interaction list. If the cardinality of Ii is less than NmaxNeigh, we pad the tensor with dummy values.\nIn the 1D case the generalized coordinates are defined as\nsi,j = ‖xi − xj‖, and ri,j = 1\n‖xi − xj‖ (14)\nfor j ∈ Ii. We introduce two fully-connected neural networks fθ1 , fθ2 : R+ → Rmsr/2, where each consists of five layers with the number of units doubling at each layer and ranging from 2 to 32. The activation function after each layer is tanh and the initialization follows Glorot normal distribution.\nFor particle xi the short-range descriptor is defined as the concatenation of\nDi1,sr = ∑\nj∈Ii\nfθ1(ŝi,j)r̂i,j and Di2,sr = ∑\nj∈Ii\nfθ2(r̂i,j)r̂i,j , (15)\nwhere r̂i,j , ŝi,j are the normalized copies of ri,j and si,j with mean zero and standard deviation equals to one. The mean and standard deviation are estimated by using a small number of snapshots. We multiply the network’s output fθ by r̂i,j (which is zero if j is a dummy particle). This procedure enforces a zero output for particles not in the interaction list. 
The construction satisfies the design requirement mentioned in Section 3.2.\nIn the short-range network, one concatenates the two descriptor above and feeds them particle-wise to the short-range fitting network. The fitting network Fsr : Rmsr → R is a residual neural network (ResNet) with six layers, each with 32 units. The activation function and initialization strategy are the same as the ones for the short-range descriptors. Fig. 6 shows the detailed architecture of the short-range network.\nUNNsr =\nN∑\ni=1\nF(Disr) = N∑\ni=1\nF(Di1,sr,Di2,sr) (16)\nIn 2D and 3D, there is a slight difference of generalized coordinates: we compute\nsi,j = xi − xj ‖xi − xj‖\nand ri,j = 1\n‖xi − xj‖ , (17)\nwhere si,j is a vector now. The local descriptors are defined in the following forms:\nDi1,sr = ∑\nj∈Ii\nfθ1(si,j)r̂i,j and Di2,sr = ∑\nj∈Ii\nfθ2(r̂i,j)r̂i,j (18)\nC.2 NUFFT\nIn this section we provide further details for the NUFFT implementation. Suppose that the input of the NUFFT is given by {xi}Ni=1 ⊂ Rd, where each point has a given associated weight fi. The first\nstep is to construct the weighted train of Dirac deltas as\nf(x) =\nN∑\nj=1\nfjδ (x− xj) . (19)\nWe point out that in some of the experiments fj simply equals to 1. One then defines a periodic Gaussian convolution kernel\ngτ (x) = ∑\n`∈Zd e−‖x−`L‖ 2/4τ , (20)\nwhere L is the length of the interval and τ determines the size of mollification. In practice a good choice is τ = 12( L2πLFFT )\n2 (Dutt & Rokhlin, 1993), where LFFT is the number of points in each dimension and NFFT = LdFFT. We define\nfτ (x) = f ∗ gτ (x) = ∫\n[0,L]d f(y)gτ (x− y)dy =\nN∑\nj=1\nfjgτ (x− xj). (21)\nWith the Fourier transform defined as\nFτ (k) = 1\nLd\n∫\n[0,L]d fτ (x)e\n−i2πk·x/Ldx (22)\nfor k ∈ Zd, we compute its discrete counterpart\nFτ (k) ≈ 1\nNFFT\n∑\nm∈[0,LFFT−1]d fτ (Lm/LFFT) e\n−i2πk·m/LFFT (23)\n≈ 1 NFFT\n∑\nm∈[0,LFFT−1]d\nN∑\nj=1\nfjgτ (Lm/LFFT − xj) e−i2πk·m/LFFT (24)\nThis operation can be done in O(NFFT log(NFFT)) steps, independently of the number of inputs. Once this is computed, one can compute the Fourier transform of f at each frequency point by\nF (k) = (π τ )d/2 e‖k‖ 2τFτ (k) (25)\nOnce the Fourier transform of the Dirac delta train is ready, we multiply it by the Fourier multiplier φ̂(k), which is the Fourier transform of φ:\nv̂(k) = φ̂(k)F (k) (26)\nIn the next sage, one needs to compute the inverse transform, and evaluate into the target points {xi}. First we deconvolve the signal\nv̂−τ (k) = (π τ )d/2 e‖k‖ 2τ v̂(k) (27)\nand compute the inverse Fourier transform\nu−τ (x) = ∑\nk∈[0,NFFT−1]d v̂−τ (k)e\nik·x. (28)\nNext, we interpolate to the point cloud\nu (xj) = u−τ ∗ gτ (xj) = 1\nLd\n∫\n[0,L]d u−τ (x)gτ (xj − x) dx (29)\n≈ 1 NFFT\n∑\nm∈[0,LFFT−1]d u−τ (Lm/LFFT) gτ (xj − Lm/LFFT) (30)\nEven though in the current implementation all the parameters of the NUFFT are fixed, they can in principle be trained along with the rest of the networks. This training, if done naively increases significantly the computational cost. How to perform this operation efficiently is a direction of future research.\nDerivatives For the computation of the forces in equation 5 one needs to compute the derivatives of the total energy UNN with respect to the inputs, in nearly-linear time. The main obstacle is how to compute the derivatives of the LRC-layer with respect to the point-cloud efficiently. To simplify the notation, we only discuss the case that d = 1, but the argument can be seamlessly extended to the case when d > 1. 
Recall that $u_i = \sum_{j=1}^N \phi_\theta(x_i - x_j) f_j$; then the Jacobian of the vector $u$ with respect to the inputs is given by\n$(\nabla u)_{i,j} := \frac{\partial u_i}{\partial x_j} = \begin{cases} -f_j \phi'_\theta(x_i - x_j), & \text{if } j \neq i,\\ \sum_{k\neq i} f_k \phi'_\theta(x_i - x_k), & \text{if } j = i. \end{cases}$ (31)\nAs will be explained in the sequel, for the computation of the forces in equation 5 one needs to compute the application of the Jacobian of $u$ to a vector. For a fixed vector $v \in \mathbb{R}^N$, the product $(\nabla u)\cdot v$ can be written component-wise as\n$((\nabla u)\cdot v)_i = -\sum_{j\neq i} v_j f_j \phi'_\theta(x_i - x_j) + v_i \sum_{j\neq i} f_j \phi'_\theta(x_i - x_j) = -\sum_{j=1}^N v_j f_j \phi'_\theta(x_i - x_j) + v_i \sum_{j=1}^N f_j \phi'_\theta(x_i - x_j)$,\nwhere we have added $\pm v_i f_i \phi'(0)$ in the last equation and then distributed it within both sums. Let us define the following two long-range convolutions\n$w_i = -\sum_{j=1}^N v_j f_j \phi'_\theta(x_i - x_j)$, and $p_i = \sum_{j=1}^N f_j \phi'_\theta(x_i - x_j)$, (32)\neach of which can be performed in $O(N + N_{FFT}\log N_{FFT})$ steps using the NUFFT algorithm combined with the convolution theorem. In this case the derivative of $\phi$ can be computed numerically in the Fourier domain to very high accuracy. Now one can leverage the expression above to rewrite $(\nabla u)\cdot v$ as\n$((\nabla u)\cdot v)_i = w_i + v_i p_i$, (33)\nwhich can then be computed in nearly-linear time. The same is also true for $v\cdot(\nabla u)$.\nC.3 LONG-RANGE DESCRIPTOR\nAs mentioned before, the output of the LRC-layer is given by $\{u(x_i)\}_{i=1}^N$. For each particle we feed the output $u(x_i)$ to the long-range descriptor network $g_\theta : \mathbb{R} \to \mathbb{R}^{m_{lr}}$, whose structure is the same
Finally, adding both contributions together results in an overall O(N +NFFT logNFFT) complexity for the forces. To summarize, both the computation of the energy and the forces can be performed in O(N) time.\nC.4 TRAINING\nWe use the Adam optimizer (Kingma & Ba, 2015) along with an exponential scheduler. The learning rate with the initial learning rate taken to be 0.001 and, for every 10 epochs, it decreases by a factor of 0.95. In order to balance the computational time and the accuracy, a multi-stage training is adopted, where at each stage we modify the batch-size and the number of epochs. In particular, four stages are used: we start using a batch size of 8 snapshots and train the network 200 epochs and then at each stage we double both the size of the batch size and the number of epochs. In the two-scale training strategy, the same training parameters defined above are used for each stage.\nC.5 DEPENDENCY ON NFFT\nWe measure the impact of NFFT on the approximation error, using a couple of examples in the oneand two-dimensional settings.\nFor the one-dimensional case, we test a screened-Coulomb type potential with parameters µ1 = 5.0, µ2 = 0.5, α1 = 0.5, α2 = 0.5, and Nsample = 1000. The domain Ω is [0, 50] and N = 200. We run the one-scale training procedure with varying NFFT (the number of Fourier multipliers), starting from NFFT = 63 and doubling them until NFFT = 501. Table 5 shows that the errors are relatively insensitive to the value of NFFT. The accuracy achieved by the architecture without the LRC-layer (denoted as None in Tables 5) is added in order to demonstrate that the architecture is indeed capturing the LRIs.\nFor the two-dimensional case, a screened-Coulomb type potential is tested with µ1 = 10.0, µ2 = 1.0, α1 = 0.9, α2 = 0.1. Here Ω = [0, 5]2, N = 50 and Nsample = 1000. Starting with NFFT = 212, we steadily increase its value and repeat the same training procedure. The results are summarized in Table 6 where one observes the same trend as in the one-dimensional case.\nIn addition, we recall that the Fourier multipliers are parametrized following\nφ̂β,λ(k) = 4πβ\n‖k‖2 + λ2 , (38)\nwhere β and λ are two trainable parameters with λ providing a measure of the decay in space. Therefore, NFFT only determines the number of Fourier modes and not the parameters of the ansatz. As long as the Fourier kernel is properly sampled, the method is able to compute the correct characteristic interaction length.\nOne can observe this phenomenon in the experiment above, in which we extract the terminal value after training of the parameters λ1 and λ2 that correspond to the two channels in the LRC-layer, as summarized in Table 7. We observe that the value of λ2 is very close to that of µ2, which is responsible for the LRIs even for small values of NFFT." } ]
2020
null
SP:23db11b6d3d07a1820fd393c16e447f1716a17ca
[ "The paper proposes to use self-training to tackle the fundamental problem of causal inference where only one potential outcome is seen. The proposed self-training method is iterative: after training a model on the observational dataset, they run points with different actions (treatments) through the trained model and collect the predictions, which are the pseudo-labels. They then continue the training of the model, including the pseudo-labels, until convergence. The paper experiments with two versions of the method -- one with deterministic pseudo-labels (CST-AI) and another with soft pseudo-labels sampled from a probability distribution (CST-RI). It is assumed that there are no unobserved confounders." ]
Unlike traditional supervised learning, in many settings only partial feedback is available. We may only observe outcomes for the chosen actions, but not the counterfactual outcomes associated with other alternatives. Such settings encompass a wide variety of applications including pricing, online marketing and precision medicine. A key challenge is that observational data are influenced by historical policies deployed in the system, yielding a biased data distribution. We approach this task as a domain adaptation problem and propose a self-training algorithm which imputes outcomes with finite discrete values for finite unseen actions in the observational data to simulate a randomized trial. We offer a theoretical motivation for this approach by providing an upper bound on the generalization error defined on a randomized trial under the self-training objective. We empirically demonstrate the effectiveness of the proposed algorithms on both synthetic and real datasets.
[ { "affiliations": [], "name": "COUNTERFACTUAL SELF-TRAINING" } ]
[ { "authors": [ "Massih-Reza Amini", "Patrick Gallinari" ], "title": "Semi-supervised logistic regression", "venue": null, "year": 2002 }, { "authors": [ "Dimitris Bertsimas", "Nathan Kallus" ], "title": "The power and limits of predictive approaches to observational-data-driven optimization", "venue": "arXiv preprint arXiv:1605.02347,", "year": 2016 }, { "authors": [ "Matthew R Boutell", "Jiebo Luo", "Xipeng Shen", "Christopher M Brown" ], "title": "Learning multi-label scene classification", "venue": "Pattern recognition,", "year": 2004 }, { "authors": [ "Chih-Chung Chang", "Chih-Jen Lin" ], "title": "Libsvm: A library for support vector machines", "venue": "ACM transactions on intelligent systems and technology (TIST),", "year": 2011 }, { "authors": [ "Miroslav Dudı́k", "Dumitru Erhan", "John Langford", "Lihong Li" ], "title": "Doubly robust policy evaluation and optimization", "venue": "Statistical Science,", "year": 2014 }, { "authors": [ "André Elisseeff", "Jason Weston" ], "title": "A kernel method for multi-labelled classification", "venue": "In Advances in neural information processing systems,", "year": 2002 }, { "authors": [ "Adam N Elmachtoub", "Paul Grigas. Smart" ], "title": "predict, then optimize", "venue": "arXiv preprint arXiv:1710.08005,", "year": 2017 }, { "authors": [ "Geoffrey French", "Michal Mackiewicz", "Mark Fisher" ], "title": "Self-ensembling for visual domain adaptation", "venue": "arXiv preprint arXiv:1706.05208,", "year": 2017 }, { "authors": [ "Yves Grandvalet", "Yoshua Bengio" ], "title": "Semi-supervised learning by entropy minimization", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Evan Greensmith", "Peter L Bartlett", "Jonathan Baxter" ], "title": "Variance reduction techniques for gradient estimates in reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2004 }, { "authors": [ "Arthur Gretton", "Kenji Fukumizu", "Choon H Teo", "Le Song", "Bernhard Schölkopf", "Alex J Smola" ], "title": "A kernel statistical test of independence", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Ligong Han", "Yang Zou", "Ruijiang Gao", "Lezi Wang", "Dimitris Metaxas" ], "title": "Unsupervised domain adaptation via calibrating uncertainties", "venue": "In CVPR Workshops,", "year": 2019 }, { "authors": [ "Jennifer L Hill" ], "title": "Bayesian nonparametric modeling for causal inference", "venue": "Journal of Computational and Graphical Statistics,", "year": 2011 }, { "authors": [ "Guido W Imbens", "Jeffrey M Wooldridge" ], "title": "Recent developments in the econometrics of program evaluation", "venue": "Journal of economic literature,", "year": 2009 }, { "authors": [ "Thorsten Joachims", "Adith Swaminathan", "Maarten de Rijke" ], "title": "Deep learning with logged bandit feedback", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Fredrik Johansson", "Uri Shalit", "David Sontag" ], "title": "Learning representations for counterfactual inference", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Nathan Kallus" ], "title": "Classifying treatment responders under causal effect monotonicity", "venue": "arXiv preprint arXiv:1902.05482,", "year": 2019 }, { "authors": [ "Nathan Kallus", "Angela Zhou" ], "title": "Confounding-robust policy improvement", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ 
"Joseph DY Kang", "Joseph L Schafer" ], "title": "Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data", "venue": "Statistical science,", "year": 2007 }, { "authors": [ "A Gürhan Kök", "Marshall L Fisher" ], "title": "Demand estimation and assortment optimization under substitution: Methodology and application", "venue": "Operations Research,", "year": 2007 }, { "authors": [ "Sören R Künzel", "Jasjeet S Sekhon", "Peter J Bickel", "Bin Yu" ], "title": "Metalearners for estimating heterogeneous treatment effects using machine learning", "venue": "Proceedings of the national academy of sciences,", "year": 2019 }, { "authors": [ "Samuli Laine", "Timo Aila" ], "title": "Temporal ensembling for semi-supervised learning", "venue": "arXiv preprint arXiv:1610.02242,", "year": 2016 }, { "authors": [ "Damien Lefortier", "Adith Swaminathan", "Xiaotao Gu", "Thorsten Joachims", "Maarten de Rijke" ], "title": "Large-scale validation of counterfactual learning methods: A test-bed", "venue": "arXiv preprint arXiv:1612.00367,", "year": 2016 }, { "authors": [ "Lihong Li", "Wei Chu", "John Langford", "Robert E Schapire" ], "title": "A contextual-bandit approach to personalized news article recommendation", "venue": "In Proceedings of the 19th international conference on World wide web,", "year": 2010 }, { "authors": [ "Yuanqing Li", "Cuntai Guan", "Huiqi Li", "Zhengyang Chin" ], "title": "A self-training semi-supervised svm algorithm and its application in an eeg-based brain computer interface speller system", "venue": "Pattern Recognition Letters,", "year": 2008 }, { "authors": [ "Romain Lopez", "Chenchen Li", "Xiang Yan", "Junwu Xiong" ], "title": "Cost-effective incentive allocation via structured counterfactual inference", "venue": null, "year": 2020 }, { "authors": [ "Andreas Maurer", "Massimiliano Pontil" ], "title": "Empirical bernstein bounds and sample variance penalization", "venue": "arXiv preprint arXiv:0907.3740,", "year": 2009 }, { "authors": [ "Jeffrey I McGill", "Garrett J Van Ryzin" ], "title": "Revenue management: Research overview and prospects", "venue": "Transportation science,", "year": 1999 }, { "authors": [ "Kamal Nigam", "Andrew Kachites McCallum", "Sebastian Thrun", "Tom Mitchell" ], "title": "Text classification from labeled and unlabeled documents using em", "venue": "Machine learning,", "year": 2000 }, { "authors": [ "Judea Pearl" ], "title": "Detecting latent heterogeneity", "venue": "Sociological Methods & Research,", "year": 2017 }, { "authors": [ "Judea Pearl" ], "title": "Models, reasoning and inference", "venue": null, "year": 2000 }, { "authors": [ "Kenneth Rose", "Eitan Gurewitz", "Geoffrey Fox" ], "title": "A deterministic annealing approach to clustering", "venue": "Pattern Recognition Letters,", "year": 1990 }, { "authors": [ "Paul R Rosenbaum" ], "title": "Model-based direct adjustment", "venue": "Journal of the American Statistical Association,", "year": 1987 }, { "authors": [ "Paul R Rosenbaum", "Donald B Rubin" ], "title": "The central role of the propensity score in observational studies for causal effects", "venue": null, "year": 1983 }, { "authors": [ "Donald B Rubin" ], "title": "Causal inference using potential outcomes: Design, modeling, decisions", "venue": "Journal of the American Statistical Association,", "year": 2005 }, { "authors": [ "Sebastian Ruder" ], "title": "An overview of gradient descent optimization algorithms", "venue": "arXiv preprint arXiv:1609.04747,", "year": 2016 }, { 
"authors": [ "Noveen Sachdeva", "Yi Su", "Thorsten Joachims" ], "title": "Off-policy bandits with deficient support", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2020 }, { "authors": [ "Uri Shalit", "Fredrik D Johansson", "David Sontag" ], "title": "Estimating individual treatment effect: generalization bounds and algorithms", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Ilya Shpitser", "Judea Pearl" ], "title": "Identification of conditional interventional distributions", "venue": "arXiv preprint arXiv:1206.6876,", "year": 2012 }, { "authors": [ "Adith Swaminathan", "Thorsten Joachims" ], "title": "Counterfactual risk minimization: Learning from logged bandit feedback", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Adith Swaminathan", "Thorsten Joachims" ], "title": "The self-normalized estimator for counterfactual learning", "venue": "In advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Jafar Tanha", "Maarten van Someren", "Hamideh Afsarmanesh" ], "title": "Semi-supervised self-training for decision tree classifiers", "venue": "International Journal of Machine Learning and Cybernetics,", "year": 2017 }, { "authors": [ "Terence J Wales", "Alan Donald Woodland" ], "title": "Estimation of consumer demand systems with binding non-negativity constraints", "venue": "Journal of Econometrics,", "year": 1983 }, { "authors": [ "Lequn Wang", "Yiwei Bai", "A. Bhalla", "T. Joachims" ], "title": "Batch learning from bandit feedback through bias corrected reward imputation", "venue": null, "year": 2019 }, { "authors": [ "Yanbo Xu", "Yanxun Xu", "Suchi Saria" ], "title": "A bayesian nonparametric approach for estimating individualized treatment-response curves", "venue": "In Machine Learning for Healthcare Conference,", "year": 2016 }, { "authors": [ "Jinsung Yoon", "James Jordon", "Mihaela van der Schaar" ], "title": "Ganite: Estimation of individualized treatment effects using generative adversarial nets", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Alan L Yuille", "Paul Stolorz", "Joachim Utans" ], "title": "Statistical physics, mixtures of distributions, and the em algorithm", "venue": "Neural Computation,", "year": 1994 }, { "authors": [ "Houssam Zenati", "Alberto Bietti", "Matthieu Martin", "Eustache Diemert", "Julien Mairal" ], "title": "Counterfactual learning of continuous stochastic policies", "venue": null, "year": 2020 }, { "authors": [ "Yang Zou", "Zhiding Yu", "BVK Kumar", "Jinsong Wang" ], "title": "Domain adaptation for semantic segmentation via class-balanced self-training", "venue": "arXiv preprint arXiv:1810.07911,", "year": 2018 }, { "authors": [ "Yang Zou", "Zhiding Yu", "Xiaofeng Liu", "BVK Kumar", "Jinsong Wang" ], "title": "Confidence regularized self-training", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 } ]
[ { "heading": null, "text": "Unlike traditional supervised learning, in many settings only partial feedback is available. We may only observe outcomes for the chosen actions, but not the counterfactual outcomes associated with other alternatives. Such settings encompass a wide variety of applications including pricing, online marketing and precision medicine. A key challenge is that observational data are influenced by historical policies deployed in the system, yielding a biased data distribution. We approach this task as a domain adaptation problem and propose a self-training algorithm which imputes outcomes with finite discrete values for finite unseen actions in the observational data to simulate a randomized trial. We offer a theoretical motivation for this approach by providing an upper bound on the generalization error defined on a randomized trial under the self-training objective. We empirically demonstrate the effectiveness of the proposed algorithms on both synthetic and real datasets." }, { "heading": "1 INTRODUCTION", "text": "Counterfactual inference (Pearl et al., 2000) attempts to address a question central to many applications - What would be the outcome had an alternative action was chosen? It may be selecting relevant ads to engage with users in online marketing (Li et al., 2010), determining prices that maximize profit in revenue management (Bertsimas & Kallus, 2016), or designing the most effective personalized treatment for a patient in precision medicine (Xu et al., 2016). With observational data, we have access to past actions, their outcomes, and possibly some context, but in many cases not the complete knowledge of the historical policy which gave rise to the action (Shalit et al., 2017). Consider a pricing setting in the form targeted promotion. We might record information of a customer (context), promotion offered (action) and whether an item was purchased (outcome), but we do not know why a particular promotion was selected.\nUnlike traditional supervised learning, we only observe feedback for the chosen action in observational data, but not the outcomes associated with other alternatives (i.e., in the pricing example, we do not observe what would occur if a different promotion was offered). In contrast to the gold standard of a randomized controlled trial, observational data are influenced by historical policy deployed in the system which may over or under represent certain actions, yielding a biased data distribution. A naive but widely used approach is to learn a machine learning algorithm directly from observational data and use it for prediction. This is often referred to as direct method (DM) (Dudı́k et al., 2014). Failure to account for the bias introduced by historical policy often results in an algorithm which has high accuracy on the data it was trained on, but performs considerably worse under a different policy. For example in the pricing setting, if historically most customers who received high promotion offers bear a certain profile, then a model based on direct method may fail to produce reliable predictions on these customers when low offers are given.\nTo overcome the limitations of direct method, Shalit et al. (2017); Johansson et al. (2016); Lopez et al. (2020) cast counterfactual learning as a domain adaptation problem, where the source domain is observational data and the target domain is a randomized trial whose assignment of actions follows a uniform distribution for a given context. 
The key idea is to map contextual features to an embedding space and jointly learn a representation that encourages similarity between these two domains, leading to better counterfactual inference. The embedding is generally learned by a neural network, and the estimation of the domain gap is usually slow to compute." }, { "heading": "Update 𝑓 on target data", "text": "In this paper, while we also view counterfactual inference as a domain adaptation problem between observational data and an ideal randomized trial, we take a different approach - instead of estimating the domain gap between the two distributions via an embedding, we explicitly simulate a randomized trial by imputing pseudo-labels for the unobserved actions in the observational data. The optimization process is done by iteratively updating the pseudo-labels and a model that is trained on both the factual and the counterfactual data, as illustrated in Figure 1. As this method works in a self-supervised fashion (Zou et al., 2018; Amini & Gallinari, 2002), we refer to our proposed framework as Counterfactual Self-Training (CST).\nThe contribution of our paper is as follows. First, we propose a novel self-training algorithm for counterfactual inference. To the best of our knowledge, this is the first application of a self-training algorithm to learning from observational data. Moreover, in contrast to the existing methods from domain adaptation for counterfactual inference, CST is flexible and can work with a wide range of machine learning algorithms, not limited to neural networks. Second, we offer a theoretical motivation for our approach by providing an upper bound on the generalization error defined on a randomized trial under the self-training objective. In other words, we show that the counterfactual self-training algorithm helps minimize the risk on the target domain. Our theoretical bounds suggest generating pseudo-labels with random imputation, which is a methodological departure from traditional self-training algorithms that impute hard labels. Third, we present comprehensive experiments on several synthetic datasets and three counterfactual learning datasets converted from multi-label classification tasks to evaluate our method against state-of-the-art baselines. In all experiments, CST shows competitive or superior performance against all the baselines. Moreover, our algorithm is easy to optimize, with a much faster training time than the other baselines." }, { "heading": "2 RELATED WORK", "text": "Counterfactual policy optimization has received a lot of attention in the machine learning community in recent years (Swaminathan & Joachims, 2015a; Joachims et al., 2018; Shalit et al., 2017; Lopez et al., 2020; Kallus, 2019; Kallus & Zhou, 2018; Wang et al., 2019). Most of the proposed algorithms can be divided into two categories: counterfactual risk minimization (CRM) and direct method (DM). Both can be used together to construct doubly robust estimators (Dudík et al., 2014) to further improve efficiency. CRM, also known as off-policy learning or batch learning from bandit feedback, typically utilizes inverse propensity weighting (IPW) (Rosenbaum, 1987; Rosenbaum & Rubin, 1983) to account for the bias in the data. Swaminathan & Joachims (2015a) introduce the CRM principle with a variance regularization term derived from an empirical Bernstein bound (Maurer & Pontil, 2009) for finite samples. In order to reduce the variance of the IPW
In order to reduce the variance of the IPW estimator, Swaminathan & Joachims (2015b) proposes a self-normalized estimator, while BanditNet (Joachims et al., 2018) utilizes the baseline technique (Greensmith et al., 2004) in deep nets. As pointed out by Lefortier et al. (2016), CRM-based methods tend to struggle with medium to large action spaces in practice. Moreover, CRM-based methods generally require a known and stochastic logging policy, along with full support on the action space. When either one of these requirements is violated, Sachdeva et al. (2020); Kang et al. (2007) observe that the direct method often demonstrates more robust performance. When the logging policy is not available, the counterfactual learning problem is often referred to as learning from observational data, which is the setting we focus on. In addition to selecting optimal actions, the direct method can also be used to identify causal treatment effects (Künzel et al., 2019); CST can be viewed as an extension of the direct method.
Learning from observational data is also closely related to estimating Individualized Treatment Effects (ITE) (Shpitser & Pearl, 2012) or the conditional average treatment effect (CATE), which is defined as the difference of expected outcomes between two actions, with respect to a given context. The main challenge of identifying ITE is that, unlike an ideal randomized trial, observational data is biased and we do not have access to the counterfactuals. Hill (2011) uses a Bayesian nonparametric algorithm to address this issue. Yoon et al. (2018) proposes using generative adversarial nets to capture the uncertainty in the counterfactual distributions to facilitate ITE estimation. Johansson et al. (2016); Shalit et al. (2017) approach counterfactual inference with representation learning and domain adaptation. Their key idea is to learn a representation between observational data and a randomized trial that encourages better generalization on all possible actions. It is achieved by minimizing a weighted sum of the factual loss on the observational data (the loss for the direct method) plus an estimated domain gap measured by integral probability metrics. Lopez et al. (2020) further extends this framework to multiple treatments using the Hilbert-Schmidt Independence Criterion (HSIC) (Gretton et al., 2008) and achieves state-of-the-art performance. The HSIC proposed in Lopez et al. (2020) has a computation time of at least O(N²), making its training process slow. While the aforementioned methods and our approach can be viewed as extensions of the direct method, we tackle the domain adaptation problem differently by explicitly augmenting the observational data to create a simulated randomized trial via self-training. Different counterfactual estimation algorithms are classified as X-, T-, and S-learners in Künzel et al. (2019); for example, Hill (2011) is an instance of an S-learner. Our approach is similar to the X-learner, which uses pseudo-labels to create counterfactuals, but CST considers multiple rather than binary actions and is trained in an iterative fashion.
Self-training algorithms have been widely studied in semi-supervised learning and domain adaptation problems (Nigam et al., 2000; Amini & Gallinari, 2002; Grandvalet & Bengio, 2005; Zou et al., 2019; Han et al., 2019). Grandvalet & Bengio (2005) proposes to use entropy regularization for semi-supervised learning as a class-overlapping measure to utilize unlabeled data. Nigam et al. (2000); Amini & Gallinari (2002); Zou et al. 
(2019) formulate the pseudo-label imputation as a classification EM algorithm and show its convergence under proper initialization. Han et al. (2019) points out that pseudo-label imputation can be viewed as minimizing the min-entropy, a type of Rényi entropy $\frac{1}{1-\alpha}\log(\sum_{i=1}^{n} p_i^{\alpha})$ as α → ∞, while the Shannon entropy in Grandvalet & Bengio (2005) is the case α → 1. Self-training is also shown to be effective in semi-supervised learning for many other machine learning models besides neural networks (Tanha et al., 2017; Li et al., 2008). It is worth mentioning that, unlike traditional self-training where the target domain is given by the problem, we propose to construct a target domain by imputing pseudo-labels on all unseen actions to simulate a pseudo-randomized trial. Moreover, instead of the hard labels used in traditional self-training, we propose to use random imputation to create pseudo-labels, which has a theoretical motivation tailored for counterfactual inference and is shown to be more effective in our experimental results." }, { "heading": "3 PROBLEM STATEMENT", "text": "Following the notation in Lopez et al. (2020), we use X to represent an abstract space and P(x) to represent a probability distribution on X. Each sample in x = (x₁, · · · , xₙ) ∈ Xⁿ is drawn independently from P(x). P is the discrete action space from which a central agent can select for each sample, after which a discrete reward r with finitely many possible values is revealed to the agent. In precision medicine, X may represent a patient cohort, P refers to feasible treatments for a disease, and r can be the indicator of whether a patient survives after the treatment. Similarly, X, P, r can represent visitors, ads shown and whether the visitor clicks in online marketing.
We focus on an example of pricing to illustrate our method. We use x ∼ P(x) to denote a customer. Let P represent the finite price options a central agent can offer to customers. After offering price p ∈ P, the agent observes the response from the customer r ∼ P(r|x, p), i.e., either a 1 (buy) or a 0 (no-buy). As a direct method, the task is to learn a function f(x, p) by minimizing the loss E_{x∼P(x), p∼π₀(p|x)} L(f(x, p), r), where π₀(p|x) is a randomized assignment policy (Shalit et al., 2017; Lopez et al., 2020). This estimation task is often referred to as demand estimation (Wales & Woodland, 1983), which is critical for many downstream decisions such as inventory optimization and revenue management (Kök & Fisher, 2007; McGill & Van Ryzin, 1999). This is in contrast to CRM-based methods, which use the final reward as the objective to learn a policy π(p|x) that maximizes E_{x∼P(x), p∼π(p|x)} E[r|x, p] (Swaminathan & Joachims, 2015a). With observational data, the individualized treatment effect is not always identifiable. We use Rubin’s potential outcome framework and assume consistency and strong ignorability, which is a sufficient condition for identifying ITE (Imbens & Wooldridge, 2009; Pearl, 2017). Here we formally present the ignorability assumption (Rubin, 2005; Shalit et al., 2017): Assumption 3.1 (Ignorability). Let P be the action set, let x be the context (features), and let r(p) be the potential outcome for action p ∈ P given context x; then r(p) ⊥⊥ p | x, ∀p ∈ P.
In other words, we assume there are no unobserved confounders. This assumption generally cannot be verified purely from data and requires some domain knowledge."
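To make the closed-world estimation task above concrete, here is a minimal sketch of the direct method in PyTorch (the framework the paper reports using). The architecture and names (DemandNet, train_direct_method) are illustrative assumptions rather than the authors' released code; the sketch mirrors the paper's stated setup of a three-layer network with 128 hidden units trained with a binary cross-entropy loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DemandNet(nn.Module):
    # Hypothetical f(x, p): predicts a logit for P(r=1 | x, p)
    # from context x and a one-hot encoding of the discrete action p.
    def __init__(self, x_dim, n_actions, hidden=128):
        super().__init__()
        self.n_actions = n_actions
        self.net = nn.Sequential(
            nn.Linear(x_dim + n_actions, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, x, p):
        p_onehot = F.one_hot(p, self.n_actions).float()
        return self.net(torch.cat([x, p_onehot], dim=-1)).squeeze(-1)

def train_direct_method(model, x, p, r, epochs=200, lr=1e-3):
    # Plain empirical risk minimization on the logged (factual) data only:
    # this is the DM baseline, with no correction for the logging policy.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.binary_cross_entropy_with_logits(model(x, p), r.float())
        loss.backward()
        opt.step()
    return model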
}, { "heading": "4 ALGORITHM", "text": "In this section, we introduce Counterfactual Self-Training (CST) algorithm, which can be viewed as an extension of the direct method via domain adaptation. Unlike existing methods using representation learning, we propose a novel self-training style algorithm to account the bias inherent in the observational data." }, { "heading": "4.1 SELF-TRAINING", "text": "Self-training has recently been used in unsupervised domain adaptation (UDA) and semi-supervised learning (SSL) and achieved great success (Zou et al., 2019; Han et al., 2019; Zou et al., 2018; Amini & Gallinari, 2002; Nigam et al., 2000; Grandvalet & Bengio, 2005). The self-training algorithm works in an iterative fashion: First, after training a classifier f(x, p) on a source dataset, pseudolabels are created by the best guess of f . Next, the model is trained on a target dataset, and the trained model is used to generate new pseudo labels. This idea is illustrated in Figure 1.\nTo formulate the counterfactual learning problem as a domain adaptation problem, observational data is viewed as data sampled from a source distribution DS = P(x)π(p|x). The target domain is a randomized trial on the same feature distribution to ensure a uniformly good approximation on all actions. Our goal is to transfer observational data from the source domain to a simulated pseudorandomized trial via self-training. To accomplish this, we first train an initial classifier f0(x, p) on observational data, then impute pseudo-labels on all unseen actions from the observation data with r̂i,p ∼ f(xi, p). The model is then updated by training with the following objective:\nmin θ LST =\n1\nN |P| ( N∑ i=1\nl(fθ(xi, pi), ri)︸ ︷︷ ︸ Lsrc +\nN∑ i=1 ∑ p∈P\\pi l(fθ(xi, p), r̂i,p) )\n(1)\nThe first term Lsrc in Equation 1 corresponds to the loss used in direct method, defined over the factual data alone. Meanwhile, the second term refers to the loss defined over the imputed counterfactual data. In other words, in order to get a good model across all actions, we utilize the pseudopopulation induced from imputation which represents a simulated randomized trial. We iteratively train the model and impute pseudo-labels until it converges. The algorithm is stated in Algorithm 1.\nNote that a key difference between our CST algorithm and traditional self-training (ST) methods for unsupervised domain adaptation (such in Zou et al. (2018)): Pseudo-labels in traditional ST are\nAlgorithm 1 Counterfactual Self-Training 1: while NOT converged do . Main training loop 2: for each i ∈ {1 . . . N} do 3: for each p ∈ P \\ pi do 4: Impute pseudo-label r̂i,p ∼ fθ(r|xi, p). . Pseudo-label imputation 5: end for 6: end for 7: Update θ by minimizing LST defined in Equation 1. . Self-training 8: end while\ngenerated from hard imputation while ours are sampled from a probability distribution as illustrated in Algorithm 1 line 4. Not only this randomized imputation has a theoretical motivation presented in Section 4.2, it also demonstrates superior performance over hard imputation in our experiments in Section 5." }, { "heading": "4.2 THEORETICAL MOTIVATION", "text": "As our objective is to augment observational data to a randomized trial such that the learnt model is able to perform better on all feasible actions, we focus on bounding the generalization error defined on a randomized trial. 
We use D to represent the distribution of a true randomized trial, where the assignment policy is uniform over P given context, and D̂ is the distribution of pseudo-labels generated by the current model output f_θ(r|x, p). Define R_D(f) as the risk of a function f with respect to a loss function l(·, ·), i.e., R_D(f) = E_{x,p,r∼D}[l(f(x, p), r)], and R̂_D̂(f) as the empirical risk on D̂. Assume our classifier outputs a probability estimate f_θ(r|x, p) for a feature-action pair (x, p), and we use random imputation r̂ ∼ f_θ(r|x, p) to generate outcomes for the unseen actions. We have the following theorem on the generalization bound: Theorem 1. Assume f_θ(r|x, p) ≥ 1/(M₀+1), where M₀ > 1 is a constant and x, p are defined on compact, discrete sets, respectively. Let M = min{max_{x,p}(P(r|x,p)/f_θ(r|x,p) − 1), M₀}, let f* = argmin_{f∈F} R_D(f), let D̂ be the dataset generated by random imputation from the current model output f_θ, and let f̂ minimize the empirical risk on D̂. For any loss function l(·, ·), we have:

$$\mathcal{R}_D(\hat{f}) - \mathcal{R}_D(f^\star) \le C\Big(\sqrt{\tfrac{V}{n}} + \sqrt{\tfrac{\log(1/\delta)}{n}}\Big) + (M+1)\hat{\mathcal{R}}_{\hat{D}}(\hat{f}) - \mathcal{R}_D(f^\star) \quad (2)$$

V is the VC dimension of the hypothesis class F, and C is a universal constant. The proof is in Appendix A.1. By replacing M with M₀ and minimizing the right-hand side of Equation 2 over θ, we recover Equation 1, the objective optimized in our training procedure. The complete optimization is over both θ and r̂, and can be solved via classification EM (CEM) (Amini & Gallinari, 2002); traditional self-training is an instance of CEM (Zou et al., 2019). These methods use a hard label as the classification step to impute the pseudo-labels, but it is not clear how this relates to the risk that we are interested in. To establish Theorem 1, we require random imputation of labels based on the probability output of the classifier, so that this objective upper-bounds the risk under a true randomized trial. Therefore, we use random sampling to generate pseudo-labels in our algorithm, and it is shown to be more robust than hard labels in our experiments. We note that this bound is relatively loose when P is very different from f_θ; thus we only use it as a motivation for our proposed algorithm. Since in the source domain the data is generated by the true data distribution, it is possible to obtain a tighter upper bound, which we leave for future work.
Our assumption f_θ(r|x, p) ≥ 1/(M₀+1) is also proposed in Zou et al. (2019), in the form of entropy regularization, to prevent the model from converging too fast and becoming overconfident in the early stages of training. Since cross-validation is biased in counterfactual learning due to the logging policy (Lopez et al., 2020), we avoid introducing hyperparameters and do not use this regularization in our experiments. Note that in traditional semi-supervised learning, deterministic pseudo-labels are commonly obtained by the argmax operation r̂ = argmax_r f_θ(r|x, p), which we refer to as CST-AI; we refer to the variant with random imputation as CST-RI.
In Proposition 1, we show the convergence result for CST-AI. Theoretical analysis of the convergence of CST-RI is more challenging. However, we empirically observe that CST-RI converges without any additional techniques in all of our experiments. We show empirical loss curves and discuss the connection between CST-RI and entropy regularization (Grandvalet & Bengio, 2005) in Section A.3 of the Appendix. Proposition 1. CST with argmax imputation is convergent under certain conditions.
Proof. Please refer to Section A.2 of the Appendix."
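To connect Algorithm 1 to code, below is a minimal sketch of the CST loop with random imputation (CST-RI), reusing the hypothetical DemandNet and train_direct_method pieces sketched earlier. It illustrates the procedure rather than reproducing the authors' implementation; in particular, the convergence check is simplified to a fixed number of outer rounds.

import torch

@torch.no_grad()
def impute_pseudo_labels(model, x, p_obs):
    # For every (sample, action) pair whose action differs from the logged one,
    # draw r_hat ~ Bernoulli(f_theta(r=1 | x, p)): random imputation (CST-RI).
    xs, ps, rs = [], [], []
    for a in range(model.n_actions):
        mask = p_obs != a  # only impute the unseen actions
        n = int(mask.sum())
        if n > 0:
            pa = torch.full((n,), a, dtype=torch.long)
            prob = torch.sigmoid(model(x[mask], pa))
            rs.append(torch.bernoulli(prob))  # CST-AI would use (prob > 0.5).float()
            xs.append(x[mask])
            ps.append(pa)
    return torch.cat(xs), torch.cat(ps), torch.cat(rs)

def counterfactual_self_training(model, x, p, r, rounds=10):
    model = train_direct_method(model, x, p, r)  # initialize on factual data
    for _ in range(rounds):  # outer self-training loop (Algorithm 1)
        xc, pc, rc = impute_pseudo_labels(model, x, p)  # pseudo-randomized trial
        x_all, p_all = torch.cat([x, xc]), torch.cat([p, pc])
        r_all = torch.cat([r.float(), rc])
        # Retrain on factual + imputed counterfactual data, i.e., minimize L_ST (Eq. 1).
        model = train_direct_method(model, x_all, p_all, r_all)
    return model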
}, { "heading": "5 EXPERIMENTS", "text": "We construct synthetic datasets for a pricing example and utilize three real datasets to demonstrate the efficacy of our proposed algorithm. Implementationwise, we use a three layer neural network with 128 nodes as our model and binary entropy loss as the loss function. We avoid using early stopping and train each method until convergence to ensure a fair comparison. The following baselines are considered in our experiments:\n• Direct Method (DM): This baseline directly trains a model on observational data. • HSIC (Lopez et al., 2020): We use the last layer as an embedding and calculate HSIC\nbetween the embedding and the actions. The training objective is binary cross entropy loss + λ·HSIC, where λ is the hyperparameter which we choose from a grid search over [0.01, 0.1, 1, 10, 100].\n• BanditNet (Joachims et al., 2018): BanditNet is a CRM-based method developed for deep nets. For the baseline required in BanditNet, we normalize the reward as in Swaminathan & Joachims (2015a) and choose the hyperparameter using a grid search over [0, 0.2, 0.4, 0.6, 0.8] and cross validation. We fit an additional logging policy model on historical data for BanditNet.\n• Uniform DM (Wang et al., 2019): Uniform DM (UDM) also estimates the logging policy using historical data and use importance sampling to simulate a randomize trial.\nSince BanditNet is designed for reward maximization, evaluation of the accuracy (i.e., hamming loss) is not appropriate under our problem. In each experiment, we only evaluate BanditNet in the reward comparison. We also experiment with two versions of CST, CST-AI and CST-RI. Unlike CST, HSIC and BanditNet require a hyperparameter as an input to their algorithms. Following Joachims et al. (2018); Lopez et al. (2020), we use a 5-fold cross-validation and grid search to select the hyperparameter for all experiments. All experiments are conducted using one NVidia GTX 1080-Ti GPU with five repetitions. Mean and standard error are reported for each metric." }, { "heading": "5.1 SYNTHETIC DATASETS", "text": "In synthetic experiments, we use a pricing example similar to the experiment in Lopez et al. (2020). Let U(·, ·) be a uniform distribution. Assume customer features are a 50-dimensional vector X drawn from U(0, 1)50 and there are 10 price options from $1 to $10. The logging policy is set as π(p = i|x) = xi∑10\ni=1 xi . Five types of demand functions are simulated, and the complete data\ngeneration process is detailed in Appendix A.5.\nWe generate 1000 samples for each demand function and report hamming loss which relies on the hard labels generated by the algorithm in Table 1. In addition, as we are interested in probability estimation, we report the multi-label soft margin loss in Table 2. Lastly, as a pricing application, we also evaluate the revenue generated on the test set by solving the revenue maximization problem:\npi = argmax p\nP(r = 1|xi, p) · p (3)\nFor each dataset, the test set has 5000 samples from the corresponding demand distribution. The results are shown in Table 3.\nAmong all datasets, CST-RI has the best performance in terms of both hamming loss and soft margin loss. HSIC outperforms DM baseline by a significant margin and comes as a close second to CSTRI. In 4 out of 5 demand functions (with the exception of D1), CST-RI achieves a comparable\nor superior performance on reward as shown in Table 3. 
Hence, while CST-RI results in the best demand model in terms of the losses, it does not guarantee the highest revenue in all cases. This is because the downstream optimization task is independent of demand estimation (Elmachtoub & Grigas, 2017). Nevertheless, CST-RI significantly outperforms BanditNet, which is designed for reward maximization, due to the unknown logging policy (Kang et al., 2007). We also point out that CST-AI performs worse than DM, a naive baseline, demonstrating the importance of random imputation in our algorithm." }, { "heading": "5.2 MULTI-LABEL DATASETS", "text": "We use three multi-label datasets from the LIBSVM repository (Elisseeff & Weston, 2002; Boutell et al., 2004; Chang & Lin, 2011), which are used for semantic scene, text and gene classification. We convert the supervised learning datasets to bandit feedback by creating a logging policy using 5% of the data, following Swaminathan & Joachims (2015a); Lopez et al. (2020). More specifically, each feature x has a label y ∈ {0, 1}ᵖ where p is the number of labels. After the logging policy selects a label (action) i, a reward yᵢ is revealed as bandit feedback (x, i, yᵢ), i.e., for each data point, if the policy selects one of the correct labels of that data point, it gets a reward of 1, and 0 otherwise. By doing so, we have full knowledge of the counterfactual outcomes for evaluation. Data statistics are summarized in Section A.6 of the Appendix.
Hamming loss, multi-label soft margin loss and reward are reported in Tables 4, 5 and 6, respectively. CST-RI generally achieves comparable or superior performance against all baselines on all three datasets. Since we assume the logging policy is unknown, BanditNet performs poorly on datasets like Scene, which is consistent with Kang et al. (2007). HSIC has comparable performance with CST-RI on TMC and Yeast, but performs poorly on Scene. We suspect this is due to the bias introduced in cross-validation, which in turn results in a sub-optimal hyperparameter selection. UDM improves over DM effectively, but CST-RI still outperforms it significantly. Overall, CST-RI shows the most robust performance across all three metrics being studied." }, { "heading": "5.3 RUNNING TIME ANALYSIS", "text": "We compare the average running time of one repetition of each experiment under the same number of epochs. The results are summarized in Section A.4 of the Appendix. Unsurprisingly, DM is the fastest algorithm. While our method is almost twice as slow as DM, it is still relatively fast compared to the other baselines. BanditNet is relatively slow due to its cross-validation-based selection. Note that the time efficiency of HSIC is bottlenecked by its high computational complexity (Lopez et al., 2020): we observe HSIC is approximately 30 to 100 times slower than CST across all datasets. Since CST offers competitive performance against HSIC with a much faster running time, it is potentially more suitable for large-scale applications which require frequent model updates, such as a daily-updated pricing system. For example, HSIC may take days for model re-training, but CST can be updated day-to-day." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "In this paper, we proposed a novel counterfactual self-training algorithm for learning from observational data. Compared to existing approaches, our method is easy to compute and optimize. It also avoids hyperparameter selection through cross-validation, which is biased in nature for observational data. 
We provided a theoretical analysis showing that the self-training objective serves as an upper bound on the true risk of a randomized trial. However, our CST framework has several limitations. First, CST requires a finite discrete action set: in order to augment observational data, CST imputes an outcome for every unobserved action. For continuous actions, discretization or the joint kernel embedding proposed in Zenati et al. (2020) might be used as an extension of CST, which we leave for future work. Second, CST in this paper can only work with discrete outcomes. If the outcome is continuous, it is also possible to extend our framework to continuous-valued problems by: (1) discretizing continuous values into discrete categories; (2) defining the pseudo-labels as self-ensemble (French et al., 2017) predictions, e.g., dropout (Bayesian neural network) ensembles or temporal ensembling (Laine & Aila, 2016).
While this analysis is tailored for counterfactual learning, we hope it can shed light on a broader range of problems such as unsupervised domain adaptation and semi-supervised learning. It may also open doors for solving counterfactual learning with model-based extrapolation for the direct method. As shown in our pricing example, a good demand model may not necessarily lead to the highest revenue because of the downstream revenue maximization optimization (Elmachtoub & Grigas, 2017). A different formulation of the target domain may help address this problem, which we leave for future work. Moreover, we believe our counterfactual self-training framework can be adapted to yield many specific algorithms for tasks such as learning from observational data with structured rewards (Lopez et al., 2020; Kallus, 2019) and deficient historical logging policies (Sachdeva et al., 2020)." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 PROOF OF THEOREM 1", "text": "Theorem 1. Assume f_θ(r|x, p) ≥ 1/(M₀+1), where M₀ > 1 is a constant and x, p are defined on compact, discrete sets, respectively. Let M = min{max_{x,p}(P(r|x,p)/f_θ(r|x,p) − 1), M₀}, let f* = argmin_{f∈F} R_D(f), let D̂ be the dataset generated by random imputation from the current model output f_θ, and let f̂ minimize the empirical risk on D̂. For any loss function l(·, ·), we have:

$$\mathcal{R}_D(\hat{f}) - \mathcal{R}_D(f^\star) \le C\Big(\sqrt{\tfrac{V}{n}} + \sqrt{\tfrac{\log(1/\delta)}{n}}\Big) + (M+1)\hat{\mathcal{R}}_{\hat{D}}(\hat{f}) - \mathcal{R}_D(f^\star) \quad (2)$$" }, { "heading": "Proof.", "text": "
$$\begin{aligned}
\mathcal{R}_D(\hat{f}) - \mathcal{R}_D(f^\star)
&= \mathcal{R}_{\hat{D}}(\hat{f}) - \mathcal{R}_{\hat{D}}(f^\star) + \mathcal{R}_D(\hat{f}) - \mathcal{R}_{\hat{D}}(\hat{f}) - \big(\mathcal{R}_D(f^\star) - \mathcal{R}_{\hat{D}}(f^\star)\big) && (4)\\
&= \mathcal{R}_{\hat{D}}(\hat{f}) - \mathcal{R}_{\hat{D}}(f^\star) + \mathbb{E}_{\hat{D}}\Big(\frac{\mathcal{P}(r|x,p)}{f_\theta(r|x,p)} - 1\Big)\big(l(\hat{f}(x,p), r) - l(f^\star(x,p), r)\big) && (5)\\
&\le \mathcal{R}_{\hat{D}}(\hat{f}) - \mathcal{R}_{\hat{D}}(f^\star) + M\,\mathbb{E}_{\hat{D}}\, l(\hat{f}(x,p), r) + \mathbb{E}_{\hat{D}}\, l(f^\star(x,p), r) - \mathcal{R}_D(f^\star) && (6)\\
&= (M+1)\big(\mathcal{R}_{\hat{D}}(\hat{f}) - \hat{\mathcal{R}}_{\hat{D}}(\hat{f})\big) + (M+1)\hat{\mathcal{R}}_{\hat{D}}(\hat{f}) - \mathcal{R}_D(f^\star) && (7)\\
&\le C\Big(\sqrt{\tfrac{V}{n}} + \sqrt{\tfrac{\log(1/\delta)}{n}}\Big) + (M+1)\hat{\mathcal{R}}_{\hat{D}}(\hat{f}) - \mathcal{R}_D(f^\star) && (8)
\end{aligned}$$

Equation 4 comes from adding and subtracting the risks defined on D and D̂. Since D̂ is imputed with probability f_θ, we can use importance sampling to get Equation 5. We get Equation 7 by adding and subtracting the empirical risk. Equation 8 follows from the basic excess-risk bound. V is the VC dimension of the hypothesis class F, and C is a universal constant." }, { "heading": "A.2 PROOF OF PROPOSITION 1", "text": "Proposition 1. CST with argmax imputation is convergent under certain conditions.
Proof. Our CST objective is defined as

$$\min_{\theta, \hat{R}}\; C_1 = \frac{1}{N|\mathcal{P}|}\Big(\sum_{i=1}^{N} l(f_\theta(x_i, p_i), r_i) + \sum_{i=1}^{N}\sum_{p\in\mathcal{P}\setminus p_i} l(f_\theta(x_i, p), \hat{r}_{i,p})\Big) \quad (9)$$

where r_i is the observed factual data and r̂_{i,p} is imputed by the trained classifier f_θ. We show our proof using the binary cross-entropy loss, which we use in the paper; it generalizes easily to cross-entropy. 
We show the convergence of CST-AI defined in Section 5, which imputes pseudo-labels using the argmax operation. The objective is optimized via the following two steps:
1) Pseudo-Label Imputation: Fix θ and impute R̂ by solving:

$$\min_{\hat{R}} \sum_{i=1}^{N}\sum_{p\in\mathcal{P}\setminus p_i} -\big(\hat{r}_{i,p}\log f_\theta(x_i, p) + (1-\hat{r}_{i,p})\log(1-f_\theta(x_i, p))\big) \quad (10)$$
$$\text{s.t. } \hat{r}_{i,p} \in \Delta,\ \forall i, p$$

where Δ is the set of possible discrete outcome values.
2) Model Retraining: Fix R̂ and solve the following optimization using gradient descent, where l(·, ·) is the binary cross-entropy loss:

$$\min_{\theta} \sum_{i=1}^{N} l(f_\theta(x_i, p_i), r_i) + \sum_{i=1}^{N}\sum_{p\in\mathcal{P}\setminus p_i} l(f_\theta(x_i, p), \hat{r}_{i,p}) \quad (11)$$

For CST-AI, we have:
Step 1) is non-increasing: (10) has an optimal solution, given by the pseudo-labels imputed by the argmax operation, with the feasible set being all possible outcomes. As a result, (10) is non-increasing.
Step 2) is non-increasing: if one uses gradient descent to minimize the loss defined in Equation 11, the loss is guaranteed to decrease monotonically with a proper learning rate (Zou et al., 2019). For the mini-batch gradient descent commonly used in practice, the loss is not guaranteed to decrease monotonically, but it almost certainly converges to a lower value.
Since the loss in Equation 9 is lower bounded, CST-AI is convergent." }, { "heading": "A.3 EMPIRICAL CONVERGENCE ANALYSIS OF CST-RI", "text": "For CST-RI, since the convergence analysis is more challenging, we show empirically that CST-RI converges in all of our experiments without any additional techniques. We show the loss curves for all experiments on our synthetic and multi-label datasets in Figures 7 and 11, respectively. CST-RI is trained with gradient descent with momentum (Ruder, 2016). For the synthetic datasets, we set the learning rate to 1e-3 and momentum to 0.9. For the multi-label datasets, we set the learning rate to 1e-1 and momentum to 0.9.

[Figure 7: Loss curves for synthetic datasets. Figures 8-10: loss curves for TMC, Yeast and Scene.]

Next, we share some intuition on CST-RI and its connection with entropy regularization (Grandvalet & Bengio, 2005). Consider the second term in Equation 9 with stochastic imputation. Since r ∈ {0, 1}, taking the expectation over r̂ gives

$$\mathbb{E}_{\hat{r}}\sum_{i=1}^{N}\sum_{p\in\mathcal{P}\setminus p_i} -\big(\hat{r}_{i,p}\log f_\theta(x_i, p) + (1-\hat{r}_{i,p})\log(1-f_\theta(x_i, p))\big) \quad (13)$$
$$= \sum_{i=1}^{N}\sum_{p\in\mathcal{P}\setminus p_i} -\big(f_\theta(x_i, p)\log f_\theta(x_i, p) + (1-f_\theta(x_i, p))\log(1-f_\theta(x_i, p))\big) \quad (14)$$

which equals the entropy term defined on f_θ; thus our CST-RI framework can be viewed as a variant of entropy regularization in semi-supervised learning (Grandvalet & Bengio, 2005). Since we aim to simulate a randomized trial, the hyper-parameter in Grandvalet & Bengio (2005) is set to 1 in CST. Instead of taking the argmax imputation commonly used in classification EM (CEM) (Amini & Gallinari, 2002) to minimize the objective, we impute a randomly sampled label, which coincides with the CEM solution with the largest probability. This step is very similar to deterministic annealing EM (Grandvalet & Bengio, 2005; Yuille et al., 1994; Rose et al., 1990), where a pseudo-label is generated from the output probability with an annealing temperature instead of the CEM solution, aiming to find the global minimum more efficiently." }, { "heading": "A.4 EXPERIMENT RESULTS FOR RUNNING TIME ANALYSIS", "text": "" }, { "heading": "A.5 DATA GENERATION FOR SYNTHETIC DATASET", "text": "In the synthetic experiments, we use a pricing example similar to the experiment in Lopez et al. (2020). Let U(·, ·) be a uniform distribution. Assume customer features are a 50-dimensional vector X drawn from U(0, 1)⁵⁰ and there are 10 price options from $1 to $10. 
The logging policy is set as $\pi(p=i|x) = \frac{x_i}{\sum_{i=1}^{10} x_i}$. σ denotes the sigmoid function. We simulated five types of demand functions, with $h(x) = \sum_i a_i \exp\big(\sum_j b_j \lVert x_j - c_j\rVert\big)$, a, b, c ∼ U(0, 1)⁵⁰, and r ∈ {0, 1}:
• r ∼ σ(h(x) − 2x₀ · p)
• r ∼ σ(5 · (x₀ − 0.5) − 0.4 · p)
• r ∼ σ(h(x) − stepwise1(x₀) · p)
• r ∼ σ(h(x) − stepwise2(x₀, x₁) · p)
• r ∼ σ(h(x) − (x₀ + x₁) · p)
where the stepwise functions are defined as:

$$\text{stepwise1}(x) = \begin{cases} 0.7, & x \le 0.1\\ 0.5, & 0.1 < x \le 0.3\\ 0.3, & 0.3 < x \le 0.6\\ 0.1, & 0.6 < x \le 1 \end{cases} \quad (15)$$

$$\text{stepwise2}(x, y) = \begin{cases} 0.65, & x \le 0.1 \text{ and } y > 0.5\\ 0.45, & x \le 0.1 \text{ and } y \le 0.5\\ 0.55, & 0.1 < x \le 0.3 \text{ and } y > 0.5\\ 0.35, & 0.1 < x \le 0.3 \text{ and } y \le 0.5\\ 0.45, & 0.3 < x \le 0.6 \text{ and } y > 0.5\\ 0.25, & 0.3 < x \le 0.6 \text{ and } y \le 0.5\\ 0.35, & 0.6 < x \le 1 \text{ and } y > 0.5\\ 0.15, & 0.6 < x \le 1 \text{ and } y \le 0.5 \end{cases} \quad (16)$$" }, { "heading": "A.6 MULTI-LABEL DATASETS STATISTICS", "text": "" } ]
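For completeness, here is a minimal NumPy sketch of the synthetic pricing data generator described in Appendix A.5, under stated assumptions: the exact form of h(x) is partially garbled in the source, so the version below, h(x) = Σᵢ aᵢ exp(Σⱼ bⱼ|xⱼ − cⱼ|), is one plausible reading (the original may normalize or negate the exponent), and only the first demand function is implemented. The revenue rule of Equation 3 is included as a small helper.

import numpy as np

rng = np.random.default_rng(0)
N, D = 1000, 50
PRICES = np.arange(1, 11)  # $1..$10

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fixed random coefficients for h(x); its exact form is an assumption (see lead-in).
a, b, c = rng.uniform(size=D), rng.uniform(size=D), rng.uniform(size=D)

def h(x):
    return float(np.sum(a * np.exp(np.sum(b * np.abs(x - c)))))

def simulate(n):
    X = rng.uniform(size=(n, D))
    # Logging policy: pi(p = i | x) proportional to x_i over the first 10 features.
    probs = X[:, :10] / X[:, :10].sum(axis=1, keepdims=True)
    P = np.array([rng.choice(PRICES, p=pr) for pr in probs])
    # Demand function D1: r ~ Bernoulli(sigmoid(h(x) - 2 * x_0 * p)).
    R = np.array([rng.binomial(1, sigmoid(h(x) - 2 * x[0] * p)) for x, p in zip(X, P)])
    return X, P, R

def best_price(demand_prob_fn, x):
    # Revenue maximization of Equation 3: argmax_p P(r=1 | x, p) * p.
    revenue = np.array([demand_prob_fn(x, p) * p for p in PRICES])
    return PRICES[int(np.argmax(revenue))]

X, P, R = simulate(N)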
2020
null
SP:4b0b0b58ac822beb29097ed55dfe44128530d5ed
[ "of Paper: The main claim of the paper is that out of distribution (OOD) detection can be done by use of pre-training and appropriately deriving a feature space from SOTA activations via pooling, PCA based dimensionality reduction, L2 normalization. Classical methods such as GMMs, k-means etc. can then be used to estimate the probability density function of features for use in OOD detection. Several alternative schemes are compared against many OOD detection schemes. " ]
Machine-learned safety-critical systems need to be self-aware and reliably know their unknowns in the open world. This is often explored through the lens of anomaly/outlier detection or out-of-distribution modeling. One popular formulation is that of open-set classification, where an image classifier trained for 1-of-K classes should also recognize images belonging to a (K + 1) “other” class, not present in the training set. Recent work has shown that, somewhat surprisingly, most if not all existing open-world methods do not work well on high-dimensional open-world images (Shafaei et al., 2019). In this paper, we carry out an empirical exploration of open-set classification, and find that combining classic statistical methods with carefully computed features can dramatically outperform prior work. We extract features from off-the-shelf (OTS) state-of-the-art networks for the underlying K-way closed-world task. We leverage insights from the retrieval community for computing feature descriptors that are low-dimensional (via pooling and PCA) and normalized (via L2-normalization), enabling the modeling of training data densities via classic statistical tools such as kmeans and Gaussian Mixture Models (GMMs). Finally, we (re)introduce the task of open-set semantic segmentation, which requires classifying individual pixels into one of K known classes or an “other” class. In this setting, our feature-based statistical models noticeably outperform prior open-world methods.
[ { "affiliations": [], "name": "OPEN-SET RECOG" }, { "affiliations": [], "name": "NITION VIA" } ]
[ { "authors": [ "Abhijit Bendale", "Terrance E Boult" ], "title": "Towards open set deep networks", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Oren Boiman", "Eli Shechtman", "Michal Irani" ], "title": "In defense of nearest-neighbor based image classification", "venue": "In CVPR,", "year": 2008 }, { "authors": [ "Yue Cao", "Mingsheng Long", "Jianmin Wang", "Han Zhu", "Qingfu Wen" ], "title": "Deep quantization network for efficient image retrieval", "venue": "In AAAI,", "year": 2016 }, { "authors": [ "Varun Chandola", "Arindam Banerjee", "Vipin Kumar" ], "title": "Anomaly detection: A survey", "venue": "ACM computing surveys (CSUR),", "year": 2009 }, { "authors": [ "Guangyao Chen", "Limeng Qiao", "Yemin Shi", "Peixi Peng", "Jia Li", "Tiejun Huang", "Shiliang Pu", "Yonghong Tian" ], "title": "Learning open set network with discriminative reciprocal points", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Marius Cordts", "Mohamed Omran", "Sebastian Ramos", "Timo Rehfeld", "Markus Enzweiler", "Rodrigo Benenson", "Uwe Franke", "Stefan Roth", "Bernt Schiele" ], "title": "The cityscapes dataset for semantic urban scene understanding", "venue": null, "year": 2016 }, { "authors": [ "Jesse Davis", "Mark Goadrich" ], "title": "The relationship between precision-recall and roc curves", "venue": "In ICML,", "year": 2006 }, { "authors": [ "David Dehaene", "Oriel Frigo", "Sébastien Combrexelle", "Pierre Eline" ], "title": "Iterative energy-based projection on a normal data manifold for anomaly localization", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "Akshay Raj Dhamija", "Manuel Günther", "Terrance Boult" ], "title": "Reducing network agnostophobia", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Jeff Donahue", "Yangqing Jia", "Oriol Vinyals", "Judy Hoffman", "Ning Zhang", "Eric Tzeng", "Trevor Darrell" ], "title": "Decaf: A deep convolutional activation feature for generic visual recognition", "venue": "In ICML,", "year": 2014 }, { "authors": [ "Mark Everingham", "SM Ali Eslami", "Luc Van Gool", "Christopher KI Williams", "John Winn", "Andrew Zisserman" ], "title": "The pascal visual object classes challenge: A retrospective", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In ICML,", "year": 2016 }, { "authors": [ "ZongYuan Ge", "Sergey Demyanov", "Zetao Chen", "Rahil Garnavi" ], "title": "Generative openmax for multiclass open set classification", "venue": "In BMVC,", "year": 2017 }, { "authors": [ "Chuanxing Geng", "Sheng-jun Huang", "Songcan Chen" ], "title": "Recent advances in open set recognition: A survey", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2020 }, { "authors": [ "Yunchao Gong", "Liwei Wang", "Ruiqi Guo", "Svetlana Lazebnik" ], "title": "Multi-scale orderless pooling of deep convolutional activation features", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "Albert Gordo", "Jon Almazan", "Jerome Revaud", "Diane Larlus" ], "title": "End-to-end learning of deep visual representations for image retrieval", "venue": null, "year": 2017 }, { "authors": [ "Will Grathwohl", "Kuan-Chieh Wang", "Jörn-Henrik Jacobsen", "David Duvenaud", "Mohammad 
Norouzi", "Kevin Swersky" ], "title": "Your classifier is secretly an energy based model and you should treat it like one", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Dan Hendrycks", "Kimin Lee", "Mantas Mazeika" ], "title": "Using pre-training can improve model robustness and uncertainty", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Thomas Dietterich" ], "title": "Deep anomaly detection with outlier exposure", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "P.R.M. Júnior", "R.M. De Souza", "Rafael de O Werneck", "Bernardo V Stein", "Daniel V Pazinato", "Waldir R de Almeida", "Otávio AB Penatti", "R.S. Torres", "Anderson Rocha" ], "title": "Nearest neighbors distance ratio open-set classifier", "venue": "Machine Learning,", "year": 2017 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What uncertainties do we need in bayesian deep learning for computer vision", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Kimin Lee", "Kibok Lee", "Honglak Lee", "Jinwoo Shin" ], "title": "A simple unified framework for detecting out-of-distribution samples and adversarial attacks", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Shiyu Liang", "Yixuan Li", "Rayadurgam Srikant" ], "title": "Enhancing the reliability of out-of-distribution image detection in neural networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Ziwei Liu", "Zhongqi Miao", "Xiaohang Zhan", "Jiayun Wang", "Boqing Gong", "Stella X Yu" ], "title": "Large-scale long-tailed recognition in an open world", "venue": null, "year": 2019 }, { "authors": [ "Antonio Loquercio", "Mattia Segu", "Davide Scaramuzza" ], "title": "A general framework for uncertainty estimation in deep learning", "venue": "IEEE Robotics and Automation Letters,", "year": 2020 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": null, "year": 2018 }, { "authors": [ "Thomas Mensink", "Jakob Verbeek", "Florent Perronnin", "Gabriela Csurka" ], "title": "Metric learning for large scale image classification: Generalizing to new classes at near-zero cost", "venue": "In ECCV,", "year": 2012 }, { "authors": [ "Thomas Mensink", "Jakob Verbeek", "Florent Perronnin", "Gabriela Csurka" ], "title": "Distance-based image classification: Generalizing to new classes at near-zero cost", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "Kevin P Murphy" ], "title": "Machine learning: a probabilistic perspective", "venue": "MIT press,", "year": 2012 }, { "authors": [ "Kevin Musgrave", "Serge Belongie", "Ser-Nam Lim" ], "title": "A metric learning reality check", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Lawrence Neal", "Matthew Olson", "Xiaoli Fern", "Weng-Keen Wong", "Fuxin Li" ], "title": "Open set learning with counterfactual images", 
"venue": "In ECCV,", "year": 2018 }, { "authors": [ "Poojan Oza", "Vishal M. Patel" ], "title": "C2AE: class conditioned auto-encoder for open-set recognition", "venue": null, "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Stanislav Pidhorskyi", "Ranya Almohsen", "Gianfranco Doretto" ], "title": "Generative probabilistic novelty detection with adversarial autoencoders", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Walter J Scheirer", "Anderson Rocha", "Ross J Micheals", "Terrance E Boult" ], "title": "Meta-recognition: The theory and practice of recognition score analysis", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2011 }, { "authors": [ "Walter J Scheirer", "Anderson de Rezende Rocha", "Archana Sapkota", "Terrance E Boult" ], "title": "Toward open set recognition", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2012 }, { "authors": [ "Alireza Shafaei", "Mark Schmidt", "James J. Little" ], "title": "A less biased evaluation of out-of-distribution sample detectors", "venue": "In BMVC,", "year": 2019 }, { "authors": [ "Jacob Steinhardt", "Percy S Liang" ], "title": "Unsupervised risk estimation using only conditional independence structure", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Chen Sun", "Abhinav Shrivastava", "Saurabh Singh", "Abhinav Gupta" ], "title": "Revisiting unreasonable effectiveness of data in deep learning era", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Xin Sun", "Zhenning Yang", "Chi Zhang", "Keck-Voon Ling", "Guohao Peng" ], "title": "Conditional gaussian distribution learning for open set recognition", "venue": null, "year": 2020 }, { "authors": [ "Antonio Torralba", "Alexei A Efros" ], "title": "Unbiased look at dataset bias", "venue": "In CVPR, pp", "year": 2011 }, { "authors": [ "Matthew A Turk", "Alex P Pentland" ], "title": "Face recognition using eigenfaces", "venue": "In CVPR,", "year": 1991 }, { "authors": [ "Jingdong Wang", "Ke Sun", "Tianheng Cheng", "Borui Jiang", "Chaorui Deng", "Yang Zhao", "Dong Liu", "Yadong Mu", "Mingkui Tan", "Xinggang Wang", "Wenyu Liu", "Bin Xiao" ], "title": "Deep high-resolution representation learning for visual recognition", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2019 }, { "authors": [ "Songfan Yang", "Deva Ramanan" ], "title": "Multi-scale recognition with dag-cnns", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Ryota Yoshihashi", "Wen Shao", "Rei Kawakami", "Shaodi You", "Makoto Iida", "Takeshi Naemura" ], "title": "Classification-reconstruction learning for open-set recognition", "venue": null, "year": 2019 }, { "authors": [ "Hongjie Zhang", "Ang Li", "Jie Guo", "Yanwen Guo" ], "title": "Hybrid models for open set recognition", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Bo Zong", "Qi Song", "Martin Renqiang Min", "Wei Cheng", "Cristian Lumezanu", "Daeki Cho", "Haifeng Chen" ], "title": "Deep autoencoding gaussian mixture model for unsupervised anomaly detection", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "F OPEN-SOURCE" ], "title": "DEMONSTRATION We attach our code (via three Jupyter Notebook files) to demonstrate our exploration of open-set recognition. 
One can run the code with access to networks (Res50 and HRNet (Wang et al., 2019)) trained for closed-world tasks. We are not able to upload models or pre-computed features due to the space limit", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Embodied perception and autonomy require systems to be self-aware and reliably know their unknowns. This requirement is often formulated as the open set recognition problem (Scheirer et al., 2012), meaning that the system, e.g., a K-way classification model, should recognize anomalous examples that do not belong to one of K closed-world classes. This is a significant challenge for machine-learned systems that notoriously over-generalize to anomalies and unknowns on which they should instead raise a warning flag (Amodei et al., 2016).\nOpen-world benchmarks: Curating open-world benchmarks is hard (Liu et al., 2019). One common strategy re-purposes existing classification datasets into closed vs open examples – e.g., declaring MNIST digits 0-5 as closed and 6-9 as open (Neal et al., 2018; Oza & Patel, 2019; Geng et al., 2020). In contrast, anomaly/out-of-distribution (OOD) benchmarks usually generate anomalous samples by adding examples from different datasets - e.g., declaring CIFAR as anomalous for MNIST (Ge et al., 2017; Oza & Patel, 2019; Liu et al., 2019). Most open-world protocols assume open-world data is not available during training (Liang et al., 2018; Oza & Patel, 2019). Interestingly, Dhamija et al. (2018); Hendrycks et al. (2019b) find that, if some open examples are available during training, one can learn simple open-vs-closed binary classifiers that are remarkably effective. However, Shafaei et al. (2019) comprehensively compare various well-known open-world methods through rigorous experiments, and empirically show that none of the compared methods generalize to high-dimensional open-world images. Intuitively, classifiers can easily overfit to the available set of open-world images, which won’t likely exhaustively span the open world outside the K classes of interest.\nIn this paper, we carry out a rigorous empirical exploration of open-set recognition of highdimensionial images. We explore simple statistical models such as Nearest Class Means (NCMs), kmeans and Gaussian Mixture Models (GMMs). Our hypothesis is that such classic statistical methods can reliably model the closed-world distribution (through the closed-world training data),\nand help avoid overfitting (an issue in open-vs-closed classifiers). Traditionally, such simple models have been used to address the open-world (Chandola et al., 2009; Geng et al., 2020), but are largely neglected in the recent literature. We revisit these simple methods, and find them quite effective once crucial techniques are considered, as summarized by contributions below.\nContribution 1: We build classic statistical models on top of off-the-shelf (OTS) features computed by the underlying K-way classification network. We find it crucial to use OTS features that have been pre-trained and post-processed appropriately (discussed further below). Armed with such features, we find classic statistical models such as kmeans and GMMs (Murphy, 2012) can outperform prior work. We describe two core technical insights below.\nInsight-1 Pre-training networks (e.g., on ImageNet (Deng et al., 2009)) is a common practice for traditional closed-world tasks. However, to the best of our knowledge, open-world methods do not sufficiently exploit pre-training (Oza & Patel, 2019). Hendrycks et al. (2019a) report that pre-training improves anomaly detection using softmax confidence thresholding (Hendrycks & Gimpel, 2017). 
We find pre-training to be a crucial factor in learning better representations that support more sophisticated open-world reasoning. Intuitively, pre-trained networks expose themselves to diverse data that may look similar to open-world examples encountered at test-time. We operationalize this intuition by building statistical models on top of existing discriminative networks, which tend to make use of pre-training by design. We demonstrate this significantly outperforms features trained from scratch, as most prior open-set work does.
Insight-2 Low-dimensional normalized features. While some existing open-world methods also exploit OTS features (Lee et al., 2018), we find it crucial to make use of insufficiently well-known best practices for feature extraction. Specifically, to reduce dimensionality, we pool spatially (Gong et al., 2014) and use principal component analysis (PCA) (Turk & Pentland, 1991). Then, to ensure features are invariant to scalings, we adopt L2 normalization (Gong et al., 2014; Gordo et al., 2017). While these are somewhat standard practices for deep feature extraction in areas such as retrieval, their combination is not well explored in the open-set literature (Bendale & Boult, 2016; Grathwohl et al., 2019). Given a particular OTS K-way classification network, we determine the “right” feature processing through validation. In particular, we find that L2-normalization greatly boosts open-world recognition performance; spatial pooling and PCA altogether reduce feature dimension by three orders of magnitude without degrading performance, resulting in a lightweight pipeline.
Contribution 2: We re(introduce) the problem of open-set semantic segmentation. Interestingly, classic benchmarks explicitly evaluate background pixels outside the set of K classes of interest (Everingham et al., 2015). However, contemporary benchmarks such as Cityscapes (Cordts et al., 2016) 
Given a testing example, these methods compute the likelihood that it belongs to the open-world via post-hoc functions like density estimation (Zong et al., 2018), uncertainty modeling (Gal & Ghahramani, 2016; Liang et al., 2018; Kendall & Gal, 2017) and reconstruction error of the testing example (Pidhorskyi et al., 2018; Dehaene et al., 2020). Different from the above sophisticated methods, we train simple statistical models (e.g., GMM) which can work much better by following our proposed pipeline.\nFeature extraction. Off-the-shelf (OTS) features can be extracted from the discriminative network and act as powerful embeddings (Donahue et al., 2014). Using OTS features for open-set recognition has been explored in prior work (Oza & Patel, 2019; Grathwohl et al., 2019; Lee et al., 2018). OTS features can be logits, softmax and other intermediate feature activations computed by the discriminative network. Early open-set methods modify the softmax (Hendrycks & Gimpel, 2017; Bendale & Boult, 2016). Grathwohl et al. (2019) learn an energy-based model over the logit features for anomaly detection. Oza & Patel (2019) reconstruct input images from penultimate-layer features and use the reconstruction error as the open-set likelihood. Most related to our work is Lee et al. (2018), who build Gaussian models over OTS features for anomaly detection, but relies on input image perturbation for better open-set classification performance. In contrast, we study even simpler statistical models such as kmeans and GMM, and show that proper feature processing (via L2-normalization and PCA) greatly boosts the efficacy and efficiency of open-set recognition." }, { "heading": "3 OPEN-SET RECOGNITION VIA LIGHTWEIGHT STATISTICAL PIPELINES", "text": "In this section, we discuss various design choices in our pipeline, including (1) training schemes for the underlying closed-world task, (2) methods for extracting and repurposing closed-world feature descriptors for open-world recognition, and (3) the statistical density estimation models built on such\nextracted features. We conclude with (4) an analysis of the additional compute required for self-aware processing (via the addition of an open-world \"head\" on top of the closed-world network), pointing out that minimal additional processing is needed.\n1. Network training strategies. Virtually all state-of-the-art deep classifiers make use of large-scale pre-training, e.g., on ImageNet (Deng et al., 2009), which seems to consistently improve towards the state-of-the-art performance on the closed-world data (Sun et al., 2017; Mahajan et al., 2018). However, many, if not all, open-world methods trains the discriminant network purely on the closedworld data without pre-training (Oza & Patel, 2019; Hendrycks & Gimpel, 2017). We argue that a pre-trained network also serves as an abstraction of the (pseudo) open world. Intuitively, such a pre-trained model has already seen diverse data that may look similar to the open-world examples that will be encountered at test-time, particularly if ImageNet does not look similar to the (closed) training set for the task of interest. Recently, Hendrycks et al. (2019a) show that pre-training improves open-world robustness with a simplistic method that thresholds softmax confidence (Hendrycks & Gimpel, 2017). Our diagnostic study shows that our explored statistical models, as well as prior methods, do perform much better when built on a pre-trained network than a network trained from scratch!\n2. Feature extraction. 
OTS features generated at different layers of the trained discriminative model can be repurposed for open-set recognition (Lee et al., 2018). Most methods leverage the softmax (Hendrycks & Gimpel, 2017) and logits (Bendale & Boult, 2016; Grathwohl et al., 2019), which can be thought of as features extracted at the top layers. Similar to Lee et al. (2018), we find it crucial to analyze features from intermediate layers, for which logits and softmax may be too invariant to be effective for open-set recognition (see Figure 3). One immediate challenge in extracting features from an intermediate layer is their high dimensionality, e.g., of size 512x7x7 from ResNet18 (He et al., 2016). To reduce feature dimension, we simply (max- or average-) pool the feature activations spatially into 512-dim feature vectors (Yang & Ramanan, 2015). We further use PCA, which can reduce dimension by 10× (from 512-dim to 50-dim) without sacrificing performance. We find this dimensionality particularly important for learning second-order covariance statistics as in GMMs, described below. Finally, following Gong et al. (2014); Gordo et al. (2017), we find it crucial to L2-normalize extracted features (see Figure 2).
3. Statistical models. Given the above extracted features, we can learn various generative statistical models to capture the confidence/probability that a test example belongs to the closed-world distribution. We explore simple parametric models such as Nearest Class Means (NCMs) (Mensink et al., 2013) and class-conditional Gaussian models (Lee et al., 2018; Grathwohl et al., 2019), as well as non-parametric models such as nearest neighbors (NN) (Boiman et al., 2008; Júnior et al., 2017). We finally explore an intermediate regime of mixture models, including (class-conditional) GMMs and kmeans (Chandola et al., 2009; Cao et al., 2016; Geng et al., 2020). Our models label a test example as open-world when the inverse probability (e.g., of the most-likely class-conditional GMM) or distance (e.g., to the closest class centroid) is above a threshold. One benefit of such simple statistical models is that they are interpretable, and it is relatively easy to diagnose failures. For example, one failure mode is an open-world sample being misclassified as a closed-world class. This happens when open-world data lie close to a class centroid or Gaussian component mean (see Figure 3-left). Note that a single statistical model may have several hyperparameters – GMMs can have multiple Gaussian components and different structures of the second-order covariance, e.g., either a single scalar, a vector or a full-rank general covariance per component, as denoted by “spherical”, “diag” and “full”, respectively. We make use of a validation set to determine the hyperparameters (as well as the feature processing steps listed above).
4. Lightweight Pipeline. We re-iterate that the above feature extraction and statistical models result in a lightweight pipeline for open-set recognition. We now analyze the number of additional parameters in our pipeline. Naively learning a GMM over features from the last convolutional layer results in massive second-order statistics, on the order of (512×7×7)² for a 512x7x7 Res18 feature map. We find that spatial pooling and PCA can reduce dimensionality to 50, which requires only 50² covariance parameters (a reduction of 10⁵). We find linear dimensionality reduction more effective than sparse covariance matrices (e.g., assuming diagonal structure). 
The appendix includes additional experiments. Given a class-conditional five-component GMM (the largest found to be effective through cross-validation), this requires 128KB of storage per class, or 594KB for all 19 classes in Cityscapes. This is less than 0.1% of the compute of the underlying closed-world network (e.g., HRNet at 250 MB), making it quite a practical addition that enables self-aware processing on real-time autonomy stacks." }, { "heading": "4 EXPERIMENT", "text": "We extensively validate our proposed lightweight statistical pipeline under standard open-set recognition benchmarks, typically focused on image classification. We also consider open-set semantic segmentation, revisiting classic formulations of semantic segmentation that make use of a background label (Everingham et al., 2015). We start by introducing implementation details, evaluation metrics and baselines. We then present comprehensive evaluations on each setup.
Implementation. As discussed earlier, open-world recognition is often explored through the lens of open-set classification. To ensure our approaches retain high accuracy on the original closed-world tasks, we build statistical models on top of off-the-shelf (OTS) state-of-the-art networks. For open-set image classification, we fine-tune an ImageNet-pretrained ResNet network (Res18/50 in our experiments) (He et al., 2016) exclusively on the closed train-set using the cross-entropy loss. For open-set semantic segmentation, we use HRNet (Wang et al., 2019), a highly-ranked model on the Cityscapes leaderboard (Cordts et al., 2016). We extract features at the penultimate layer of each discriminative network (other layers also apply, but we do not explore them in this work). We conduct experiments with PyTorch (Paszke et al., 2017) on a single Titan X GPU.
Evaluation Metric. Following past work (Hendrycks & Gimpel, 2017; Lee et al., 2018), we evaluate binary detection of open-world examples using the area under the receiver operating characteristic curve (AUROC) (Davis & Goadrich, 2006). AUROC is a calibration-free and threshold-less metric, simplifying comparisons between methods. For open-set semantic segmentation, we also use AUROC to evaluate the performance of recognizing “background” pixels as open-world examples. This is different from the traditional practice in segmentation benchmarks (Everingham et al., 2015), which treat such “background” pixels as just another class.
Baselines. Our statistical pipeline supports various statistical models. We study the simple models proposed in Section 3, including NN, kmeans, NCM, and GMMs. All models, including the baselines to which we compare, are based on the same underlying classification network. Hyperparameters for all models (e.g., number of mixtures) are tuned on a validation set¹.
• Classifiers. Hendrycks et al. (2019b) learn a binary open-vs-closed classifier (CLS2) for anomaly detection. Following classic work in semantic segmentation (Everingham et al., 2015), we also evaluate a (K+1)-way classifier (CLS(K+1)). We use the softmax score corresponding to the (K+1)-th “other” class as the open-set likelihood. Both methods require open-set examples during training.
• Likelihoods. Many probabilistic models measure the open-set likelihood on OTS features, including Max Softmax Probability (MSP) (Hendrycks & Gimpel, 2017) and Entropy (Steinhardt & Liang, 2016) (derived from softmax probabilities). 
OpenMax (Bendale & Boult, 2016) fits logits to Weibull distributions (Scheirer et al., 2011) that recalibrate softmax outputs for open-set recognition. C2AE (Oza & Patel, 2019) learns an additional K-way classifier on OTS features based on reconstruction errors, which are then used as an open-set likelihood function. GDM (Lee et al., 2018) learns a Gaussian Discriminant Model on OTS features and designs an open-set likelihood based on Mahalanobis distance. CROSR (Yoshihashi et al., 2019) trains a reconstruction-based model that jointly performs closed-set K-way classification and open-set recognition. G-Open (Ge et al., 2017) and OSRCI (Neal et al., 2018) turn to Generative Adversarial Networks (GANs) to generate fake images that augment the closed-set training set, and train a discriminative model for open-set recognition. CGDL (Sun et al., 2020) learns a class-conditional Gaussian model and relies on reconstruction error for open-set recognition. The methods CROSR, G-Open and CGDL train ground-up models, in contrast to our statistical models that operate on OTS features of an already-trained K-way classification network. As we focus on an empirical exploration rather than achieving the state-of-the-art, we refer readers to more recent approaches that achieve state-of-the-art results by training ground-up models with sophisticated techniques (Zhang et al., 2020; Chen et al., 2020).\n¹We use open-source code when available. We implemented C2AE and its authors validated our code through personal communication.\n• Bayesian Networks. Bayesian neural networks compute uncertainty estimates via Monte Carlo dropout (MCdrop) (Gal & Ghahramani, 2016; Loquercio et al., 2020) and calibrated Max Softmax Probability (MSPc) (Liang et al., 2018), which can also be used as open-set likelihoods. We implement MCdrop via 500 samples." }, { "heading": "4.1 SETUP-I: SINGLE-DATASET OPEN-SET RECOGNITION", "text": "Setup. We begin by following the standard benchmark protocol used in most prior work: split a single dataset into open and closed sets w.r.t class labels (e.g., define MNIST digits 0-5 as the closed-set, and digits 6-9 as the open-set). This is a common practice in open-set recognition (Neal et al., 2018; Oza & Patel, 2019). Notably, methods do not have access to open-set examples during training.\nDatasets. MNIST / CIFAR / SVHN are popular datasets used in the open-set recognition literature (Neal et al., 2018; Hendrycks & Gimpel, 2017). All three datasets contain ten classes with balanced numbers of images per class. The standard protocol randomly splits six (four) classes of the train/validation-sets into closed (open) train/validation-sets, respectively. We repeat this five times and report the average AUROC for each method. Through cross-validation, we find reliable OTS features can be computed by average-pooling features from the last convolutional layer down to 512-dim, projecting down to 50-dim via PCA, and L2-normalizing.\nResults. Table 1 shows that, perhaps surprisingly, simple statistical models (like kmeans and GMMs) defined on such normalized features already perform on par with many prior methods. Because GDM (Lee et al., 2018) does not L2-normalize features, we evaluate a variant that does (GDML2). The improved performance demonstrates the importance of feature normalization, which, although well known in the image retrieval community, is not widely used in open-set recognition. From here on, we focus on statistical models trained on normalized features, providing raw vs. normalized comparisons in the appendix.
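To make the AUROC protocol used throughout Section 4 concrete, the sketch below shows the scoring-and-evaluation step, assuming scikit-learn and SciPy; msp_score mirrors the MSP baseline, and any GMM-style scorer can be plugged in the same way.

```python
# Sketch of the evaluation: score every test example, then compute AUROC
# against binary open(1)/closed(0) labels; a threshold-free method comparison.
from scipy.special import softmax
from sklearn.metrics import roc_auc_score

def msp_score(logits):
    """MSP baseline: low max softmax probability suggests an open-world input."""
    return -softmax(logits, axis=1).max(axis=1)

def evaluate_auroc(y_open, scores):
    """y_open: 1 for open-world examples, 0 for closed-world; higher score = open."""
    return roc_auc_score(y_open, scores)
```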
}, { "heading": "4.2 SETUP-II: CROSS-DATASET OPEN-SET RECOGNITION", "text": "Setup. In these experiments, we use the cross-dataset protocol advocated by (Shafaei et al., 2019), where some outlier examples are sampled from a different dataset for training/validation. (e.g., train on TinyImageNet-closed as closed-set, validate on MNIST-open as outlier set, and test on CIFAR-open as open-set). Conclusions drawn under this setup may generalize better due to less dataset bias in the experimental protocol (Torralba & Efros, 2011).\nDatasets. We use TinyImageNet as the closed-world dataset (for K-way classification), which has 200 classes of 64x64 images, split into 500/50/50 images as the train/val/test sets. Following (Shafaei et al., 2019), we construct open val/test sets using cross-dataset images (Torralba & Efros, 2011), including MNIST, SVHN, CIFAR and Cityscapes. For example, we use an outlier dataset (e.g., MNIST train-set) to tune/train an open-world method, and test on another dataset as the open-set (e.g., CIFAR test-set). We use bilinear interpolation to resize all images into 64x64 to match TinyImageNet image resolution. Through cross validation, we find reliable OTS features can be computed by average-pooling features from the last convolutional layer down to 2048-dim, projecting down to 200-dim via PCA, and L2-normalizing.\nResults for Table 2 are summarized below:\nFigure 3: tSNE plots (Maaten & Hinton, 2008) of open-vs-closed data, as encoded by different features from a Res50 model (trained with pre-training in the closed world, cf. Table 2). Points are colored w.r.t closed-world class labels. Left: Logit features mix open and closed data, suggesting that methods based on them (Entropy, SoftMax and OpenMax) may struggle in open-set classification. Right: Convolutional features better separate open vs closed data (cf. Figure 2).\nTable 2: Cross-dataset open-set image recognition (Setup-II) AUROC↑. In this setup, we train on TinyImageNet, validate using outlier images from a second dataset, and test using open-set images from a third dataset. For each open-set dataset, we compute the average AUROC over all results when using different\noutlier datasets. We study two Res50 models either trained from scratch (pink row), or fine-tuned from an ImageNet-pretrained model (blue row). Clearly, simple statistical models can handily outperform much prior work. Pre-training boosts open-set recognition performance for all methods (see last row pair). Binary classifiers CLS2 do not generalize well, presumably due to overfitting. Somewhat surprisingly, OpenMax works quite poorly. We conjecture that the regularized logit features on which it is based may too invariant to be effective for cross-dataset open-set recognition. Table 4 and 5 supplement this table with more details.\nopen-test MSP Entropy OpenMax MSPc MCdrop C2AE GDM GDML2 NN NCM kmeans GMM CLS2 CLS(K+1) MNIST .709 .712 .144 .773 .657 .811 .454 .799 .966 .961 .939 .940 .963 .939.775 .789 .453 .832 .801 .796 .723 .957 .901 .979 .963 .964 .986 .944 SVHN .752 .768 .314 .803 .833 .723 .841 .991 .993 .994 .982 .984 .754 .907.770 .787 .123 .863 .783 .780 .820 .999 .994 .995 .993 .990 .701 .948 CIFAR .694 .703 .338 .750 .741 .719 .712 .886 .852 .963 .937 .968 .739 .867.725 .732 .471 .791 .809 .763 .838 .961 .927 .975 .948 .961 .754 .880 Citysc. 
.739 .753 .604 .862 .877 .753 .725 .650 .559 .839 .903 .885 .601 .919.751 .762 .543 .851 .868 .784 .651 .513 .715 .833 .886 .867 .646 .971 average .723 .734 .350 .797 .777 .752 .683 .832 .843 .939 .940 .938 .764 .908.755 .768 .397 .834 .815 .781 .758 .857 .884 .946 .948 .945 .772 .936\n• Simple statistical models (e.g., NCM and kmeans) can outperform prior open-set methods (e.g., C2AE and GDM). We find that L2-normalization greatly contributes to the success of these simple statistical methods (cf. details in the appendix). Both the metric learning and image retrieval (Mensink et al., 2012; Musgrave et al., 2020) literature have shown the importance of L2-normalization. Informally, open-set recognition queries the testing example and measures how close it is to any of the closed-world training examples (Musgrave et al., 2020).\n• Interestingly, kmeans performs slightly better than GMMs. Considering that the former can be seen as a special case of GMMs that have an identity covariance, we conjecture that learning other types of covariance (e.g., a full-rank covariance matrix) does not help when the underlying K-way network has already provided compact feature representations.\n• From last row pair, we can see pre-training notably improves all the methods. GDML2 outperforms the original GDM which operates on raw features (without L2-normalization). This further confirms the importance of L2-normalization in feature extraction for open-set recognition.\n• Perhaps surprisingly, OpenMax does not work well in this setup (though we have spent considerable effort tuning it). This is consistent with the results in (Dhamija et al., 2018; Shafaei et al., 2019), and we conjecture the reason is that OpenMax cannot effectively recognize cross-dataset anomalous inputs using logit features because they are too invariant to be useful for open-set recognition (Figure 3). Similar lackluster results hold for other methods that operate on logit features (Entropy and MSP)." }, { "heading": "4.3 SETUP-III: OPEN-SET SEMANTIC SEGMENTATION", "text": "Setup. In these experiments, we (re)introduce the task of open-set segmentation by repurposing “background” pixels in contemporary segmentation benchmarks (Cityscapes) as open-world pixels. As elaborated before, such pixels are either traditionally treated as just another class for segmentation evaluation (Everingham et al., 2015) or ignored completely. Instead, we evaluate them using openworld metrics such as AUROC. We will show our statistical methods also outperform other typical open-world methods. As this setup has natural access to open-world pixels during training, we explore the training of simple open-vs-closed classifiers.\nDatasets. Cityscapes (Cordts et al., 2016) provides per-pixel annotations for urban scene images (1024x2048-resolution) for autonomous driving research. We construct our train- and val-sets from its 2,975 training images, in which we use the last 10 images as val-set and the rest as train-set. We use its official 500 validation images as our test-set. The “background” pixels (shown in white of ground-truth visual in Figure 4) are the open-world examples in this setup. Through validation, we find reliable OTS features can be computed by projecting features from the last convolutional layer from 720 down to 100-dim via PCA, and L2-normalizing.\nResults. 
For our statistical models (as well as GDM), we randomly sample 5000 closed-world pixel features from each class, as it is prohibitively space-consuming to use all the pixel features from the Cityscapes train-set. We show a quantitative comparison in Table 3 and list salient conclusions below.\n• Clearly, our simple statistical models (e.g., NN and GMM) perform significantly better than the classic open-world methods (e.g., MSP and OpenMax). However, when training on large amounts of open-pixels, CLS methods achieve significantly better performance. This clearly shows the benefit of training on open-world pixels (Hendrycks et al., 2019b). We do note that GMMs do not need any open pixels during learning, and so may generalize better to novel open-world scenarios not encountered in the training set (Figure 5).\n• GDM performs poorly, probably because the arbitrary scales of the raw features make them too uninformative for open-set pixel recognition. We note that the other statistical methods all struggle with raw pooled features as well (cf. appendix). However, once we L2-normalize the pixel features to be scale-invariant, these statistical methods perform significantly better (as reported in this table).\n• Figure 4 shows qualitative results. MSP predicts segment boundaries as open-pixels. This makes sense, as MSP mostly returns aleatoric uncertainties corresponding to ambiguous pixel sensor measurements around object boundaries (Kendall & Gal, 2017). In contrast, GMM reports open-pixels on truly novel objects, such as the street-shop and rollator, both of which are ignored by the semantic segmentation network HRNet (Wang et al., 2019) during training. These regions appear to be caused by epistemic uncertainty arising from the lack of training data (Kendall & Gal, 2017).\n• Figure 6 plots AUROC performance vs model size for various statistical models. Notably, NN consumes the most memory, even more than the underlying networks. GMMs perform the best and are quite lightweight, only consuming 0.6MB when built on the HRNet model (250MB).\nTable 3: Open-set semantic segmentation (Setup-III) AUROC↑. Simple statistical methods (GMMs) outperform prior methods, with the notable exception of discriminative classifiers (CLS2 and CLS(K+1)) that have access to open-set training examples. Figure 5 analyzes this further, demonstrating that GMMs can outperform such discriminative models when the latter have access to fewer open training examples, suggesting that GMMs may better generalize to never-before-seen open-world scenarios.

MSP  | Entropy | OpenMax | C2AE | MSPc | MCdrop | GDM  | NN   | NCM  | kmeans | GMM  | CLS2 | CLS(K+1)
.590 | .600    | .655    | .603 | .612 | .563   | .539 | .769 | .715 | .755   | .795 | .897 | .867

Figure 6: AUROC vs. memory cost (MB) for various statistical models for open-set semantic segmentation. NN stores ∼100k OTS features, which is larger than the underlying network (HRNet). We explore GMMs with various covariance structures (spherical, diagonal, full), feature dimensionality via PCA, and mixture components. We find the best AUROC-memory tradeoff on the validation set (shown here to be a single-mixture GMM with full covariance and PCA), and find it generalizes well to the held-out test set (cf. Appendix)." }, { "heading": "5 CONCLUSION", "text": "We present an empirical exploration of open-set recognition via lightweight statistical pipelines. We find simple statistical models quite effective if built on properly processed off-the-shelf features computed by discriminative networks (originally trained for the closed-world tasks). 
Our pipelines endow K-way networks with the ability to be “self-aware”, with negligible additional compute costs (0.1%). Finally, we (re)introduce the task of open-set semantic segmentation by repurposing background pixels as open-world examples, requiring classification of individual pixels into one of K known/closed-world classes or an “other” open-world class." }, { "heading": "APPENDIX OUTLINE", "text": "As elaborated in the main paper, we introduce a lightweight statistical pipeline for open-set recognition by repurposing off-the-shelf (OTS) features computed by a state-of-the-art recognition network. As our pipeline does not require (re)training the underlying network, it is guaranteed to replicate the state-of-the-art performance of the network on the (closed-world) task for which it was trained, while still allowing the final recognition system to properly identify never-before-seen data from the open world. In the appendix, we expand on our pipeline with more experiments, analyses and visualizations. We outline the appendix below.\nSection A: Data statistics for open-set semantic segmentation. We provide the data details used for open-set semantic segmentation (Setup-III), motivated by safety concerns in autonomy stacks as shown in Figure 1 (left).\nSection B: Detailed results by statistical models. We provide detailed results on open-set recognition, including open-set image recognition (Setup-II) and open-set semantic segmentation (Setup-III). We detail the performance of the various statistical models studied in the main paper, including Nearest Neighbors (NN), Nearest Class Means (NCM), kmeans and Gaussian Mixture Models (GMMs).\nSection C: Reduced dimension via PCA. We show that PCA can reduce dimensionality significantly (making our pipeline quite lightweight), while maintaining or even improving performance.\nSection D: Performance vs. memory/compute. We rigorously evaluate the memory/compute costs of our various statistical pipelines, emphasizing solutions that are both accurate and lightweight.\nSection E: Visualization of Gaussian component means. One benefit of our simple statistical models is their interpretability; we visualize Gaussian means through medoid images, and demonstrate that they correspond to canonical objects (e.g., those with standard poses and clean backgrounds).\nSection F: Open-source demonstration. We include code (via Jupyter Notebooks) for open-set semantic segmentation, assuming one has access to precomputed features from HRNet (Wang et al., 2019)." }, { "heading": "A SETUP FOR OPEN-SET SEMANTIC SEGMENTATION", "text": "We (re)introduce the task of open-set semantic segmentation for exploring open-set recognition, re-purposing “background” pixels of Cityscapes as open-world examples (that are from the (K+1)-th “other” class). We list the statistics of open- and closed-world examples (pixels) below. The Cityscapes training set has 2,975 images. We use the first 2,965 images for training, and hold out the last 10 as a validation set for model selection. We use the 500 Cityscapes validation images as our test set. Here are the statistics for the full train/val/test sets.\n• train-set for closed-pixels: 2,965 images providing 334M closed-set pixels. • train-set for open-pixels: 2,965 images providing 44M open-set pixels. • val-set for closed-pixels: 10 images providing 1M closed-set pixels. • val-set for open-pixels: 10 images providing 0.2M open-set pixels. • test-set for closed-pixels: 500 images providing 56M pixels. 
• test-set for open-pixels: 500 images providing 8.3M pixels." }, { "heading": "B DETAILED RESULTS BY STATISTICAL PIPELINE", "text": "In Figure 10 on the last page of this document, we provide detailed results of the various statistical models for the open-world tasks, including open-set image recognition and open-set semantic segmentation. In the main paper, we state that we tune and select statistical models (if they have hyper-parameters to tune) on the small validation set, and report on the test set with the selected (best-performing) model. Such hyper-parameters can be the number of means/components in kmeans and GMM models, and the covariance types in GMMs: “spherical”, “diagonal” and “full” denote that the covariance matrix of each Gaussian component is controlled by a single scalar, a vector and a full-rank matrix, respectively. Figure 10 demonstrates that validation can reliably tune the statistical models, whose performance translates to the test sets. Moreover, we also record detailed results with and without L2-normalization of the features. We can see that L2-normalization greatly boosts open-world performance.\nTables 4 and 5 list details of the various methods on the cross-dataset evaluation (Setup-II), supplementing Table 2. Please refer to the captions for details." }, { "heading": "C PERFORMANCE VS. PCA REDUCED DIMENSION", "text": "As analyzed in the main paper, PCA is an important technique for making our pipeline lightweight by considerably reducing feature dimensions. We study how a statistical model performs under different feature dimensions reduced by PCA. We choose the simple NCM method, which does not involve randomness (unlike kmeans and GMM, which require random initialization for learning). We study this through open-set image recognition under Setup-II. To simplify the study, we choose (resized) Cityscapes images as the open-set data, i.e., we use TinyImageNet/Cityscapes images as the closed/open set. As we use the Res50 network in this diagnostic study, the original dimension of the pooled features is 2048. In Fig. 7, we plot the performance (AUROC) of NCM as a function of the PCA-reduced dimension. Perhaps surprisingly, PCA even improves the open-world performance while significantly reducing the feature dimension (from 2048 to 100)!" }, { "heading": "D PERFORMANCE VS. MEMORY/COMPUTE", "text": "As seen previously, PCA reduces the feature dimension greatly and hence makes the statistical models quite lightweight. We now study how lightweight different statistical models can be while accounting for open-world performance. We analyze the models learned for two tasks (open-world image classification and open-world semantic segmentation), where the OTS features have dimension 2048 (extracted from Res50) and 720 (extracted from HRNet), respectively. We use PCA to reduce the feature dimensions to 200 and 100, respectively.\nWe focus on NN, kmeans and GMMs, all of which operate on the PCA-reduced features (with L2-normalization). NN is the straightforward baseline that memorizes all training examples to recognize open-set examples. For GMMs, we study three types of covariance – “spherical”, “diag” and “full”, meaning the covariance matrix of each Gaussian component is controlled by a single scalar, a vector or a full-rank (symmetric) matrix, respectively. For open-set semantic segmentation, it is prohibitively space-consuming to memorize all the pixel features of the whole train set, so we randomly sample 5000 pixels from each of the 19 classes defined by Cityscapes (∼200k in total). 
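As a rough companion to the memory analysis that follows, per-model parameter counts can be computed directly; the helper below is our own back-of-the-envelope sketch, assuming float32 (4-byte) parameters.

```python
# Back-of-the-envelope parameter/memory count for a class-conditional GMM on
# d-dim features; our own helper, not the paper's code. Assumes float32 params.
def gmm_param_count(d, n_components=1, n_classes=19, covariance_type="full"):
    cov = {"spherical": 1, "diag": d, "full": d * (d + 1) // 2}[covariance_type]
    return n_classes * n_components * (d + cov + 1)  # mean + covariance + weight

for cov in ("spherical", "diag", "full"):
    n = gmm_param_count(d=100, covariance_type=cov)
    print(f"{cov:9s}: {n:7d} params ~ {4 * n / 1e6:.2f} MB")
# Even "full" covariance at d=100 stays well below 1 MB across all 19 classes,
# consistent with the compact model sizes discussed next.
```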
In Figure 8, we plot the open-world performance (AUROC) for the two tasks w.r.t the total memory cost (i.e., the space required to save a model's parameters). We can see that NN uses the most memory, even more than the underlying networks. In contrast, GMM-spherical and k-means models are significantly more compact, i.e., ∼0.3MB for both tasks. Moreover, on open-world image classification (Figure 8-left), k-means and GMM-spherical achieve much better performance than the other models. Interestingly, GMM-full achieves the best and most stable performance for open-set semantic segmentation (Figure 8-right) but not for open-set image classification (Figure 8-left). Despite this, we note that the validation performance (as plotted here) translates nicely to real test sets, as shown in Figure 10.\nIt is worth noting that the specified PCA-reduced dimension is not necessarily optimal: an even lower dimension can lead to better open-world performance (cf. Figure 7). We do not exhaustively explore this in this work, but instead emphasize that our pipeline is quite lightweight and can be tuned for specific tasks, e.g., a 0.6MB GMM-full compared with the 250MB HRNet for semantic segmentation.\nE VISUALIZATION OF GAUSSIAN MEANS\nAs our statistical models are interpretable, we visualize what they capture. To do so, we visualize per-class Gaussian means through medoid images, which are training images whose features are closest to their corresponding per-class mean feature. We show the medoid images in Figure 9, as well as some random images sorted by cosine similarity (i.e., Euclidean distance on L2-normalized features) to the Gaussian means within each class. We can see that the medoid images mostly capture the canonical objects of each class, e.g., those with “standard” pose and clean background." }, { "heading": "F OPEN-SOURCE DEMONSTRATION", "text": "We attach our code (via three Jupyter Notebook files) to demonstrate our exploration of open-set recognition. One can run the code with access to networks (Res50 and HRNet (Wang et al., 2019)) trained for closed-world tasks. We are not able to upload models or pre-computed features due to space limits, but we are committed to releasing them to the public after the paper notification. We refer readers to the Jupyter Notebook files for self-explanatory descriptions.\n• “demo_Open-Set-Image-Recognition-Setup-II_GMM_Res50pt_pca_L2norm.ipynb”: We show how we train, select and evaluate GMMs on cross-dataset open-set image recognition (Setup-II).\n• “demo_tsne_visual_res50pt.ipynb”: We show t-SNE visualizations of OTS features of cross-dataset open-set examples (Setup-II). This intuitively demonstrates the benefit of exploiting OTS features for open-set recognition.\n• “demo_open-set-semantic-segmentation.ipynb”: We demonstrate how we train and evaluate GMMs under Setup-III, open-set semantic segmentation." } ]
2,020
null
SP:5fc35f794bdf1281225c24a5096547e75904a2d0
[ "This paper proposes a new dataset based on textbook / classroom chemistry questions for complex knowledge retrieval and aggregation. The authors scrape several thousands questions from online repositories and add additional natural language annotations signifying the quantities to be solved for in each question, as well as the declarative knowledge. Two baselines, one end-to-end neural and another symbolic, both fail at this dataset." ]
Many Question Answering (QA) tasks have been studied in NLP and employed to evaluate the progress of machine intelligence. One kind of QA task, such as Machine Reading Comprehension QA, is well solved by end-to-end neural networks; another kind, such as Knowledge Base QA, needs to be translated into a formal representation and then solved by a well-designed solver. We notice that some real-world QA tasks are more complex: they cannot be solved by end-to-end neural networks or translated into any kind of formal representation. To further stimulate QA research and the development of QA techniques, in this work we create a new and complex QA dataset, ChemistryQA, based on real-world chemical calculation questions. To answer chemical questions, machines need to understand the questions, apply chemistry and math knowledge, and perform calculation and reasoning. To help researchers ramp up, we build two baselines: the first is a BERT-based sequence-to-sequence model, and the second is an extraction system plus a graph search based solver. These two methods achieve 0.164 and 0.169 accuracy on the development set, respectively, which clearly demonstrates that new techniques are needed for complex QA tasks. The ChemistryQA dataset will be available for public download once the paper is published.
[]
[ { "authors": [ "Aida Amini", "Saadia Gabriel", "Shanchuan Lin", "Rik Koncel-Kedziorski", "Yejin Choi", "Hannaneh Hajishirzi" ], "title": "MathQA: Towards interpretable math word problem solving with operation-based formalisms", "venue": null, "year": 2019 }, { "authors": [ "Jonathan Berant", "Percy Liang" ], "title": "Semantic parsing via paraphrasing", "venue": "In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2014 }, { "authors": [ "Jonathan Berant", "Andrew Chou", "Roy Frostig", "Percy Liang" ], "title": "Semantic parsing on freebase from question-answer pairs", "venue": "In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing,", "year": 2013 }, { "authors": [ "Jonathan Berant", "Vivek Srikumar", "Pei-Chun Chen", "Abby Vander Linden", "Brittany Harding", "Brad Huang", "Peter Clark", "Christopher D Manning" ], "title": "Modeling biological processes for reading comprehension", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Antoine Bordes", "Nicolas Usunier", "Alberto Garcia-Duran", "Jason Weston", "Oksana Yakhnenko" ], "title": "Translating embeddings for modeling multi-relational data", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Antoine Bordes", "Sumit Chopra", "Jason Weston" ], "title": "Question answering with subgraph embeddings", "venue": "arXiv preprint arXiv:1406.3676,", "year": 2014 }, { "authors": [ "Peter Clark" ], "title": "Elementary school science and math tests as a driver for ai: take the aristo challenge", "venue": "In AAAI,", "year": 2015 }, { "authors": [ "Peter Clark", "Isaac Cowhey", "Oren Etzioni", "Tushar Khot", "Ashish Sabharwal", "Carissa Schoenick", "Oyvind Tafjord" ], "title": "Think you have solved question answering? try arc, the ai2 reasoning challenge", "venue": "arXiv preprint arXiv:1803.05457,", "year": 2018 }, { "authors": [ "Peter Clark", "Oren Etzioni", "Tushar Khot", "Bhavana Dalvi Mishra", "Kyle Richardson", "Ashish Sabharwal", "Carissa Schoenick", "Oyvind Tafjord", "Niket Tandon", "Sumithra Bhakthavatsalam" ], "title": "From’f’to’a’on the ny regents science exams: An overview of the aristo project", "venue": "arXiv preprint arXiv:1909.01958,", "year": 2019 }, { "authors": [ "David A Ferrucci" ], "title": "Introduction to “this is watson", "venue": "IBM Journal of Research and Development,", "year": 2012 }, { "authors": [ "Yanchao Hao", "Yuanzhe Zhang", "Kang Liu", "Shizhu He", "Zhanyi Liu", "Hua Wu", "Jun Zhao" ], "title": "An end-to-end model for question answering over knowledge base with cross-attention combining global knowledge", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "year": 2017 }, { "authors": [ "Danqing Huang", "Shuming Shi", "Chin-Yew Lin", "Jian Yin", "Wei-Ying Ma" ], "title": "How well do computers solve math word problems? 
large-scale dataset construction and evaluation", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2016 }, { "authors": [ "Guokun Lai", "Qizhe Xie", "Hanxiao Liu", "Yiming Yang", "Eduard Hovy" ], "title": "Race: Large-scale reading comprehension dataset from examinations", "venue": "arXiv preprint arXiv:1704.04683,", "year": 2017 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "SQuAD: 100,000+ questions for machine comprehension of text", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Pranav Rajpurkar", "Robin Jia", "Percy Liang" ], "title": "Know what you don’t know: Unanswerable questions for SQuAD", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "year": 2018 }, { "authors": [ "Mrinmaya Sachan", "Eric P Xing" ], "title": "Parsing to programs: A framework for situated qa", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Mrinmaya Sachan", "Kumar Avinava Dubey", "Tom M Mitchell", "Dan Roth", "Eric P Xing" ], "title": "Learning pipelines with limited data and domain knowledge: A study in parsing physics problems", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Amrita Saha", "Vardaan Pahuja", "Mitesh M Khapra", "Karthik Sankaranarayanan", "Sarath Chandar" ], "title": "Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Yan Wang", "Xiaojiang Liu", "Shuming Shi" ], "title": "Deep neural solver for math word problems", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Zhuoyu Wei", "Jun Zhao", "Kang Liu", "Zhenyu Qi", "Zhengya Sun", "Guanhua Tian" ], "title": "Large-scale knowledge base completion: Inferring via grounding network sampling over selected instances", "venue": "In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management,", "year": 2015 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Ruslan Salakhutdinov", "Quoc V Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": null, "year": 1906 }, { "authors": [ "Wen-tau Yih", "Xiaodong He", "Christopher Meek" ], "title": "Semantic parsing for single-relation question answering", "venue": "In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),", "year": 2014 }, { "authors": [ "Wen-tau Yih", "Matthew Richardson", "Christopher Meek", "Ming-Wei Chang", 
"Jina Suh" ], "title": "The value of semantic parse labeling for knowledge base question answering", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent years have witnessed huge advances for the question answering (QA) task, and some AI agents even beat human beings. For example, IBM Watson won Jeopardy for answering questions which requires a broad range of knowledge (Ferrucci, 2012). Transformer-based neural models, e.g. XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019), beat human beings on both machine reading comprehension and conversational QA task. Ariso System (Clark et al., 2019) gets an ’Ace’ for an eighth-grade science examination and is able to give 80 percent correct answers for 12th-grade science test.\nMost solutions of the QA task fall into two categories, end-to-end solution and parsing plus execution. The former predicts answers with an end-to-end neural network, e.g., Reading comprehension QA (Rajpurkar et al., 2016; 2018; Lai et al., 2017) and Science Exam QA (Clark et al., 2019; 2018). The latter translates a question into a specific structural form which is executed to get the answer. For example, in knowledge-based question answering (KBQA) (Berant et al., 2013; Yih et al., 2016; Saha et al., 2018) questions are parsed into SPARQL-like queries consisting of predicates, entities and operators. In Math Word Problem (Huang et al., 2016; Amini et al., 2019) questions are translated to stacks of math operators and quantities.\nHowever, in the real world, many QA tasks cannot be solved by end-to-end neural networks and it is also very difficult to translate questions into any kind of formal representation. Solving Chemical Calculation Problems is such an example. Chemical Calculation Problems cannot be solved by end-to-end neural networks since complex symbolic calculations are required. It is also difficult to translate such problems into formal representations, since not all operators in solving processes occur in question stems, which makes it difficult to annotate data and train models.\nTable 1 shows a question in ChemistryQA. To answer the question in Table 1, machines need to: 1) understand the question and extract variable to be solved and conditions in the question; 2) retrieve and apply related chemistry knowledge, including calculating molarity by mole and volume, balancing a chemical equation and calculating the equilibrium constant K, although there is no explicit statement\nfor ”calculating molarity” and ”balancing equations” in the question. The combination of these capabilities is scarcely evaluated well by existing QA datasets. In order to foster the research on this area, we create a dataset of chemical calculation problems, namely ChemstriyQA.\nWe collect about 4,500 chemical calculation problems from https://socratic.org/ chemistry, covering more than 200 topics in chemistry. Besides the correct answer, we also label the target variable and conditions provided in a question. Such additional labels facilitate potential data augmentation and inferring golden solving process for training.\nTo verify the dataset is consistent with the purpose of evaluating AI’ comprehensive capability and help other researchers ramp up, we build two baselines as follows. a) We build a BERT based sequence to sequence model, which take the raw question as input and the answer as output. The first baseline achieves 0.164 precision on ChemistryQA. b) We create an extraction system which extracts the target variable and conditions from raw questions. 
The extracted structured information is fed into a graph search based solver, which performs a sequence of calculation and reasoning steps to get the final answer. The second baseline achieves 0.169 precision on ChemistryQA.\nIn summary, the contributions of this paper are as follows.\n• We propose a new QA task, ChemistryQA, which requires open knowledge and complex solving processes. ChemistryQA is different from other existing QA tasks, and cannot be solved well by existing QA methods.\n• We create the ChemistryQA dataset, which contains about 4,500 chemical calculation problems and covers more than 200 topics in chemistry. In this dataset, we provide a novel annotation for questions, which only labels the asked variable and the conditions from the question stem, but not solving processes. This annotation is much easier and costs less effort, and, as a weakly supervised dataset, it leaves researchers the flexibility to explore various solutions.\n• We build two baselines to show that: a) end-to-end neural networks cannot solve this task very well; and b) the annotation we provide can be used to improve a simple graph search based solver." }, { "heading": "2 CHEMISTRYQA DATASET", "text": "" }, { "heading": "2.1 DATA COLLECTION", "text": "We collect chemical calculation problems from https://socratic.org/chemistry. On this website, there are more than 30,000 questions covering about 225 chemistry-related topics, e.g., Decomposition Reactions, Ideal Gas Law and The Periodic Table. There is an example of the annotation page in Appendix A. Figure 2.A shows the original page on Socratic, which contains a raw question, an answer and possibly a description of the solving process. We filter raw questions by a simple rule, and only keep questions with a numerical value, a chemical formula or a chemical equation as the answer." }, { "heading": "2.2 DATA ANNOTATION", "text": "Unlike similar tasks’ annotation, we cannot collect all the atomic operations needed before starting annotation, since the set of chemical operators is not closed. Therefore, we propose a novel annotation method in which only the target variable and all conditions are labeled, in a triple-like format. For instance, in Figure 2 the target variable is labeled as (subject = reaction, predicate = Equilibrium constant K, object = ?), and one of the conditions is labeled as (subject = N2, predicate = Mole, object = 2.80 × 10⁻⁴ mol). Therefore, for each question link, the parts to be annotated are the question stem, the correct answer, the target variable and all conditions. Figure 2.B shows our annotation page for a question link. For questions and answers, we ask annotators to copy them into the corresponding forms. If there are typos or obvious mistakes, we also ask annotators to correct them. For the target variable and conditions, we break them down into several types: physical unit, chemical formula, chemical equation, substance name and other. We also design easy-to-understand annotation interfaces, e.g., ([BLANK (predicate)] OF [BLANK (subject)] IN [BLANK (unit or None)]) and ([BLANK (predicate)] OF [BLANK (subject)] = [BLANK (object or value)]) for tagging the physical unit from the raw question as a variable and a condition, respectively. More details about the other types’ definitions and annotation interfaces are given in Appendix A.
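To make the triple-like format above concrete, the sketch below encodes the Figure 2 example in code; the dataclass and field names are ours and not an official schema of the dataset.

```python
# Illustrative encoding of the triple-like annotation, using the Figure 2 example.
# `obj = None` marks the unknown quantity, i.e., the target variable to solve for.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Triplet:
    subject: str
    predicate: str
    obj: Optional[str]  # None for the target variable; a value/unit otherwise

target = Triplet(subject="reaction", predicate="Equilibrium constant K", obj=None)
conditions = [
    Triplet(subject="N2", predicate="Mole", obj="2.80 x 10^-4 mol"),
    Triplet(subject="reaction", predicate="Chemical equation",
            obj="N2(g) + O2(g) -> N2O(g)"),
]
```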
We employed crowdsourcing for this annotation work. The task was split into 6 batches and assigned to annotators sequentially. We applied a check-and-verify mechanism in the first batch to ensure annotation quality and to help annotators become more familiar with the task. Finally, we collected 4,418 annotated questions within around 336 hours.\nDuring the annotation phase, we encourage annotators to use text phrases from the original questions whenever possible for chemical formulas, chemical equations, substance names, and the subject and value of physical units, while for predicates and units we do not impose any restrictions. We maintain two dynamic mappings to convert labeled mentions to identified predicates or units, which greatly reduces the difficulty of labeling and the total annotation time. For other, there are no restrictions either, and we only consume identified ones, e.g., STP." }, { "heading": "2.3 DATA ANALYSIS", "text": "We divide the annotated questions into train, valid and test subsets, whose sizes are 3433, 485 and 500, respectively. We compute statistics on the annotated questions from different perspectives as follows.\n1) According to the types of target variables, we divide questions into 3 classes: physical unit, chemical formula and chemical equation. Table 2 shows examples belonging to the different question types, and Table 3 shows the distribution of question types.\n2) There are 172 unique predicates, 90 unique units and 25 identified other conditions. We conducted detailed statistics on them in Appendix B." }, { "heading": "2.4 COMPARING WITH OTHER QA DATASETS", "text": "We pick a representative dataset from each type of task to compare with ChemistryQA, including WEBQUESTIONS (Berant et al., 2013), RACE (Lai et al., 2017), ARC (Clark et al., 2018) and MathQA (Amini et al., 2019). We compare these QA datasets along the Answer Type, External Knowledge, Knowledge Usage, Calculation and Annotation perspectives; Table 4 shows the details.\nComparing ChemistryQA with existing QA datasets, ChemistryQA has the following advantages:\n1) ChemistryQA contains more diverse answer types and excludes the influence of randomness by not providing options.\n2) ChemistryQA requires various kinds of knowledge, including a) triplet-like facts, e.g., substances’ molar mass, colour and other physical properties, b) calculation methods relating various physical quantities, and c) domain-specific skills, e.g., balancing chemical equations. The knowledge in ChemistryQA is open and used in various ways, while other datasets use knowledge in a single way.\n3) ChemistryQA only provides triplet-like extraction annotations, which isolate language understanding from domain knowledge as much as possible. This setting makes annotation and model training easier." }, { "heading": "3 METHODS", "text": "We provide two completely different baselines: 1) an end-to-end neural solver and 2) a solving pipeline composed of an extractor and a graph search based solver." }, { "heading": "3.1 END TO END SOLVER", "text": "We build a sequence-to-sequence model whose encoder and decoder are both based on the BERT model. Both the encoder and decoder are loaded from pretrained BERT and share the same vocabulary of more than 30,000 sub-tokens. To build the decoder, we change the encoder’s structure as Vaswani et al. (2017) did: 1) the decoder’s self-attention uses a triangular mask matrix, and 2) there is an extra attention layer that attends over the encoder outputs from the decoder hidden states. 
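The paper implements this model directly; as a rough modern equivalent (our assumption, not the authors' code), a similar construction, with a shared BERT vocabulary, causal decoder self-attention and cross-attention over encoder outputs, can be sketched with HuggingFace's EncoderDecoderModel, which also includes the prediction head described next. The sample question and answer below are made up for illustration.

```python
# Rough sketch of a BERT-to-BERT seq2seq solver; this approximates, rather than
# reproduces, the paper's exact model. Assumes the `transformers` library.
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased")  # decoder gains causal masking + cross-attention
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# Training step: the question feeds the encoder, the answer is the decoder target.
inputs = tokenizer(["What is the molar mass of H2O?"], return_tensors="pt")
labels = tokenizer(["18.02 g/mol"], return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # decoding stops at [SEP] at inference
```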
We also append a next-token prediction head to the decoder, which maps hidden states into the vocabulary-sized space $\mathbb{R}^v$, followed by a softmax layer. The end-to-end solver takes the question as the encoder input and the answer as the decoder target. Questions are split into sub-tokens, and even real numbers are broken into sub-tokens. We greedily choose the next token with the maximum score after softmax. We append a special token, ’[SEP]’, as the end of the target sequence. During inference, the decoding process ends when the decoder outputs the ’[SEP]’ token. This method represents a class of powerful neural networks, which have achieved state-of-the-art performance on many QA tasks." }, { "heading": "3.2 EXTRACTOR PLUS GRAPH SEARCH BASED SOLVER", "text": "As the second baseline, we build an extractor plus solver pipeline. First, we employ the extractor to extract the target variable and conditions from the question text. The target variable and conditions are represented as triplets, as described in the Data Annotation section above. Second, we employ a graph search based solver that takes the triplets as input and executes pre-defined functions in the chemistry domain to get the final answer. Figure 1 shows the structure of the extractor plus solver pipeline." }, { "heading": "3.2.1 FSA BASED EXTRACTOR", "text": "We expect the extractor to take the raw question as input and output the triplet-like target variable and conditions, so a sequence-to-sequence model is a good candidate. However, such a straightforward method hardly works, because triplets are structured and we also want to extract simple semantic information, i.e., whether a triplet is the target variable or a condition, and the types of triplets. Therefore, we design a Finite State Automaton (FSA) to restrict the type of each output token. We define 12 separate vocabulary sets; the table in Appendix C shows these vocabulary names, their sizes and the tokens belonging to them. To distinguish between vocabularies and tokens, we use upper case to represent vocabulary names and lower case to represent tokens. START, END and QC END are control symbols. START and END represent the beginning and end of the decoding sequence, while QC END represents the end of the target variable or of a condition block. PHYSICAL UNIT, CHEMICAL EQUATION, CHEMICAL FORMULA, SUBSTANCE and C OTHER are the types of target variables or conditions. POINTER contains all possible token positions in the question, which can be used to represent the start or end of a text span, e.g., subjects, chemical equations and values.\nFor the model, we employ a standard pretrained BERT-based model as the encoder. We still use 6 transformer layers as the decoder model, with a triangular mask matrix as the attention mask. We use state transitions to represent the relations between output tokens and build an FSA to model the state transition process. The decoder always outputs ”start” as the first token, determines the current state, and chooses the next vocabulary based on the current output and state. Figure 1 also shows an example of the output of the FSA-based extractor.\nIn the training stage, we convert the triplet-like annotations to FSA-based token sequences and treat them as the targets of the extractor. For example, in Figure 1 the target variable, <reaction, equilibrium constant k, ?>, is translated to physical unit 28 28 equilibrium constant k None qc end, and one of the conditions, <reaction, chemical equation, N2(g) + O2(g) → N2O(g)>, is translated to chemical equation 30 34 qc end. 
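Before turning to inference, the sketch below shows one way such state-dependent vocabulary restriction can be enforced during decoding; the state names and token-id sets are illustrative placeholders rather than the paper's full FSA.

```python
# Sketch of FSA-constrained decoding: the current state selects which
# sub-vocabulary the next token may be drawn from, via logit masking.
import torch

TYPE_IDS = {0, 1, 2, 3, 4}             # placeholder ids: the 5 variable/condition types
POINTER_IDS = set(range(5, 105))       # placeholder ids: span positions in the question
PREDICATE_IDS = set(range(105, 277))   # placeholder ids: identified predicates

ALLOWED = {                            # illustrative fragment of the transition table
    "s_start": TYPE_IDS,
    "s_q_pu_subject_start": POINTER_IDS,
    "s_q_pu_predicate": PREDICATE_IDS,
}

def constrained_step(logits: torch.Tensor, state: str) -> torch.Tensor:
    """Mask logits so that only tokens legal in `state` can be emitted."""
    mask = torch.full_like(logits, float("-inf"))
    mask[list(ALLOWED[state])] = 0.0
    return logits + mask  # argmax now picks only from the allowed vocabulary
```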
In the inference stage, we obtain the FSA-based token sequence from the decoder, and then convert it to triplets as the input of the subsequent solver. Since there are no conflicts in the FSA, both directions of the conversion are deterministic and lossless." }, { "heading": "3.2.2 GRAPH SEARCH BASED SOLVER", "text": "We get the target variable and all kinds of conditions from the extractor, and then we want to call a sequence of functions to calculate the final answer step by step. However, we do not have such information, because the annotators only label target variables and conditions, but not the solving processes. According to our observation, specific physical quantities can only be calculated by functions with specific chemistry knowledge. For example, in Figure 1 the equilibrium constant k of the reaction can be calculated by $k = \frac{[N_2O]^2}{[N_2]^2[O_2]}$, where $[\cdot]$ is the molarity of a substance. Therefore, we can implement a function, denoted CalculateK, which takes the molarities of the three substances and the balanced chemical equation as input and calculates k. However, only the moles of the substances are extracted, not their molarities, and the chemical equation is not balanced. Therefore, we need to implement a function, CalculateMolarity, to obtain the molarities of these substances, and a function, BalanceChemicalEquation, to balance the chemical equation before calling CalculateK.\nWe model this process as a search in a hypergraph as follows: 1) We view triplets as nodes in this graph, e.g., <reaction, chemical equation, N2(g) + O2(g) → N2O(g)>. 2) We view pre-built functions as directed hyperedges, e.g., CalculateK, CalculateMolarity and BalanceChemicalEquation. A hyperedge is directed from its input triplets to its output triplets, e.g., the hyperedge CalculateK starts from <N2, molarity, ?>, <O2, molarity, ?>, <N2O, molarity, ?> and <reaction, chemical equation, N2(g) + O2(g) → N2O(g)>, and points to <reaction, k, ?>. 3) The solver maintains a set of triplets without unknown quantities, initialized with the conditions obtained from the extractor. 4) The solver starts searching from the target variable. If a function’s inputs can be satisfied by the current triplets, the solver executes the function and adds the result to the set. Otherwise, the solver searches deeper to check whether each unsatisfied input can be calculated by some function. Table 5 shows the algorithm, where $S$ is the set of triplets without unknown quantities, $I_f$ and $O_f$ are the inputs and outputs of function $f$, $I_u$ is the subset of $I_f$ with $\forall i \in I_u : i \notin S$, and $F_p$ is the set of functions whose output triplets have predicate $p$.\nIn Appendix D, we show several functions with their inputs and outputs as examples. A triplet with a specific predicate can be calculated by more than one function with different inputs, e.g., CalculateMoleFromMass and CalculateMoleFromNumber take the mass and the number of atoms (or molecules) as input, respectively. We do not need to implement functions for all predicate combinations, since a composition of functions may represent the relationship among predicates. For example, we can call CalculateMoleFromMass and CalculateMolarity sequentially to calculate a substance’s molarity given its mass. We implemented 78 functions, which cover 61 predicates in the training set, for a coverage of 35.5%. We list all implemented functions in Appendix D and will publish them once the paper is published.
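A minimal sketch of this depth-limited search (cf. Table 5) is given below; the function objects with required_inputs/run methods and the helper names are our own illustration of the interface, while the real system dispatches to the 78 hand-written chemistry functions.

```python
# Sketch of the graph-search solver: S holds solved triplets; funcs_by_pred maps
# a predicate p to the functions whose output triplet has predicate p.
def lookup(S, query):
    """Find a solved triplet matching the (subject, predicate) of `query`."""
    return next((t for t in S if t.subject == query.subject
                 and t.predicate == query.predicate), None)

def solve(target, S, funcs_by_pred, depth=5):
    """Return a solved triplet for `target` if derivable from S, else None."""
    if depth == 0:
        return None
    for f in funcs_by_pred.get(target.predicate, []):
        inputs, satisfied = [], True
        for req in f.required_inputs(target):   # recurse on unsatisfied inputs
            found = lookup(S, req) or solve(req, S, funcs_by_pred, depth - 1)
            if found is None:
                satisfied = False
                break
            inputs.append(found)
        if satisfied:
            result = f.run(inputs)              # e.g., CalculateK(...)
            S.add(result)                       # new quantity becomes a condition
            return result
    return None
```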
}, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 SETTING", "text": "For the end-to-end solver, we use the pretrained uncased BERT base model to initialize its encoder, and use the first 6 layers of BERT to initialize its decoder. We also reuse the tokenizer from BERT for this model, and both encoder and decoder share the sub-token embedding matrix from BERT base.\nWe tuned the hyper parameters as follows: learning rate ∈ {1×10−5, 5×10−5, 1×10−4, 5×10−5}, epoch ∈ {100, 200, 300, 500} and fix batch size as 8. For the FSA based extractor, we use the same model structure as the end-to-end solver. However, we replace BERT based decoder’s vocabulary to 12 customized FSA based vocabularies. We initialize parameters of the extractor by pretrained uncased BERT base model, except the embeddings of tokens from decoder’s vocabularies. We randomly initialize the embeddings of tokens from decoder’s vocabularies. We also tuned hyper parameters in extractor as exactly same as we did for the end-toend solver. For the graph search based solver, we restrain the maximum depth as 5 when perform searching. We train both the end-to-end solver and the extractor on a single P40 GPU." }, { "heading": "4.2 EVALUATION AND RESULT", "text": "To evaluate the methods, we design different criterion for different types of questions, i.e., physical unit, chemical formula, chemical equation questions. For physical unit questions, if |A−Â|A < 0.05, we treat the answer is correct. For both chemical formula and chemical equation, we remove spaces in both A and  and perform string matching between them. A is the ground true answer and  is the predicted answer.\nFor graph search based solver, we also evaluate the accuracy of the extractor beside the final answer. We evaluate token-level and question-level accuracy for the output sequence from the decoder,\nrespectively calculated by At = 1Nq ∑Nq i ∑Nti j (tj==t̂j)?1:0 Nti and Aq = ∑Nq i (si==ŝi)?1:0 Nq , where t and t̂ are respectively ground true and predicted tokens, s and ŝ are ground true and predicted output sequences, Nq is the number of questions, and Nt is the number of tokens in s. (s == ŝ) is true if and only if (t == t̂) is true at all position in s.\nTable 6 shows the performances of these two methods. We can obtain the following observations:\na) End-to-end solver achieves 0.164 answer accuracy, which surprises us but also implies that the powerful neural network can learn some pattern of questions, including calculating physical quantities, inferring chemical formulas and equations.\nb) The FSA based extractor plus graph search based solver achieves 0.169 answer accuracy even only with 35.5% functions implemented, which implies this framework is effective, and the larger coverage of functions implemented will likely increase the accuracy.\nc) For the extractor, the token-level accuracy is 0.713, but the sequence level accuracy drops to only 0.303. This implies the issue of cascading error is serious, but to improve sequence-level accuracy is very difficult. Thus, the more robust subsequent solver is probably needed." }, { "heading": "4.3 ANALYSIS", "text": "We want to analyze reasons from wrong cases and get ratios of them. First, we know there are about 69.7% wrong cases come from the wrong extraction results, and then we sample 100 wrong cases which have the correct extraction results from the development set for analyzing reasons. 
" }, { "heading": "4.3 ANALYSIS", "text": "We want to analyze the reasons for wrong cases and their proportions. First, about 69.7% of wrong cases come from wrong extraction results. We then sample 100 wrong cases with correct extraction results from the development set to analyze the remaining reasons. Table 7 shows the ratios of the reasons.\nFrom the analysis, we observe that most wrong cases are caused by the lack of some chemical knowledge in other forms, which cannot be handled by the current solver." }, { "heading": "5 RELATED WORK", "text": "We introduce work related to this paper from two perspectives: 1) various kinds of examinations have been employed as benchmarks to evaluate machine intelligence, and 2) some NLP/CV tasks or datasets are solved by both end-to-end neural networks and parsing-plus-execution methods.\nFrom the perspective of examinations, several tasks have been proposed based on question types in specific subjects: 1) Math word problems (Huang et al., 2016; Wang et al., 2017; Amini et al., 2019) can be viewed as a semantic parsing task. Most of them require models to translate question texts into equations or sequences of mathematical operations and then perform simple calculations to get the final answer. However, these datasets do not involve domain knowledge and usually provide strong supervision for the parser, i.e., equations or sequences of operations. 2) Newtonian physics datasets (Sachan & Xing, 2018; Sachan et al., 2018) involve physical knowledge, but the knowledge is narrow and limited to Newtonian physics; these datasets are not public. 3) Elementary and middle school science examinations (Clark, 2015; Clark et al., 2018) contain multiple-choice questions involving open knowledge across various subjects. However, many questions in these datasets can be solved by retrieval and text matching. Although ARC (Clark et al., 2018) separates out a subset of hard questions, the state-of-the-art on ARC is retrieval plus a powerful transformer-based model. 4) The biological processes problem dataset (Berant et al., 2014) provides passages describing biological processes and asks questions about the passages. Different from ours, this dataset focuses more on evaluating machine reading comprehension, as the English reading comprehension dataset (Lai et al., 2017) does.\nFrom the perspective of solving methods, besides examinations, there are several datasets that can be solved by both end-to-end models and parsing-plus-execution methods. For example, WEBQUESTIONS (Berant et al., 2013) is a famous KBQA dataset, and both end-to-end models (Bordes et al., 2014; Hao et al., 2017) and semantic parsers (Yih et al., 2014; Berant & Liang, 2014) have been applied to it. For WEBQUESTIONS, the solving process (i.e., executing SPARQL on Freebase) is fixed after obtaining the parsing result, and a correct parsing result must lead to the correct answer. However, for ChemistryQA, there is more than one path from the extraction result to the correct answer, which requires searching the graph. Another example is the Knowledge Base Completion (KBC) task, which can be solved by both end-to-end knowledge graph embedding models (Bordes et al., 2013) and logical-inference-based methods, e.g., Markov Logic Networks (Wei et al., 2015). However, the input of KBC is not natural language." }, { "heading": "6 CONCLUSION", "text": "Real-world question answering is more complex than existing QA tasks, since it requires not only understanding questions well but also interleaving complex reasoning with knowledge retrieval, which is scarcely represented by existing QA datasets. To foster research in this area, we create the ChemistryQA dataset, which contains chemical calculation problems. 
We implement two baselines, a sequence-to-sequence model and an FSA-based extractor plus a graph search based solver, which stand for two types of methods: end-to-end neural networks and extractor-plus-solver pipelines, respectively. The experimental results show that the extractor-plus-solver baseline can achieve better performance with only 35.5% of the domain functions implemented. Therefore, there is still room for improvement in the extractor-plus-solver method, while it is hard to improve performance for end-to-end models." }, { "heading": "A ANNOTATION EXAMPLES, INTERFACES AND PROCESSES", "text": "A.1 AN EXAMPLE OF ANNOTATION PAGE\nFigure 2 shows an annotation page: the left part is the original web page on socratic.org and the right part is the annotation area.\nA.2 ANNOTATION TYPES\nThe annotation area contains two parts: one for labeling the target variable and the other for labeling conditions. For a chemical calculation problem, there is only one target variable but possibly more than one condition; thus, annotators are free to add condition blocks. Usually, the interfaces for question variables and conditions differ even for the same annotation type. For each variable or condition block, annotators are first asked to choose an annotation type, and then the page shows the corresponding interface. Table 8 shows the interfaces for all annotation types.\nA.3 CROWDSOURCING ANNOTATION DETAILS\nFirst, we estimated the annotation time by recording the time annotators spent labeling a small-scale experimental set; the average times spent on labeling and verifying were 4.65 minutes and 3.25 minutes, respectively. In the early stage, we performed both labeling and verification on each question. After the verified rate became high enough, we kept only the labeling process, which brings the annotation time per question to 4.4 minutes. Finally, we obtained 4,418 annotated questions, spending about 336 hours in total." }, { "heading": "B PREDICATES, UNITS AND OTHERS IN CHEMISTRYQA", "text": "Table 9 shows the top predicates and units.\nWe list all predicates in Table 10, all units in Table 11 and all other conditions in Table 12." }, { "heading": "C FSA VOCABULARIES AND STATE TRANSITIONS", "text": "Table 13 shows the vocabularies used in the extractor’s Finite State Automaton and some tokens belonging to them.\nWe define 26 states for the FSA. 
" }, { "heading": "B PREDICATES, UNITS AND OTHERS IN CHEMISTRYQA", "text": "Table 9 shows the top predicates and units.\nWe list all predicates in Table 10, all units in Table 11, and all other conditions in Table 12." }, { "heading": "C FSA VOCABULARIES AND STATE TRANSITIONS", "text": "Table 13 shows the vocabularies used in the finite state automaton (FSA) of the extractor and some tokens in them.\nWe define 26 states for the FSA. Table 14 shows all the states and Figure 3 shows the state transitions.\nTable 14: States of the FSA (one state per line)
s start
s q pu
s q pu subject start
s q pu subject end
s q pu predicate
s q pu unit
s c pu
s c pu subject start
s c pu subject end
s c pu value start
s c pu value end
s c pu property
s q ce
s q cf
s q end
s c ce
s c cf
s c sub
s c other
s c ce start
s c ce end
s c cf start
s c cf end
s c sub start
s c sub end
s c other type
Figure 3: The state transition graph for the FSA" }, { "heading": "D FUNCTIONS", "text": "Table 15 shows the inputs and outputs of several functions as examples.\nWe also list all the functions we implemented as follows:
Func Name2CE
Func Formula2CE
Func Equation2CE
Func BalanceChemicalEquation
Func Mole2Atom
Func CE2MolarMass
Func Mass2Mole
Func Ph2Kw
Func Number2Mole
Func Mole2Number
Func MassMolar mass2Mole
Func MoleMolar mass2Mass
Func VolumeMolarity2Mole
Func VolumeMoleTemperature2Pressure
Func VolumeTemperaturePressure2Mole
Func PressureMolar massTemperature2Density
Func PressureDensityTemperature2Molar mass
Func MoleMass2Molar mass
Func MoleVolumePressure2Temperature
Func MoleTemperaturePressure2Volume
Func MoleVolume2Molarity
Func MoleVolume2Concentration
Func MolarityVolume2Mole
Func MoleMolarity2Volume
Func Ph2Acid concentration
Func Acid concentration2Ph
Func Ph2Poh
Func Poh2Ph
Func Ka2Pka
Func Mass concentrationMolar mass2Molarity
Func MassVolume2Density
Func DensityVolume2Mass
Func MassDensity2Volume
Func MolarityMolar mass2Molality
Func MolalityMolar mass2Molarity
Func MolarityTemperature2Osmolarity
Func Theoretical yieldPercent yield2Actual yield
Func Actual yieldPercent yield2Theoretical yield
Func Theoretical yieldActual yield2Percent yield
Func Ka2Degree of dissociation
Func 2Freezing point temperature
Func Gauge pressure2Pressure
Func MassVelocity2Kinetic energy
Func KaMolarity2Percent ionization
Func 2Standard atmospheric pressure
Func MolalityMolar mass2Ww
Func WwMolar mass2Molality
Func MassMass2Mass percent
Func MoleMole2Mole percent
Func MolarityMolarity2Molarity percent
Func Poh2Alkali concentration
Func AtomMoleculeMole2Mole
Func MassSpecific heat capacityTemperature2Heat energy
Func Heat energySpecific heat capacityTemperature2Mass
Func Heat energyTemperatureMass2Specific heat capacity
Func Heat energySpecific heat capacityMass2Temperature
Func MassVolume2Mass concentration
Func BptHvapPressurePressure2Bpt
Func Molarity3 2Ka
Func Molarity3 2Kb
Func MolarityDepthAbsorbance2Absorptivity
Func AbsorbanceMolarityAbsorptivity2Depth
Func MolarityDepthAbsorptivity2Absorbance
Func Absorbance2Transmittance
Func Volume2 2Dilution factor
Func ResistanceVoltage2Electric current
Func Electric currentVoltage2Resistance
Func Electric currentResistance2Voltage
Func DensityHeight2Gauge pressure
Func Mass2 Time2HalfLife
Func Heat of fusionMolePower2Melted time
Func ShcMolar mass2Molar heat
Func MoleculeMolarity2Ph
Func Chemistry Equation2K
Func GetCoefficient
Func Chemistry Formula2Oxidation number" } ]
2020
CHEMISTRYQA: A COMPLEX QUESTION ANSWERING
SP:804fada5af8ccfe842706ac812bcc294956b4fb4
[ "The authors present a new representation learning algorithm that trades off between a sufficiency condition (that is, the label should be independent of the input conditioned on the representation) and what they call a \"disentangling\" condition - that the representation vectors should be independent of one another and rotationally invariant. While the first condition has been used to define disentangled representations, the second is not standard. From the condition of rotational invariance, they require that the distribution over representations is isomorphic to a uniform Gaussian. They arrive at a loss with two terms, the first is a distance correlation between labels and representation, and the second is a divergence between the representation and a uniform Gaussian. In this sense, the regularization term looks quite similar to a VAE while the loss term looks quite similar to standard classification losses. The regularization is represented as a maximum over another loss, leading to a GAN-like coupled optimization problem." ]
We propose a novel representation learning approach called sufficient and disentangled representation learning (SDRL). With SDRL, we seek a data representation that maps the input data to a lower-dimensional space with two properties: sufficiency and disentanglement. First, the representation is sufficient in the sense that the original input data is conditionally independent of the response or label given the representation. Second, the representation is maximally disentangled with mutually independent components and is rotation invariant in distribution. We show that such a representation always exists under mild conditions on the input data distribution based on optimal transport theory. We formulate an objective function characterizing conditional independence and disentanglement. This objective function is then used to train a sufficient and disentangled representation with deep neural networks. We provide strong statistical guarantees for the learned representation by establishing an upper bound on the excess error of the objective function and show that it reaches the nonparametric minimax rate under mild conditions. We also validate the proposed method via numerical experiments and real data analysis.
[]
[ { "authors": [ "Alessandro Achille", "Stefano Soatto" ], "title": "Emergence of invariance and disentanglement in deep representations", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Guillaume Alain", "Yoshua Bengio" ], "title": "Understanding intermediate layers using linear classifier probes", "venue": "arXiv preprint arXiv:1610.01644,", "year": 2016 }, { "authors": [ "Alexander A Alemi", "Ian Fischer", "Joshua V Dillon", "Kevin Murphy" ], "title": "Deep variational information bottleneck", "venue": "arXiv preprint arXiv:1612.00410,", "year": 2016 }, { "authors": [ "Syed Mumtaz Ali", "Samuel D Silvey" ], "title": "A general class of coefficients of divergence of one distribution from another", "venue": "Journal of the Royal Statistical Society: Series B (Methodological),", "year": 1966 }, { "authors": [ "Brandon Amos", "J. Zico Kolter" ], "title": "A PyTorch Implementation of DenseNet", "venue": "https://github. com/bamos/densenet.pytorch. Accessed:", "year": 2020 }, { "authors": [ "Martin Anthony", "Peter L Bartlett" ], "title": "Neural Network Learning: Theoretical Foundations", "venue": null, "year": 2009 }, { "authors": [ "M.S. Bartlett" ], "title": "The vector representation of a sample", "venue": "Mathematical Proceedings of the Cambridge Philosophical Society,", "year": 1934 }, { "authors": [ "Peter L Bartlett", "Shahar Mendelson" ], "title": "Rademacher and gaussian complexities: Risk bounds and structural results", "venue": "Journal of Machine Learning Research,", "year": 2002 }, { "authors": [ "Peter L Bartlett", "Nick Harvey", "Christopher Liaw", "Abbas Mehrabian" ], "title": "Nearly-tight vc-dimension and pseudodimension bounds for piecewise linear neural networks", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "Yann Brenier" ], "title": "Polar factorization and monotone rearrangement of vector-valued functions", "venue": "Communications on Pure and Applied Mathematics,", "year": 1991 }, { "authors": [ "Wlodzimierz Bryc" ], "title": "The Normal Distribution: Characterizations with Applications. Lecture Notes in Statistics", "venue": null, "year": 1995 }, { "authors": [ "C. Burgess", "I. Higgins", "A. Pal", "Loic Matthey", "Nick Watters", "G. Desjardins", "Alexander Lerchner" ], "title": "Understanding disentangling in beta-vae", "venue": "arXiv: Machine Learning,", "year": 2018 }, { "authors": [ "Ricky T.Q. Chen", "Xuechen Li", "Roger B Grosse", "David K Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Dennis R. Cook" ], "title": "Regression Graphics: Ideas for Studying Regressions Through Graphics", "venue": "Wiley Series in Probability and Statistics. Wiley,", "year": 1998 }, { "authors": [ "Dennis R. Cook", "S. 
Weisberg" ], "title": "Sliced inverse regression for dimension reduction: comment", "venue": "Journal of the American Statistical Association,", "year": 1991 }, { "authors": [ "R Dennis Cook" ], "title": "Fisher lecture: Dimension reduction in regression", "venue": "Statistical Science,", "year": 2007 }, { "authors": [ "Victor De la Pena", "Evarist Giné" ], "title": "Decoupling: from Dependence to Independence", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Kien Do", "T. Tran" ], "title": "Theory and evaluation metrics for learning disentangled", "venue": "representations. ArXiv,", "year": 2020 }, { "authors": [ "Cian Eastwood", "Christopher K.I. Williams" ], "title": "A framework for the quantitative evaluation of disentangled representations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Ronald A. Fisher" ], "title": "On the mathematical foundations of theoretical statistics", "venue": "Philosophical Transactions of the Royal Society of London, A,", "year": 1922 }, { "authors": [ "Kenji Fukumizu", "Francis R Bach", "Michael I Jordan" ], "title": "Kernel dimension reduction in regression", "venue": "The Annals of Statistics,", "year": 1871 }, { "authors": [ "Yuan Gao", "Yuling Jiao", "Yang Wang", "Yao Wang", "Can Yang", "Shunkang Zhang" ], "title": "Deep generative learning via variational gradient flow", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yuan Gao", "Jian Huang", "Yuling Jiao", "Jin Liu" ], "title": "Learning implicit generative models with theoretical guarantees", "venue": "arXiv preprint arXiv:2002.02862,", "year": 2020 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Balázs Gyenis" ], "title": "Maxwell and the normal distribution: A colored story of probability, independence, and tendency toward equilibrium", "venue": "Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics,", "year": 2017 }, { "authors": [ "Irina Higgins", "Loı̈c Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Irina Higgins", "D. Amos", "D. Pfau", "Sébastien Racanière", "Loı̈c Matthey", "Danilo Jimenez Rezende", "Alexander Lerchner" ], "title": "Towards a definition of disentangled", "venue": "representations. 
ArXiv,", "year": 2018 }, { "authors": [ "Alex Hjelm", "Devon R", "Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "arXiv preprint arXiv:1808.06670,", "year": 2018 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Xiaoming Huo", "Gábor J Székely" ], "title": "Fast computing for distance covariance", "venue": null, "year": 2016 }, { "authors": [ "HM Kabir", "Moloud Abdar", "Seyed Mohammad Jafar Jalali", "Abbas Khosravi", "Amir F Atiya", "Saeid Nahavandi", "Dipti Srinivasan" ], "title": "Spinalnet: Deep neural network with gradual input", "venue": null, "year": 2007 }, { "authors": [ "Amor Keziou" ], "title": "Dual representation of φ-divergences and applications", "venue": "Comptes Rendus Mathématique,", "year": 2003 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical Report TR-2009,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Abhishek Kumar", "Prasanna Sattigeri", "Avinash Balakrishnan" ], "title": "Variational inference of disentangled latent concepts from unlabled observations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yann LeCun", "B. Boser", "J.S. Denker", "D. Henderson", "R.E. Howard", "W. Hubbard", "L.D. Jackel" ], "title": "Backpropagation applied to handwritten zip code recognition", "venue": "Neural Computation,", "year": 1989 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges. Mnist handwritten digit database." ], "title": "URL http://yann", "venue": "lecun. com/exdb/mnist, 7:23, 2010.", "year": 2010 }, { "authors": [ "Ker-Chau Li" ], "title": "Sliced inverse regression for dimension reduction", "venue": "Journal of the American Statistical Association,", "year": 1991 }, { "authors": [ "Tengyuan Liang" ], "title": "On how well generative adversarial networks learn densities: nonparametric and parametric results", "venue": "arXiv: Statistics Theory,", "year": 2018 }, { "authors": [ "Francesco Locatello", "Michael Tschannen", "Stefan Bauer", "Gunnar Rätsch", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Disentangling factors of variation using few labels", "venue": null, "year": 1905 }, { "authors": [ "Alireza Makhzani", "Jonathon Shlens", "Navdeep Jaitly", "Ian Goodfellow" ], "title": "Adversarial autoencoders", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Robert J. 
McCann" ], "title": "Existence and uniqueness of monotone measure-preserving maps", "venue": "Duke Mathematical Journal,", "year": 1995 }, { "authors": [ "XuanLong Nguyen", "Martin J Wainwright", "Michael I Jordan" ], "title": "Estimating divergence functionals and the likelihood ratio by convex risk minimization", "venue": "IEEE Transactions on Information Theory,", "year": 2010 }, { "authors": [ "Sebastian Nowozin", "Botond Cseke", "Ryota Tomioka" ], "title": "f-gan: Training generative neural samplers using variational divergence minimization", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Guido Philippis" ], "title": "Regularity of optimal transport maps and applications, volume 17", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Karl Ridgeway", "Michael C. Mozer" ], "title": "Learning deep disentangled embeddings with the f-statistic", "venue": "loss. ArXiv,", "year": 2018 }, { "authors": [ "Tyrrell R. Rockafellar" ], "title": "Convex analysis", "venue": null, "year": 1970 }, { "authors": [ "Andrew M. Saxe", "Yamini Bansal", "Joel Dapello", "Madhu Advani", "Artemy Kolchinsky", "Brendan D Tracey", "David D Cox" ], "title": "On the information bottleneck theory of deep learning", "venue": "Journal of Statistical Mechanics: Theory and Experiment,", "year": 2019 }, { "authors": [ "Zuowei Shen", "Haizhao Yang", "Shijun Zhang" ], "title": "Deep network approximation characterized by number of neurons", "venue": "arXiv preprint arXiv:1906.05497,", "year": 2019 }, { "authors": [ "Ravid Shwartz-Ziv", "Naftali Tishby" ], "title": "Opening the black box of deep neural networks via information", "venue": "arXiv preprint arXiv:1703.00810,", "year": 2017 }, { "authors": [ "Shashank Singh", "Ananya Uppal", "Boyue Li", "Chun-Liang Li", "Manzil Zaheer", "Barnabás Póczos" ], "title": "Nonparametric density estimation under adversarial losses", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Aravind Srinivas", "Michael Laskin", "Pieter Abbeel" ], "title": "Curl: Contrastive unsupervised representations for reinforcement learning", "venue": "arXiv preprint arXiv:2004.04136,", "year": 2020 }, { "authors": [ "Charles J. Stone" ], "title": "Optimal global rates of convergence for nonparametric regression", "venue": "The Annals of Statistics,", "year": 1982 }, { "authors": [ "Gábor J. Székely", "Maria L. Rizzo" ], "title": "Brownian distance covariance", "venue": "The Annals of Applied Statistics,", "year": 2009 }, { "authors": [ "Gábor J. Székely", "Maria L. Rizzo", "Nail K. Bakirov" ], "title": "Measuring and testing dependence by correlation of distances", "venue": "The Annals of Statistics,", "year": 2007 }, { "authors": [ "Naftali Tishby", "Noga Zaslavsky" ], "title": "Deep learning and the information bottleneck principle", "venue": "In 2015 IEEE Information Theory Workshop (ITW),", "year": 2015 }, { "authors": [ "Naftali Tishby", "Fernando C Pereira", "W. Bialek" ], "title": "The information bottleneck", "venue": "method. 
ArXiv,", "year": 2000 }, { "authors": [ "Michael Tschannen", "Josip Djolonga", "Paul K Rubenstein", "Sylvain Gelly", "Mario Lucic" ], "title": "On mutual information maximization for representation learning", "venue": null, "year": 1907 }, { "authors": [ "Alexandre Tsybakov" ], "title": "Introduction to Nonparametric Estimation", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Cédric Villani" ], "title": "Optimal Transport: Old and New, volume 338", "venue": null, "year": 2008 }, { "authors": [ "Rick Wang", "Amir-Hossein Karimi", "Ali Ghodsi" ], "title": "Distance correlation autoencoder", "venue": "In 2018 International Joint Conference on Neural Networks (IJCNN),", "year": 2018 }, { "authors": [ "Satosi Watanabe" ], "title": "Information theoretical analysis of multivariate correlation", "venue": "IBM Journal of Research and Development,", "year": 1960 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Representation learning is a fundamental problem in machine learning and artificial intelligence (Bengio et al., 2013). Certain deep neural networks are capable of learning effective data representation automatically and achieve impressive prediction results. For example, convolutional neural networks, which can encode the basic characteristics of visual observations directly into the network architecture, is able to learn effective representations of image data (LeCun et al., 1989). Such representations in turn can be subsequently used for constructing classifiers with outstanding performance. Convolutional neural networks learn data representation with a simple structure that captures the essential information through the convolution operator. However, in other application domains, optimizing the standard cross-entropy and least squares loss functions do not guarantee that the learned representations enjoy any desired properties (Alain & Bengio, 2016). Therefore, it is imperative to develop general principles and approaches for constructing effective representations for supervised learning.\nThere is a growing literature on representation learning in the context deep neural network modeling. Several authors studied the internal mechanism of supervised deep learning from the perspective of information theory (Tishby & Zaslavsky, 2015; Shwartz-Ziv & Tishby, 2017; Saxe et al., 2019), where they showed that training a deep neural network that optimizes the information bottleneck (Tishby et al., 2000) is a trade-off between the representation and prediction at each layer. To make the information bottleneck idea more practical, deep variational approximation of information bottleneck (VIB) is considered in Alemi et al. (2016). Information theoretic objectives describing conditional independence such as mutual information are utilized as loss functions to train a representation-learning function, i.e., an encoder in the unsupervised setting (Hjelm et al., 2018; Oord et al., 2018; Tschannen et al., 2019; Locatello et al., 2019; Srinivas et al., 2020). There are several interesting extensions of variational autoencoder (VAE) (Kingma & Welling, 2013) in the form of VAE plus a regularizer, including beta-VAE (Higgins et al., 2017), Annealed-VAE (Burgess et al., 2018), factor-VAE (Kim & Mnih, 2018), beta-TC-VAE (Chen et al., 2018), DIP-VAE (Kumar et al., 2018). The idea of using a latent variable model has also been used in adversarial auto-\nencoders (AAE) (Makhzani et al., 2016) and Wasserstein auto-encoders (WAE) (Tolstikhin et al., 2018). However, these existing works focus on the unsupervised representation learning.\nA challenge of supervised representation learning that distinguishes it from standard supervised learning is the difficulty in formulating a clear and simple objective function. In classification, the objective is clear, which is to minimize the number of misclassifications; in regression, a least squares criterion for model fitting error is usually used. In representation learning, the objective is different from the ultimate objective, which is typically learning a classifier or a regression function for prediction. How to establish a simple criterion for supervised presentation learning has remained an open question (Bengio et al., 2013).\nWe propose a sufficient and disentangled representation learning (SDRL) approach in the context of supervised learning. 
With SDRL, we seek a data representation with two characteristics: sufficiency and disentanglement. In the context of representation learning, sufficiency means that a good representation should preserve all the information in the data about the supervised learning task. This is a basic requirement and a long-standing principle in statistics, closely related to the fundamental concept of sufficient statistics in parametric statistical models (Fisher, 1922). A sufficient representation can be naturally characterized by the conditional independence principle, which stipulates that, given the representation, the original input data contains no additional information about the response variable.\nIn addition to the basic sufficiency property, the representation should have a simple statistical structure. Disentangling is based on the general notion that some latent causes underlie the data generation process: although the observed data are typically high-dimensional, complex and noisy, the underlying factors are low-dimensional, independent and have a relatively simple statistical structure. There is a range of definitions of disentangling (Higgins et al., 2018; Eastwood & Williams, 2018; Ridgeway & Mozer, 2018; Do & Tran, 2020), and several metrics have been proposed for the evaluation of disentangling. However, none of these definitions and metrics has been turned into an empirical criterion and algorithm for learning disentangled representations. We adopt a simple definition of disentangling which defines a representation to be disentangled if its components are independent (Achille & Soatto, 2018). This definition requires the representation to be maximally disentangled in the sense that the total correlation is zero, where the total correlation is defined as the KL divergence between the joint distribution of g(x) and the product of the marginal distributions of its components (Watanabe, 1960).\nIn the rest of the paper, we first discuss the motivation and the theoretical framework for learning a sufficient and disentangled representation map (SDRM). This framework leads to the formulation of an objective function based on the conditional independence principle and the metric for disentanglement and invariance adopted in this work. We estimate the target SDRM based on the sample version of the objective function using deep neural networks and develop an efficient algorithm for training the SDRM. We establish an upper bound on the measure of conditional independence and disentanglement and show that it reaches the nonparametric minimax rate under mild regularity conditions. This result provides strong statistical guarantees for the proposed method. We validate the proposed SDRL via numerical experiments and real data examples." }, { "heading": "2 SUFFICIENT AND DISENTANGLED REPRESENTATION", "text": "Consider a pair of random vectors (x, y) ∈ R^p × R^q, where x is a vector of input variables and y is a vector of response variables or labels. Our goal is to find a sufficient and disentangled representation of x.\nSufficiency. We say that a measurable map g : R^p → R^d with d ≤ p is a sufficient representation of x if
y ⊥⊥ x | g(x),   (1)
that is, y and x are conditionally independent given g(x). This condition holds if and only if the conditional distribution of y given x and that of y given g(x) are equal. Therefore, the information in x about y is completely encoded by g(x). Such a g always exists, since if we simply take g(x) = x, then (1) holds trivially.
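For a concrete instance of (1), consider the single-index model, a standard example from the sufficient dimension reduction literature used here only as an illustration:
y = f(β^⊤x) + ε,  ε ⊥⊥ x,
where f : R → R and β ∈ R^p. Then the one-dimensional map g(x) = β^⊤x is a sufficient representation: conditionally on β^⊤x, the input x carries no further information about y, so y ⊥⊥ x | g(x).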
This formulation is a nonparametric generalization of the basic condition in sufficient dimension reduction (Li, 1991; Cook, 1998), where it is assumed that g(x) = B^⊤x with B ∈ R^{p×d} belonging to the Stiefel manifold, i.e., B^⊤B = I_d.\nDenote the class of sufficient representations satisfying (1) by
F = {g : R^p → R^d, g measurable and satisfying y ⊥⊥ x | g(x)}.
We refer to F as a Fisher class because of its close connection with the concept of sufficient statistics (Fisher, 1922; Cook, 2007). For an injective measurable transformation T : R^d → R^d and g ∈ F, T ◦ g(x) is also sufficient by the basic properties of conditional probability. Therefore, the Fisher class F is invariant in the sense that
T ◦ F = F, provided T is injective,
where T ◦ F = {T ◦ g : g ∈ F}. An important class of transformations is the class of affine transformations, T ◦ g = Ag + b, where A is a d × d nonsingular matrix and b ∈ R^d.\nDisentanglement. We focus on the disentangled representations among those that are sufficient. Therefore, we start from the functions of the input data that are sufficient representations in the Fisher class F. For any sufficient and disentangled representation g(x), let Σ_g = Var(g(x)). Since the components of g(x) are disentangled in the sense that they are independent, Σ_g is a diagonal matrix, and thus Σ_g^{−1/2} g(x) also has independent components. Therefore, we can always rescale g(x) so that it has the identity covariance matrix. To further simplify the statistical structure of a representation g, we also require it to be rotation invariant in distribution, that is, Qg(x) = g(x) in distribution for any orthogonal matrix Q ∈ R^{d×d}. The Fisher class F is rotation invariant in terms of conditional independence, but not all its members are rotation invariant in distribution. By the Maxwell characterization of the Gaussian distributions (Maxwell, 1860; Bartlett, 1934; Bryc, 1995; Gyenis, 2017), a random vector of dimension two or more with independent components is rotation invariant in distribution if and only if it is Gaussian with zero mean and a spherical covariance matrix. Therefore, after absorbing the scaling factor, for a sufficient representation map to be disentangled and rotation invariant, it is necessarily distributed as N_d(0, I_d). Let M be the Maxwell class of functions g : R^p → R^d such that g(x) is disentangled and rotation invariant in distribution. By the Maxwell characterization, we can write
M = {g : R^p → R^d, g(x) ∼ N(0, I_d)}.   (2)
Now our problem becomes that of finding a representation in F ∩ M, the intersection of the Fisher class and the Maxwell class.\nThe first question to ask is whether such a representation exists. The following result from optimal transport theory provides an affirmative answer and guarantees that F ∩ M is nonempty under mild conditions (Brenier, 1991; McCann, 1995; Villani, 2008).
Lemma 2.1. Let µ be a probability measure on R^d. Suppose it has a finite second moment and is absolutely continuous with respect to the standard Gaussian measure, denoted by γ_d. Then it admits a unique optimal transportation map T : R^d → R^d such that T#µ = γ_d ≡ N(0, I_d), where T#µ denotes the pushforward distribution of µ under T. Moreover, T is injective µ-almost everywhere.
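For d = 1, this transport map has an explicit form (a standard fact recorded here only for illustration): if F_µ denotes the CDF of µ and Φ the standard normal CDF, then
T(z) = Φ^{−1}(F_µ(z))
is increasing, hence injective, and satisfies T#µ = N(0, 1); it is precisely the monotone rearrangement of Brenier (1991) and McCann (1995) in one dimension.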
Denote the law of a random vector z by µ_z. Lemma 2.1 implies that, for any g ∈ F with E‖g(x)‖² < ∞ and µ_{g(x)} absolutely continuous with respect to γ_d, there exists a map T* transforming the distribution of g(x) to N(0, I_d). Therefore, R* := T* ◦ g ∈ F ∩ M, that is,
x ⊥⊥ y | R*(x) and R*(x) ∼ N(0, I_d),   (3)
i.e., R* is a sufficient and disentangled representation map (SDRM)." }, { "heading": "3 OBJECTIVE FUNCTION FOR SDRL", "text": "The above discussion lays the theoretical foundation for formulating an objective function that can be used for constructing an SDRM R* satisfying (3), or equivalently, R* ∈ F ∩ M. Let V be a measure of dependence between random variables x and y with the following properties: (a) V[x, y] ≥ 0, with V[x, y] = 0 if and only if x ⊥⊥ y; (b) V[x, y] ≥ V[R(x), y] for every measurable function R; (c) V[x, y] = V[R*(x), y] if and only if R* ∈ F. Properties (a)-(c) imply that
R* ∈ F ⇔ R* ∈ arg max_R V[R(x), y] = arg min_R {−V[R(x), y]}.
We use a divergence measure D to quantify the difference between µ_{R(x)} and γ_d; any measure satisfying D(µ_{R(x)}‖γ_d) ≥ 0 for all measurable functions R, with D(µ_{R(x)}‖γ_d) = 0 if and only if R ∈ M, will do. Then the problem of finding an R* ∈ F ∩ M can be expressed as a constrained minimization problem:
arg min_R −V[R(x), y] subject to D(µ_{R(x)}‖γ_d) = 0.
Its Lagrangian form is
L(R) = −V[R(x), y] + λ D(µ_{R(x)}‖γ_d),   (4)
where λ ≥ 0 is a tuning parameter that balances the sufficiency property and the disentanglement constraint. A small λ leads to a representation with more emphasis on sufficiency, while a large λ yields a representation with more emphasis on disentanglement. We show in Theorem 4.2 below that any R* satisfying (3) is a minimizer of L(R). Therefore, we can train an SDRM by minimizing the empirical version of L(R). There are several options for V with properties (a)-(c). For example, we can take V to be the mutual information V[R(x), y] = I(R(x); y). However, in addition to the estimation of the SDRM R, this choice requires estimating the density ratio between p(y, R(x)) and p(y)p(R(x)), which is not an easy task. We could also use the conditional covariance operators on reproducing kernel Hilbert spaces (Fukumizu et al., 2009). In this work we use the distance covariance (Székely et al., 2007) between y and R(x), which has an elegant U-statistic expression, does not involve additional unknown quantities, and is easy to compute. For the divergence measure between two distributions, we use the f-divergence (Ali & Silvey, 1966), which includes the KL-divergence as a special case." }, { "heading": "4 LEARNING SUFFICIENT AND DISENTANGLED REPRESENTATION", "text": "We first describe some essentials about the distance covariance and the f-divergence.\nDistance covariance. We first recall the concept of distance covariance (Székely et al., 2007), which characterizes the dependence of two random variables.\nLet i denote the imaginary unit (−1)^{1/2}. For any t ∈ R^d and s ∈ R^q, let ψ_z(t) = E[exp(i t^⊤z)], ψ_y(s) = E[exp(i s^⊤y)], and ψ_{z,y}(t, s) = E[exp(i(t^⊤z + s^⊤y))] be the characteristic functions of the random vectors z ∈ R^d and y ∈ R^q and of the pair (z, y), respectively. The squared distance covariance V[z, y] is defined as
V[z, y] = ∫_{R^{d+q}} |ψ_{z,y}(t, s) − ψ_z(t)ψ_y(s)|² / (c_d c_q ‖t‖^{d+1}‖s‖^{q+1}) dt ds, where c_d = π^{(d+1)/2}/Γ((d+1)/2).
Given n i.i.d. copies {z_i, y_i}_{i=1}^n of (z, y), an unbiased estimator of V is the empirical distance covariance V̂_n, which can be elegantly expressed as a U-statistic (Huo & Székely, 2016)
V̂_n[z, y] = (C_n^4)^{−1} ∑_{1≤i_1<i_2<i_3<i_4≤n} h((z_{i_1}, y_{i_1}), ..., (z_{i_4}, y_{i_4})),   (5)
where h is the kernel defined by
h((z_1, y_1), ..., (z_4, y_4)) = (1/4) ∑_{1≤i,j≤4, i≠j} ‖z_i − z_j‖‖y_i − y_j‖ − (1/4) ∑_{i=1}^{4} ( ∑_{1≤j≤4, j≠i} ‖z_i − z_j‖ )( ∑_{1≤j≤4, j≠i} ‖y_i − y_j‖ ) + (1/24) ( ∑_{1≤i,j≤4, i≠j} ‖z_i − z_j‖ )( ∑_{1≤i,j≤4, i≠j} ‖y_i − y_j‖ ).
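For concreteness, the U-statistic (5) can be evaluated in O(n²) time via the equivalent U-centering identity of Huo & Székely (2016), instead of enumerating all C_n^4 quadruples. Below is a minimal NumPy sketch (our own illustration; the function names are ours, not from the authors' released code), together with the sample distance correlation used later in Section 6:

```python
import numpy as np

def _u_center(D):
    # U-centering of a pairwise distance matrix (Huo & Szekely, 2016):
    # A_ij = D_ij - row_i/(n-2) - row_j/(n-2) + total/((n-1)(n-2)), A_ii = 0.
    n = D.shape[0]
    row = D.sum(axis=1, keepdims=True)
    A = D - row / (n - 2) - row.T / (n - 2) + D.sum() / ((n - 1) * (n - 2))
    np.fill_diagonal(A, 0.0)
    return A

def dcov_hat(z, y):
    """Unbiased estimator of V[z, y] in (5); z is (n, d), y is (n, q), n >= 4."""
    n = z.shape[0]
    Dz = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    Dy = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    return (_u_center(Dz) * _u_center(Dy)).sum() / (n * (n - 3))

def dcor_squared(z, y):
    # Sample version of the distance correlation rho^2 reported in Section 6.
    vzy, vz, vy = dcov_hat(z, y), dcov_hat(z, z), dcov_hat(y, y)
    return vzy ** 2 / np.sqrt(vz ** 2 * vy ** 2) if vz > 0 and vy > 0 else 0.0
```

For independent z and y, dcov_hat fluctuates around zero (the estimator is unbiased for V = 0), which makes it a convenient sanity check before plugging a differentiable version of the same computation into the training objective introduced below.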
f-divergence. Let µ and γ be two probability measures on R^d. The f-divergence (Ali & Silvey, 1966) between µ and γ with µ ≪ γ is defined as D_f(µ‖γ) = ∫_{R^d} f(dµ/dγ) dγ, where f : R^+ → R is a differentiable convex function satisfying f(1) = 0. Let f* be the Fenchel conjugate of f (Rockafellar, 1970), defined as f*(t) = sup_{x∈R} {tx − f(x)}, t ∈ R. The f-divergence admits the following variational formulation (Keziou, 2003; Nguyen et al., 2010; Nowozin et al., 2016).
Lemma 4.1.
D_f(µ‖γ) = max_{D : R^d → dom(f*)} E_{z∼µ}[D(z)] − E_{w∼γ}[f*(D(w))],   (6)
where the maximum is attained at D(z) = f′((dµ/dγ)(z)).
Commonly used divergence measures include the Kullback-Leibler (KL) divergence, the Jensen-Shannon (JS) divergence and the χ²-divergence.
Learning the SDRM. We are now ready to formulate an empirical objective function for learning the SDRM R*. Let R ∈ M, where M is the Maxwell class defined in (2). By the variational formulation (6), we can write the population version of the objective function (4) as
L(R) = −V[R(x), y] + λ max_{D : R^d → dom(f*)} {E_{x∼µ_x}[D(R(x))] − E_{w∼γ_d}[f*(D(w))]}.   (7)
This expression is convenient since we can simply replace the expectations by the corresponding empirical averages.
Theorem 4.2. We have R* ∈ arg min_{R∈M} L(R) provided (3) holds.
According to Theorem 4.2, it is natural to estimate R* based on the empirical version of the objective function (7) when a random sample {(x_i, y_i)}_{i=1}^n is available. We estimate R* using deep neural networks. We employ two networks as follows:
• Representer network R_θ: This network is used for training R*. Let R be the set of such neural networks R_θ : R^p → R^d.
• Discriminator network D_φ: This network is used as the witness function for checking whether the distribution of the estimator of R* is approximately the same as N(0, I_d). Similarly, denote by D the set of such neural networks D_φ : R^d → R.
Let {w_i}_{i=1}^n be n i.i.d. random vectors drawn from γ_d. The estimated SDRM is defined by
R̂_θ ∈ arg min_{R_θ∈R} L̂(R_θ) = −V̂_n[R_θ(x), y] + λ D̂_f(µ_{R_θ(x)}‖γ_d),   (8)
where V̂_n[R_θ(x), y] is the unbiased and consistent estimator of V[R_θ(x), y] defined in (5), based on {(R_θ(x_i), y_i), i = 1, ..., n}, and
D̂_f(µ_{R_θ(x)}‖γ_d) = max_{D_φ∈D} (1/n) ∑_{i=1}^n [D_φ(R_θ(x_i)) − f*(D_φ(w_i))].   (9)
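To make (9) concrete, take the KL-divergence, f(u) = u log u, whose Fenchel conjugate is f*(t) = exp(t − 1). A minimal PyTorch-style sketch of the resulting discriminator objective is given below (our own illustration; module and variable names are assumptions rather than the authors' released code):

```python
import torch

def f_star_kl(t):
    # Fenchel conjugate of f(u) = u*log(u): f*(t) = exp(t - 1).
    return torch.exp(t - 1.0)

def neg_variational_objective(D_phi, z, w):
    """Negative empirical objective in (9) for the KL case.

    z: (n, d) representations R_theta(x_i), detached for this step;
    w: (n, d) reference draws from N(0, I_d).
    Minimizing this over the parameters of D_phi solves the inner max in (9).
    """
    return -(D_phi(z).mean() - f_star_kl(D_phi(w)).mean())
```

At the population maximizer, Lemma 4.1 gives D(z) = f′(r(z)) = log r(z) + 1 with r = dµ_{R_θ(x)}/dγ_d, so the fitted discriminator is, up to an additive constant, a log density ratio, which is exactly the quantity driving the particle updates in Section 5.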
Statistical guarantee. Since an SDRM R* is only identifiable up to orthogonal transforms under the constraint that R*(x) ∼ N(0, I_d), no consistency results for R̂_θ itself can be obtained. But this is not a flaw of the proposed method. Indeed, the most important statistical guarantee for the learned R* is that the objective of conditional independence and disentanglement is achieved. Therefore, we establish an upper bound on the excess risk L(R̂_θ) − L(R*) of the deep nonparametric estimator R̂_θ in (8). We make the following assumptions.
(A1) For any ε > 0, there is a constant B_1 > 0 such that µ_x([−B_1, B_1]^p) > 1 − ε, and R* is Lipschitz continuous on [−B_1, B_1]^p with Lipschitz constant L_1.
(A2) For R ∈ M, we assume r(z) = (dµ_{R(x)}/dγ_d)(z) is Lipschitz continuous on [−B_1, B_1]^p with Lipschitz constant L_2, and 0 < c_1 ≤ r(z) ≤ c_2.
Denote B_2 = max{|f′(c_1)|, |f′(c_2)|} and B_3 = max_{|s| ≤ 2L_2√d log n + B_2} |f*(s)|.
The specifications of the network parameters, including the depth, width, size and the supremum norm over the domains of the representer R_θ and the discriminator D_φ, are given in Appendix B.
Theorem 4.3. Suppose λ > 0 and λ = O(1). Suppose conditions (A1)-(A2) hold and set the network parameters according to (i)-(ii). Then
E_{{(x_i, y_i, w_i)}_{i=1}^n} [L(R̂_θ) − L(R*)] ≤ C((L_1 + L_2)√(dp) n^{−2/(2+p)} + L_2 √d (log n) n^{−2/(2+d)}),
where C is a constant that depends on B_1, B_2 and B_3 but not on n, q, p and d.
The proof of this theorem is given in Appendix B. The result established in Theorem 4.3 provides strong statistical guarantees for the proposed method. The rate n^{−2/(2+p)} matches the minimax nonparametric estimation rate for the class of Lipschitz functions on R^p (Stone, 1982; Tsybakov, 2008). Up to a log n factor, the rate (log n) n^{−2/(2+d)} matches the minimax rate of nonparametric estimation of Lipschitz densities via GANs (Singh et al., 2018; Liang, 2018)." }, { "heading": "5 COMPUTATION", "text": "We can update θ and φ alternately as in training GANs (Goodfellow et al., 2014). However, this approach suffers from instability issues. In our implementation, we utilize the more stable particle method based on gradient flow (Gao et al., 2019; 2020). The key idea is to find a sequence of nonlinear but simple residual maps, say T(z) = z + s v(z), pushing the samples from µ_{R_θ(x)} to the target distribution γ_d along the velocity field v(z) = −∇f′(r(z)) that most decreases the f-divergence D_f(·‖γ_d) at µ_{R_θ(x)}. The residual maps can be estimated via deep density-ratio estimators and take the form T(z) = z + s v̂(z), z ∈ R^d, where s is a step size and v̂(z) = −f″(r̂(z))∇r̂(z). Here r̂(z) is an estimated density ratio of the density of R_θ(x), at the current value of θ, over the density of the reference distribution. We use T to transform z_i = R_θ(x_i), i = 1, ..., n, into Gaussian samples. Once this is done, we update θ by minimizing the loss −V̂_n[R_θ(x), y] + λ ∑_{i=1}^n ‖R_θ(x_i) − z_i‖²/n. We describe the algorithm below; a code sketch follows the list.
• Input {x_i, y_i}_{i=1}^n. Tuning parameters: s, λ, d. Sample {w_i}_{i=1}^n ∼ γ_d.
• Outer loop for θ
– Inner loop (particle method)
∗ Let z_i = R_θ(x_i), i = 1, 2, ..., n.
∗ Solve D̂_φ ∈ arg min_{D_φ} ∑_{i=1}^n (1/n) ( log(1 + exp(D_φ(z_i))) + log(1 + exp(−D_φ(w_i))) ).
∗ Define the residual map T(z) = z − s f″(r̂(z))∇r̂(z) with r̂(z) = exp(−D̂_φ(z)).
∗ Update the particles z_i = T(z_i), i = 1, 2, ..., n.
– End inner loop
– Update θ by minimizing −V̂_n[R_θ(x), y] + λ ∑_{i=1}^n ‖R_θ(x_i) − z_i‖²/n using SGD.
• End outer loop
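The steps above translate almost line-for-line into PyTorch. The sketch below is our own rendering under the KL choice f(u) = u log u, for which −f″(r̂)∇r̂ = ∇D̂_φ when r̂ = exp(−D̂_φ); all names are ours, not the authors' released code:

```python
import torch
import torch.nn.functional as F

def dcov_hat_torch(z, y):
    # Differentiable version of the unbiased V_hat_n (cf. the NumPy sketch in
    # Section 4); z is (n, d), y is (n, q), n >= 4.
    n = z.shape[0]
    Dz, Dy = torch.cdist(z, z), torch.cdist(y, y)
    def uc(D):
        row = D.sum(dim=1, keepdim=True)
        A = D - row / (n - 2) - row.t() / (n - 2) + D.sum() / ((n - 1) * (n - 2))
        return A - torch.diag_embed(torch.diagonal(A))   # zero the diagonal
    return (uc(Dz) * uc(Dy)).sum() / (n * (n - 3))

def train_step(R_theta, D_phi, opt_R, opt_D, x, y, d, lam=1.0, step=1.0, inner=1):
    """One outer iteration of the particle-method algorithm above (a sketch)."""
    n = x.shape[0]
    w = torch.randn(n, d)                  # reference sample {w_i} from N(0, I_d)
    z = R_theta(x).detach()                # particles z_i = R_theta(x_i)

    for _ in range(inner):                 # inner loop: Gaussianize the particles
        # logistic loss from the algorithm, fitting the density-ratio witness
        d_loss = F.softplus(D_phi(z)).mean() + F.softplus(-D_phi(w)).mean()
        opt_D.zero_grad(); d_loss.backward(); opt_D.step()

        # residual map T(z) = z - s f''(r_hat(z)) grad r_hat(z); for the KL
        # generator this simplifies to T(z) = z + s * grad D_phi(z)
        zg = z.clone().requires_grad_(True)
        (grad_D,) = torch.autograd.grad(D_phi(zg).sum(), zg)
        z = (z + step * grad_D).detach()

    # outer update of theta: distance covariance plus particle-matching penalty
    loss = -dcov_hat_torch(R_theta(x), y) + lam * ((R_theta(x) - z) ** 2).sum() / n
    opt_R.zero_grad(); loss.backward(); opt_R.step()
```

For other choices of f, one would keep the generic update z − s f″(r̂(z))∇r̂(z); the step size and the number of inner iterations correspond to the hyperparameters s and T_1 reported in Appendix A.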
" }, { "heading": "6 EXPERIMENTS", "text": "We evaluate the proposed SDRL with the KL-divergence using both simulated and real data. The goal of our experiments is to demonstrate that representations trained with the proposed method perform well. The proposed method does not learn a classifier or a regression function directly; rather, it learns a representation that preserves all the relevant information. Our experiments are therefore designed to evaluate the performance of simple classification and regression methods that use the learned representations as input. The results demonstrate that a simple classification or regression model using the representations we trained performs better than, or comparably with, the best classification or regression method using deep neural networks.\nDetails on the network structures and hyperparameters are included in Appendix A. Our experiments were conducted on an Nvidia DGX Station workstation using a single Tesla V100 GPU unit. The PyTorch code of SDRL is available at https://github.com/anonymous/SDRL." }, { "heading": "6.1 SIMULATED DATA", "text": "In this subsection, we evaluate SDRL on simulated regression and classification problems.\nRegression. We generate 5,000 data points from two models. Model A: y = x_1[0.5 + (x_2 + 1.5)²]^{−1} + (1 + x_2)² + σε, where x ∼ N(0, I_4); Model B: y = sin²(πx_1 + 1) + σε, where x ∼ Uniform([0, 1]^4). In both models, ε ∼ N(0, 1). We use a 3-layer network with ReLU activations for R_θ and a single-hidden-layer ReLU network for D_φ. We compare SDRL with two prominent sufficient dimension reduction methods: sliced inverse regression (SIR) (Li, 1991) and sliced average variance estimation (SAVE) (Cook & Weisberg, 1991). We fit a linear model with the learned features and the response variable, and report the prediction errors in Table 1. We see that SDRL outperforms SIR and SAVE in terms of prediction error.\nClassification. We visualize the learned features of SDRL on three simulated datasets. We first generate (1) 2-dimensional concentric circles from two classes as in Figure 1 (a); (2) 2-dimensional moons data from two classes as in Figure 1 (e); (3) 3-dimensional Gaussian data from six classes as in Figure 1 (i). In each dataset, we generate 5,000 data points for each class. We next map the data into a 100-dimensional space using matrices with entries i.i.d. Uniform([0, 1]). Finally, we apply SDRL to these 100-dimensional datasets to learn 2-dimensional features. We use a 10-layer dense convolutional network (DenseNet) (Huang et al., 2017) as R_θ and a 4-layer network as D_φ. We display the evolution of the learned 2-dimensional features in Figure 1. For ease of visualization, we push all the distributions onto the uniform distribution on the unit circle, which is done by normalizing the standard Gaussian random vectors to length one. Clearly, the learned features for the different classes in these examples are well disentangled." }, { "heading": "6.2 REAL DATASETS", "text": "Regression. We use the benchmark YearPredictionMSD dataset to demonstrate the prediction performance of SDRL (https://archive.ics.uci.edu/ml/datasets/YearPredictionMSD). This dataset has 515,345 observations with 90 predictors. The problem is to predict the year of song release. We randomly split the data into five parts for cross-validated evaluation of the prediction performance. We employ a 3-layer network for both D_φ and R_θ. A linear regression model is fitted using the learned representations and the response. The mean prediction errors and their standard errors for SDRL, principal component analysis (PCA), sparse principal component analysis (SPCA) and ordinary least squares (OLS) regression with the original data are reported in Table 2. SDRL outperforms PCA, SPCA and OLS in terms of prediction accuracy.\nClassification. We benchmark the classification performance of SDRL on MNIST (LeCun et al., 2010), FashionMNIST (Xiao et al., 2017), and CIFAR-10 (Krizhevsky et al., 2009) against alternative methods including convolutional networks (CN) and the distance correlation autoencoder (dCorAE) (Wang et al., 2018). For CN, we use the feature extractor obtained by dropping the cross-entropy layer of the DenseNet trained for classification. The MNIST and FashionMNIST datasets consist of 60k training and 10k testing grayscale images with 28 × 28 pixels, while the CIFAR-10 dataset contains 50k training and 10k testing colored images with 32 × 32 pixels. For the learning-from-scratch strategy, the representer network R_θ has 20 layers for the MNIST data and 100 layers for the CIFAR-10 data. We apply the transfer learning technique to the combination of SDRL and CN on CIFAR-10 (Krizhevsky et al., 2009).
The pretrained WideResnet-101 model (Zagoruyko & Komodakis, 2016), trained on the ImageNet dataset, with Spinal FC (Kabir et al., 2020) is adopted for R_θ. The discriminator network D_φ is a 4-layer network. The architecture of R_θ and most hyperparameters are shared across all four methods: SDRL, CN, SDRL+CN and dCorAE. Finally, we use the k-nearest neighbor (k = 5) classifier on the learned features for all methods.\nThe classification accuracies are reported in Tables 3 and 4. The classification accuracies of SDRL are comparable with those of CN and dCorAE. As shown in Table 4, the classification accuracies of CN leveraging SDRL outperform those of CN alone. We also calculate the estimated distance correlation (DC) between the learned features and their labels as ρ²_{z,y} = V[z, y]² / √(V[z]² × V[y]²), where V[z] and V[y] are the distance variances, V[z] = V[z, z] and V[y] = V[y, y]. For more details, see Székely et al. (2007). Figure 2 shows the DC values for the MNIST, FashionMNIST and CIFAR-10 data. SDRL and SDRL+CN achieve higher DC values." }, { "heading": "7 CONCLUSION AND FUTURE WORK", "text": "In this work, we formulate a framework for sufficient and disentangled representation learning and construct an objective function characterizing conditional independence and disentanglement. This enables us to learn a representation with the desired properties empirically. We provide statistical guarantees for the learned representation by deriving an upper bound on the excess risk of the objective function.\nThere are several questions that deserve further study. First, we can adopt different measures of conditional independence, including mutual information and conditional covariance operators on reproducing kernel Hilbert spaces (Fukumizu et al., 2009). We can also use other divergence measures, such as the Wasserstein distance, in the objective function. Finally, Lemma 2.1 suggests that the intersection of the Fisher class F and the Maxwell class M can still be large, and there can be many statistically equivalent representations in F ∩ M. We can make a further reduction of F ∩ M by imposing additional constraints, for example, certain minimality properties, sparsity, and robustness against noise perturbation." }, { "heading": "A APPENDIX: EXPERIMENTAL DETAILS", "text": "" }, { "heading": "A.1 SIMULATION STUDIES", "text": "The values of the hyperparameters for the simulated experiments are given in Table A1, where λ is the penalty parameter, d is the dimension of the SDRM, n is the mini-batch size in SGD, T_1 is the number of inner loops used to push forward the particles z_i, T_2 is the number of outer loops for training R_θ, and s is the step size used to update the particles. For the regression models, the neural network architectures are shown in Table A2.\nAs shown in Table A3, a multilayer perceptron (MLP) is utilized for D_φ in the classification problems. The detailed architecture of the 10-layer dense convolutional network (DenseNet) (Huang et al., 2017; Amos & Kolter) deployed for R_θ is shown in Table A4.
For all the settings, we adopted the Adam (Kingma & Ba, 2014) optimizer with an initial learning rate of 0.001 and weight decay of 0.0001.\nTable A1: Hyper-parameters for simulated examples, where s varies according to epoch\ns\nTask λ d n T1 T2 0-150 151-225 226-500\nRegression 1.0 2 or 1 64 1 500 3.0 2.0 1.0 Classification 1.0 2 64 1 500 2.0 1.5 1.0\nTable A2: MLP architectures for Dφ and Rθ in regression\nDφ Rθ\nLayers Details Output size Details Output size\nLayer 1 Linear, LeakyReLU 16 Linear, LeakyReLU 16 Layer 2 Linear 1 Linear, LeakyReLU 8 Layer 3 Linear d\nTable A3: MLP architecture for Dφ of simulated classification examples and the benchmark classification datasets\nLayers Details Output size\nLayer 1 Linear, LeakyReLU 64 Layer 2 Linear, LeakyReLU 128 Layer 3 Linear, LeakyReLU 64 Layer 4 Linear 1\nTable A4: DenseNet architecture for Rθ in the simulated classification examples\nLayers Details Output size\nConvolution 3× 3 Conv 24× 20× 20 Dense Block 1 [\nBN, 1× 1 Conv BN, 3× 3 Conv\n] × 1 36× 20× 20\nTransition Layer 1 BN, ReLU, 2× 2 Average Pool,1× 1 Conv 30× 10× 10 Dense Block 2 [\nBN, 1× 1 Conv BN, 3× 3 Conv\n] × 1 18× 10× 10\nTransition Layer 2 BN, ReLU, 2× 2 Average Pool, 1× 1 Conv 15× 5× 5 Dense Block 3 [\nBN, 1× 1 Conv BN, 3× 3 Conv\n] × 1 27× 5× 5\nPooling BN, ReLU, 5× 5 Average Pool, Reshape 27 Fully connected Linear 2" }, { "heading": "A.2 REAL DATASETS", "text": "Regression: In the regression problems, hyper-parameters are presented in Table A5. The Adam optimizer with an initial learning rate of 0.001 and weight decay of 0.0001 is adopted. The MLP architectures of Dφ and Rθ for the YearPredictionMSD data are shown in Table A6.\nTable A5: Hyper-parameters for YearPredictionMSD data\nDataset λ d n T1 T2 s\nYearPredictionMSD 1.0 10, 20, 30, 40 64 1 500 1.0\nTable A6: MLP architectures for Dφ and Rθ for YearPredictionMSD data\nDφ Rθ\nLayers Details Output size Details Output size\nLayer 1 Linear, LeakyReLU 32 Linear, LeakyReLU 32 Layer 2 Linear, LeakyReLU 8 Linear, LeakyReLU 8 Layer 3 Linear 1 Linear d\nClassification: For the classification problems, hyper-parameters are shown in Table A7. We again use Adam as the SGD optimizers for bothDφ andRθ. Specifically, learning rate of 0.001 and weight decay of 0.0001 are used for Dφ in all datasets and for Rθ on MNIST (LeCun et al., 2010). We customized the SGD optimizers with momentum at 0.9, weight decay at 0.0001, and learning rate ρ in Table A8 for FashionMNIST (Xiao et al., 2017) and CIFAR-10 (Krizhevsky et al., 2012). For the transfer learning of CIFAR-10, we use customized SGD optimizer with initial learning rate of 0.001 and momentum of 0.9 for Rθ. MLP architectures of the discriminator network Dφ for MNIST, FashionMNIST and CIFAR-10 are given in Table A3. The 20-layer DenseNet networks shown in Table A9 were utlized for Rθ on the MNIST dataset, while the 100-layer DenseNet networks shown in Table A10 and A11 are fitted for Rθ on FashionMNIST and CIFAR-10.\nTable A7: Hyper-parameters for the classification benchmark datasets\nDataset λ d n T1 T2 s\nMNIST 1.0 16, 32, 64 64 1 300 0.1 FashionMNIST 1.0 16, 32, 64 64 1 300 1.0 CIFAR-10 1.0 16, 32, 64 64 1 300 1.0 CIFAR-10 (transfer learning) 0.01 16, 32, 64 64 1 50 1.0" }, { "heading": "B APPENDIX: PROOFS", "text": "In this appendix, we prove Lemmas 2.1 and 4.1, and Theorems 4.2 and 4.3." }, { "heading": "B.1 PROOF OF LEMMA 2.1", "text": "Proof. By assumption µ and γd are both absolutely continuous with respect to the Lebesgue measure. 
The desired result holds since it is a special case of the well-known results on the existence of optimal transport maps (Brenier, 1991; McCann, 1995); see Theorem 1.28 on page 24 of Philippis (2013) for details.
Layers | Details | Output size
Convolution | 3×3 Conv | 24 × 32 × 32
Dense Block 1 | [BN, 1×1 Conv; BN, 3×3 Conv] × 16 | 216 × 32 × 32
Transition Layer 1 | BN, ReLU, 2×2 Average Pool, 1×1 Conv | 108 × 16 × 16
Dense Block 2 | [BN, 1×1 Conv; BN, 3×3 Conv] × 16 | 300 × 16 × 16
Transition Layer 2 | BN, ReLU, 2×2 Average Pool, 1×1 Conv | 150 × 8 × 8
Dense Block 3 | [BN, 1×1 Conv; BN, 3×3 Conv] × 16 | 342 × 8 × 8
Pooling | BN, ReLU, 8×8 Average Pool, Reshape | 342
Fully connected | Linear | d" }, { "heading": "B.2 PROOF OF LEMMA 4.1", "text": "Proof. Our proof follows Keziou (2003). Since f(t) is convex, for all t ∈ R we have f(t) = f**(t), where
f**(t) = sup_{s∈R} {st − f*(s)}
is the Fenchel conjugate of f*. By Fermat's rule, the maximizer s* satisfies
t ∈ ∂f*(s*), i.e., s* ∈ ∂f(t).
Plugging the above display with t = (dµ/dγ)(x) into the definition of the f-divergence, we derive (6)." }, { "heading": "B.3 PROOF OF THEOREM 4.2", "text": "Proof. Without loss of generality, we assume d = 1. For R* satisfying (3) and any R ∈ R, we have R = ρ(R, R*)R* + ε_R, where ρ(R, R*) is the correlation coefficient between R and R* and ε_R = R − ρ(R, R*)R*. It is easy to see that ε_R ⊥⊥ R* and thus y ⊥⊥ ε_R. As (ρ(R, R*)R*, y) is independent of (ε_R, 0), by Theorem 3 of Székely & Rizzo (2009),
V[R, y] = V[ρ(R, R*)R* + ε_R, y] ≤ V[ρ(R, R*)R*, y] + V[ε_R, 0] = V[ρ(R, R*)R*, y] = |ρ(R, R*)| V[R*, y] ≤ V[R*, y].
As R(x) ∼ N(0, 1) and R*(x) ∼ N(0, 1), we have D_f(µ_{R(x)}‖γ_d) = D_f(µ_{R*(x)}‖γ_d) = 0, and
L(R) − L(R*) = V[R*, y] − V[R, y] ≥ 0.
The proof is completed." }, { "heading": "B.4 PROOF OF THEOREM 4.3", "text": "Denote B_2 = max{|f′(c_1)|, |f′(c_2)|} and B_3 = max_{|s| ≤ 2L_2√d log n + B_2} |f*(s)|. We set the network parameters of the representer R_θ and the discriminator D_φ as follows.
(i) Representer network R_{D,W,S,B} parameters: depth D = 9 log n + 12, width W = d max{8d(n^{p/(2+p)}/log n)^{1/p} + 4p, 12 n^{p/(2+p)}/log n + 14}, size S = d n^{(p−2)/(p+2)}/log⁴(npd), B = (2B_3 L_1 √p + log n)√d,
(ii) Discriminator network M_{D̃,W̃,S̃,B̃} parameters: depth D̃ = 9 log n + 12, width W̃ = max{8d(n^{d/(2+d)}/log n)^{1/d} + 4d, 12 n^{d/(2+d)}/log n + 14}, size S̃ = n^{(d−2)/(d+2)}/log⁴(npd), B̃ = 2L_2√d log n + B_2.
Before getting into the details of the proof of Theorem 4.3, we first give an outline of its basic structure.
Without loss of generality, we assume that λ = 1 and q = 1, i.e., y ∈ R. First we consider the scenario where y is bounded almost surely, say |y| ≤ C_1. We also assume B_1 < ∞. We can utilize a truncation technique to transfer the unbounded cases to the bounded ones under some common tail assumptions; consequently, an additional log n multiplicative term appears in the final results. For any R̄ ∈ N_{D,W,S,B}, we have
L(R̂_θ) − L(R*) = L(R̂_θ) − L̂(R̂_θ) + L̂(R̂_θ) − L̂(R̄) + L̂(R̄) − L(R̄) + L(R̄) − L(R*) ≤ 2 sup_{R∈N_{D,W,S,B}} |L(R) − L̂(R)| + inf_{R̄∈N_{D,W,S,B}} |L(R̄) − L(R*)|,   (10)
where we use the definition of R̂_θ in (8) and the feasibility of R̄. Next we bound the two error terms in (10), i.e., the approximation error inf_{R̄∈N_{D,W,S,B}} |L(R̄) − L(R*)| and the statistical error sup_{R∈N_{D,W,S,B}} |L(R) − L̂(R)|, separately. Theorem 4.3 then follows after bounding these two error terms." }, { "heading": "B.4.1 THE APPROXIMATION ERROR", "text": "" }, { "heading": "Lemma B.1.", "text": "inf_{R̄∈N_{D,W,S,B}} |L(R̄) − L(R*)| ≤ 2600 C_1 B_1 L_1 √(pd) n^{−2/(p+2)}.   (11)
Proof.
By (3) and (6) and the definition of L, we have\ninf R̄∈ND,W,S,B\n|L(R̄)− L(R∗)| ≤ |Df (µR̄θ̄(x)‖γd)|+ |V[R ∗(x),y]− V[R̄θ̄(x),y]|, (12)\nwhere R̄θ̄ ∈ ND,W,S,B is specified in Lemma B.2 below. We finish the proof by (14) in Lemma B.3 and (15) in Lemma B.4, which will be proved below.\nLemma B.2. Define R̃∗(x) = min{R∗(x), log n}. There exist a R̄θ̄ ∈ ND,W,S,B with depth D = 9 log n + 12, width W = dmax{8d(n p 2+p / log n) 1 p + 4d, 12n p 2+p / log n + 14}, and size S = dn p−2 p+2 /(log4 npd), B = (2B1L1 √ p+ log n) √ d, such that\n‖R̄θ̄ − R̃∗‖L2(µx) ≤ 160L1B1 √ pdn− 2 p+2 . (13)\nProof. Let R̃∗i (x) be the i-th entry of R̃ ∗(x) : Rd → Rd. By the assumption on R∗, it is easy to check that R̃∗i (x) is Lipschitz continuous on [−B1, B1]d with the Lipschitz constant L1 and ‖R̃∗i ‖L∞ ≤ log n. By Theorem 4.3 in Shen et al. (2019), there exists a ReLU network R̄θ̄i with with depth 9 log n + 12, width max{8d(n p 2+p / log n) 1 p + 4d, 12n\np 2+p / log n + 14}, ‖R̄θ̄i‖L∞ =\n2B1L1 √ p+ log n, such that\n‖R̄θ̄i‖L∞ ≤ 2B1L1 √ p+ log n,\nand ‖R̃∗i − R̄θ̄i‖L∞([−B1,B1]p\\H) ≤ 80L1B1 √ pn− 2 p+2 ,\nµx(H) ≤ 80L1B1\n√ pn− 2 p+2\n2B1L1 √ p+ log n .\nDefine R̄θ̄ = [R̄θ̄1 , . . . , R̄θ̄d ] ∈ ND,W,S,B. The above three display implies ‖R̄θ̄ − R̃∗‖L2(µx) ≤ 160L1B1 √ pdn− 2 p+2 ." }, { "heading": "Lemma B.3.", "text": "|V[R∗(x),y]− V[R̄θ̄(x),y]| ≤ 2580C1B1L1 √ pdn− 2 p+2 . (14)\nProof. Recall that Székely et al. (2007)\nV[z,y] =E [‖z1 − z2‖|y1 − y2|]− 2E [‖z1 − z2‖|y1 − y3|] + E [‖z1 − z2‖]E [|y1 − y2|] ,\nwhere (zi,yi), i = 1, 2, 3 are i.i.d. copies of (z,y). We have\n|V[R∗(x),y]− V[R̄θ̄(x),y]| ≤ |E [ (‖R∗(x1)−R∗(x2)‖ − ‖R̄θ̄(x1)− R̄θ̄(x2)‖)|y1 − y2| ] |\n+ 2|E [ (‖R∗(x1)−R∗(x2)‖ − ‖R̄θ̄(x1)− R̄θ̄(x2)‖)|y1 − y3| ] |\n+ |E [ ‖R∗(x1)−R∗(x2)‖ − ‖R̄θ̄(x1)− R̄θ̄(x2) ] E [‖y1 − y2‖] |\n≤ 8C1E [ |‖R∗(x1)−R∗(x2)‖ − ‖R̄θ̄(x1)− R̄θ̄(x2)‖| ] ≤ 16C1E [ |‖R∗(x)− R̄θ̄(x)‖\n] ≤ 16C1(E [ ‖R̃∗(x)− R̄θ̄(x)‖ ] + E [ ‖R∗(x)1R∗(x)∈Ballc(0,logn)‖ ] ),\nwhere in the first and third inequalities we use the triangle inequality, and second one follows from the boundedness of y. By (13), the first term in the last line is bounded by 2560C1B1L1 √ pdn− 1 p+2 . Some direct calculation shows that\nE [ ‖R∗(x)1R∗(x)∈Ballc(0,logn)‖ ] ≤ C2 (log n)d\nn .\nWe finish the proof by comparing the order of the above two terms, i.e., C2 (logn)d\nn ≤ 20C1B1L1 √ pdn− 2 p+2 .\nLemma B.4. |Df (µR̄θ̄(x)‖γd)| ≤ 20C1B1L1 √ pdn− 2 p+2 . (15)\nProof. By Lemma B.2 R̄θ̄ can approximate R ∗ arbitrary well, the desired result follows from the fact that Df (µR∗(x)‖γd) = 0 and the continuity of Df (µR(x)‖γd) on R. We present the sketch of the proof and omit the details here. Let r∗(z) = dµR∗(x)dγd (z) and r̄(z) = dµR̄θ̄(x) dγd (z). By definition we have\nDf (µR∗(x)‖γd) = EW∼γd [f(r∗(W ))] = EW∼γd [f(r∗(W ))1W∈Ball(0,logn)] + EW∼γd [f(r∗(W ))1W∈Ballc(0,logn)].\n(We can represent Df (µR̄θ̄‖γd) similarly. ) Then\n|Df (µR̄θ̄(x)‖γd)| = |Df (µR̄θ̄(x)‖γd)− Df (µR∗(x)‖γd)| ≤ EW∼γd [|f(r∗(W ))− f(r̄(W ))|1W∈Ball(0,logn)] + EW∼γd [|f(r∗(W ))− f(r∗(W ))|1W∈Ballc(0,logn)]\n≤ ∫ ‖z‖≤logn |f ′(r̃(z))||r∗(z)− r̄(z)|dγd(z) + ∫ ‖z‖>logn |f ′(r̃(z))||r∗(z)− r̄(z)|dγd(z)\n≤ C3 ∫ ‖z‖≤logn |r∗(z)− r̄(z)|dγd(z) + C4 ∫ ‖z‖>logn |r∗(z)− r̄(z)|\nThe first term in the above display is small due to R̄θ̄ can approximate R ∗ well. The second term is small due to the boundedness of r̄ and the exponential decay of the Gaussian tails." 
}, { "heading": "B.4.2 THE STATISTICAL ERROR", "text": "" }, { "heading": "Lemma B.5.", "text": "sup_{R ∈ N_{D,W,S,B}} |L(R) − L̂(R)| ≤ C_15 (B_1 (L_1 + L_2) √(pd) · n^{−2/(2+p)} + (L_2 √d + B_2 + B_3) log n · n^{−2/(2+d)}). (16)

Proof. By the definition and the triangle inequality we have

E[sup_{R ∈ N_{D,W,S,B}} |L(R) − L̂(R)|] ≤ E[sup_{R ∈ N_{D,W,S,B}} |V̂_n[R(x), y] − V[R(x), y]|] + E[sup_{R ∈ N_{D,W,S,B}} |D̂_f(μ_{R(x)} ‖ γ_d) − D_f(μ_{R(x)} ‖ γ_d)|].

We finish the proof based on (17) in Lemma B.6 and (22) in Lemma B.8, which are proved below." }, { "heading": "Lemma B.6.", "text": "E[sup_{R ∈ N_{D,W,S,B}} |V̂_n[R(x), y] − V[R(x), y]|] ≤ 4 C_6 C_7 C_10 B_1 L_1 √(pd) · n^{−2/(p+2)}. (17)

Proof. We first fix some notation for simplicity. Denote O = (x, y) ∈ R^p × R^1 and let O_i = (x_i, y_i), i = 1, …, n, be i.i.d. copies of O; denote μ_{x,y} by P and its n-fold product P^{⊗n} by P_n, respectively. For any R ∈ N_{D,W,S,B}, let Õ = (R(x), y) and let Õ_i = (R(x_i), y_i), i = 1, …, n, be i.i.d. copies of Õ. Define the centered kernel h̄_R : (R^p × R^1)^{⊗4} → R as

h̄_R(Õ_1, Õ_2, Õ_3, Õ_4) = (1/4) Σ_{1≤i,j≤4, i≠j} ‖R(x_i) − R(x_j)‖ |y_i − y_j|
  − (1/4) Σ_{i=1}^{4} (Σ_{1≤j≤4, j≠i} ‖R(x_i) − R(x_j)‖ · Σ_{1≤j≤4, j≠i} |y_i − y_j|)
  + (1/24) Σ_{1≤i,j≤4, i≠j} ‖R(x_i) − R(x_j)‖ · Σ_{1≤i,j≤4, i≠j} |y_i − y_j| − V[R(x), y]. (18)

Then the centered U-statistic V̂_n[R(x), y] − V[R(x), y] can be represented as

U_n(h̄_R) = (1/C_n^4) Σ_{1≤i_1<i_2<i_3<i_4≤n} h̄_R(Õ_{i_1}, Õ_{i_2}, Õ_{i_3}, Õ_{i_4}).

Our goal is to bound the supremum of the centered U-process U_n(h̄_R) with the nondegenerate kernel h̄_R. By the symmetrization randomization Theorem 3.5.3 in De la Pena & Giné (2012), we have

E[sup_{R ∈ N_{D,W,S,B}} |U_n(h̄_R)|] ≤ C_5 E[sup_{R ∈ N_{D,W,S,B}} |(1/C_n^4) Σ_{1≤i_1<i_2<i_3<i_4≤n} ε_{i_1} h̄_R(Õ_{i_1}, Õ_{i_2}, Õ_{i_3}, Õ_{i_4})|], (19)

where ε_{i_1}, i_1 = 1, …, n, are i.i.d. Rademacher variables that are also independent of Õ_i, i = 1, …, n. We finish the proof by upper bounding the above Rademacher process with the metric entropy of N_{D,W,S,B}. To this end we need the following lemma.

Lemma B.7. If ξ_i, i = 1, …, m, are m finite linear combinations of Rademacher variables ε_j, j = 1, …, J, then

E_{ε_j, j=1,…,J} max_{1≤i≤m} |ξ_i| ≤ C_6 (log m)^{1/2} max_{1≤i≤m} (E ξ_i²)^{1/2}. (20)

Proof. This result follows directly from Corollary 3.2.6 and inequality (4.3.1) in De la Pena & Giné (2012) with Φ(x) = exp(x²).

By the boundedness assumption on y and the boundedness of R ∈ N_{D,W,S,B}, the kernel h̄_R is also bounded, say

‖h̄_R‖_{L∞} ≤ C_7 (2 B_1 L_1 √p + log n) √d. (21)

For any R, R̃ ∈ N_{D,W,S,B}, define a random empirical measure (depending on O_i, i = 1, …, n)

e_{n,1}(R, R̃) = E_{ε_{i_1}, i_1=1,…,n} |(1/C_n^4) Σ_{1≤i_1<i_2<i_3<i_4≤n} ε_{i_1} (h̄_R − h̄_{R̃})(Õ_{i_1}, …, Õ_{i_4})|.

Conditional on O_i, i = 1, …, n, let C(N, e_{n,1}, δ) be the covering number of N_{D,W,S,B} with respect to the empirical distance e_{n,1} at scale δ > 0. Denote by N_δ the covering set of N_{D,W,S,B} with cardinality C(N, e_{n,1}, δ). 
Then,

E_{ε_{i_1}}[sup_{R ∈ N_{D,W,S,B}} |(1/C_n^4) Σ_{1≤i_1<i_2<i_3<i_4≤n} ε_{i_1} h̄_R(Õ_{i_1}, Õ_{i_2}, Õ_{i_3}, Õ_{i_4})|]
≤ δ + E_{ε_{i_1}}[sup_{R ∈ N_δ} |(1/C_n^4) Σ_{1≤i_1<i_2<i_3<i_4≤n} ε_{i_1} h̄_R(Õ_{i_1}, Õ_{i_2}, Õ_{i_3}, Õ_{i_4})|]
≤ δ + C_6 (1/C_n^4) (log C(N, e_{n,1}, δ))^{1/2} max_{R ∈ N_δ} [Σ_{i_1=1}^{n} Σ_{i_2<i_3<i_4} (h̄_R(Õ_{i_1}, Õ_{i_2}, Õ_{i_3}, Õ_{i_4}))²]^{1/2}
≤ δ + C_6 C_7 (2 B_1 L_1 √p + log n) √d (log C(N, e_{n,1}, δ))^{1/2} (1/C_n^4) [n (n!)² / ((n−3)!)²]^{1/2}
≤ δ + 2 C_6 C_7 (2 B_1 L_1 √p + log n) √d (log C(N, e_{n,1}, δ))^{1/2} / √n
≤ δ + 2 C_6 C_7 (2 B_1 L_1 √p + log n) √d (VC_N log(2eBn / (δ VC_N)))^{1/2} / √n
≤ δ + C_6 C_7 C_10 (B_1 L_1 √p + log n) √d (DS log S · log(Bn / (δ DS log S)))^{1/2} / √n,

where the first inequality follows from the triangle inequality, the second inequality uses (20), the third and fourth inequalities follow after some algebra, the fifth inequality holds due to C(N, e_{n,1}, δ) ≤ C(N, e_{n,∞}, δ) and the relationship between the metric entropy and the VC-dimension of the ReLU networks N_{D,W,S,B} (Anthony & Bartlett, 2009), i.e.,

log C(N, e_{n,∞}, δ) ≤ VC_N log(2eBn / (δ VC_N)),

and the last inequality holds due to the upper bound of the VC-dimension of the ReLU network N_{D,W,S,B}, which satisfies C_8 DS log S ≤ VC_N ≤ C_9 DS log S; see Bartlett et al. (2019). Then (17) holds by the selection of the network parameters, setting δ = 1/n, and some algebra." }, { "heading": "Lemma B.8.", "text": "E[sup_{R ∈ N_{D,W,S,B}} |D̂_f(μ_{R(x)} ‖ γ_d) − D_f(μ_{R(x)} ‖ γ_d)|] ≤ C_14 (L_2 √d + B_2 + B_3)(n^{−2/(2+p)} + log n · n^{−2/(2+d)}). (22)

Proof. For any R ∈ N_{D,W,S,B}, let r(z) = (dμ_{R(x)}/dγ_d)(z) and g_R(z) = f′(r(z)). By assumption, g_R(z) : R^d → R is Lipschitz continuous with Lipschitz constant L_2 and ‖g_R‖_{L∞} ≤ B_2. Without loss of generality, we assume supp(g_R) ⊆ [−log n, log n]^d. Then, similarly to the proof of Lemma B.2, we can show that there exists a D̄_φ̄ ∈ M_{D̃,W̃,S̃,B̃} with depth D̃ = 9 log n + 12, width W̃ = max{8d (n^{d/(2+d)}/log n)^{1/d} + 4d, 12 n^{d/(2+d)}/log n + 14}, size S̃ = n^{(d−2)/(d+2)}/log^4(npd), and B̃ = 2 L_2 √d log n + B_2, such that for z ∼ γ_d and z ∼ μ_{R(x)},

E_z[|D̄_φ̄(z) − g_R(z)|] ≤ 160 L_2 √d log n · n^{−2/(d+2)}. (23)

For any g : R^d → R, define

E(g) = E_{x∼μ_x}[g(R(x))] − E_{W∼γ_d}[f*(g(W))],
Ê(g) = Ê(g, R) = (1/n) Σ_{i=1}^{n} [g(R(x_i)) − f*(g(W_i))].

By (6) we have

E(g_R) = D_f(μ_{R(x)} ‖ γ_d) = sup_{measurable D : R^d → R} E(D). (24)

Then,

|D_f(μ_{R(x)} ‖ γ_d) − D̂_f(μ_{R(x)} ‖ γ_d)|
= |E(g_R) − max_{D_φ ∈ M_{D̃,W̃,S̃,B̃}} Ê(D_φ)|
≤ |E(g_R) − sup_{D_φ ∈ M_{D̃,W̃,S̃,B̃}} E(D_φ)| + |sup_{D_φ ∈ M_{D̃,W̃,S̃,B̃}} E(D_φ) − max_{D_φ ∈ M_{D̃,W̃,S̃,B̃}} Ê(D_φ)|
≤ |E(g_R) − E(D̄_φ̄)| + sup_{D_φ ∈ M_{D̃,W̃,S̃,B̃}} |E(D_φ) − Ê(D_φ)|
≤ E_{z∼μ_{R(x)}}[|g_R − D̄_φ̄|(z)] + E_{W∼γ_d}[|f*(g_R) − f*(D̄_φ̄)|(W)] + sup_{D_φ ∈ M_{D̃,W̃,S̃,B̃}} |E(D_φ) − Ê(D_φ)|
≤ 160 (1 + B_3) L_2 √d log n · n^{−2/(d+2)} + sup_{D_φ ∈ M_{D̃,W̃,S̃,B̃}} |E(D_φ) − Ê(D_φ)|,

where we use the triangle inequality in the first inequality; the second inequality uses E(g_R) ≥ sup_{D_φ ∈ M_{D̃,W̃,S̃,B̃}} E(D_φ), which follows from (24), together with the triangle inequality; the third inequality follows from the triangle inequality; and the last follows from (23) and the mean value theorem. We finish the proof by bounding the empirical process

U(D, R) = E[sup_{R ∈ N_{D,W,S,B}, D ∈ M_{D̃,W̃,S̃,B̃}} |E(D) − Ê(D)|].

Let S = (x, z) ∼ μ_x ⊗ γ_d and let S_i, i = 1, …, n, be n i.i.d. copies of S. Denote b(D, R; S) = D(R(x)) − f*(D(z)). Then E(D, R) = E_S[b(D, R; S)] and

Ê(D, R) = (1/n) Σ_{i=1}^{n} b(D, R; S_i).

Let

G(M × N) = (1/n) E_{{S_i, ε_i}_{i=1}^{n}} [sup_{R ∈ N_{D,W,S,B}, D ∈ M_{D̃,W̃,S̃,B̃}} |Σ_{i=1}^{n} ε_i b(D, R; S_i)|]

be the Rademacher complexity of M_{D̃,W̃,S̃,B̃} × N_{D,W,S,B} (Bartlett & Mendelson, 2002). Let C(M × N, e_{n,1}, δ) be the covering number of M_{D̃,W̃,S̃,B̃} × N_{D,W,S,B} with respect to the empirical distance (depending on S_i)

d_{n,1}((D, R), (D̃, R̃)) = (1/n) E_ε [Σ_{i=1}^{n} |ε_i (b(D, R; S_i) − b(D̃, R̃; S_i))|]

at scale δ > 0. 
Let M_δ × N_δ be such a covering set of M_{D̃,W̃,S̃,B̃} × N_{D,W,S,B}. Then,

U(D, R) = 2 G(M × N) = 2 E_{S_1,…,S_n}[E_{ε_i, i=1,…,n}[G(N × M) | (S_1, …, S_n)]]
≤ 2δ + (2/n) E_{S_1,…,S_n}[E_{ε_i, i=1,…,n}[sup_{(D,R) ∈ M_δ × N_δ} |Σ_{i=1}^{n} ε_i b(D, R; S_i)|] | (S_1, …, S_n)]
≤ 2δ + C_12 (1/n) E_{S_1,…,S_n}[(log C(M × N, e_{n,1}, δ))^{1/2} max_{(D,R) ∈ M_δ × N_δ} [Σ_{i=1}^{n} b²(D, R; S_i)]^{1/2}]
≤ 2δ + C_12 (1/n) E_{S_1,…,S_n}[(log C(M × N, e_{n,1}, δ))^{1/2} √n (2 L_2 √d log n + B_2 + B_3)]
≤ 2δ + C_12 (1/√n)(2 L_2 √d log n + B_2 + B_3)(log C(M, e_{n,1}, δ) + log C(N, d_{n,1}, δ))^{1/2}
≤ 2δ + C_13 ((L_2 √d log n + B_2 + B_3)/√n)(DS log S · log(Bn / (δ DS log S)) + D̃S̃ log S̃ · log(B̃n / (δ D̃S̃ log S̃)))^{1/2},

where the first equality follows from the standard symmetrization technique, the second equality holds by the law of iterated conditional expectation, the first inequality follows from the triangle inequality, the second inequality uses Equation (20), the third inequality uses the fact that b(D, R; S) is bounded, i.e., ‖b(D, R; S)‖_{L∞} ≤ 2 L_2 √d log n + B_2 + B_3, the fourth inequality follows after some algebra, and the fifth inequality follows from C(N, e_{n,1}, δ) ≤ C(N, e_{n,∞}, δ) (with a similar result for M), from log C(N, e_{n,∞}, δ) ≤ VC_N log(2eBn / (δ VC_N)), and from N_{D,W,S,B} satisfying C_8 DS log S ≤ VC_N ≤ C_9 DS log S; see Bartlett et al. (2019). Then (22) follows from the above display with the selection of the network parameters of M_{D̃,W̃,S̃,B̃} and N_{D,W,S,B}, and with δ = 1/n.

Finally, Theorem 4.3 is a direct consequence of (11) in Lemma B.1 and (16) in Lemma B.5. This completes the proof of Theorem 4.3." } ]
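To make the variational representation (6) behind Lemma 4.1 and Lemma B.8 concrete, here is a worked special case that we add purely for illustration (the KL divergence; not an example from the original text):

```latex
% Added example (not from the paper): instantiating the variational form (6).
% For the KL divergence, f(t) = t log t has Fenchel conjugate f*(s) = e^{s-1}, so
\[
  D_f(\mu_Z \,\|\, \gamma)
  \;=\; \sup_{g}\;
  \mathbb{E}_{x \sim \mu_Z}\!\left[g(x)\right]
  \;-\; \mathbb{E}_{w \sim \gamma}\!\left[e^{\,g(w)-1}\right],
\]
% with the supremum attained at
% g = f'(d\mu_Z / d\gamma) = \log(d\mu_Z / d\gamma) + 1,
% which is exactly the maximizer identified in the proof of Lemma 4.1.
```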
2020
null
SP:bd4bc912bd62fdcf54adeb77330f6cfbe4bb0352
[ "Post-discussion update: The authors only partially addressed my concerns in their rebuttal. The paper suffers from a lack of comparisons: only 2 baselines are compared, and only on a few systems. Crucially, the new Navier-Stokes experiment lacks comparisons. The authors also couldn't respond to my questions about research context or scope: it's difficult to assess what this work actually claims in relation to competing methods. For a machine learning paper this is not enough." ]
We present a lightweight neural PDE representation to discover the hidden structure and predict the solutions of different nonlinear PDEs. Our key idea is to leverage the prior of "translational similarity" of numerical PDE differential operators to drastically reduce the scale of the learning model and the training data. We implement three central network components: a neural functional convolution operator, a Picard forward iterative procedure, and an adjoint backward gradient calculator. Our novel paradigm fully leverages the multifaceted priors that stem from the sparse and smooth nature of the physical PDE solution manifold, as well as various mature numerical techniques such as adjoint solvers, linearization, and iterative procedures, to accelerate the computation. We demonstrate the efficacy of our method by robustly discovering the model and accurately predicting the solutions of various types of PDEs with small-scale networks and training sets. We highlight that all the PDE examples shown were trained with at most 8 data samples and within 325 network parameters.
[]
[ { "authors": [ "Anurag Ajay", "Jiajun Wu", "Nima Fazeli", "Maria Bauza", "Leslie P Kaelbling", "Joshua B Tenenbaum", "Alberto Rodriguez" ], "title": "Augmenting physical simulators with stochastic neural networks: Case study of planar pushing and bouncing", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2018 }, { "authors": [ "Brandon Amos", "J Zico Kolter" ], "title": "Optnet: Differentiable optimization as a layer in neural networks", "venue": "arXiv preprint arXiv:1703.00443,", "year": 2017 }, { "authors": [ "Xiaoli Bai" ], "title": "Modified Chebyshev-Picard iteration methods for solution of initial value and boundary value problems", "venue": "Texas A&M University,", "year": 2010 }, { "authors": [ "Peter Battaglia", "Razvan Pascanu", "Matthew Lai", "Danilo Jimenez Rezende" ], "title": "Interaction networks for learning about objects, relations and physics", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Steven L Brunton", "Bernd R Noack", "Petros Koumoutsakos" ], "title": "Machine learning for fluid mechanics", "venue": "Annual Review of Fluid Mechanics,", "year": 2020 }, { "authors": [ "Graham F Carey", "Bo-Nan Jiang" ], "title": "Element-by-element linear and nonlinear solution schemes", "venue": "Communications in Applied Numerical Methods,", "year": 1986 }, { "authors": [ "Michael B Chang", "Tomer Ullman", "Antonio Torralba", "Joshua B Tenenbaum" ], "title": "A compositional object-based approach to learning physical dynamics", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Zhenglin Geng", "Daniel Johnson", "Ronald Fedkiw" ], "title": "Coercing machine learning to output physically accurate results", "venue": "Journal of Computational Physics,", "year": 2020 }, { "authors": [ "Jiequn Han", "Arnulf Jentzen", "E Weinan" ], "title": "Solving high-dimensional partial differential equations using deep learning", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "Jiequn Han" ], "title": "Deep learning approximation for stochastic control problems", "venue": "arXiv preprint arXiv:1611.07422,", "year": 2016 }, { "authors": [ "Jun-Ting Hsieh", "Shengjia Zhao", "Stephan Eismann", "Lucia Mirabella", "Stefano Ermon" ], "title": "Learning neural pde solvers with convergence guarantees", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Ameya D. Jagtap", "Kenji Kawaguchi", "George Em Karniadakis" ], "title": "Adaptive activation functions accelerate convergence in deep and physics-informed neural networks", "venue": "Journal of Computational Physics,", "year": 2020 }, { "authors": [ "Ameya D. 
Jagtap", "Ehsan Kharazmi", "George Em Karniadakis" ], "title": "Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems", "venue": "Computer Methods in Applied Mechanics and Engineering,", "year": 2020 }, { "authors": [ "Kyong Hwan Jin", "Michael T McCann", "Emmanuel Froustey", "Michael Unser" ], "title": "Deep convolutional neural network for inverse problems in imaging", "venue": "IEEE Transactions on Image Processing,", "year": 2017 }, { "authors": [ "Thomas Kipf", "Ethan Fetaya", "Kuan-Chieh Wang", "Max Welling", "Richard Zemel" ], "title": "Neural relational inference for interacting systems", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Yann LeCun", "Yoshua Bengio" ], "title": "Convolutional networks for images, speech, and time series", "venue": "The handbook of brain theory and neural networks,", "year": 1995 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Yunzhu Li", "Jiajun Wu", "Jun-Yan Zhu", "Joshua B Tenenbaum", "Antonio Torralba", "Russ Tedrake" ], "title": "Propagation networks for model-based control under partial observation", "venue": "In International Conference on Robotics and Automation (ICRA),", "year": 2019 }, { "authors": [ "Yunzhu Li", "Hao He", "Jiajun Wu", "Dina Katabi", "Antonio Torralba" ], "title": "Learning compositional koopman operators for model-based control", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Zhen Li", "Zuoqiang Shi" ], "title": "Deep residual learning and pdes on manifold", "venue": "arXiv preprint arXiv:1708.05115,", "year": 2017 }, { "authors": [ "Zichao Long", "Yiping Lu", "Xianzhong Ma", "Bin Dong" ], "title": "Pde-net: Learning pdes from data", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Zichao Long", "Yiping Lu", "Xianzhong Ma", "Bin Dong" ], "title": "Pde-net: Learning pdes from data", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Zichao Long", "Yiping Lu", "Bin Dong" ], "title": "Pde-net 2.0: Learning pdes from data with a numericsymbolic hybrid deep network", "venue": "Journal of Computational Physics,", "year": 2019 }, { "authors": [ "Lu Lu", "Xuhui Meng", "Zhiping Mao", "George E Karniadakis" ], "title": "Deepxde: A deep learning library for solving differential equations", "venue": null, "year": 1907 }, { "authors": [ "Damian Mrowca", "Chengxu Zhuang", "Elias Wang", "Nick Haber", "Li F Fei-Fei", "Josh Tenenbaum", "Daniel L Yamins" ], "title": "Flexible neural representation for physics prediction", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Samira Pakravan", "Pouria A. 
Mistani", "Miguel Angel Aragon-Calvo", "Frederic Gibou" ], "title": "Solving inverse-pde problems with physics-aware neural networks", "venue": null, "year": 2020 }, { "authors": [ "Guofei Pang", "Liu Yang", "George Em Karniadakis" ], "title": "Neural-net-induced gaussian process regression for function approximation and pde solution", "venue": "Journal of Computational Physics,", "year": 2019 }, { "authors": [ "Maziar Raissi" ], "title": "Forward-backward stochastic neural networks: Deep learning of high-dimensional partial differential equations", "venue": "arXiv preprint arXiv:1804.07010,", "year": 2018 }, { "authors": [ "Maziar Raissi", "Paris Perdikaris", "George E Karniadakis" ], "title": "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations", "venue": "Journal of Computational Physics,", "year": 2019 }, { "authors": [ "Maziar Raissi", "Alireza Yazdani", "George Em Karniadakis" ], "title": "Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations", "venue": "Science,", "year": 2020 }, { "authors": [ "Ben Stevens", "Tim Colonius" ], "title": "Finitenet: A fully convolutional lstm network architecture for time-dependent partial differential equations", "venue": "arXiv preprint arXiv:2002.03014,", "year": 2020 }, { "authors": [ "Andreas Wächter" ], "title": "Short tutorial: getting started with ipopt in 90 minutes", "venue": "In Dagstuhl Seminar Proceedings. Schloss Dagstuhl-Leibniz-Zentrum für Informatik,", "year": 2009 }, { "authors": [ "Yufei Wang", "Ziju Shen", "Zichao Long", "Bin Dong" ], "title": "Learning to discretize: Solving 1d scalar conservation laws via deep reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Jin-Long Wu", "Karthik Kashinath", "Adrian Albert", "Dragos Chirila", "Heng Xiao" ], "title": "Enforcing statistical constraints in generative adversarial networks for modeling chaotic dynamical systems", "venue": "Journal of Computational Physics,", "year": 2020 }, { "authors": [ "Li Xu", "Jimmy SJ Ren", "Ce Liu", "Jiaya Jia" ], "title": "Deep convolutional neural network for image deconvolution", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Kai Zhang", "Wangmeng Zuo", "Shuhang Gu", "Lei Zhang" ], "title": "Learning deep cnn denoiser prior for image restoration", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 } ]
[ { "heading": null, "text": "1 INTRODUCTION
Problem definition We aim to devise a learning paradigm to solve the inverse PDE identification problem. By observing a small data set in the PDE's solution space with an unknown form of equations, we want to generate an effective neural representation that can precisely reconstruct the hidden structure of the target PDE system. This neural representation will further facilitate the prediction of the PDE solution under different boundary conditions. The right inset figure shows a typical example of our target problem: by observing a small part (4 samples in the figure) of the solution space of a nonlinear PDE system F(x) = b, without knowing its analytical equations, our neural representation will depict the hidden differential operators underpinning F (e.g., it represents the unknown differential operator ∇ · (1 + x²)∇ by training the model on the solutions of ∇ · (1 + x²)∇x = b).
[Inset figure: a micro network maps the flattened vector of local unknowns x_{i,j} to the output stencil values of C(x, p).]
Translational similarity of differential operators Our neural PDE representation design is motivated by the historical successes of the various sparse, iterative numerical solvers in solving nonlinear PDEs over the past decades. The key observation we have made is that the efficacy of a classical numerical PDE solver relies on the translational similarity of its discretized, local differential operators. Namely, the form of a differential operator can be written as a functional C(x, p) with respect to the PDE unknown x and the local position p, as shown in the right inset figure. For example, for a linear Poisson system ∇ · ∇x = b, C is a constant function; for a linear Poisson system with embedded boundary conditions, C is a function of the position p; and for a nonlinear PDE ∇ · (1 + x²)∇x = b, C is a function of the PDE unknown x (or of both x and p if it has embedded boundaries). For most numerical PDEs, these local functional operators can be parameterized and built on-the-fly within the solver iterations. 
Such operators' locality further inspired the design of a variety of computationally efficient PDE solvers, among which the most famous is the matrix-free scheme that has been widely used for solving large-scale physical systems on GPUs. These local procedures for stencil creation have demonstrated their extreme performance in accommodating PDE solvers. From a machine learning perspective, these "translationally similar" differential operators resemble convolution operators, which function as the cornerstone for embedding translation-invariant priors into neural networks (see LeCun et al., 1995; 1998).
Method overview In this work, we leverage the PDE differential operators' "translational similarity" in a reverse manner by devising a local neural representation that can uncover and describe the global structure of the target PDE. At the heart of our approach lies a differential procedure to simultaneously describe the spatial coupling and the temporal evolution of a local data point. Such a procedure is implemented as a parameterized micro network, embedded in our iterative solving architecture, to learn the numerical process of converging from an initial guess to a final steady state for a PDE solution. We name these embedded micro networks "functional convolutions," for two reasons. First, fitting the parameters of these local embedded networks amounts to exploring the optimal function that best describes the observed solution of the unknown nonlinear PDE within a functional space. Second, the local differential operators that span this functional space can be treated as numerically employing convolution kernels (Hsieh et al., 2018; Lin et al., 2013). Based on these functional convolutions, we are able to devise a learning framework by embedding the micro network architecture within an iterative procedure to 1) backwardly learn the underpinning, spatially varying structures of a nonlinear PDE system by iteratively applying the adjoint linear solvers and 2) forwardly predict the steady states of a PDE system by partially observing data samples of its equilibrium. We show that our model can simultaneously discover structures and predict solutions for different types of nonlinear PDEs. We particularly focus on solving elliptic boundary value problems, which have been less explored in the current literature." }, { "heading": "2 MOTIVATING EXAMPLE: FORWARD NUMERICAL PDE", "text": "Naming convention We first show a motivating example to demonstrate the standard process of a forward numerical PDE solver. We take the simplest Poisson equation with Dirichlet boundary conditions as an example. The mathematical equation of a Poisson system can be written as ∇ · ∇x = b for x ∈ Ω, with x as the PDE unknowns, b as the right-hand side, and Ω as the problem's domain. The boundary conditions are enforced in a Dirichlet way (by assigning values directly) as x = x̂ on the domain boundary, with x̂ as the specified boundary values. To create a discretized, numerical system to solve the equation, we use the symbol p to denote the position within the domain. The numerical solution of the PDE amounts to seeking an unknown function x(p) that specifies the value of x at an arbitrary position p within Ω.
Linear PDE As shown in Figure 1, we illustrate how to solve the Poisson system using a finite-difference method. We first subdivide the domain into n cells (segment intervals in 1D and squares in 2D) with cell size ∆p. 
Taking the 2D case as an example, we can derive the discretized Poisson equation by approximating the Laplacian operator on each grid cell using the central finite-difference scheme: (−x_{i−1,j} − x_{i+1,j} + 4x_{i,j} − x_{i,j−1} − x_{i,j+1}) / ∆p² = b_{i,j}. The discretization of each cell forms one row in the linear system, and the combination of all the rows (cells) forms a sparse linear system Ax = b to solve (Figure 1 sketches the 1D and 2D Poisson discretizations). For a linear Poisson system, each row of A can be translated into a convolutional stencil instantiated with constant parameters, e.g., (−1, 2, −1) in 1D and (−1, −1, 4, −1, −1) in 2D. This convolution perspective can be used to accelerate the numerical solver by maintaining a "conceptual" A matrix without storing the redundant matrix element values in memory (we refer the readers to the matrix-free method of Carey & Jiang (1986) for more details). This matrix-free nature indicates an extremely concise representation of a numerical Poisson system (i.e., the matrix A): a 1×3 or 3×3 convolution kernel with fixed values. We use the symbol C to denote the convolutional representation of A. For a linear system, C is independent of the values of p and x.
Nonlinear PDE and Picard iteration The nonlinear case of the Poisson system is a bit more complicated. We can still use the matrix-free representation to describe a nonlinear Poisson system, but the parameters of this convolutional stencil now depend on both the local position p and the local unknown x(p). This dependency is nonlinear, and therefore we cannot find the solution by solving a single Ax = b. Here we present an iterative scheme, the Picard method, to solve a numerical nonlinear PDE system (see Picard, 1893; Bai, 2010). Let us consider the nonlinear Poisson equation ∇ · α(x)∇x = b for x ∈ Ω with Dirichlet boundary conditions on the boundary. The source of nonlinearity in this PDE is the coefficient α(x), which depends on the solution x, e.g., α(x) = 1 + x². A simple and effective fixed-point procedure to discretize and solve the nonlinear Poisson equation, named the Picard iteration, can be sketched as follows:

while |x_n − x_{n−1}| > ε:  x_{n+1} = A^{−1}(x_n) b, (1)

with the matrix A representing the current discretized nonlinear Laplacian operator, approximated with the value of the unknowns from the previous iteration. The key idea is to employ a linear approximation of the nonlinear problem and to enforce this approximation iteratively until it evolves to a steady state (see Figure 2). To uncover the underlying structure of A, which can evolve both spatially and temporally, we make the prior assumption that A can be described by a kernel function C(x(p), p). Such a prior applies to most elliptic PDE systems, where the spatial terms can be expressed as a combination of one or several differential operators. From a numerical perspective, C describes the local discretized interaction between an element and its neighbours. It amounts to a function that returns all the non-zero elements of each row i of A (think of A in a matrix-free way)." }
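As a concrete reference for Eq. (1), the following minimal NumPy sketch (ours, not the paper's C++17 implementation) runs the Picard loop on the 1D example ∇ · (1 + x²)∇x = b with Dirichlet boundaries; the grid size, tolerance, and face-averaged coefficients are illustrative choices:

```python
import numpy as np

def assemble_A(x, dp):
    """Linearized operator for the 1D nonlinear Poisson problem with
    alpha(x) = 1 + x^2 (the paper's example), alpha frozen at the previous
    Picard iterate. Interior row i carries the stencil
    (-a_{i-1/2}, a_{i-1/2} + a_{i+1/2}, -a_{i+1/2}) / dp^2."""
    n = len(x)
    alpha = 1.0 + x**2
    a_half = 0.5 * (alpha[:-1] + alpha[1:])   # face-centered coefficients
    A = np.zeros((n, n))
    A[0, 0] = A[-1, -1] = 1.0                 # Dirichlet boundary rows
    for i in range(1, n - 1):
        A[i, i - 1] = -a_half[i - 1] / dp**2
        A[i, i] = (a_half[i - 1] + a_half[i]) / dp**2
        A[i, i + 1] = -a_half[i] / dp**2
    return A

def picard_solve(b, x_left, x_right, n=33, tol=1e-10, max_iter=100):
    """Fixed-point loop of Eq. (1): relinearize A at x_n, solve, repeat."""
    dp = 1.0 / (n - 1)
    x = np.zeros(n)
    rhs = np.full(n, b, dtype=float)
    rhs[0], rhs[-1] = x_left, x_right         # embed boundary values in b
    for _ in range(max_iter):
        x_new = np.linalg.solve(assemble_A(x, dp), rhs)
        if np.max(np.abs(x_new - x)) < tol:   # |x_{n+1} - x_n| < tolerance
            break
        x = x_new
    return x_new
```

For this mildly nonlinear coefficient, the fixed-point loop typically converges within a few iterations.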
, { "heading": "3 METHOD: BACKWARD NEURAL PDE", "text": "Overview In this section, we present our neural PDE design, motivated by the forward Picard solver with a convolutional representation of the nonlinear system. Our framework consists of three key components: the neural representation of the convolution operator, the embedded Picard forward iterative solver, and the adjoint backward derivatives. The key idea is to differentiate the forward nonlinear Picard solver and build the neural representation for its sparse linearization step. This differentiation is implemented by our functional convolution scheme at the linearization level and by the adjoint Picard scheme for the nonlinear iterations." }, { "heading": "3.1 FUNCTIONAL CONVOLUTION", "text": "In a conventional ML method, C can be approximated by a combination of a set of kernels (Long et al., 2018a) or by solving a deconvolution problem (Xu et al., 2014; Jin et al., 2017; Zhang et al., 2017). However, these strategies do not suit our scenario, where the instant linear effects of the system should be approximated by extracting the nonlinear effects of C. A natural choice to approximate this spatially and temporally varying kernel function is to devise a neural network of the form C(x(p), p, θ), with θ as the network parameters. Numerically, the global matrix A can be fully parameterized by C(x(p), p, θ) under the assumption that C is a non-weight-shared convolution operator in the spatial domain. As illustrated in Figure 3, such a neural network can be further incorporated into a conventional nonlinear Picard solver to obtain the forward steady-state solution by solving the linear system A(x_n, θ) x_{n+1} = b(x_n), where a black-box sparse linear solver can be used as in a conventional simulation program. The formal definition of the functional convolution, written in the kernel way, is

A(x(p_i), θ) = Σ_{p_j ∈ N(p_i)} [C(x(N(p_i)), N(p_i), θ)] e(p_j), (2)

where A(x(p_i), θ) is the i-th row of the matrix A, N(p_i) is the set of neighboring positions of p_i, x(N(p_i)) collects all the neighboring elements (all channels) around the position p_i, and e(p_j) is the unit vector representing the j-th column of A. To specify the 2D case, equation (2) takes the form

A(x_{m,n}, θ) = Σ_{i=−1}^{1} Σ_{j=−1}^{1} [C(N(x_{m,n}), θ)]_{i,j} e_{m+i,n+j}, (3)

where x_{m,n} is the element in row m, column n of the feature map. The input N(x_{m,n}) = {x_{m+i,n+j} for i, j = −1, 0, 1} is the flattened vector of the neighboring elements of pixel x_{m,n}. After passing it through a simple neural network C with parameters θ, we obtain the output C(N(x_{m,n}), θ), which is a vector of the same length as N(x_{m,n})." }
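To make Eq. (3) concrete, here is a toy NumPy sketch we add for illustration; the micro network C_net, its tanh-MLP form, the hidden width, and the zero-padded boundary handling are our own simplifications rather than the paper's exact architecture:

```python
import numpy as np

def functional_convolution_rows(x, C_net, theta):
    """Sketch of Eq. (3): every pixel (m, n) yields one row of A whose 9 nonzero
    entries come from a micro network evaluated on its flattened 3x3 neighborhood.
    Out-of-domain neighbors are dropped here (a boundary simplification)."""
    H, W = x.shape
    xp = np.pad(x, 1)                                  # zero-pad the border
    offsets = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)]
    entries = {}                                       # sparse (row, col) -> value
    for m in range(H):
        for n in range(W):
            nbr = xp[m:m + 3, n:n + 3].reshape(-1)     # N(x_{m,n}), length 9
            stencil = C_net(nbr, theta)                # 9 stencil values
            for k, (i, j) in enumerate(offsets):
                mi, nj = m + i, n + j
                if 0 <= mi < H and 0 <= nj < W:
                    entries[(m * W + n, mi * W + nj)] = stencil[k]
    return entries

def C_net(nbr, theta):
    """Toy stand-in for the paper's micro network: a 2-layer MLP, 9 -> h -> 9."""
    W1, b1, W2, b2 = theta
    return np.tanh(nbr @ W1 + b1) @ W2 + b2

# Example: random parameters with hidden width 6 (an arbitrary toy choice).
rng = np.random.default_rng(0)
theta = (rng.normal(size=(9, 6)), np.zeros(6), rng.normal(size=(6, 9)), np.zeros(9))
A_entries = functional_convolution_rows(rng.normal(size=(8, 8)), C_net, theta)
```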
, { "heading": "3.2 ADJOINT DERIVATIVES", "text": "To train our functional convolution operator in an end-to-end fashion, we calculate the derivatives of the loss function L with respect to the network parameters θ following the chain rule. For a neural PDE with n layers in the outer Picard loop (see Figure 3), we can calculate the sensitivity ∂L/∂θ as

∂L/∂θ = (∂L/∂x_n)(∂x_n/∂θ) + (∂L/∂x_n)(∂x_n/∂x_{n−1})(∂x_{n−1}/∂θ) + ··· + (∂L/∂x_n)(∂x_n/∂x_{n−1}) ··· (∂x_2/∂x_1)(∂x_1/∂θ). (4)

For layer i, which maps x_i → x_{i+1} by solving A(x_i, θ) x_{i+1} = b, we can express its backward derivative y_i as

y_i = ∂x_{i+1}/∂x_i = −A^{−1}(x_i, θ) (∂A(x_i, θ)/∂x_i) x_{i+1}, (5)

which can be obtained by solving two linear systems, a forward equation and a backward adjoint equation:

A(x_i, θ) x_{i+1} = b, (6a)
A(x_i, θ) y_i = (∂A(x_i, θ)/∂x_i) x_{i+1}. (6b)

Similarly, we can get ∂x_{i+1}/∂θ as

z_i = ∂x_{i+1}/∂θ = −A^{−1}(x_i, θ) (∂A(x_i, θ)/∂θ) x_{i+1}, (7)

which can be calculated by solving one additional adjoint linear system:

A(x_i, θ) z_i = (∂A(x_i, θ)/∂θ) x_{i+1}. (8)

To calculate the terms ∂A/∂x_i and ∂A/∂θ in the chain, we take advantage of the sparse nature of A by first calculating ∂C/∂x_i and ∂C/∂θ and then distributing their values into the non-zero entries of the global matrix. Because C(x(p), p, θ) is a functional convolution operator represented by a standard neural network, its derivatives with respect to the first-layer input x and the parameters θ can be calculated in a straightforward way by the auto-differentiation mechanism. The overall algorithm is summarized in Algorithm 1.

Algorithm 1: Backward derivative of a nonlinear PDE boundary value problem
Input: x_1, b, C, L;  Output: ∂L/∂θ
// Forward linearization:
for i = 0 → N − 1 do
  Solve A(x_i, θ) x_{i+1} = b;
end for
// Backward adjoint:
for i = N − 2 → 1 do
  Solve adjoints (6b) and (8) for ∂x_{i+1}/∂θ;
  ∂L/∂θ += (∂L/∂x_{i+1})(∂x_{i+1}/∂θ);
end for" }
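For readers who want to trace Algorithm 1 concretely, the following dense-linear-algebra NumPy sketch is our own illustration; the callbacks A_fn, dA_dx_fn, and dA_dtheta_fn are hypothetical stand-ins, and a real implementation would exploit the sparsity of A through ∂C/∂x and ∂C/∂θ as described above:

```python
import numpy as np

def backward_dL_dtheta(xs, A_fn, dA_dx_fn, dA_dtheta_fn, dL_dxN):
    """Dense sketch of Algorithm 1.
    xs:            iterates [x_1, ..., x_N] from the forward Picard loop
    A_fn(x):       assembled matrix A(x, theta), shape (n, n)
    dA_dx_fn(x):   tensor dA/dx, shape (n, n, n), [j,k,l] = dA_jk/dx_l
    dA_dtheta_fn:  tensor dA/dtheta, shape (n, n, P)
    dL_dxN:        gradient of the loss w.r.t. the last iterate, shape (n,)"""
    g = dL_dxN                                   # running dL/dx_{i+1}
    dL_dtheta = 0.0
    for i in range(len(xs) - 2, -1, -1):
        A = A_fn(xs[i])
        # Eq. (8): A z_i = (dA/dtheta) x_{i+1}  ->  z_i = dx_{i+1}/dtheta
        rhs_theta = np.einsum('jkp,k->jp', dA_dtheta_fn(xs[i]), xs[i + 1])
        z = -np.linalg.solve(A, rhs_theta)       # shape (n, P)
        dL_dtheta = dL_dtheta + g @ z            # accumulate chain-rule term
        # Eq. (6b): A y_i = (dA/dx_i) x_{i+1}   ->  propagate g one layer back
        rhs_x = np.einsum('jkl,k->jl', dA_dx_fn(xs[i]), xs[i + 1])
        y = -np.linalg.solve(A, rhs_x)           # dx_{i+1}/dx_i, shape (n, n)
        g = g @ y                                # dL/dx_i
    return dL_dtheta
```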
, { "heading": "4 NETWORK ARCHITECTURE AND DATA TRAINING", "text": "The network we use in this study is summarized in Table 2. We use a hybrid method combining IpOpt (Wächter, 2009) and the Adam optimizer to optimize the parameters in the neural networks. Despite its fast convergence rate, the IpOpt optimizer typically converges to a local minimum or fails to converge due to a bad initialization. By introducing the hybrid optimization method, we are able to solve this problem. The optimized parameters from IpOpt are then perturbed and refined by the Adam optimizer for a certain number of iterations. If the loss does not converge, we use the optimized parameters from Adam as the initial guess to warm-start IpOpt, and we repeat this IpOpt-Adam process until the loss converges to the tolerance. For the Adam optimizer, we set the parameters to learning rate = 1e−3, β₁ = 0.9, β₂ = 0.999, ε = 1e−8. We compare the performance of our hybrid optimization method with Adam and a typical SGDM method on the 1D Poisson problem in Figure 6. The results show that the hybrid method not only achieves a faster convergence rate but also converges to an extremely small loss compared with the other methods." }, { "heading": "5 RESULTS", "text": "We test the efficacy of our method by using it to fit a set of PDEs, ranging from the standard linear Poisson problem to highly nonlinear, temporally evolving PDEs. We build our neural network architecture by implementing both the differentiable Picard iterations and a simple fully-connected network in C++17. The training process for all the examples was run on an i7-8665 four-core desktop. The PDEs, boundary conditions, sample sizes, and test results are summarized in Table 2 in the Appendix." }, { "heading": "5.1 1D EXAMPLES", "text": "Constant kernels We first test our framework by fitting the underlying structure of a standard 1D Poisson problem ∇ · ∇x = b with Dirichlet boundary conditions. Here the target true solution is set to be x = (ap)², with p as the spatial coordinate of the data sample and a as a coefficient that varies across data samples (so that each training solution corresponds to the constant right-hand side b = 2a²). The training dataset consists of four sample points, observed from two solutions of the equation parameterized with different values of a and different boundary conditions. The 4-sample training data is generated on a 128×1 grid with two layers of the Picard network. After training, we run 16 PDE tests with varying a and boundary conditions and obtain an averaged MSE loss of 8.3e-20. The predicted solutions in all 16 test cases match the analytical curves with a maximum MSE of 1.5e-14, as shown in Figure 9a.

Next, we test the same framework on another two Poisson problems. The first problem is ∇ · ∇x = 0 with Neumann boundary conditions, which does not have an analytical solution (see Figure 9b). The prediction results from our model are shown in Figure 9b, with an averaged MSE of 5.0e-29. The other problem has the target solution x = sin(ap). The results are shown in Figure 4, with an MSE loss of 9.8e-4 against the true solution.

Stability test with noisy input We further conduct a stability test by adding noise to the setting of the 1D constant-kernel experiments. The training dataset consists of two sample points, observed from two noisy solutions of the equations. We test our framework with various scales of noise, from [−.1, .1] to [−.35, .35] with a step size of .05, relative to the extreme values of the target solution. The training data is sampled on a 32×1 grid with two layers of the Picard network. We compare our framework with denoised solutions in Figure 10. The figure shows that the framework automatically obtains precise solutions even though it cannot access accurate solutions during the training procedure.

Spatially varying coefficients Next we test our model by predicting the solutions of a Poisson PDE with spatially varying coefficients. The PDE has the analytical formula ∇ · (1 + |πp|)∇x = 0, with p as the spatial coordinate of the data sample. We build the training dataset by solving the equation with randomly set boundary conditions on a 32×1 grid. The data sample size is 4, and each data sample has a full observation of the solution space. The input of the network consists of the current values of x and the sample positions p. We show that our model can precisely uncover the distribution of the hidden latent space behind the Laplacian operator with spatially varying coefficients by fitting a functional convolution operator that predicts the solutions of the PDE in the forward process (see Figure 3). The average MSE loss is 1.3e-06 for this case.

Non-linear equations In this example, we demonstrate the ability of our neural PDE solver by solving a non-linear PDE of the form ∇ · (1 + |x| + sin(|x| · .001))∇x = 0. Dirichlet boundary conditions are set on the two endpoints of the domain. The training dataset is generated by solving the PDE with standard Picard iterations. The number of neural PDE network layers is set to 5. We employ 4 solution samples on a 32×1 discretization for training. As shown in Figure 11, our model can precisely uncover the hidden nonlinear structure of the PDE kernel and predict the numerical solution by employing the learned functional convolution through the Picard iterations." }, { "heading": "5.2 2D EXAMPLES", "text": "Poisson equation We then expand our model to solve 2D problems.
Similarly, we start by employing our model to predict the solutions of two standard 2D Poisson problems ∇ · ∇x = b with Dirichlet boundary conditions, whose target true solutions are set to be x = (a p_u)³ + a p_v² and x = sin(a p_u + a p_v), respectively. Here p_u and p_v refer to the x-axis and y-axis coordinates of the data sample. We use 4 data samples, sampled on a 32×32 grid, to train the neural network in both cases. To evaluate, we run 16 test cases for each of the two problems, and the results are shown in Figure 4. We obtain an averaged MSE loss of 5.4e-14 for the first problem and 3.7e-3 for the second problem.

Helmholtz equation In this example, we test our model's performance by predicting the solution of the Helmholtz equation ∇ · ∇x + x = 0. We set two different types of varying Dirichlet boundary conditions for this problem, one as x = −a/(p_u² + p_v²) and the other as x = a · sin(0.02 p_u + 0.02 p_v), with a varying across the data. We use 4 data samples with respect to the two types of boundary conditions with varying coefficients to train the neural network. For each type of boundary conditions, the training data is sampled on a 32×32 grid in two different domains, respectively. The results are exhibited in Figure 5. In predicting the solutions, we achieve averaged MSEs of 5.25e-27 and 9.3e-29 in the two specific domains for the first type, and 3.5e-25 and 3.9e-18 for the second type.

Wave equation In this example, we demonstrate the ability of our neural PDE solver by solving a time-dependent wave equation: ∇ · ∇x = ∂²x/∂t². We use a standard finite-difference method to generate the training data, which consists of 6 samples, each indicating the target solution at the n-th time step (1 < n < 6). Our model is trained to map x from frame n−1 to frame n. The training data is sampled on a 49×49 grid with a source x = sin(60 (n · dt)) at the center of the grid, where dt is the time interval. With this observation of the previous frames of a time-dependent wave function, our model is able to predict the following frames. The model's performance is tested by predicting the 42 frames following the first 6 training frames. The training data and the prediction results are shown in Figure 13 and Figure 14. With an average MSE loss of 6.9e-4, we show that our model can precisely uncover the intrinsic structure of the kernel from sparse observations and can predict its numerical solution over a period of time.

Navier-Stokes equations We further demonstrate our model's ability to solve the Navier-Stokes equations:

∂x⃗/∂t + x⃗ · ∇x⃗ + ∇p = f⃗,  ∇ · x⃗ = 0, (9)

where x⃗ stands for the velocity of the fluid, p indicates the pressure, and f⃗ the body force. In each time step, our model is trained to accomplish the projection step, which takes the form of a Poisson equation through the finite-difference discretization. The 6-sample training data is generated on a 32×32 grid with a fluid source in the bottom-left corner of the domain, and the model is tested for 50 frames. The training data and the prediction results are shown in Figure 12. With an averaged MSE loss of 4.09e-5, we show that our model can precisely uncover the intrinsic structure of the projection step in solving the Navier-Stokes equations from sparse observations." }
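For reference, the projection step mentioned above can be written as a pressure Poisson solve; the sketch below is our own simplified spectral version, assuming a periodic domain and unit density, whereas the paper learns the finite-difference projection with functional convolutions:

```python
import numpy as np

def project(u, v, dx, dt):
    """Pressure projection behind Eq. (9): solve lap(p) = div(u)/dt in Fourier
    space and subtract dt*grad(p), so the corrected velocity is divergence-free.
    Periodic boundaries and rho = 1 are simplifying assumptions of this sketch."""
    ny, nx = u.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)[None, :]
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)[:, None]
    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = 1j * kx * u_hat + 1j * ky * v_hat     # divergence in Fourier space
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                  # avoid 0/0 on the mean mode
    p_hat = -div_hat / (dt * k2)                    # -k^2 p_hat = div_hat / dt
    p_hat[0, 0] = 0.0                               # fix the pressure gauge
    u_new = u - dt * np.real(np.fft.ifft2(1j * kx * p_hat))
    v_new = v - dt * np.real(np.fft.ifft2(1j * ky * p_hat))
    return u_new, v_new, np.real(np.fft.ifft2(p_hat))
```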
, { "heading": "5.3 COMPARISON WITH OTHER METHODS", "text": "We compare our framework with a CNN baseline and with PINN (Lu et al., 2019).

Comparison with CNN baseline We evaluate our functional convolution model by comparing its performance with naive convolutional neural network (CNN) structures in solving two typical problems targeted in this study: 1) the 1D Poisson problem and 2) the 2D time-dependent wave equation. To solve the 1D Poisson equation, we set up the CNN baseline as a 5-layer network consisting of three 1D convolution layers with a ReLU layer between each two. The 2D time-dependent wave equation is specified by ∇ · ∇x = ∂²x/∂t²; the CNN baseline for this problem is a 5-layer network, which is described in Table 1. Figure 7 shows the results: our framework converges quickly and reduces the loss dramatically compared with the baseline. The details of the experiment can be found in Section B.1 of the Appendix.

Comparison with PINN We compare our framework with PINN (Lu et al., 2019) in the setting of the Helmholtz equation system. Specifically, the comparison is conducted with the Helmholtz equation ∇ · ∇x + x = 0 and the Dirichlet boundary condition x = −1/(p_u² + p_v²). Both our framework and PINN are trained and tested on a 32×32 grid. Figure 8 shows the prediction results for PINN and our framework. Our framework achieves an MSE of 6.05e-15, while PINN achieves an MSE of 1.66e-6. The details of this experiment can be found in Section B.2 of the Appendix." }, { "heading": "6 RELATED WORKS", "text": "PDE networks Long et al. (2018a; 2019) explore using neural networks to solve partial differential equations (PDEs). Li & Shi (2017) formulate ResNet as a control problem of PDEs on manifolds. Raissi et al. (2019) embed physical priors in neural networks to solve nonlinear PDEs. Han et al. (2018) handle general high-dimensional parabolic PDEs. Brunton et al. (2020) give an overview of neural networks in turbulence applications. Wu et al. (2020) train generative adversarial networks to model chaotic dynamical systems. Han et al. (2016) solve high-dimensional stochastic control problems based on Monte-Carlo sampling. Raissi (2018) approximates a deterministic function of time and space by a deep neural network for backward stochastic differential equations with a random terminal condition to be satisfied. Wang et al. (2019) first propose to use reinforcement learning (RL) to aid the process of solving conservation laws. Jagtap et al. (2020b) propose a neural network on discrete domains for nonlinear conservation laws. Jagtap et al. (2020a) employ adaptive activation functions to solve PDEs and to approximate smooth and discontinuous functions. Pang et al. (2019) combine neural networks and Gaussian processes to solve PDEs.

Prior-embedded neural simulators Many recent learning-physics works are based on building networks to describe interactions among objects or components (see Battaglia et al. (2018) for a survey). The pioneering works by Battaglia et al. (2016) and Chang et al. (2017) predict different particle dynamics, such as N-body systems, by learning the pairwise interactions. Following this, the interaction networks were enhanced with a graph network by Kipf et al. (2017) for different applications. Specialized hierarchical representations by Mrowca et al. (2018), residual corrections by Ajay et al. (2018), propagation mechanisms by Li et al. (2019), and linear transitions by Li et al. (2020) were employed to reason about various physical systems. On another front, modeling neural networks from a dynamic-systems perspective has drawn increasing attention.
For instance, Chen et al. (2018) treat neural networks as ordinary differential equations (ODEs)." }, { "heading": "7 CONCLUSION", "text": "In this paper, we introduced Neural PDE, a machine learning approach that learns the intrinsic properties of PDEs by learning and employing a novel functional convolution together with adjoint equations to enable end-to-end training. Our model presents strong robustness against arbitrary noise. The main limitation of our work is that we assume the PDE systems are sparse; that is, the relations are restricted to local neighborhoods. To enforce constraints over a larger area, one can enlarge the kernel size, but this can potentially cost much more computation and memory. For future applications, we plan to apply the method to real-world 3D problems, such as incompressible fluids and nonlinear soft materials. We also plan to scale up the system by leveraging sparse iterative linear solvers for the linearized PDEs in each iteration." }, { "heading": "A ABLATION TESTS", "text": "The neural networks are trained by a hybrid method of the IpOpt and Adam optimizers. Here we demonstrate our hybrid optimizer's capability to outperform a typical Adam optimizer and a typical SGDM optimizer, as shown in Figure 6." }, { "heading": "B COMPARISON WITH OTHER METHODS", "text": "B.1 COMPARISON WITH NAIVE CNN STRUCTURE
We evaluate our functional convolution model by comparing its performance with other naive convolutional neural network structures in solving two typical problems targeted in this study. For the first case, solving the 1D Poisson equation, we set up the baseline structure as a 5-layer network consisting of three 1D convolution layers with a ReLU layer between each two. The second problem is the 2D time-dependent wave equation specified by ∇ · ∇x = ∂²x/∂t². The baseline for this problem is also set to be a 5-layer network, which is described in Table 1. For each of the baselines, the input is set to be the right-hand-side value and the output is the predicted solution. The results are shown in Figure 7, which demonstrates that our functional convolution model outperforms the naive convolutional neural network in both accuracy and convergence rate.
B.2 COMPARISON WITH PINN
We compare our framework with PINN (Lu et al., 2019) in the setting of the Helmholtz equation system. Both models are trained and tested on a 32×32 grid. Figure 8 shows the prediction results for PINN and our framework." }, { "heading": "C 1D EXAMPLES", "text": "C.1 1D POISSON EXAMPLE
We employ our functional convolution model to solve typical 1D Poisson problems with constant kernels; the testing results are shown in Figure 9.
C.2 1D STABILITY TEST WITH NOISY INPUT
We conduct a stability test by adding noise to the setting of the 1D constant-kernel experiments. We test our framework with various scales of noise, from [−.1, .1] to [−.35, .35] with a step size of .05, relative to the extreme values of the target solution. We compare our framework with denoised solutions in Figure 10. The figure shows that the framework automatically obtains precise solutions even though it cannot access accurate solutions during the training procedure.
C.3 1D SPATIALLY VARYING AND NONLINEAR EXAMPLES
We also train the model to predict the solutions of ∇ · (1 + |πp|)∇x = 0 and ∇ · (1 + |x| + sin(|x| · .001))∇x = 0.
The results are shown in Figure 11.
C.4 2D NAVIER-STOKES EXAMPLE
The 6-sample training data is generated on a 32×32 grid with a fluid source in the bottom-left corner of the domain, and the model is tested for 50 frames. The functional convolution network is set to be 3×6×6×6×6×3. We show the ground truth produced by a typical fluid solver, our prediction, and the absolute error between the two for frames 1, 25, and 50 in Figure 12." }, { "heading": "D WAVE EQUATION", "text": "Figure 13 shows the training data for the wave equation. Figure 14 shows the predicted solution of the time-dependent wave equation.
[Figure 13 caption: (a-c) the target solution of ∇ · ∇x = ∂²x/∂t² at timesteps 1, 3, 5, shown in top view and 3D view. The equation is solved on the domain [0, 1] × [0, 1]; here we only show the plots in [0.2, 0.8] × [0.2, 0.8].]" }, { "heading": "E DETAILS ON NEURAL NETWORK STRUCTURES", "text": "We show the details of the naive CNN structures, which are trained as baselines for comparison with our convolution model, in Table 1. We also refer readers to Table 2 for the details of the neural network structures and training parameters across the different examples." } ]
2020
null
SP:e7c149067b48a63680ae063c880c00a304309b90
[ "This paper interprets neural networks as chain graphs, providing theoretical analysis of various neural network components. Furthermore, this chain graph interpretation is used to propose a new approach (architecture), a partially collapsed feed-forward procedure. A layered chain graph representation is adopted to formulate neural networks, which further establishes feed-forward as approximate probabilistic inference using linear approximations. Several concrete examples are analyzed based on the chain graph formulation. " ]
The last decade has witnessed a boom of deep learning research and applications achieving state-of-the-art results in various domains. However, most advances have been established empirically, and their theoretical analysis remains lacking. One major issue is that our current interpretation of neural networks (NNs) as function approximators is too generic to support in-depth analysis. In this paper, we remedy this by proposing an alternative interpretation that identifies NNs as chain graphs (CGs) and feed-forward as an approximate inference procedure. The CG interpretation specifies the nature of each NN component within the rich theoretical framework of probabilistic graphical models, while at the same time remaining general enough to cover real-world NNs with arbitrary depth, multi-branching and varied activations, as well as common structures including convolution / recurrent layers, residual blocks and dropout. We demonstrate with concrete examples that the CG interpretation can provide novel theoretical support and insights for various NN techniques, as well as derive new deep learning approaches such as the concept of partially collapsed feed-forward inference. It is thus a promising framework that deepens our understanding of neural networks and provides a coherent theoretical formulation for future deep learning research.
[]
[ { "heading": "1 INTRODUCTION", "text": "During the last decade, deep learning (Goodfellow et al., 2016), the study of neural networks (NNs), has achieved ground-breaking results in diverse areas such as computer vision (Krizhevsky et al., 2012; He et al., 2016; Long et al., 2015; Chen et al., 2018), natural language processing (Hinton et al., 2012; Vaswani et al., 2017; Devlin et al., 2019), generative modeling (Kingma & Welling, 2014; Goodfellow et al., 2014) and reinforcement learning (Mnih et al., 2015; Silver et al., 2016), and various network designs have been proposed. However, neural networks have been treated largely as “black-box” function approximators, and their designs have chiefly been found via trialand-error, with little or no theoretical justification. A major cause that hinders the theoretical analysis is the current overly generic modeling of neural networks as function approximators: simply interpreting a neural network as a composition of parametrized functions provides little insight to decipher the nature of its components or its behavior during the learning process.\nIn this paper, we show that a neural network can actually be interpreted as a probabilistic graphical model (PGM) called chain graph (CG) (Koller & Friedman, 2009), and feed-forward as an efficient approximate probabilistic inference on it. This offers specific interpretations for various neural network components, allowing for in-depth theoretical analysis and derivation of new approaches." }, { "heading": "1.1 RELATED WORK", "text": "In terms of theoretical understanding of neural networks, a well known result based on the function approximator view is the universal approximation theorem (Goodfellow et al., 2016), however it only establishes the representational power of NNs. Also, there have been many efforts on alternative NN interpretations. One prominent approach identifies infinite width NNs as Gaussian processes (Neal, 1996; Lee et al., 2018), enabling kernel method analysis (Jacot et al., 2018). Other works also employ theories such as optimal transport (Genevay et al., 2017; Chizat & Bach, 2018) or mean field (Mei et al., 2019). These approaches lead to interesting findings, however they tend to only hold under limited or unrealistic settings and have difficulties interpreting practical real-world NNs.\nAlternatively, some existing works study the post-hoc interpretability (Lipton, 2018), proposing methods to analyze the empirical behavior of trained neural networks: activation maximization (Erhan et al., 2009), typical input synthesis (Nguyen et al., 2016), deconvolution (Zeiler & Fergus, 2014), layer-wise relevance propagation (Bach et al., 2015), etc. These methods can offer valuable insights to the practical behavior of neural networks, however they represent distinct approaches and focuses, and are all limited within the function approximator view.\nOur work links neural networks to probabilistic graphical models (Koller & Friedman, 2009), a rich theoretical framework that models and visualizes probabilistic systems composed of random variables (RVs) and their interdependencies. There are several types of graphical models. The chain graph model (also referred to as the LWF chain graph model) (Koller & Friedman, 2009; Lauritzen & Wermuth, 1989; Frydenberg, 1990) used in our work is a general form that unites directed and undirected variants, visualized as a partially directed acyclic graph (PDAG). 
Interestingly, there exists a series of works on constructing hierarchical graphical models for data-driven learning problems, such as sigmoid belief network (Neal, 1992), deep belief network (Hinton et al., 2006), deep Boltzmann machine (Salakhutdinov & Hinton, 2012) and sum product network (Poon & Domingos, 2011). As alternatives to neural networks, these models have shown promising potentials for generative modeling and unsupervised learning. Nevertheless, they are yet to demonstrate competitive performances over neural network for discriminative learning.\nNeural networks and graphical models have so far been treated as two distinct approaches in general. Existing works that combine them (Zheng et al., 2015; Chen et al., 2018; Lample et al., 2016) mainly treat either neural networks as function approximators for amortized inference, or graphical models as post-processing steps. Tang & Salakhutdinov (2013) create a hybrid model, the stochastic feedforward neural network (SFNN), by concatenating deterministic neurons with stochastic Bernoulli random variables, in order to represent multimodal distributions. Some also consider neural networks as graphical models with deterministic hidden nodes (Buntine, 1994). However this is an atypical degenerate regime. To the best of our knowledge, our work provides the first rigorous and comprehensive formulation of a (non-degenerate) graphical model interpretation for neural networks in practical use." }, { "heading": "1.2 OUR CONTRIBUTIONS", "text": "The main contributions of our work are summarized as follows:\n• We propose a layered chain graph representation of neural networks, interpret feed-forward as an approximate probabilistic inference procedure, and show that this interpretation provides an extensive coverage of practical NN components (Section 2);\n• To illustrate its advantages, we show with concrete examples (residual block, RNN, dropout) that the chain graph interpretation enables coherent and in-depth theoretical support, and provides additional insights to various empirically established network structures (Section 3); • Furthermore, we demonstrate the potential of the chain graph interpretation for discovering new approaches by using it to derive a novel stochastic inference method named partially collapsed feed-forward, and establish experimentally its empirical effectiveness (Section 4)." }, { "heading": "2 CHAIN GRAPH INTERPRETATION OF NEURAL NETWORKS", "text": "Without further delay, we derive the chain graph interpretation of neural networks in this section. We will state and discuss the main results here and leave the proofs in the appendix." }, { "heading": "2.1 THE LAYERED CHAIN GRAPH REPRESENTATION", "text": "We start by formulating the so called layered chain graph that corresponds to neural networks we use in practice: Consider a system represented by L layers of random variables (X1, . . . ,XL), where X li is the i-th variable node in the l-th layer, and denote N\nl the number of nodes in layer l. We assume that nodes X li in the same layer l have the same distribution type characterized by a feature function Tl that can be multidimensional. Also, we assume that the layers are ordered topologically and denote Pa(Xl) the parent layers of Xl. To ease our discussion, we assume that X1 is the input layer and XL the output layer (our formulation can easily extend to multi-input/output cases). A layered chain graph is then defined as follows:\nDefinition 1. 
A layered chain graph that involves L layers of random variables (X1, . . . ,XL) is a chain graph that encodes the overall distribution P (X2, . . . ,XL|X1) such that: 1. It can be factored into layerwise chain components P (Xl|Pa(Xl)) following the topological or-\nder, and nodes X li within each chain component P (X l|Pa(Xl)) are conditionally independent given their parents (this results in bipartite chain components), thus allowing for further decomposition into nodewise conditional distributions P (X li |Pa(Xl)) . This means we have\nP (X2, . . . ,XL|X1) = L∏ l=2 P (Xl|Pa(Xl)) = L∏ l=2 N l∏ i=1 P (X li |Pa(Xl)); (1)\n2. For each layer l with parent layers Pa(Xl) = {Xp1 , . . .Xpn}, p1, . . . , pn ∈ {1, . . . , l − 1}, its nodewise conditional distributions P (X li |Pa(Xl)) are modeled by pairwise conditional random fields (CRFs) with with unary (bli) and pairwise (W p,l j,i ) weights (as we will see, they actually\ncorrespond to biases and weights in NN layers): P (X li |Pa(Xl)) = f l ( Tl(X li), e l i ( Tp1(Xp1), . . . ,Tpn(Xpn) )) (2)\nwith eli ( Tp1(Xp1), . . . ,Tpn(Xpn) ) = bli + pn∑ p=p1 Np∑ j=1 Wp,lj,iT p(Xpj ). (3)\nFigure 1 Left illustrates an example three-layer network as layered chain graph and its chain component factorization. In Eq. (2), f l is an arbitrary function that represents a probability distribution. For exponential family distributions (Koller & Friedman, 2009), Eq. (2) simply becomes P (X li |Pa(Xl)) ∝ exp ( Tl(X li) · eli ( Tp1(Xp1), . . . ,Tpn(Xpn) )) .\nNote that layered chain graph has a globally directed graph structure and has an equivalent modeling based on directed graphical model (Bayesian network) (Koller & Friedman, 2009), we elaborate on this point for interested readers in Appendix A." }, { "heading": "2.2 FEED-FORWARD AS APPROXIMATE PROBABILISTIC INFERENCE", "text": "To identify layered chain graphs with real-world neural networks, we need to show that they can behave the same way during inference and learning. For this, we establish the fact that feed-forward can actually be seen as performing an approximate probabilistic inference on a layered chain graph:\nGiven an input sample x̃1, we consider the problem of inferring the marginal distribution Qli of a node X li and its expected features q l i, defined as\nQli(x l i|x̃1) = P (X li = xli|X1 = x̃1); qli = EQli [T l(X li)] (q 1 = x̃1). (4)\nConsider a non-input layer l with parent layers p1, . . . , pn, the independence assumptions encoded by the layered chain graph lead to the following recursive expression for marginal distributions Q:\nQli(x l i|x̃1) = EQp1 ,...,Qpn [P (xli|Pa(Xl))]. (5)\nHowever, the above expression is in general intractable, as it integrates over the entire admissible states of all parents nodes in Pa(Xl). To proceed further, simplifying approximations are needed. Interestingly, by using linear approximations, we can obtain the following results (in case of discrete random variable the integration in Eq. 7 is replaced by summation): Proposition 1. If we make the assumptions that the corresponding expressions are approximately linear w.r.t. parent features Tp1(Xp1), . . . ,Tpn(Xpn), we obtain the following approximations:\nQli(x l i|x̃1) ≈ f l ( Tl(xli), e l i(q p1 , . . . ,qpn) ) ; (6)\nqli ≈ ∫ xli Tl(xli)f l ( Tl(xli), e l i(q p1 , . . . ,qpn) ) dxli := g l(eli(q p1 , . . . ,qpn)). (7)\nEspecially, Eq. (7) is a feed-forward expression for expected features qli with activation function g l determined by Tl and f l, i.e. 
the distribution type of random variable nodes in layer l.\nThe proof is provided in Appendix B.1. This allows us to identify feed-forward as an approximate probabilistic inference procedure for layered chain graphs. For learning, the loss function is typically a function of (QL,qL) obtainable via feed-forward, and we can follow the same classical neural network parameter update using stochastic gradient descent and backpropagation. Thus we are able to replicate the exact neural network training process with this layered chain graph framework.\nThe following corollary provides concrete examples of some common activation functions g (we emphasize their names in bold, detailed formulations and proofs are given in Appendix B.2): Corollary 2. We have the following node distribution - activation function correspondences:\n1. Binary nodes taking values {α, β} results in sigmoidal activations, especially, we obtain sigmoid with α = 0, β = 1 and tanh with α = −1, β = 1 (α, β are interchangeable); 2. Multilabel nodes characterized by label indicator features result in the softmax activation; 3. Variants of (leaky) rectified Gaussian distributions (T li (X l i) = X l i = max( Y l i , Y l i ) with Y l i ∼\nN ( eli, (s l i(e l i)) 2 ) ) can approximate activations such as softplus ( = 0, sli ≈ 1.7761) and -leaky rectified linear unit (ReLU) (sli = tanh(eli)) including ReLU ( = 0) and identity ( = 1).\nFigure 1 Right illustrates activation functions approximated by various rectified Gaussian variants. We also plotted (in orange) an alternative approximation of ReLU with sigmoid-modulated standard deviation proposed by Nair & Hinton (2010) which is less accurate around the kink at the origin.\nThe linear approximations, needed for feed-forward, is coarse and only accurate for small pairwise weights (‖W‖ 1) or already linear regions. This might justify weight decay beyond the general “anti-overfit” argument and the empirical superiority of piecewise linear activations like ReLU (Nair & Hinton, 2010). Conversely, as a source of error, it might explain some “failure cases” of neural networks such as their vulnerability against adversarial samples, see e.g., Goodfellow et al. (2015)." }, { "heading": "2.3 GENERALITY OF THE CHAIN GRAPH INTERPRETATION", "text": "The chain graph interpretation formulated in Sections 2.1 and 2.2 is a general framework that can describe many practical network structures. To demonstrate this, we list here a wide range of neural network designs (marked in bold) that are chain graph interpretable.\n• In terms of network architecture, it is clear that the chain graph interpretation can model networks of arbitrary depth, and with general multi-branched structures such as inception modules (Szegedy et al., 2015) or residual blocks (He et al., 2016; He et al., 2016) discussed in Section 3.1. Also, it is possible to built up recurrent neural networks (RNNs) for sequential data\nlearning, as we will see in Section 3.2. Furthermore, the modularity of chain components justifies transfer learning via partial reuse of pre-trained networks, e.g., backbones trained for image classification can be reused for segmentation (Chen et al., 2018). • In terms of layer structure, we are free to employ sparse connection patterns and shared/fixed weight, so that we can obtain not only dense connections, but also connections like convolution, average pooling or skip connections. 
Moreover, as shown in Section 3.3, dropout can be reproduced by introducing and sampling from auxiliary random variables, and normalization layers like batch normalization (Ioffe & Szegedy, 2015) can be seen as reparametrizations of node distributions and fall within the general form (Eq. (2)). Finally, we can extend the layered chain graph model to allow for intra-layer connections, which enables non-bipartite CRF layers which are typically used on output layers for structured prediction tasks like image segmentation (Zheng et al., 2015; Chen et al., 2018) or named entity recognition (Lample et al., 2016). However, feed-forward is no longer applicable through these intra-connected layers. • Node distributions can be chosen freely, leading to a variety of nonlinearities (e.g., Corollary 2)." }, { "heading": "3 SELECTED CASE STUDIES OF EXISTING NEURAL NETWORK DESIGNS", "text": "The proposed chain graph interpretation offers a detailed description of the underlying mechanism of neural networks. This allows us to obtain novel theoretical support and insights for various network designs which are consistent within a unified framework. We illustrate this with the following concrete examples where we perform in-depth analysis based on the chain graph formulation." }, { "heading": "3.1 RESIDUAL BLOCK AS REFINEMENT MODULE", "text": "The residual block, proposed originally in He et al. (2016) and improved later (He et al., 2016) with the preactivation form, is an effective design for building up very deep networks. Here we show that a preactivation residual block corresponds to a refinement module within a chain graph. We use modules to refer to encapsulations of layered chain subgraphs as input–output mappings without specifying their internal structures. A refinement module is defined as follows:\nDefinition 2. Given a base submodule from layer Xl−1 to layer Xl, a refinement module augments this base submodule with a side branch that chains a copy of the base submodule (sharing weight with its original) from Xl−1 to a duplicated layer X̃l, and then a refining submodule from X̃l to Xl.\nProposition 3. A refinement module corresponds to a preactivation residual block.\nWe provide a proof in Appendix B.3 and illustrate this correspondence in Figure 2. An interesting remark is that the refinement process can be recursive: the base submodule of a refinement module can be a refinement module itself. This results in a sequence of consecutive residual blocks.\nWhile a vanilla layered chain component encodes a generalized linear model during feed-forward (c.f. Eqs. (7),(3)), the refinement process introduces a nonlinear extension term to the previously linear output preactivation, effectively increasing the representational power. This provides a possible explanation to the empirical improvement generally observed when using residual blocks.\nNote that it is also possible to interpret the original postactivation residual blocks, however in a somewhat artificial manner, as it requires defining identity connections with manually fixed weights." }, { "heading": "3.2 RECURRENT NEURAL NETWORKS", "text": "Recurrent neural networks (RNNs) (Goodfellow et al., 2016) are widely used for handling sequential data. 
An unrolled recurrent neural network can be interpreted as a dynamic layered chain graph constructed as follows: a given base layered chain graph is copied for each time step, then these copies are connected together through recurrent chain components following the Markov assumption (Koller & Friedman, 2009): each recurrent layer Xl,t at time t is connected by its corresponding layer Xl,t−1 from the previous time step t − 1. Especially, denoting Pat(Xl,t) the non-recurrent parent layers of Xl,t in the base chain graph, we can easily interpret the following two variants: Proposition 4. Given a recurrent chain component that encodes P (Xl,t|Pat(Xl,t),Xl,t−1)," }, { "heading": "1. It corresponds to a simple (or vanilla / Elman) recurrent layer (Goodfellow et al., 2016) if the", "text": "connection from Xl,t−1 to Xl,t is dense; 2. It corresponds to an independently RNN (IndRNN) (Li et al., 2018) layer if the conditional inde-\npendence assumptions among the nodes X l,ti within layer l are kept through time:\n∀i ∈ {1, . . . , N l}, P (X l,ti |Pa t(Xl,t),Xl,t−1) = P (X l,ti |Pa t(Xl,t), X l,t−1i ). (8)\nWe provide a proof in Appendix B.4 and illustrates both variants in Figure 3.\nThe simple recurrent layer, despite its exhaustive dense recurrent connection, is known to suffer from vanishing/exploding gradient and can not handle long sequences. The commonly used long-short term memory (Hochreiter & Schmidhuber, 1997) and gated recurrent unit (Cho et al., 2014) alleviate this issue via long term memory cells and gating. However, they tend to result in bloated structures, and still cannot handle very long sequences (Li et al., 2018). On the other hand, IndRNNs can process much longer sequences and significantly outperform not only simple RNNs, but also LSTMbased variants (Li et al., 2018; 2019). This indicates that the assumption of intra-layer conditional independence through time, analogue to the local receptive fields of convolutional neural networks, could be an essential sparse network design tailored for sequential modeling." }, { "heading": "3.3 DROPOUT", "text": "Dropout (Srivastava et al., 2014) is a practical stochastic regularization method commonly used especially for regularizing fully connected layers. As we see in the following proposition, from the chain graph point of view, dropout corresponds to introducing Bernoulli auxiliary random variables that serve as noise generators for feed-forward during training: Proposition 5. Adding dropout with drop rate 1 − pl to layer l corresponds to the following chain graph construction: for each nodeX li in layer l we introduce an auxiliary Bernoulli random variable Dli ∼ Bernoulli(pl) and multiply it with the pairwise interaction terms in all preactivations (Eq. (3)) involving X li as parent (this makes D l i a parent of all child nodes of X l i and extend their pairwise interactions with X li to ternary ones). The behavior of dropout is reproduced exactly if:\n• During training, we sample auxiliary nodesDli during each feed-forward. This results in dropping each activation qli of node X l i with probability 1− pl; • At test time, we marginalize auxiliary nodes Dli during each feed-forward. This leads to deterministic evaluations with a constant scaling of pl for the node activations qli.\nWe provide a proof in Appendix B.5. Note that among other things, this chain graph interpretation of dropout provides a theoretical justification of the constant scaling at test time. This was originally proposed as a heuristic in Srivastava et al. 
(2014) to maintain consistent behavior after training." }, { "heading": "4 PARTIALLY COLLAPSED FEED-FORWARD", "text": "The theoretical formulation provided by the chain graph interpretation can also be used to derive new approaches for neural networks. It allows us to create new deep learning methods following a coherent framework that provides specific semantics to the building blocks of neural networks. Moreover, we can make use of the abundant existing work from the PGM field, which also serves as a rich source of inspiration. As a concrete example, we derive in this section a new stochastic inference procedure called partially collapsed feed-forward (PCFF) using the chain graph formulation." }, { "heading": "4.1 PCFF: CHAIN GRAPH FORMULATION", "text": "A layered chain graph, which can represent a neural network, is itself a probabilistic graphical model that encodes an overall distribution conditioned on the input. This means that, to achieve stochastic behavior, we can directly draw samples from this distribution, instead of introducing additional “noise generators” like in dropout. In fact, given the globally directed structure of layered chain graph, and the fact that the conditioned input nodes are ancestral nodes without parent, it is a wellknown PGM result that we can apply forward sampling (or ancestral sampling) (Koller & Friedman, 2009) to efficiently generate samples: given an input sample x̃1, we follow the topological order and sample each non-input node X li using its nodewise distribution (Eq. (2)) conditioned on the samples (xp1 , . . . ,xpn) of its parents. Compared to feed-forward, forward sampling also performs a single forward pass, but generates instead an unbiased stochastic sample estimate.\nWhile in general an unbiased estimate is preferable and the stochastic behavior can also introduce regularization during training (Srivastava et al., 2014), forward sampling can not directly replace feed-forward, since the sampling operation is not differentiable and will jeopardize the gradient flow during backpropagation. To tackle this, one idea is to apply the reparametrization trick (Kingma & Welling, 2014) on continuous random variables (for discrete RVs the Gumbel softmax trick (Jang et al., 2017) can be used but requires additional continuous relaxation). An alternative solution is to only sample part of the nodes as in the case of dropout.\nThe proposed partially collapse feed-forward follows the second idea: we simply “mix up” feedforward and forward sampling, so that for each forward inference during training, we randomly select a portion of nodes to sample and the rest to compute deterministically with feed-forward. Thus for a node X li with parents (X p1 , . . . ,Xpn), its forward inference update becomes\nqli ← { gl(eli(q\np1 , . . . ,qpn)) if collapsed (feed-forward); Tl(xli), x l i ∼ f l ( Tl(X li), e l i(q p1 , . . . ,qpn) ) if uncollapsed (forward sampling). (9)\nFollowing the collapsed sampling (Koller & Friedman, 2009) terminology, we call this method the partially collapsed feed-forward (PCFF). PCFF is a generalization over feed-forward and forward sampling, which can be seen as its fully collapsed / uncollapsed extremes. 
Furthermore, it offers a bias–variance trade-off, and can be combined with the reparametrization trick to achieve unbiased estimates with full sampling, while simultaneously maintaining the gradient flow.\nRelation to stochastic feedforward neural network While PCFF can also be seen as a stochastic generalization of the feed-forward inference, it represents a substantially distinct approach compared to SFNN: Apart from the clear difference that PCFF uses forward sampling and SFNN uses importance sampling, a major dissimilarity is that SFNN makes a clear distinction between deterministic neurons and stochastic random variables, whereas PCFF identifies neurons with random variables thanks to the layered chain graph interpretation. This is why PCFF can freely choose a different subset of nodes to sample during each forward pass. From the chain graph interpretation perspective, SFNN can be seen as a layered chain graph having a fixed subset of nodes with stochastic behavior, and it performs a hybrid of feed-forward and importance sampling for inference." }, { "heading": "4.2 PCFF: EXPERIMENTAL VALIDATION", "text": "In the previous sections, we have been discussing existing approaches whose empirical evaluations have been thoroughly covered by prior work. The novel PCFF approach proposed in this section, however, requires experiments to check its practical effectiveness. For this we conduct here a series\nof experiments1. Our emphasis is to understand the behavior of PCFF under various contexts and not to achieve best result for any specific task. We only use chain graph interpretable components, and we adopt the reparameterization trick (Kingma & Welling, 2014) for ReLU PCFF samples.\nThe following experiments show that PCFF is overall an effective stochastic regularization method. Compared to dropout, it tends to produce more consistent performance improvement, and can sometimes outperform dropout. This confirms that our chain graph based reasoning has successfully found an interesting novel deep learning method.\nSimple dense network We start with a simple network with two dense hidden layers of 1024 nodes to classify MNIST (Lecun et al., 1998) and FashionMNIST (Xiao et al., 2017) images. We use PyTorch (Paszke et al., 2017), train with stochastic gradient descent (learning rate 0.01, momentum 0.9), and set up 20% of training data as validation set for performance monitoring and early-stopping. We set drop rate to 0.5 for dropout, and for PCFF we set the sample rate to 0.4 for tanh and 1.0 (full sampling) for ReLU. Figure 4 Left reports the test errors with different activation functions and stochastic regularizations.\nWe see that dropout and PCFF are overall comparable, and both improve the results in most cases. Also, the ReLU activation consistently produces better results that tanh. Additional experiments show that PCFF and dropout can be used together, which sometimes yields improved performance.\nConvolutional residual network To figure out the applicability of PCFF in convolutional residual networks, we experiment on CIFAR-10 (Krizhevsky, 2009) image classification. For this we adapt an existing implementation (Idelbayev) to use the preactivation variant. We focus on the ResNet20 structure, and follow the original learning rate schedule except for setting up a validation set of 10% training data to monitor training performance. 
Figure 4 Right summarizes the test errors under different drop/sample rates.\nWe observe that in this case PCFF can improve the performance over a wide range of sample rates, whereas dropout is only effective with drop rate 0.1, and large drop rates in this case significantly deteriorate the performance. We also observe a clear trade-off of the PCFF sample rate, where a partial sampling of 0.3 yields the best result.\nIndependently RNN We complete our empirical evaluations of PCFF with an RNN test case. For this we used IndRNNs with 6 layers to solve the sequential/permuted MNIST classification problems based on an existing Implementation2 provided by the authors of IndRNN (Li et al., 2018; 2019). We tested over dropout with drop rate 0.1 and PCFF with sample rate 0.1 and report the average test accuracy of three runs. We notice that, while in the permuted MNIST case both dropout (0.9203) and PCFF (0.9145) improves the result (0.9045), in the sequential MNIST case, dropout (0.9830) seems to worsen the performance (0.9841) whereas PCFF (0.9842) delivers comparable result.\n1Implementation available at: (Github link placeholder, provided as supplementary material.) 2https://github.com/Sunnydreamrain/IndRNN_pytorch" }, { "heading": "5 CONCLUSIONS AND DISCUSSIONS", "text": "In this work, we show that neural networks can be interpreted as layered chain graphs, and that feedforward can be viewed as an approximate inference procedure for these models. This chain graph interpretation provides a unified theoretical framework that elucidates the underlying mechanism of real-world neural networks and provides coherent and in-depth theoretical support for a wide range of empirically established network designs. Furthermore, it also offers a solid foundation to derive new deep learning approaches, with the additional help from the rich existing work on PGMs. It is thus a promising alternative neural network interpretation that deepens our theoretical understanding and unveils a new perspective for future deep learning research.\nIn the future, we plan to investigate a number of open questions that stem from this work, especially:\n• Is the current chain graph interpretation sufficient to capture the full essence of neural networks? Based on the current results, we are reasonably optimistic that the proposed interpretation can cover an essential part of the neural network mechanism. However, compared to the function approximator view, it only covers a subset of existing techniques. Is this subset good enough? • On a related note: can we find chain graph interpretations for other important network designs (or otherwise some chain graph interpretable alternatives with comparable or better performance)? The current work provides a good start, but it is by no means an exhaustive study. • Finally, what other new deep learning models and procedures can we build up based on the chain graph framework? The partially collapsed feed-forward inference proposed in this work is just a simple illustrative example, and we believe that many other promising deep learning techniques can be derived from the proposed chain graph interpretation." }, { "heading": "A GLOBALLY DIRECTED STRUCTURE OF LAYERED CHAIN GRAPH", "text": "With the absence of (undirected) intra-layer connection in the chain components P (Xl|Pa(Xl)), the layered chain graph defined in Definition 1 has a globally directed structure. 
This means that equivalently it can also be modeled by a directed graphical model (Bayesian network) which admits the same nodewise decomposition\nP (X2, . . . ,XL|X1) = L∏ l=2 N l∏ i=1 P (X li |Pa(Xl)) (10)\nand whose factorized nodewise conditional distributions P (X li |Pa(Xl)) are additionally modeled by pairwise CRFs:\nP (X li |Pa(Xl)) = f l ( Tl(X li), e l i ( Tp1(Xp1), . . . ,Tpn(Xpn) )) . (11)\nThe expressions Eq. (10), (11) are identical to Eq. (1), (2), which shows the equivalence. This is a rather straightforward result as directed graphical model is just a special case of chain graph. We employ the more general chain graph modeling in this paper out of two concerns:\n1. Neural network designs rely heavily on the notion of layers. The layered chain graph formulation provides us with the notion of chain component that can correspond exactly to a neural network layer. This is missing in a directed graphical model formulation; 2. As we have discussed in Section 2.3, using the layered chain graph formulation allows for a straightforward extension from the classical neural network case (layered chain graph) to the more complex case with intra-connected layers that corresponds to general chain graphs with non-bipartite CRF chain components, which can be useful for, e.g., structured prediction tasks." }, { "heading": "B PROOFS", "text": "B.1 PROOF OF PROPOSITION 1\nThe main idea behind the proof is that for a linear function, its expectation can be moved inside and directly applied on its arguments. With this in mind let’s start the actual deductions:\n• To obtain Eq. (6), we start from Eqs. (5) and (2):\nQli(x l i|x̃1) = EQp1 ,...,Qpn [P (xli|Pa(Xl))] (12) = EQp1 ,...,Qpn [f l ( Tl(X li), e l i ( Tp1(Xp1), . . . ,Tpn(Xpn) )) ]. (13)\nNow, we make the assumption that the following mapping is approximately linear: (v1, . . . ,vn) 7→ f l ( Tl(X li), e l i(v1, . . . ,vn) ) . (14)\nThis allows us to move the expectation inside, resulting in (racall the definition of q in Eq. (4))\nQli(x l i|x̃1) ≈ f l ( Tl(X li), e l i ( EQp1 [Tp1(Xp1)], . . . ,EQpn [Tpn(Xpn)] )) (15)\n≈ f l ( Tl(X li), e l i(q p1 , . . . ,qpn) ) . (16)\n• To obtain Eq. (7), we go through a similar procedure (for discrete RVs we replace integrations by summations): From Eqs. (4), (5) and (2) we have\nqli = EQli [T l(X li)] (17)\n= ∫ xli Tl(xli)Q l i(x l i|x̃1)dxli (18)\n= ∫ xli Tl(xli)EQp1 ,...,Qpn [f l ( Tl(xli), e l i ( Tp1(Xp1), . . . ,Tpn(Xpn) )) ]dxli (19)\n= EQp1 ,...,Qpn [ ∫\nxli\nTl(xli)f l ( Tl(xli), e l i ( Tp1(Xp1), . . . ,Tpn(Xpn) )) dxli ] . (20)\nDefining the activation function gl as (thus eli corresponds to the preactivation)\ngl(v) = ∫ xli Tl(xli)f l ( Tl(xli),v ) dxli, (21)\nwe then have qli = EQp1 ,...,Qpn [ gl ( eli ( Tp1(Xp1), . . . ,Tpn(Xpn) ))] . (22)\nAgain, we make another assumption that the following mapping is approximately linear: (v1, . . . ,vn) 7→ gl ( eli(v1, . . . ,vn) ) . (23)\nThis leads to the following approximation in a similar fashion: qli ≈ gl ( eli ( EQp1 [Tp1(Xp1)], . . . ,EQpn [Tpn(Xpn)] )) (24)\n≈ gl ( eli(q p1 , . . . ,qpn) ) . (25)\nB.2 PROOF (AND OTHER DETAILS) OF COROLLARY 2\nLet’s consider a node X li connected by parent layers X p1 , . . . ,Xpn . To lessen the notations we use the shorthands eli for e l i ( Tp1(Xp1), . . . ,Tpn(Xpn) ) and ēli for e l i(q p1 , . . . ,qpn).\n1. 
For the binary case we have\nT l(X li) = X l i ∈ {α, β}, (26)\nP (X li |Pa(Xl)) = f l(X li , eli) = 1\nZ(Pa(Xl)) exp(X li e l i) (27)\nwith the partition function\nZ(Pa(Xl)) = exp(α eli) + exp(β e l i) (28)\nthat makes sure P (X li |Pa(Xl)) is normalized. This means that, since X li can either be α or β, we can equivalently write (σ : x 7→ 1/(1 + exp(−x)) denotes the sigmoid function)\nf l(X li , e l i) = P (x l i|Pa(xl)) = { σ((α− β) eli) if xli = α σ((β − α) eli) if xli = β = σ((2xli − α− β) eli). (29)\nUsing the feed-forward expression (Eq. (7)), we have qli ≈ ∑\nxli∈{α,β}\nxli f l(xli, ē l i) = α σ((α− β) ēli) + β σ((β − α) ēli) (30)\n= β − α\n2 tanh (β − α 2 · ēli ) + α+ β 2 . (31)\nEspecially,\nWhen α = 0, β = 1, we have qli ≈ σ(ēli); (32) When α = −1, β = 1, we have qli ≈ tanh(ēli). (33)\nFurthermore, the roles of α and β are interchangeable. 2. For the multilabel case, let’s assume that the node X li can take one of the c labels {1, . . . , c}. In\nthis case, we have an indicator feature function which outputs a length-c feature vector\nTl(X li) = (1Xli=1, . . . ,1Xli=c) >. (34)\nThis means that for any given label j ∈ {1, . . . , c}, Tl(j) is a one-hot vector indicating the j-th position. Also, eli and ē l i will both be vectors of length c, and we denote e l i,j and ē l i,j their j-th entries. We have then\nP (X li |Pa(Xl)) = f l(Tl(X li), eli) = 1\nZ(Pa(Xl)) exp(Tl(X li) · eli) (35)\nwith the normalizer (i.e. partition function)\nZ(Pa(Xl)) = c∑ j=1 exp(eli,j). (36)\nThis means that ∀j ∈ {1, . . . , c}, f l(Tl(j), eli) = ( softmax(eli,1, . . . , e l i,c) ) j , (37)\nand, using the feed-forward expression (Eq. (7)), we have\nqli ≈ c∑\nxli=1\nf l(Tl(xli), ē l i)T l(xli) = c∑ j=1 ( softmax(ēli,1, . . . , ē l i,c) ) j Tl(j) (38)\n= softmax(ēli,1, . . . , ē l i,c), (39)\ni.e. the expected features qli of the multi-labeled node X l i is a length-c vector that encodes the\nresult of a softmax activation. 3. The analytical forms of the activation functions are quite complicated for rectified Gaussian\nnodes. Luckily, it is straight-forward to sample from rectified Gaussian distributions (get Gaussian samples, then rectify). meaning that we can easily evaluate them numerically with sample averages. A resulting visualization is displayed in Figure 1 Right. Specifically: • The ReLU nonlinearity can be approximated reasonably well by a rectified Gaussian node with\nno leak ( = 0) and tanh-modulated standard deviation (sli = tanh(e l i)), as shown by the red\nplot in Figure 1 Right; • Similar to the ReLU case, the leaky ReLU nonlinearity can be approximated by a leaky ( 6= 0)\nrectified Gaussian node with tanh-modulated standard deviation (sli = tanh(e l i)). See the\ngreen plot in Figure 1 Right which depict the case with leaky factor = 1/3; • We discover that a rectified Gaussian node with no leak ( = 0) and an appropriately-chosen\nconstant standard deviation sli can closely approximate the softplus nonlinearity (see the blue plot in Figure 1 Right). We numerically evaluate sli = 1.776091849725427 to minimize the maximum pointwise approximation error. Averaging over more samples would of course lead to more accurate (visually thinner) plots, however in Figure 1 Right we deliberately only average over 200 samples, because we also want to visualize their stochastic behaviors: the perceived thickness of a plot can provide a hint to the output sample variance given the preactivation eli.\nB.3 PROOF OF PROPOSITION 3\nGiven a refinement module (c.f. 
Definition 2) that augments a base submodule m from layer Xl−1 to layer Xl using a refining submodule r from X̃l to layer Xl, denote gl the activation function corresponding to the distribution of nodes in layer Xl, we assume that these two submodules alone would represent the following mappings during feed-forward{\nql = gl(em(ql−1)) (base submodule) ql = gl(er(q̃l)) (refining submodule)\n(40)\nwhere em and er represent the output preactivations of the base submodule and the refining submodule respectively. Then, given an input activation ql−1 from Xl−1, the output preactivation of the overall refinement module should sum up contributions from both the main and the side branches (c.f. Eq. (3)), meaning that the refinement module computes the output as\nql = gl(em(ql−1) + er(q̃l)) (41) with q̃l the output of the duplicated base submodule with shared weight, given by\nq̃l = gl(em(ql−1)). (42) We have thus (Id denotes the identity function)\nql = gl ◦ (Id +er ◦ gl) ◦ em(ql−1) (43) where the function Id +er ◦gl describes a preactivation residual block that arises naturally from the refinement module structure.\nB.4 PROOF OF PROPOSITION 4\nTo match the typical deep learning formulations and ease the derivation, we assume that the base layered chain graph has a sequential structure, meaning that Pat(Xl,t) contains only Xl−1,t, and we have\nP (X l,ti |Pa t(Xl,t),Xl,t−1) = P (X l,ti |X l−1,t,Xl,t−1) (44) = f l,t ( Tl(X l,ti ), e l,t i ( Tl−1(Xl−1,t),Tl(Xl,t−1) )) . (45)\n1. When the connection from Xl,t−1 to Xl,t is dense, we have that for each i ∈ {1, . . . , N l},\nel,ti ( Tl−1(Xl−1,t),Tl(Xl,t−1) ) = N l−1∑ j=1 Wlj,iT l−1(X l−1,tj ) + N l∑ k=1 Ulk,iT l(X l,t−1k ) +b l i. (46)\nThus the feed-forward update for layer l at time t is\n∀i ∈ {1, . . . , N l}, ql,ti ≈ g l (N l−1∑ j=1 Wlj,iq l−1,t j + N l∑ k=1 Ulk,iq l,t−1 k + b l i ) (47)\nwhich corresponds to the update of a simple recurrent layer. 2. The assumption of intra-layer conditional independence through time means that we have\nP (X l,ti |Pa t(Xl,t),Xl,t−1) = P (X l,ti |X l−1,t, X l,t−1i ), (48)\nwhich in terms of preactivation function means that for each i ∈ {1, . . . , N l}, el,ti ( Tl−1(Xl−1,t),Tl(Xl,t−1) ) = el,ti ( Tl−1(Xl−1,t),Tl(X l,t−1i ) ) (49)\n= N l−1∑ j=1 Wlj,iT l−1(X l−1,tj ) + U l iT l(X l,t−1i ) + b l i. (50)\nIn this case the feed-forward update for layer l at time t is\n∀i ∈ {1, . . . , N l}, ql,ti ≈ g l (N l−1∑ j=1 Wlj,iq l−1,t j + U l iq l,t−1 i + b l i ) (51)\nwhich corresponds to the update of an independently RNN layer (c.f. Eq. (2) of Li et al. (2018)).\nB.5 PROOF OF PROPOSITION 5\nAgain, to match the typical deep learning formulations and ease the derivation, we assume that the layered chain graph has a sequential structure, meaning that Xl is only the parent layer of Xl+1. With the introduction of the auxiliary Bernoulli RVs Dl, the l + 1-th chain component represents\nP (Xl+1|Xl,Dl) = N l+1∏ j=1 f l+1j ( Tl+1(X l+1j ), e l+1 j ( Tl(Xl),Dl )) (52)\nwith\nel+1j ( Tl(Xl),Dl ) = N l−1∑ i=1 DliW l+1 i,j T l(X li) + b l j . (53)\n• For a feed-forward pass during training, we draw a sample dli for each Bernoulli RV Dli, and the feed-forward update for layer l + 1 becomes\n∀j ∈ {1, . . . , N l+1}, ql+1j ≈ g l+1 ( N l∑ i=1 dliW l+1 i,j q l i + b l j ) . (54)\nSince dli = 1 with probability p l and dli = 0 with probability 1 − pl, each activation qli will be “dropped out” (i.e. 
dliq l i = 0) with probability 1− pl (this affects all q l+1 j simultaneously).\n• For a feed-forward pass at test time, we marginalize each Bernoulli RV Dli. Since we have\n∀i ∈ {1, . . . , N l}, E[Dli] = pl, (55)\nthe feed-forward update for layer l + 1 in this case becomes\n∀j ∈ {1, . . . , N l+1}, ql+1j ≈ g l+1 ( N l∑ i=1 plWl+1i,j q l i + b l j ) (56)\nwhere we see the appearance of the constant scale pl." }, { "heading": "C REMARK ON INPUT MODELING", "text": "A technical detail that was not elaborated in the discussion of chain graph interpretation (Section 2) is the input modeling: How to encode an input data sample? Ordinarily, an input data sample is treated as a sample drawn from some distribution that represents the input. In our case however, since the feed-forward process only pass through feature expectations, we can also directly interpret an input data sample as a feature expectation, meaning as a resulting average rather than a single sample. Using this fact, Shen et al. (2019) propose the “soft clamping” approach to encode real valued input taken from an interval, such as pixel intensity, simply as an expected value of a binary node which chooses between the interval boundary values.\nThis said, since only the conditional distribution P (X2, . . . ,XL|X1) is modeled, our discriminative setting actually do not require specifying an input distribution P (X1)." } ]
2,020
A CHAIN GRAPH INTERPRETATION
SP:46354d6dca2faa7f4553f9a00059c86178ab87e2
[ "The authors consider planning for Markov Decision Process. Precisely they study the benefit of convex regularization in Monte-Carlo Tree Search (MCTS). They generalize the E2W by xiao et al., 2019 by considering any strictly convex function as regularizer instead of the intial negative entropy. They provide a regret analysis of this algorithm named E3W and prove that EW3 converges at an exponential rate to the solution of the regularized objective function. Then they consider three particular instances MENTS with the Shannon entropy as a regularizer, RENTS with relative entropy to the previous policy as regularizer, and TENTS with the Tsallis entropy. They compare empirically these algorithms with PUCT as policy search in Alpha-go style MCTS on CartPol, Acrobot, and Atari games." ]
Monte-Carlo planning and Reinforcement Learning (RL) are essential to sequential decision making. The recent AlphaGo and AlphaZero algorithms have shown how to successfully combine these two paradigms to solve large scale sequential decision problems. These methodologies exploit a variant of the well-known UCT algorithm to trade off the exploitation of good actions and the exploration of unvisited states, but their empirical success comes at the cost of poor sample-efficiency and high computation time. In this paper, we overcome these limitations by studying the benefit of convex regularization in Monte-Carlo Tree Search (MCTS) to drive exploration efficiently and to improve policy updates, as already observed in RL. First, we introduce a unifying theory on the use of generic convex regularizers in MCTS, deriving the first regret analysis of regularized MCTS and showing that it guarantees an exponential convergence rate. Second, we exploit our theoretical framework to introduce novel regularized backup operators for MCTS, based on the relative entropy of the policy update and on the Tsallis entropy of the policy. We provide an intuitive demonstration of the effect of each regularizer empirically verifying the consequence of our theoretical results on a toy problem. Finally, we show how our framework can easily be incorporated in AlphaGo and AlphaZero, and we empirically show the superiority of convex regularization w.r.t. representative baselines, on well-known RL problems across several Atari games.
[]
[ { "authors": [ "Peter Auer", "Nicolo Cesa-Bianchi", "Paul Fischer" ], "title": "Finite-time analysis of the multiarmed bandit problem", "venue": "Machine learning,", "year": 2002 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Marc G Bellemare", "Will Dabney", "Rémi Munos" ], "title": "A distributional perspective on reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Richard Bellman" ], "title": "The theory of dynamic programming", "venue": "Technical report, Rand corp santa monica ca,", "year": 1954 }, { "authors": [ "Lars Buesing", "Nicolas Heess", "Theophane Weber" ], "title": "Approximate inference in discrete distributions with monte carlo tree search and value functions", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Guillaume Chaslot", "Mark Winands", "Jaap Van Den Herik", "Jos Uiterwijk", "Bruno Bouzy" ], "title": "Progressive strategies for monte-carlo tree search", "venue": "New Mathematics and Natural Computation,", "year": 2008 }, { "authors": [ "Benjamin E Childs", "James H Brodeur", "Levente Kocsis" ], "title": "Transpositions and move groups in monte carlo tree search", "venue": "IEEE Symposium On Computational Intelligence and Games. IEEE,", "year": 2008 }, { "authors": [ "Pierre-Arnaud Coquelin", "Rémi Munos" ], "title": "Bandit algorithms for tree search", "venue": "arXiv preprint cs/0703062,", "year": 2007 }, { "authors": [ "Rémi Coulom" ], "title": "Efficient selectivity and backup operators in monte-carlo tree search", "venue": "In International conference on computers and games,", "year": 2006 }, { "authors": [ "Matthieu Geist", "Bruno Scherrer", "Olivier Pietquin" ], "title": "A theory of regularized markov decision processes", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Sylvain Gelly", "David Silver" ], "title": "Combining online and offline knowledge in uct", "venue": "In Proceedings of the 24th international conference on Machine learning,", "year": 2007 }, { "authors": [ "Sylvain Gelly", "Yizao Wang" ], "title": "Exploration exploitation in go: Uct for monte-carlo go", "venue": "In NIPS: Neural Information Processing Systems Conference On-line trading of Exploration and Exploitation Workshop,", "year": 2006 }, { "authors": [ "Jean-Bastien Grill", "Florent Altché", "Yunhao Tang", "Thomas Hubert", "Michal Valko", "Ioannis Antonoglou", "Rémi Munos" ], "title": "Monte-carlo tree search as regularized policy optimization", "venue": "arXiv preprint arXiv:2007.12509,", "year": 2020 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "David P Helmbold", "Aleatha Parker-Wood" ], "title": "All-moves-as-first heuristics in monte-carlo go", "venue": "In IC-AI, pp", "year": 2009 }, { "authors": [ "Jean-Baptiste Hoock", "Chang-Shing Lee", "Arpad Rimmel", "Fabien Teytaud", "Mei-Hui Wang", "Oliver Teytaud" ], "title": "Intelligent agents for the game of go", "venue": "IEEE Computational Intelligence Magazine,", "year": 
2010 }, { "authors": [ "Piyush Khandelwal", "Elad Liebman", "Scott Niekum", "Peter Stone" ], "title": "On the analysis of complex backup strategies in monte carlo tree search", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Kyungjae Lee", "Sungjoon Choi", "Songhwai Oh" ], "title": "Sparse markov decision processes with causal sparse tsallis entropy regularization for reinforcement learning", "venue": "IEEE Robotics and Automation Letters,", "year": 2018 }, { "authors": [ "Richard J Lorentz" ], "title": "Improving monte–carlo tree search in havannah", "venue": "In International Conference on Computers and Games,", "year": 2010 }, { "authors": [ "Jincheng Mei", "Chenjun Xiao", "Ruitong Huang", "Dale Schuurmans", "Martin Müller" ], "title": "On principled entropy exploration in policy optimization", "venue": "In Proceedings of the 28th International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Arthur Mensch", "Mathieu Blondel" ], "title": "Differentiable dynamic programming for structured prediction and attention", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "William H Montgomery", "Sergey Levine" ], "title": "Guided policy search via approximate mirror descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Ofir Nachum", "Bo Dai" ], "title": "Reinforcement learning via fenchel-rockafellar duality", "venue": "CoRR, abs/2001.01866,", "year": 2020 }, { "authors": [ "Vlad Niculae", "Mathieu Blondel" ], "title": "A regularized framework for sparse and structured neural attention", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Lacra Pavel" ], "title": "An extension of duality to a game-theoretic framework", "venue": null, "year": 2007 }, { "authors": [ "Gavin Adrian" ], "title": "Rummery. Problem solving with reinforcement learning", "venue": "PhD thesis, University of Cambridge Ph. D. 
dissertation,", "year": 1995 }, { "authors": [ "Julian Schrittwieser", "Ioannis Antonoglou", "Thomas Hubert", "Karen Simonyan", "Laurent Sifre", "Simon Schmitt", "Arthur Guez", "Edward Lockhart", "Demis Hassabis", "Thore Graepel", "Timothy Lillicrap", "David Silver" ], "title": "Mastering atari, go, chess and shogi by planning with a learned model, 2019", "venue": null, "year": 2019 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "John Schulman", "Xi Chen", "Pieter Abbeel" ], "title": "Equivalence between policy gradients and soft qlearning", "venue": "arXiv preprint arXiv:1704.06440,", "year": 2017 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Shai Shalev-Shwartz", "Yoram Singer" ], "title": "Convex repeated games and fenchel duality", "venue": "Advances in neural information processing systems,", "year": 2006 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. nature,", "year": 2016 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel" ], "title": "Mastering chess and shogi by self-play with a general reinforcement learning algorithm", "venue": "arXiv preprint arXiv:1712.01815,", "year": 2017 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Introduction to reinforcement learning, volume 135", "venue": "MIT press Cambridge,", "year": 1998 }, { "authors": [ "Gerald Tesauro", "VT Rajan", "Richard Segal" ], "title": "Bayesian inference in monte-carlo tree search", "venue": "arXiv preprint arXiv:1203.3519,", "year": 2012 }, { "authors": [ "Fabien Teytaud", "Olivier Teytaud" ], "title": "On the huge benefit of decisive moves in monte-carlo tree search algorithms", "venue": "In Proceedings of the 2010 IEEE Conference on Computational Intelligence and Games,", "year": 2010 }, { "authors": [ "E. Todorov", "T. Erez", "Y. 
Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "David Tom" ], "title": "Investigating uct and rave: Steps towards a more robust method", "venue": null, "year": 2010 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "In Thirtieth AAAI conference on artificial intelligence,", "year": 2016 }, { "authors": [ "Tom Vodopivec", "Spyridon Samothrakis", "Branko Ster" ], "title": "On monte carlo tree search and reinforcement learning", "venue": "Journal of Artificial Intelligence Research,", "year": 2017 }, { "authors": [ "Martin J Wainwright" ], "title": "High-dimensional statistics: A non-asymptotic viewpoint, volume 48", "venue": null, "year": 2019 }, { "authors": [ "Chenjun Xiao", "Ruitong Huang", "Jincheng Mei", "Dale Schuurmans", "Martin Müller" ], "title": "Maximum entropy monte-carlo planning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Timothy Yee", "Viliam Lisỳ", "Michael H Bowling", "S Kambhampati" ], "title": "Monte carlo tree search in continuous action spaces with execution uncertainty", "venue": "In IJCAI,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Monte-Carlo Tree Search (MCTS) is a well-known algorithm to solve decision-making problems through the combination of Monte-Carlo planning with an incremental tree structure (Coulom, 2006). Although standard MCTS is only suitable for problems with discrete state and action spaces, recent advances have shown how to enable MCTS in continuous problems (Silver et al., 2016; Yee et al., 2016). Most remarkably, AlphaGo (Silver et al., 2016) and AlphaZero (Silver et al., 2017b;a) couple MCTS with neural networks trained using Reinforcement Learning (RL) (Sutton & Barto, 1998) methods, e.g., Deep Q-Learning (Mnih et al., 2015), to speed up learning of large scale problems with continuous state space. In particular, a neural network is used to compute value function estimates of states as a replacement of time-consuming Monte-Carlo rollouts, and another neural network is used to estimate policies as a probability prior for the therein introduced PUCT action selection method, a variant of well-known UCT sampling strategy commonly used in MCTS for exploration (Kocsis et al., 2006). Despite AlphaGo and AlphaZero achieving state-of-the-art performance in games with high branching factor like Go (Silver et al., 2016) and Chess (Silver et al., 2017a), both methods suffer from poor sample-efficiency, mostly due to the polynomial convergence rate of PUCT (Xiao et al., 2019). This problem, combined with the high computational time to evaluate the deep neural networks, significantly hinder the applicability of both methodologies.\nIn this paper, we provide a unified theory of the use of convex regularization in MCTS, which proved to be an efficient solution for driving exploration and stabilizing learning in RL (Schulman et al., 2015; 2017a; Haarnoja et al., 2018; Buesing et al., 2020). In particular, we show how a regularized objective function in MCTS can be seen as an instance of the Legendre-Fenchel transform, similar to previous findings on the use of duality in RL (Mensch & Blondel, 2018; Geist et al., 2019; Nachum & Dai, 2020) and game theory (Shalev-Shwartz & Singer, 2006; Pavel, 2007). Establishing our theoretical framework, we can derive the first regret analysis of regularized MCTS, and prove that a generic convex regularizer guarantees an exponential convergence rate to the solution of the reg-\nularized objective function, which improves on the polynomial rate of PUCT. These results provide a theoretical ground for the use of arbitrary entropy-based regularizers in MCTS until now limited to maximum entropy (Xiao et al., 2019), among which we specifically study the relative entropy of policy updates, drawing on similarities with trust-region and proximal methods in RL (Schulman et al., 2015; 2017b), and the Tsallis entropy, used for enforcing the learning of sparse policies (Lee et al., 2018). Moreover, we provide an empirical analysis of the toy problem introduced in Xiao et al. (2019) to intuitively evince the practical consequences of our theoretical results for each regularizer. Finally, we empirically evaluate the proposed operators in AlphaGo and AlphaZero on problems of increasing complexity, from classic RL problems to an extensive analysis of Atari games, confirming the benefit of our novel operators compared to maximum entropy and, in general, the superiority of convex regularization in MCTS w.r.t. classic methods." 
}, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 MARKOV DECISION PROCESSES", "text": "We consider the classical definition of a finite-horizon Markov Decision Process (MDP) as a 5- tuple M = 〈S,A,R,P, γ〉, where S is the state space, A is the finite discrete action space, R : S × A × S → R is the reward function, P : S × A → S is the transition kernel, and γ ∈ [0, 1) is the discount factor. A policy π ∈ Π : S × A → R is a probability distribution of the event of executing an action a in a state s. A policy π induces a value function corresponding to the expected cumulative discounted reward collected by the agent when executing action a in state s, and following the policy π thereafter: Qπ(s, a) , E [∑∞ k=0 γ kri+k+1|si = s, ai = a, π ] , where ri+1 is the reward obtained after the i-th transition. An MDP is solved finding the optimal policy π∗, which is the policy that maximizes the expected cumulative discounted reward. The optimal policy corresponds to the one satisfying the optimal Bellman equation (Bellman, 1954) Q∗(s, a) , ∫ S P(s\n′|s, a) [R(s, a, s′) + γmaxa′ Q∗(s′, a′)] ds′, and is the fixed point of the optimal Bellman operator T ∗Q(s, a) , ∫ S P(s\n′|s, a) [R(s, a, s′) + γmaxa′ Q(s′, a′)] ds′. Additionally, we define the Bellman operator under the policy π as TπQ(s, a) ,∫ S P(s ′|s, a) [ R(s, a, s′) + γ ∫ A π(a ′|s′)Q(s′, a′)da′ ] ds′, the optimal value function V ∗(s) , maxa∈AQ ∗(s, a), and the value function under the policy π as V π(s) , maxa∈AQπ(s, a)." }, { "heading": "2.2 MONTE-CARLO TREE SEARCH AND UPPER CONFIDENCE BOUNDS FOR TREES", "text": "Monte-Carlo Tree Search (MCTS) is a planning strategy based on a combination of Monte-Carlo sampling and tree search to solve MDPs. MCTS builds a tree where the nodes are the visited states of the MDP, and the edges are the actions executed in each state. MCTS converges to the optimal policy (Kocsis et al., 2006; Xiao et al., 2019), iterating over a loop composed of four steps:\n1. Selection: starting from the root node, a tree-policy is executed to navigate the tree until a node with unvisited children, i.e. expandable node, is reached;\n2. Expansion: the reached node is expanded according to the tree policy; 3. Simulation: run a rollout, e.g. Monte-Carlo simulation, from the visited child of the cur-\nrent node to the end of the episode;\n4. Backup: use the collected reward to update the action-values Q(·) of the nodes visited in the trajectory from the root node to the expanded node.\nThe tree-policy used to select the action to execute in each node needs to balance the use of already known good actions, and the visitation of unknown states. The Upper Confidence bounds for Trees (UCT) sampling strategy (Kocsis et al., 2006) extends the use of the well-known UCB1 sampling strategy for multi-armed bandits (Auer et al., 2002), to MCTS. Considering each node corresponding to a state s ∈ S as a different bandit problem, UCT selects an action a ∈ A applying an upper bound to the action-value function\nUCT(s, a) = Q(s, a) +\n√ logN(s)\nN(s, a) , (1)\nwhere N(s, a) is the number of executions of action a in state s, N(s) = ∑ aN(s, a), and is a constant parameter to tune exploration. UCT asymptotically converges to the optimal action-value function Q∗, for all states and actions, with the probability of executing a suboptimal action at the root node approaching 0 with a polynomial rate O( 1t ), for a simulation budget t (Kocsis et al., 2006; Xiao et al., 2019)." 
}, { "heading": "3 REGULARIZED MONTE-CARLO TREE SEARCH", "text": "The success of RL methods based on entropy regularization comes from their ability to achieve state-of-the-art performance in decision making and control problems, while enjoying theoretical guarantees and ease of implementation (Haarnoja et al., 2018; Schulman et al., 2015; Lee et al., 2018). However, the use of entropy regularization is MCTS is still mostly unexplored, although its advantageous exploration and value function estimation would be desirable to reduce the detrimental effect of high-branching factor in AlphaGo and AlphaZero. To the best of our knowledge, the MENTS algorithm (Xiao et al., 2019) is the first and only method to combine MCTS and entropy regularization. In particular, MENTS uses a maximum entropy regularizer in AlphaGo, proving an exponential convergence rate to the solution of the respective softmax objective function and achieving state-of-the-art performance in some Atari games (Bellemare et al., 2013). In the following, motivated by the success in RL and the promising results of MENTS, we derive a unified theory of regularization in MCTS based on the Legendre-Fenchel transform (Geist et al., 2019), that generalizes the use of maximum entropy of MENTS to an arbitrary convex regularizer. Notably, our theoretical framework enables to rigorously motivate the advantages of using maximum entropy and other entropy-based regularizers, such as relative entropy or Tsallis entropy, drawing connections with their RL counterparts TRPO (Schulman et al., 2015) and Sparse DQN (Lee et al., 2018), as MENTS does with Soft Actor-Critic (SAC) (Haarnoja et al., 2018)." }, { "heading": "3.1 LEGENDRE-FENCHEL TRANSFORM", "text": "Consider an MDP M = 〈S,A,R,P, γ〉, as previously defined. Let Ω : Π → R be a strongly convex function. For a policy πs = π(·|s) andQs = Q(s, ·) ∈ RA, the Legendre-Fenchel transform (or convex conjugate) of Ω is Ω∗ : RA → R, defined as:\nΩ∗(Qs) , max πs∈Πs TπsQs − τΩ(πs), (2)\nwhere the temperature τ specifies the strength of regularization. Among the several properties of the Legendre-Fenchel transform, we use the following (Mensch & Blondel, 2018; Geist et al., 2019).\nProposition 1 Let Ω be strongly convex.\n• Unique maximizing argument: ∇Ω∗ is Lipschitz and satisfies ∇Ω∗(Qs) = arg max\nπs∈Πs TπsQs − τΩ(πs). (3)\n• Boundedness: if there are constants LΩ and UΩ such that for all πs ∈ Πs, we have LΩ ≤ Ω(πs) ≤ UΩ, then\nmax a∈A Qs(a)− τUΩ ≤ Ω∗(Qs) ≤ max a∈A Qs(a)− τLΩ. (4)\n• Contraction: for any Q1, Q2 ∈ RS×A\n‖ Ω∗(Q1)− Ω∗(Q2) ‖∞≤ γ ‖ Q1 −Q2 ‖∞ . (5)\nAlthough the Legendre-Fenchel transform Ω∗ applies to every strongly convex function, for the purpose of this work we only consider a representative set of entropic regularizers." }, { "heading": "3.2 REGULARIZED BACKUP AND TREE POLICY", "text": "In MCTS, each node of the tree represents a state s ∈ S and contains a visitation count N(s, a). Given a trajectory, we define n(sT ) as the leaf node corresponding to the reached state sT . Let\ns0, a0, s1, a1..., sT be the state action trajectory in a simulation, where n(sT ) is a leaf node of T . Whenever a node n(sT ) is expanded, the respective action values (Equation 6) are initialized as QΩ(sT , a) = 0, and N(sT , a) = 0 for all a ∈ A. 
For all nodes in the trajectory, the visitation count is updated by N(st, at) = N(st, at) + 1, and the action-values by
QΩ(st, at) = { r(st, at) + γρ if t = T; r(st, at) + γΩ∗(QΩ(st+1)/τ) if t < T } (6)
where QΩ(st+1) ∈ RA with components QΩ(st+1, a), ∀a ∈ A, and ρ is an estimate returned from an evaluation function computed in sT, e.g. a discounted cumulative reward averaged over multiple rollouts, or the value-function of node n(sT+1) returned by a value-function approximator, e.g. a neural network pretrained with deep Q-learning (Mnih et al., 2015), as done in (Silver et al., 2016; Xiao et al., 2019). We revisit the E2W sampling strategy limited to maximum entropy regularization (Xiao et al., 2019) and, through the use of the convex conjugate in Equation (6), we derive a novel sampling strategy that generalizes to any convex regularizer
πt(at|st) = (1 − λst) ∇Ω∗(QΩ(st)/τ)(at) + λst / |A|, (7)
where λst = ε|A| / log(∑_a N(st, a) + 1) with ε > 0 as an exploration parameter, and ∇Ω∗ depends on the measure in use (see Table 1 for maximum, relative, and Tsallis entropy). We call this sampling strategy Extended Empirical Exponential Weight (E3W) to highlight the extension of E2W from maximum entropy to a generic convex regularizer." }, { "heading": "3.3 CONVERGENCE RATE TO REGULARIZED OBJECTIVE", "text": "We show that the regularized value VΩ can be effectively estimated at the root state s ∈ S, with the assumption that each node in the tree has a σ2-subgaussian distribution. This result extends the analysis provided in (Xiao et al., 2019), which is limited to the use of maximum entropy.
Theorem 1 At the root node s, where N(s) is the number of visitations and VΩ(s) is the estimated value, for ε > 0 and constants C and Ĉ we have
P(|VΩ(s) − V∗Ω(s)| > ε) ≤ C exp{ −N(s)ε / (Ĉσ(log(2 + N(s)))²) }, (8)
where VΩ(s) = Ω∗(Qs) and V∗Ω(s) = Ω∗(Q∗s). From this theorem, we obtain that the convergence rate of choosing the best action a∗ at the root node, when using the E3W strategy, is exponential.
Theorem 2 Let at be the action returned by E3W at step t. For large enough t and constants C, Ĉ,
P(at ≠ a∗) ≤ Ct exp{ −t / (Ĉσ(log t)³) }. (9)" }, { "heading": "4 ENTROPY-REGULARIZATION BACKUP OPERATORS", "text": "From the introduction of a unified view of generic strongly convex regularizers as backup operators in MCTS, we narrow the analysis to entropy-based regularizers. For each entropy function, Table 1 shows the Legendre-Fenchel transform and the maximizing argument, which can be respectively replaced in our backup operation (Equation 6) and sampling strategy E3W (Equation 7). Using maximum entropy retrieves the maximum entropy MCTS problem introduced in the MENTS algorithm (Xiao et al., 2019). This approach closely resembles the maximum entropy RL framework used to encourage exploration (Haarnoja et al., 2018; Schulman et al., 2017a). We introduce two novel MCTS algorithms based on the minimization of relative entropy of the policy update, inspired by trust-region (Schulman et al., 2015) and proximal optimization methods (Schulman et al., 2017b) in RL, and on the maximization of Tsallis entropy, which has been more recently introduced in RL as an effective solution to enforce the learning of sparse policies (Lee et al., 2018). We call these algorithms RENTS and TENTS.
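A minimal sketch tying Equations 6 and 7 together. The node fields (Q and N arrays, children, is_leaf, rho) are illustrative names, not the authors' implementation; grad_conj and conj stand for ∇Ω∗ and Ω∗(·/τ) of whichever regularizer is chosen:

```python
import numpy as np

def e3w_policy(Q, N, grad_conj, tau=0.1, eps=0.1):
    """Equation 7: blend the regularized greedy policy with uniform
    exploration that decays with the node's visit count. grad_conj maps
    a value vector to a probability vector (e.g. a plain softmax)."""
    A = len(Q)
    lam = min(1.0, eps * A / np.log(N.sum() + 2.0))  # +2 keeps the log positive
    return (1.0 - lam) * grad_conj(Q / tau) + lam / A

def backup(path, conj, gamma=0.99):
    """Equation 6, applied leaf-to-root along one simulated path.
    path: list of (node, action, reward); conj(Q) implements
    Omega*(Q/tau); rho is the leaf evaluation (rollout average
    or a pretrained value network)."""
    for node, a, r in reversed(path):
        child = node.children[a]
        v = child.rho if child.is_leaf else conj(child.Q)
        node.Q[a] = r + gamma * v
        node.N[a] += 1
```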
Contrary to maximum and relative entropy, the definition of the\nLegendre-Fenchel and maximizing argument of Tsallis entropy is non-trivial, being Ω∗(Qt) = τ · spmax(Qt(s, ·)/τ), (10)\n∇Ω∗(Qt) = max\n( Qt(s, a) τ − ∑ a∈KQt(s, a)/τ − 1 |K| , 0 ) , (11)\nwhere spmax is defined for any function f : S ×A → R as\nspmax(f(s, ·)) , ∑ a∈K\n( f(s, a)2\n2 −\n( ∑ a∈K f(s, a)− 1)2\n2|K|2\n) + 1\n2 , (12)\nand K is the set of actions that satisfy 1 + if(s, ai) > ∑i j=1 f(s, aj), with ai indicating the action with the i-th largest value of f(s, a) (Lee et al., 2018)." }, { "heading": "4.1 REGRET ANALYSIS", "text": "At the root node, let each children node i be assigned with a random variable Xi, with mean value Vi, while the quantities related to the optimal branch are denoted by ∗, e.g. mean value V ∗. At each timestep n, the mean value of variable Xi is Vin . The pseudo-regret (Coquelin & Munos, 2007) at the root node, at timestep n, is defined as RUCTn = nV ∗ − ∑n t=1 Vit . Similarly, we define the regret of E3W at the root node of the tree as\nRn = nV ∗ − n∑ t=1 Vit = nV ∗ − n∑ t=1 I(it = i)Vit = nV ∗ − ∑ i Vi n∑ t=1 π̂t(ai|s), (13)\nwhere π̂t(·) is the policy at time step t, and I(·) is the indicator function. Theorem 3 Let κi = ∇Ω∗(ai|s) + Lp √ Ĉσ2 log Cδ/2n, and χi = ∇Ω∗(ai|s) − Lp √ Ĉσ2 log Cδ/2n, where∇Ω∗(.|s) is the policy with respect to the mean value vector V (·) at the root node s. For any δ > 0, with probability at least 1− δ, ∃ constant L, p, C, Ĉ so that the pseudo regret Rn satisfies\nnV ∗ − n ∑ i Vi ( κi + L p (τ(UΩ − LΩ) 1− γ )) ≤ Rn ≤ nV ∗ − n ∑ i Vi ( χi − L p (τ(UΩ − LΩ) 1− γ )) .\nThis theorem provides bounds for the regret of E3W using a generic convex regularizer Ω; thus, we can easily retrieve from it the regret bound for each entropy regularizer. Let m = mina∇Ω∗(a|s).\nCorollary 1 Maximum entropy: nV ∗ − n ∑ i Vi ( κi + L ( τ log |A| 1−γ )) ≤ Rn ≤ nV ∗ − n ∑ i Vi ( χi − L ( τ log |A| 1−γ )) .\nCorollary 2 Relative entropy: nV ∗ − n ∑ i Vi ( κi + L ( τ(log |A|− 1m ) 1−γ )) ≤ Rn ≤ nV ∗ − n ∑ i Vi ( χi − L ( τ(log |A|− 1m ) 1−γ )) .\nCorollary 3 Tsallis entropy: nV ∗ − n ∑ i Vi ( κi + L 2 ( |A| − 1 2|A| τ 1− γ )) ≤ Rn ≤ nV ∗ − n ∑ i Vi ( χi − L2 ( |A| − 1 2|A| τ 1− γ )) .\nRemarks. The regret bound of UCT and its variance have already been analyzed for nonregularized MCTS with binary tree (Coquelin & Munos, 2007). On the contrary, our regret bound analysis in Theorem 3 applies to generic regularized MCTS. From the specialized bounds in the corollaries, we observe that the maximum and relative entropy share similar results, although the bounds for relative entropy are slightly smaller due to 1m . Remarkably, the bounds for Tsallis entropy become tighter for increasing number of actions, which translates in limited regret in problems with high branching factor. This result establishes the advantage of Tsallis entropy in complex problems w.r.t. to other entropy regularizers, as empirically confirmed by the positive results in several Atari games described in Section 5." }, { "heading": "4.2 ERROR ANALYSIS", "text": "We analyse the error of the regularized value estimate at the root node n(s) w.r.t. the optimal value: εΩ = VΩ(s)− V ∗(s).\nTheorem 4 For any δ > 0 and generic convex regularizer Ω, with some constant C, Ĉ, with probability at least 1− δ, εΩ satisfies\n−\n√ Ĉσ2 log Cδ\n2N(s) − τ(UΩ − LΩ) 1− γ ≤ εΩ ≤\n√ Ĉσ2 log Cδ\n2N(s) . (14)\nTo give a better understanding of the effect of each entropy regularizer in Table 1, we specialize the bound in Equation 14 to each of them. 
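As an aside before specializing the bound: Equations 10-12 are the one non-standard ingredient above, so here is a sketch of the Tsallis maximizing argument and conjugate (this is the standard sparsemax computation; variable names are ours):

```python
import numpy as np

def sparsemax(z):
    """Maximizing argument for the Tsallis regularizer (Eq. 11), z = Q(s,.)/tau.
    The support K = {a_i : 1 + i*z_(i) > sum_{j<=i} z_(j)}, with z_(i) the
    i-th largest component, matches the condition stated below Eq. 12."""
    zs = np.sort(z)[::-1]                       # z sorted in decreasing order
    cssv = np.cumsum(zs)
    idx = np.arange(1, z.size + 1)
    k = int((1.0 + idx * zs > cssv).sum())      # |K|
    thresh = (cssv[k - 1] - 1.0) / k
    return np.maximum(z - thresh, 0.0)

def spmax(z):
    """Eq. 12, evaluated on the support K; Omega*(Q) is then
    tau * spmax(Q/tau) as in Eq. 10."""
    K = sparsemax(z) > 0.0
    s, k = z[K].sum(), int(K.sum())
    return float((z[K] ** 2 / 2.0 - (s - 1.0) ** 2 / (2.0 * k ** 2)).sum() + 0.5)

# e.g. sparsemax(np.array([2.0, 0.0, -1.0])) -> [1., 0., 0.], a sparse policy
```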
From (Lee et al., 2018), we know that for maximum entropy Ω(πt) = ∑ a πt log πt, we have − log |A| ≤ Ω(πt) ≤ 0; for relative entropy Ω(πt) = KL(πt||πt−1), if we define m = mina πt−1(a|s), then we can derive 0 ≤ Ω(πt) ≤ − log |A| + log 1m ; and for Tsallis entropy Ω(πt) = 1 2 (‖ πt ‖ 2 2 −1), we have − |A|−1 2|A| ≤ Ω(πt) ≤ 0. Then,\nCorollary 4 maximum entropy error: −\n√ Ĉσ2 log Cδ\n2N(s) − τ log |A| 1− γ ≤ εΩ ≤\n√ Ĉσ2 log Cδ\n2N(s) .\nCorollary 5 relative entropy error: −\n√ Ĉσ2 log Cδ\n2N(s) − τ(log |A| − log 1m ) 1− γ ≤ εΩ ≤\n√ Ĉσ2 log Cδ\n2N(s) .\nCorollary 6 Tsallis entropy error: −\n√ Ĉσ2 log Cδ\n2N(s) − |A| − 1 2|A| τ 1− γ ≤ εΩ ≤\n√ Ĉσ2 log Cδ\n2N(s) .\nThese results show that when the number of actions |A| is large, TENTS enjoys the smallest error; moreover, we also see that lower bound of RENTS is always smaller than for MENTS." }, { "heading": "5 EMPIRICAL EVALUATION", "text": "In this section, we empirically evaluate the benefit of the proposed entropy-based MCTS regularizers. First, we complement our theoretical analysis with an empirical study of the synthetic tree toy problem introduced in Xiao et al. (2019), which serves as a simple scenario to give an interpretable demonstration of the effects of our theoretical results in practice. Second, we compare to AlphaGo and AlphaZero (Silver et al., 2016; 2017a), recently introduced to enable MCTS to solve large scale problems with high branching factor. Our implementation is a simplified version of the original algorithms, where we remove various tricks in favor of better interpretability. For the same reason, we do not compare with the most recent and state-of-the-art variant of AlphaZero known as MuZero (Schrittwieser et al., 2019), as this is a slightly different solution highly tuned to maximize performance, and a detailed description of its implementation is not available." }, { "heading": "5.1 SYNTHETIC TREE", "text": "This toy problem is introduced in Xiao et al. (2019) to highlight the improvement of MENTS over UCT. It consists of a tree with branching factor k and depth d. Each edge of the tree is assigned\na random value between 0 and 1. At each leaf, a Gaussian distribution is used as an evaluation function resembling the return of random rollouts. The mean of the Gaussian distribution is the sum of the values assigned to the edges connecting the root node to the considered leaf, while the standard deviation is σ = 0.051. For stability, all the means are normalized between 0 and 1. As in Xiao et al. (2019), we create 5 trees on which we perform 5 different runs in each, resulting in 25 experiments, for all the combinations of branching factor k = {2, 4, 6, 8, 10, 12, 14, 16} and depth d = {1, 2, 3, 4, 5}, computing: (i) the value estimation error at the root node w.r.t. the regularized optimal value: εΩ = VΩ−V ∗; (ii) the value estimation error at the root node w.r.t. the unregularized optimal value: εUCT = VΩ − V ∗UCT; (iii) the regret R as in Equation (13). For a fair comparison, we use fixed τ = 0.1 and = 0.1 across all algorithms. Figure 1 and 2 show how UCT and each regularizer behave for different configurations of the tree. We observe that, while RENTS and MENTS converge slower for increasing tree sizes, TENTS is robust w.r.t. the size of the tree and almost always converges faster than all other methods to the respective optimal value. Notably, the optimal value of TENTS seems to be very close to the one of UCT, i.e. the optimal value of the\n1The value of the standard deviation is not provided in Xiao et al. 
(2019). After trying different values, we observed that our results match the one in Xiao et al. (2019) when using σ = 0.05.\nunregularized objective, and also converges faster than the one estimated by UCT, while MENTS and RENTS are considerably further from this value. In terms of regret, UCT explores less than the regularized methods and it is less prone to high regret, at the cost of slower convergence time. Nevertheless, the regret of TENTS is the smallest between the ones of the other regularizers, which seem to explore too much. These results show a general superiority of TENTS in this toy problem, also confirming our theoretical findings about the advantage of TENTS in terms of approximation error (Corollary 6) and regret (Corollary 3), in problems with many actions." }, { "heading": "5.2 ENTROPY-REGULARIZED ALPHAZERO", "text": "In its standard form, AlphaZero (Silver et al., 2017a) uses the PUCT sampling strategy, a variant of UCT (Kocsis et al., 2006) that samples actions according to the policy\nPUCT (s, a) = Q(s, a) + P (s, a)\n√ N(s)\n1 +N(s, a) , (15)\nwhere P is a prior probability on action selection, and is an exploration constant. A value network and a policy network are used to compute, respectively, the action-value function Q and the prior policy P . We use a single neural network, with 2 hidden layers composed of 128 ELU units, and two output layer respectively for the action-value function and the policy. We run 500 AlphaZero episodes, where each episode is composed of 300 steps. A step consists of running 32 MCTS simulations from the root node, as defined in Section 2, using the action-value function computed by the value network instead of using Monte-Carlo rollouts. At the end of each cycle, the average action-value of the root node is computed and stored, the tree is expanded using the given sampling strategy, and the root node is updated with the reached node. At the end of the episode, a minibatch of 32 samples is built from the 300 stored action-values, and the network is trained with one step of gradient descent using RMSProp with learning rate 0.001. The entropy-regularized variants of AlphaZero can be simply derived replacing the average backup operator, with the desired entropy function, and replacing PUCT with E3W using the respective maximizing argument and = 0.1.\nCartpole and Acrobot. Figure 3 shows the cumulative reward of standard AlphaZero based on PUCT, and the three entropy-regularized variants, on the Cartpole and Acrobot discrete control problems (Brockman et al., 2016). While standard AlphaZero clearly lacks good convergence and stability, the entropy-based variants behave differently according to the problem. First, although not significantly superior, RENTS exhibits the most stable learning and faster convergence, confirming the benefit of relative entropy in control problems as already known for trust-region methods in RL (Schulman et al., 2015). Second, considering the small number of discrete actions in the problems, TENTS cannot benefit from the learning of sparse policies and shows slightly unstable learning in Cartpole, even though the overall performance is satisfying in both problems. Last, MENTS solves the problems slightly slower than RENTS, but reaches the same final performance. 
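For reference, the PUCT rule of Equation 15 as a function. The exploration constant's symbol did not survive extraction; we name it c, and default it to the c = 0.1 quoted for the Atari experiments in Section 5.3:

```python
import math

def puct_select(Q, N, P, c=0.1):
    """Equation 15: argmax_a Q(s,a) + c * P(s,a) * sqrt(N(s)) / (1 + N(s,a)).

    Q, N: per-action value estimates and visit counts at state s.
    P: prior probabilities from the policy network.
    """
    total = sum(N.values())
    return max(Q, key=lambda a: Q[a] + c * P[a] * math.sqrt(total) / (1 + N[a]))
```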
Although the results on these simple problems are not conclusive to assert the superiority of one method over the other, they definitely confirm the advantage of regularization in MCTS, and hint at the benefit of the use of relative entropy in control problems. Further analysis on more complex\ncontrol problems will be desirable (e.g. MuJoCo (Todorov et al., 2012)), but the need to account for continuous actions, a non-trivial setting for MCTS, makes it out of the scope of this paper." }, { "heading": "5.3 ENTROPY-REGULARIZED ALPHAGO", "text": "The learning time of AlphaZero can be slow in problems with high branching factor, due to the need of a large number of MCTS simulations for obtaining good estimates of the randomly initialized action-values. To overcome this problem, AlphaGo (Silver et al., 2016) initializes the action-values using the values retrieved from a pretrained network, which is kept fixed during the training.\nAtari. Atari 2600 (Bellemare et al., 2013) is a popular benchmark for testing deep RL methodologies (Mnih et al., 2015; Van Hasselt et al., 2016; Bellemare et al., 2017) but still relatively disregarded in MCTS. We use a Deep Q-Network, pretrained using the same experimental setting of Mnih et al. (2015), to initialize the action-value function of each node after expansion as Qinit(s, a) = (Q(s, a)− V (s)) /τ , for MENTS and TENTS, as done in Xiao et al. (2019). For RENTS we init Qinit(s, a) = logPprior(a|s)) + (Q(s, a)− V (s)) /τ , where Pprior is the Boltzmann distribution induced by action-values Q(s, .) computed from the network. Each experimental run consists of 512 MCTS simulations. The temperature τ is optimized for each algorithm and game via grid-search between 0.01 and 1. The discount factor is γ = 0.99, and for PUCT the exploration constant is c = 0.1. Table 2 shows the performance, in terms of cumulative reward, of standard AlphaGo with PUCT and our three regularized versions, on 22 Atari games. Moreover, we test also AlphaGo using the MaxMCTS backup (Khandelwal et al., 2016) for further comparison with classic baselines. We observe that regularized MCTS dominates other baselines, in particular TENTS achieves the highest scores in all the 22 games, showing that sparse policies are more effective in Atari. This can be explained by Corollary 6 which shows that Tsallis entropy can lead to a lower error at the root node even with a high number of actions compared to relative or maximum entropy." }, { "heading": "6 CONCLUSION", "text": "We introduced a theory of convex regularization in Monte-Carlo Tree Search (MCTS) based on the Legendre-Fenchel transform. Exploiting this theoretical framework, we studied the regret of MCTS when using a generic strongly convex regularizer, and we proved that it has an exponential convergence rate. We use these results to motivate the use of entropy regularization in MCTS, particularly considering maximum, relative, and Tsallis entropy. Finally, we test regularized MCTS algorithms in discrete control problems and Atari games, showing its advantages over other methods." }, { "heading": "A RELATED WORK", "text": "Entropy regularization is a common tool for controlling exploration in Reinforcement Learning (RL) and has lead to several successful methods (Schulman et al., 2015; Haarnoja et al., 2018; Schulman et al., 2017a; Mnih et al., 2016). Typically specific forms of entropy are utilized such as maximum entropy (Haarnoja et al., 2018) or relative entropy (Schulman et al., 2015). 
This approach is an instance of the more generic duality framework commonly used in convex optimization theory. Duality has been extensively studied in game theory (Shalev-Shwartz & Singer, 2006; Pavel, 2007) and more recently in RL, for instance considering mirror descent optimization (Montgomery & Levine, 2016; Mei et al., 2019), drawing the connection between MCTS and regularized policy optimization (Grill et al., 2020), or formalizing the RL objective via Legendre-Rockafellar duality (Nachum & Dai, 2020). Recently, (Geist et al., 2019) introduced regularized Markov Decision Processes, formalizing the RL objective with a generalized form of convex regularization based on the Legendre-Fenchel transform. In this paper, we provide a novel study of convex regularization in MCTS, and derive relative entropy (KL-divergence) and Tsallis entropy regularized MCTS algorithms, i.e. RENTS and TENTS respectively. Note that the recent maximum entropy MCTS algorithm MENTS (Xiao et al., 2019) is a special case of our generalized regularized MCTS. Unlike MENTS, RENTS can take advantage of any action distribution prior; in the experiments the prior is derived using Deep Q-learning (Mnih et al., 2015). On the other hand, TENTS allows for sparse action exploration and thus higher dimensional action spaces compared to MENTS. In experiments, both RENTS and TENTS outperform MENTS.

Several works focus on modifying classical MCTS to improve exploration. UCB1-tuned (Auer et al., 2002) modifies the upper confidence bound of UCB1 to account for variance in order to improve exploration. (Tesauro et al., 2012) proposes a Bayesian version of UCT, which obtains better estimates of node values and uncertainties given limited experience. Many heuristic approaches based on specific domain knowledge have been proposed, such as adding a bonus term to value estimates (Gelly & Wang, 2006; Teytaud & Teytaud, 2010; Childs et al., 2008; Kozelek, 2009; Chaslot et al., 2008) or prior knowledge collected during policy search (Gelly & Silver, 2007; Helmbold & Parker-Wood, 2009; Lorentz, 2010; Tom, 2010; Hoock et al., 2010). (Khandelwal et al., 2016) formalizes and analyzes different on-policy and off-policy complex backup approaches for MCTS planning based on RL techniques. (Vodopivec et al., 2017) proposes an approach called Sarsa-UCT, which performs the dynamic programming backups using SARSA (Rummery, 1995). Both (Khandelwal et al., 2016) and (Vodopivec et al., 2017) directly borrow value backup ideas from RL to estimate the value at each tree node, but they do not provide any proof of convergence." }, { "heading": "B PROOFS", "text": "Let r̂ and r be, respectively, the average and the expected reward at the leaf node, and let the reward distribution at the leaf node be σ²-sub-Gaussian.

Lemma 1 For the stochastic bandit problem, E3W guarantees that, for t ≥ 4,
P( ‖ r − r̂t ‖∞ ≥ 2σ / log(2 + t) ) ≤ 4|A| exp( −t / (log(2 + t))³ ).
Proof 1 Let us define Nt(a) as the number of times action a has been chosen until time t, and N̂t(a) = ∑_{s=1}^{t} πs(a), where πs(a) is the E3W policy at time step s.
By choosing λs = |A| log(1+s) , it follows that for all a and t ≥ 4,\nN̂t(a) = t∑ s=1 πs(a) ≥ t∑ s=1\n1\nlog(1 + s) ≥ t∑ s=1\n1 log(1 + s) − s/(s+ 1) (log(1 + s))2\n≥ ∫ 1+t\n1\n1 log(1 + s) − s/(s+ 1) (log(1 + s))2 ds = 1 + t log(2 + t) − 1 log 2 ≥ t 2 log(2 + t) .\nFrom Theorem 2.19 in Wainwright (2019), we have the following concentration inequality:\nP(|Nt(a)− N̂t(a)| > ) ≤ 2 exp{− 2 2 ∑t s=1 σ 2 s } ≤ 2 exp{−2 2 t },\nwhere σ2s ≤ 1/4 is the variance of a Bernoulli distribution with p = πs(k) at time step s. We define the event\nE = {∀a ∈ A, |N̂t(a)−Nt(a)| ≤ },\nand consequently\nP(|N̂t(a)−Nt(a)| ≥ ) ≤ 2|A| exp(− 2 2\nt ). (16)\nConditioned on the event E , for = t4 log(2+t) , we have Nt(a) ≥ t 4 log(2+t) . For any action a by the definition of sub-gaussian,\nP ( |r(a)− r̂t(a)| > √ 8σ2 log( 2δ ) log(2 + t)\nt\n) ≤ P ( |r(a)− r̂t(a)| > √ 2σ2 log( 2δ )\nNt(a)\n) ≤ δ\nby choosing a δ satisfying log( 2δ ) = 1 (log(2+t))3 , we have\nP ( |r(a)− r̂t(a)| > √ 2σ2 log( 2δ )\nNt(a)\n) ≤ 2 exp ( − 1\n(log(2 + t))3\n) .\nTherefore, for t ≥ 2\nP ( ‖ r − r̂t ‖∞>\n2σ\nlog(2 + t)\n) ≤ P ( ‖ r − r̂t ‖∞>\n2σ\nlog(2 + t) ∣∣∣∣∣E ) + P(EC )\n≤ ∑ k\n( P ( |r(a)− r̂t(a)| >\n2σ\nlog(2 + t)\n) + P(EC ) ≤ 2|A| exp ( − 1\n(log(2 + t))3\n))\n+ 2|A| exp ( − t\n(log(2 + t))3\n) = 4|A| exp ( − t\n(log(2 + t))3\n) .\nLemma 2 Given two policies π(1) = ∇Ω∗(r(1)) and π(2) = ∇Ω∗(r(2)),∃L, such that\n‖ π(1) − π(2) ‖p≤ L ‖ r(1) − r(2) ‖p .\nProof 2 This comes directly from the fact that π = ∇Ω∗(r) is Lipschitz continuous with `p-norm. Note that p has different values according to the choice of regularizer. Refer to Niculae & Blondel (2017) for a discussion of each norm using Shannon entropy and Tsallis entropy regularizer. Relative entropy shares the same Properties with Shannon Entropy.\nLemma 3 Consider the E3W policy applied to a tree. At any node s of the tree with depth d, Let us define N∗t (s, a) = π ∗(a|s).t, and N̂t(s, a) = ∑t s=1 πs(a|s), where πk(a|s) is the policy at time step k. There exists some C and Ĉ such that\nP ( |N̂t(s, a)−N∗t (s, a)| > Ct\nlog t\n) ≤ Ĉ|A|t exp{− t\n(log t)3 }.\nProof 3 We denote the following event,\nErk = {‖ r(s′, .)− r̂k(s′, .) ‖∞< 2σ\nlog(2 + k) }.\nThus, conditioned on the event ⋂t i=1Ert and for t ≥ 4, we bound |N̂t(s, a)−N∗t (s, a)| as\n|N̂t(s, a)−N∗t (s, a)| ≤ t∑\nk=1\n|π̂k(a|s)− π∗(a|s)|+ t∑\nk=1\nλk\n≤ t∑\nk=1\n‖ π̂k(.|s)− π∗(.|s) ‖∞ + t∑\nk=1\nλk\n≤ t∑\nk=1\n‖ π̂k(.|s)− π∗(.|s) ‖p + t∑\nk=1\nλk\n≤ L t∑\nk=1\n‖ Q̂k(s′, .)−Q(s′, .) ‖p + t∑\nk=1\nλk(Lemma 2)\n≤ L|A| 1 p t∑ k=1 ‖ Q̂k(s′, .)−Q(s′, .) ‖∞ + t∑ k=1 λk( Property of p-norm)\n≤ L|A| 1 p γd t∑ k=1 ‖ r̂k(s′′, .)− r(s′′, .) ‖∞ + t∑ k=1 λk(Contraction 3.1)\n≤ L|A| 1 p γd t∑ k=1\n2σ\nlog(2 + k) + t∑ k=1 λk\n≤ L|A| 1 p γd ∫ t k=0\n2σ\nlog(2 + k) dk + ∫ t k=0 |A| log(1 + k) dk\n≤ Ct log t .\nfor some constant C depending on |A|, p, d, σ, L, and γ . Finally,\nP(|N̂t(s, a)−N∗t (s, a)| ≥ Ct\nlog t ) ≤ t∑ i=1 P(Ecrt) = t∑ i=1 4|A| exp(− t (log(2 + t))3 )\n≤ 4|A|t exp(− t (log(2 + t))3 )\n= O(t exp(− t (log(t))3 )).\nLemma 4 Consider the E3W policy applied to a tree. At any node s of the tree, Let us define N∗t (s, a) = π\n∗(a|s).t, and Nt(s, a) as the number of times action a have been chosen until time step t. 
There exists some C and Ĉ such that\nP ( |Nt(s, a)−N∗t (s, a)| > Ct\nlog t\n) ≤ Ĉt exp{− t\n(log t)3 }.\nProof 4 Based on the result from Lemma 3, we have\nP ( |Nt(s, a)−N∗t (s, a)| > (1 + C) t\nlog t\n) ≤ Ct exp{− t\n(log t)3 }\n≤ P ( |N̂t(s, a)−N∗t (s, a)| > Ct\nlog t\n) + P ( |Nt(s, a)− N̂t(s, a)| > t\nlog t ) ≤ 4|A|t exp{− t\n(log(2 + t))3 }+ 2|A| exp{− t (log(2 + t))2 }(Lemma 3 and (16))\n≤ O(t exp(− t (log t)3 )).\nTheorem 1 At the root node s of the tree, defining N(s) as the number of visitations and VΩ∗(s) as the estimated value at node s, for > 0, we have\nP(|VΩ(s)− V ∗Ω(s)| > ) ≤ C exp{− N(s)\nĈ(log(2 +N(s)))2 }.\nProof 5 We prove this concentration inequality by induction. When the depth of the tree is D = 1, from Proposition 1, we get\n|VΩ(s)− V ∗Ω(s)| =‖ Ω∗(QΩ(s, .))− Ω∗(Q∗Ω(s, .)) ‖∞≤ γ ‖ r̂ − r∗ ‖∞ (Contraction)\nwhere r̂ is the average rewards and r∗ is the mean reward. So that\nP(|VΩ(s)− V ∗Ω(s)| > ) ≤ P(γ ‖ r̂ − r∗ ‖∞> ).\nFrom Lemma 1, with = 2σγlog(2+N(s)) , we have\nP(|VΩ(s)− V ∗Ω(s)| > ) ≤ P(γ ‖ r̂ − r∗ ‖∞> ) ≤ 4|A| exp{− N(s)\n2σγ(log(2 +N(s)))2 }\n= C exp{− N(s) Ĉ(log(2 +N(s)))2 }.\nLet assume we have the concentration bound at the depth D − 1, Let us define VΩ(sa) = QΩ(s, a), where sa is the state reached taking action a from state s. then at depth D − 1\nP(|VΩ(sa)− V ∗Ω(sa)| > ) ≤ C exp{− N(sa)\nĈ(log(2 +N(sa)))2 }. (17)\nNow at the depth D, because of the Contraction Property, we have\n|VΩ(s)− V ∗Ω(s)| ≤ γ ‖ QΩ(s, .)−Q∗Ω(s, .) ‖∞ = γ|QΩ(s, a)−Q∗Ω(s, a)|.\nSo that\nP(|VΩ(s)− V ∗Ω(s)| > ) ≤ P(γ ‖ QΩ(s, a)−Q∗Ω(s, a) ‖> )\n≤ Ca exp{− N(sa)\nĈa(log(2 +N(sa)))2 }\n≤ Ca exp{− N(sa)\nĈa(log(2 +N(s)))2 }.\nFrom (17), we can have limt→∞N(sa) = ∞ because if ∃L,N(sa) < L, we can find > 0 for which (17) is not satisfied. From Lemma 4, when N(s) is large enough, we have N(sa) → π∗(a|s)N(s) (for example N(sa) > 12π ∗(a|s)N(s)), that means we can find C and Ĉ that satisfy\nP(|VΩ(s)− V ∗Ω(s)| > ) ≤ C exp{− N(s)\nĈ(log(2 +N(s)))2 }.\nLemma 5 At any node s of the tree, N(s) is the number of visitations. We define the event\nEs = {∀ a in A, |N(s, a)−N∗(s, a)| < N∗(s, a)\n2 } where N∗(s, a) = π∗(a|s)N(s),\nwhere > 0 and VΩ∗(s) is the estimated value at node s. We have\nP(|VΩ(s)− V ∗Ω(s)| > |Es) ≤ C exp{− N(s)\nĈ(log(2 +N(s)))2 }.\nProof 6 The proof is the same as in Theorem 2. We prove the concentration inequality by induction. When the depth of the tree is D = 1, from Proposition 1, we get\n|VΩ(s)− V ∗Ω(s)| =‖ Ω∗(QΩ(s, .))− Ω∗(Q∗Ω(s, .)) ‖≤ γ ‖ r̂ − r∗ ‖∞ (Contraction Property) where r̂ is the average rewards and r∗ is the mean rewards. So that\nP(|VΩ(s)− V ∗Ω(s)| > ) ≤ P(γ ‖ r̂ − r∗ ‖∞> ).\nFrom Lemma 1, with = 2σγlog(2+N(s)) and given Es, we have\nP(|VΩ(s)− V ∗Ω(s)| > ) ≤ P(γ ‖ r̂ − r∗ ‖∞> ) ≤ 4|A| exp{− N(s)\n2σγ(log(2 +N(s)))2 }\n= C exp{− N(s) Ĉ(log(2 +N(s)))2 }.\nLet assume we have the concentration bound at the depth D − 1, Let us define VΩ(sa) = QΩ(s, a), where sa is the state reached taking action a from state s, then at depth D − 1\nP(|VΩ(sa)− V ∗Ω(sa)| > ) ≤ C exp{− N(sa)\nĈ(log(2 +N(sa)))2 }.\nNow at depth D, because of the Contraction Property and given Es, we have\n|VΩ(s)− V ∗Ω(s)| ≤ γ ‖ QΩ(s, .)−Q∗Ω(s, .) ‖∞ = γ|QΩ(s, a)−Q∗Ω(s, a)|(∃a, satisfied).\nSo that\nP(|VΩ(s)− V ∗Ω(s)| > ) ≤ P(γ ‖ QΩ(s, a)−Q∗Ω(s, a) ‖> )\n≤ Ca exp{− N(sa)\nĈa(log(2 +N(sa)))2 }\n≤ Ca exp{− N(sa)\nĈa(log(2 +N(s)))2 }\n≤ C exp{− N(s) Ĉ(log(2 +N(s)))2 }(because of Es)\n.\nTheorem 2 Let at be the action returned by algorithm E3W at iteration t. 
Then for t large enough, with some constants C, Ĉ,\nP(at 6= a∗) ≤ Ct exp{− t\nĈσ(log(t))3 }.\nProof 7 Let us define event Es as in Lemma 5. Let a∗ be the action with largest value estimate at the root node state s. The probability that E3W selects a sub-optimal arm at s is\nP(at 6= a∗) ≤ ∑ a P(VΩ(sa)) > VΩ(sa∗)|Es) + P(Ecs)\n= ∑ a P((VΩ(sa)− V ∗Ω(sa))− (VΩ(sa∗)− V ∗Ω(sa∗)) ≥ V ∗Ω(sa∗)− V ∗Ω(sa)|Es) + P(Ecs).\nLet us define ∆ = V ∗Ω(sa∗)− V ∗Ω(sa), therefore for ∆ > 0, we have P(at 6= a∗) ≤ ∑ a P((VΩ(sa)− V ∗Ω(sa))− (VΩ(sa∗)− V ∗Ω(sa∗)) ≥ ∆|Es) + +P(Ecs)\n≤ ∑ a P(|VΩ(sa)− V ∗Ω(sa)| ≥ α∆|Es) + P(|VΩ(sa∗)− V ∗Ω(sa∗)| ≥ β∆|Es) + P(Ecs)\n≤ ∑ a Ca exp{− N(s)(α∆) Ĉa(log(2 +N(s)))2 }+ Ca∗ exp{−\nN(s)(β∆)\nĈa∗(log(2 +N(s)))2 }+ P(Ecs),\nwhere α+β = 1, α > 0, β > 0, and N(s) is the number of visitations the root node s. Let us define 1 Ĉ = min{ (α∆)Ca , (β∆) Ca∗ }, and C = 1|A| max{Ca, Ca∗} we have\nP(a 6= a∗) ≤ C exp{− t Ĉσ(log(2 + t))2 }+ P(Ecs).\nFrom Lemma 4, ∃C ′ , Ĉ ′ for which\nP(Ecs) ≤ C ′ t exp{− t\nĈ ′(log(t))3 },\nso that\nP(a 6= a∗) ≤ O(t exp{− t (log(t))3 }).\nTheorem 3 Consider an E3W policy applied to the tree. Let κi = ∇Ω∗(ai|s) + Lp √ Ĉσ2 log Cδ/2n,\nχi = ∇Ω∗(ai|s) − Lp √ Ĉσ2 log Cδ/2n, where ∇Ω∗(.|s) is the policy with respect to the mean value\nvector V (·) at the root node s. For any δ > 0, with probability at least 1− δ, ∃ constant L, p, C, Ĉ so that the pseudo regret Rn satisfies\nnV ∗ − n ∑ i Vi ( κi + L p (τ(UΩ − LΩ) 1− γ )) ≤ Rn ≤ nV ∗ − n ∑ i Vi ( χi − L p (τ(UΩ − LΩ) 1− γ )) .\nProof 8 From Lemma 2 given two policies π(1) = ∇Ω∗(r(1)) and π(2) = ∇Ω∗(r(2)),∃L, such that\n‖ π(1) − π(2) ‖p≤ L ‖ r(1) − r(2) ‖p≤ L 1\np ‖ r(1) − r(2) ‖∞ .\nFrom (13), we have the regret\nRn = nV ∗ − ∑ i Vi n∑ t=1 π̂t(ai|s), (18)\nwhere π̂t(·) is the policy at time step t, and I(·) is the indicator function. V ∗ is the optimal branch at the root node, Vi is the mean value function of the branch with respect to action i, V (·) is the |A|\nvector of value function at the root node. V̂ (·) is the |A| estimation vector of value function at the root node. π(.|s) = ∇Ω∗(V (·)) is the policy with respect to the V (·) vector at the root node. Then for any δ > 0, with probability at least 1− δ, we have\n|π(ai|s)− π̂t(ai|s)| ≤‖ π(.|s)− π̂t(.|s) ‖∞≤ L\np ‖ V (·)− V̂ (·) ‖∞ (Lemma 2) (19)\n≤ L p |V (·)− V̂ (·)| ≤ L p\n( τ(UΩ − LΩ)\n1− δ +\n√ Ĉσ2 log Cδ\n2N(s)\n) (Theorem 4)\nSo that\nπ(ai|s)− L\np\n( τ(UΩ − LΩ)\n1− δ +\n√ Ĉσ2 log Cδ\n2N(s)\n) ≤ π̂t(ai|s) ≤ π(ai|s) + L\np\n( τ(UΩ − LΩ)\n1− δ +\n√ Ĉσ2 log Cδ\n2N(s) ) so that\nRn = nV ∗ − ∑ i Vi n∑ t=1 π̂t(ai|s) ≤ nV ∗ − ∑ i Vi n∑ t=1 ( π(ai|s)− L p (τ(UΩ − LΩ) 1− δ +\n√ Ĉσ2 log Cδ\n2n\n))\nRn ≤ nV ∗ − ∑ i Vi n∑ t=1 ( π(ai|s)− L p (τ(UΩ − LΩ) 1− δ +\n√ Ĉσ2 log Cδ\n2n\n))\nRn ≤ nV ∗ − n ∑ i Vi ( π(ai|s)− L p (τ(UΩ − LΩ) 1− δ +\n√ Ĉσ2 log Cδ\n2n\n)) (20)\nAnd\nRn ≥ nV ∗ − ∑ i Vi n∑ t=1 ( π(ai|s) + L p (τ(UΩ − LΩ) 1− δ +\n√ Ĉσ2 log Cδ\n2n\n))\nRn ≥ nV ∗ − n ∑ i Vi ( π(ai|s) + L p (τ(UΩ − LΩ) 1− δ +\n√ Ĉσ2 log Cδ\n2n )) In case of Maximum Entropy and Relative Entropy p = 1, because\n‖ π(1) − π(2) ‖∞≤ L ‖ r(1) − r(2) ‖∞ . So that we have for MENTS\nnV ∗ − n ∑ i Vi ( κi + L (τ log |A| 1− γ )) ≤ Rn ≤ nV ∗ − n ∑ i Vi ( χi − L (τ log |A| 1− γ )) .\nFor RENTS, we have nV ∗ − n ∑ i Vi ( κi + L (τ(log |A| − 1m ) 1− γ )) ≤ Rn ≤ nV ∗ − n ∑ i Vi ( χi − L (τ(log |A| − 1m ) 1− γ )) where m = mina π(a|s). 
In case of Tsallis Entropy p = 2 ( Niculae & Blondel (2017)), so that\nnV ∗ − n ∑ i Vi ( κi + L 2 ( |A| − 1 2|A| τ 1− γ )) ≤ Rn ≤ nV ∗ − n ∑ i Vi ( χi − L 2 ( |A| − 1 2|A| τ 1− γ ))\nBefore derive the next theorem, we state here the Theorem 2 in Geist et al. (2019)\n• Boundedness: for two constants LΩ and UΩ such that for all π ∈ Π, we have LΩ ≤ Ω(π) ≤ UΩ, then\nV ∗(s)− τ(UΩ − LΩ) 1− γ ≤ V ∗Ω(s) ≤ V ∗(s). (21)\nWhere τ is the temperature and γ is the discount constant.\nTheorem 4 For any δ > 0, with probability at least 1− δ, the εΩ satisfies\n−\n√ Ĉσ2 log Cδ\n2N(s) − τ(UΩ − LΩ) 1− γ ≤ εΩ ≤\n√ Ĉσ2 log Cδ\n2N(s) .\nProof 9 From Theorem 2, let us define δ = C exp{− 2N(s) 2\nĈσ2 }, so that =\n√ Ĉσ2 log Cδ\n2N(s) then for any\nδ > 0, we have\nP(|VΩ(s)− V ∗Ω(s)| ≤\n√ Ĉσ2 log Cδ\n2N(s) ) ≥ 1− δ.\nThen, for any δ > 0, with probability at least 1− δ, we have\n|VΩ(s)− V ∗Ω(s)| ≤\n√ Ĉσ2 log Cδ\n2N(s)\n−\n√ Ĉσ2 log Cδ\n2N(s) ≤ VΩ(s)− V ∗Ω(s) ≤\n√ Ĉσ2 log Cδ\n2N(s)\n−\n√ Ĉσ2 log Cδ\n2N(s) + V ∗Ω(s) ≤ VΩ(s) ≤\n√ Ĉσ2 log Cδ\n2N(s) + V ∗Ω(s).\nFrom Proposition 1, we have\n−\n√ Ĉσ2 log Cδ\n2N(s) + V ∗(s)− τ(UΩ − LΩ) 1− γ ≤ VΩ(s) ≤\n√ Ĉσ2 log Cδ\n2N(s) + V ∗(s)." } ]
2020
null
SP:222cf20bdaa0a95c8dd13031acf16dd19ca3f318
[ "The paper proposes a weight-encoded neural implicit representation for 3D shapes. The idea is to encode every shape in the network weights of its own designated small MLP network, instead of trying to learn a latent space of shapes. This leads to a really compact shape representation based on signed distance fields that could be interesting for many applications. The approach uses importance sampling to speed up training and robust losses." ]
A neural implicit outputs a number indicating whether the given query point in space is inside, outside, or on a surface. Many prior works have focused on latent-encoded neural implicits, where a latent vector encoding of a specific shape is also fed as input. While affording latent-space interpolation, this comes at the cost of reconstruction accuracy for any single shape. Training a specific network for each 3D shape, a weight-encoded neural implicit may forgo the latent vector and focus reconstruction accuracy on the details of a single shape. While previously considered as an intermediary representation for 3D scanning tasks or as a toy-problem leading up to latent-encoding tasks, weight-encoded neural implicits have not yet been taken seriously as a 3D shape representation. In this paper, we establish that weight-encoded neural implicits meet the criteria of a first-class 3D shape representation. We introduce a suite of technical contributions to improve reconstruction accuracy, convergence, and robustness when learning the signed distance field induced by a polygonal mesh – the de facto standard representation. Viewed as a lossy compression, our conversion outperforms standard techniques from geometry processing. Compared to previous latent- and weight-encoded neural implicits we demonstrate superior robustness, scalability, and performance.
[]
[ { "authors": [ "cent Vanhoucke", "Vijay Vasudevan", "Fernanda Viégas", "Oriol Vinyals", "Pete Warden", "Martin Wattenberg", "Martin Wicke", "Yuan Yu", "Xiaoqiang Zheng" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "venue": null, "year": 2015 }, { "authors": [ "Matan Atzmon", "Yaron Lipman" ], "title": "Sal: Sign agnostic learning of shapes from raw data", "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Matan Atzmon", "Yaron Lipman" ], "title": "Sal++: Sign agnostic learning with derivatives, 2020b", "venue": null, "year": 2020 }, { "authors": [ "Gavin Barill", "Neil Dickson", "Ryan Schmidt", "David I.W. Levin", "Alec Jacobson" ], "title": "Fast winding numbers for soups and clouds", "venue": "ACM Transactions on Graphics,", "year": 2018 }, { "authors": [ "Angel X Chang", "Thomas Funkhouser", "Leonidas Guibas", "Pat Hanrahan", "Qixing Huang", "Zimo Li", "Silvio Savarese", "Manolis Savva", "Shuran Song", "Hao Su" ], "title": "Shapenet: An information-rich 3d model repository", "venue": "arXiv preprint arXiv:1512.03012,", "year": 2015 }, { "authors": [ "Zhiqin Chen", "Hao Zhang" ], "title": "Learning implicit fields for generative shape modeling", "venue": "Proceedings of IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Michael Garland", "Paul S Heckbert" ], "title": "Surface simplification using quadric error metrics", "venue": "In Proceedings of the 24th annual conference on Computer graphics and interactive techniques,", "year": 1997 }, { "authors": [ "Amos Gropp", "Lior Yariv", "Niv Haim", "Matan Atzmon", "Yaron Lipman" ], "title": "Implicit geometric regularization for learning", "venue": null, "year": 2020 }, { "authors": [ "John C Hart" ], "title": "Sphere tracing: A geometric method for the antialiased ray tracing of implicit surfaces", "venue": "The Visual Computer,", "year": 1996 }, { "authors": [ "Kurt Hornik", "Maxwell Stinchcombe", "Halbert White" ], "title": "Multilayer feedforward networks are universal approximators", "venue": "Neural networks,", "year": 1989 }, { "authors": [ "Alec Jacobson", "Ladislav Kavan", "Olga Sorkine-Hornung" ], "title": "Robust inside-outside segmentation using generalized winding numbers", "venue": "ACM Transactions on Graphics (TOG),", "year": 2013 }, { "authors": [ "Alec Jacobson", "Daniele Panozzo", "C Schüller", "Olga Diamanti", "Qingnan Zhou", "N Pietroni" ], "title": "libigl: A simple c++ geometry processing library, 2016", "venue": null, "year": 2016 }, { "authors": [ "H. Kahn", "T.E. 
Harris" ], "title": "Estimation of particle transmission by random sampling", "venue": "National Bureau of Standards applied mathematics series,", "year": 1951 }, { "authors": [ "Andrew Kerr", "Duane Merrill", "Julien Demouth", "John Tran" ], "title": "Cutlass: Fast linear algebra in cuda c, Sep 2018", "venue": "URL https://devblogs.nvidia.com/ cutlass-linear-algebra-cuda/", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Gidi Littwin", "Lior Wolf" ], "title": "Deep meta functionals for shape representation", "venue": "CoRR, abs/1908.06277,", "year": 2019 }, { "authors": [ "Shaohui Liu", "Yinda Zhang", "Songyou Peng", "Boxin Shi", "Marc Pollefeys", "Zhaopeng Cui" ], "title": "Dist: Rendering deep implicit signed distance function with differentiable sphere tracing", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Daniel Maturana", "Sebastian Scherer" ], "title": "Voxnet: A 3d convolutional neural network for real-time object recognition", "venue": "In Ieee/rsj International Conference on Intelligent Robots and Systems,", "year": 2015 }, { "authors": [ "Lars Mescheder", "Michael Oechsle", "Michael Niemeyer", "Sebastian Nowozin", "Andreas Geiger" ], "title": "Occupancy networks: Learning 3d reconstruction in function space", "venue": "In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Ben Mildenhall", "Pratul P. Srinivasan", "Matthew Tancik", "Jonathan T. Barron", "Ravi Ramamoorthi", "Ren Ng" ], "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Yutaka Ohtake", "Alexander Belyaev", "Marc Alexa", "Greg Turk", "Hans-Peter Seidel" ], "title": "Multi-level partition of unity implicits", "venue": "In Acm Siggraph 2005 Courses, pp. 173–es", "year": 2005 }, { "authors": [ "Jeong Joon Park", "Peter Florence", "Julian Straub", "Richard Newcombe", "Steven Lovegrove" ], "title": "Deepsdf: Learning continuous signed distance functions for shape representation", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Nasim Rahaman", "Aristide Baratin", "Devansh Arpit", "Felix Draxler", "Min Lin", "Fred Hamprecht", "Yoshua Bengio", "Aaron Courville" ], "title": "On the spectral bias of neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Vincent Sitzmann", "Eric R. Chan", "Richard Tucker", "Noah Snavely", "Gordon Wetzstein" ], "title": "Metasdf: Meta-learning signed distance functions", "venue": "In arXiv,", "year": 2020 }, { "authors": [ "Vincent Sitzmann", "Julien N.P. Martel", "Alexander W. Bergman", "David B. 
Lindell", "Gordon Wetzstein" ], "title": "Implicit neural representations with periodic activation functions", "venue": "In arXiv,", "year": 2020 }, { "authors": [ "Matthew Tancik", "Pratul P Srinivasan", "Ben Mildenhall", "Sara Fridovich-Keil", "Nithin Raghavan", "Utkarsh Singhal", "Ravi Ramamoorthi", "Jonathan T Barron", "Ren Ng" ], "title": "Fourier features let networks learn high frequency functions in low dimensional domains", "venue": "arXiv preprint arXiv:2006.10739,", "year": 2020 }, { "authors": [ "Peng-Shuai Wang", "Chun-Yu Sun", "Yang Liu", "Xin Tong" ], "title": "Adaptive O-CNN: A Patch-based Deep Representation of 3D Shapes", "venue": "ACM Transactions on Graphics (SIGGRAPH Asia),", "year": 2018 }, { "authors": [ "Yifan Xu", "Tianqi Fan", "Yi Yuan", "Gurprit Singh" ], "title": "Ladybird: Quasi-monte carlo sampling for deep implicit field based 3d reconstruction with symmetry", "venue": "In Proc. ECCV,", "year": 2020 }, { "authors": [ "Qingnan Zhou", "Alec Jacobson" ], "title": "Thingi10k: A dataset of 10,000 3d-printing models", "venue": "arXiv preprint arXiv:1605.04797,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "While 3D surface representation has been a foundational topic of study in the computer graphics community for over four decades, recent developments in machine learning have highlighted the potential that neural networks can play as effective parameterizations of solid shapes.\nThe success of neural approaches to shape representations has been evidenced both through their ability of representing complex geometries as well as their utility in end-to-end 3D shape learning, reconstruction, and understanding and tasks. These approaches also make use of the growing availability of user generated 3D content and high-fidelity 3D capture devices, e.g., point cloud scanners.\nFor these 3D tasks, one powerful configuration is to represent a 3D surface S as the set containing any point x ∈ R3 for which an implicit function (i.e., a neural network) evaluates to zero:\nS := { x ∈ R3|fθ( x; z) = 0 } , (1)\nImplicit Explicit (mesh) where θ ∈ Rm are the network weights and z ∈ Rk is an input latent vector encoding a particular shape. In contrast to the de facto standard polygonal mesh representation which explicitly discretizes a surface’s geometry, the function f implicitly defines the shape S encoded in z. We refer to the representation in Eq. (1) as a latent-encoded neural implicit.\nPark et al. (2019) propose to optimize the weights θ so each shape Si ∈ D in a dataset or shape distribution D is encoded into a corresponding latent vector zi. If successfully trained, the weights θ of their DEEPSDF implicit function fθ can be said to generalize across the “shape space” of D. As always with supervision, reducing the training set from D will affect f ’s ability to generalize and can lead to overfitting. Doing so may seem, at first, to be an ill-fated and uninteresting idea.\nOur work considers an extreme case – when the training set is reduced to a single shape Si. We can draw a simple but powerful conclusion: in this setting, one can completely forgo the latent vector\n(i.e., k = 0). From the perspective of learning the shape space of D, we can “purposefully overfit” a network to a single shape Si:\nSi := { x ∈ R3|fθi(x) = 0 } , (2)\nwhere θi now parameterizes a weight-encoded neural implicit for the single shape Si. In the pursuit of learning the “space of shapes,” representing a single shape as a weight-encoded neural implicit has been discarded as a basic validation check or stepping stone toward the ultimate goal of generalizing over many shapes (see, e.g., (Chen & Zhang, 2019; Park et al., 2019; Atzmon & Lipman, 2020a;b)). Weight-encoded neural implicits, while not novel, have been overlooked as a valuable shape representation beyond learning and computer vision tasks. For example, the original DEEPSDF work briefly considered – and nearly immediately discards – the idea of independently encoding each shape of a large collection:\n“Training a specific neural network for each shape is neither feasible nor very useful.” – Park et al. 
(2019)\nWe propose training a specific neural network for each shape and will show that this approach is both feasible and very useful.\nI II III IV\nPoint cloud × × • ×/• Mesh • × • × Regular grid • • × • Adaptive grid • • • × Neural implicit • • • • We establish that a weight-encoded neural implicit meets the criteria of a first-class representation for 3D shapes ready for direct use in graphics and geometry processing pipelines (see inset table) While common solid shape representations have some important features and miss others, neural implicits provide a new and rich constellation of features. Unstructured point clouds are often raw output from 3D scanners, but do not admit straightforward smooth surface visualization (I). While meshes are the de facto standard representation, conducting signed distance queries and CSG operations remain non-trivial (II). Signed distances or occupancies stored on a regular grid admit fast spatial queries and are vectorizeable just like 2D images, but they wastefully sample space uniformly rather than compactly adapt their storage budget to a particular shape (III). Adaptive or sparse grids are more economical, but, just as meshes will have a different number of vertices and faces, adaptive grids will different storage profiles and access paths precluding consistent data vectorization (IV).\nWhile previous methods have explored weight-encoded neural implicits as an intermediary representation for scene reconstruction (e.g., (Mildenhall et al., 2020)) and noisy point-cloud surfacing tasks (e.g., (Atzmon & Lipman, 2020a;b)), we consider neural implicits as the primary geometric representation. Beyond this observational contribution, our technical contributions include a proposed architecture and training regime for converting the (current) most widely-adopted 3D geometry format – polygonal meshes – into a weight-encoded neural implicit representation.\nWe report on experiments1 with different architectures, sampling techniques, and activation functions – including positional encoding (Mildenhall et al., 2020) and sinusoidal activation approaches (Sitzmann et al., 2020b) that have proven powerful in the context of neural implicits. Compared to existing training regimes, we benefit from memory improvements (directly impacting visualization performance), stability to perturbed input data, and scalability to large datasets.\nWeight-encoded neural implicits can be treated as an efficient, lossy compression for 3D shapes. Increasing the size of the network increases the 3D surface accuracy (see Figure 1) and, compared to standard graphics solutions for reducing complexity (mesh decimation and storing signed distances on a regular grid), we achieve higher accuracy for the same memory footprint as well as maintaining a SIMD representation: n shapes can be represented as n weight-vectors for a fixed architecture.\nThe benefits of converting an existing mesh to a neural implicit extends beyond compression: in approximating the signed distance field (SDF) of the model, neural implicits are both directly usable for many tasks in graphics and geometry processing, and preferable in many contexts compared to traditional representations. 
Many downstream uses of 3D shapes already mandate the conversion of meshes to less accurate grid-based SDFs, due to the ease and efficiency of computation for SDFs: here, neural implicits serve as a drop-in replacement.\n1Source code, data, and demo at our (anonymized) repo: https://github.com/u2ni/ICLR2021\nEncoding: Latent Weight\nInterpolation: trivial non-trivial Scalability: poor excellent Stability: poor excellent Many works explore latent-encoding methods (e.g., (Park et al., 2019; Atzmon & Lipman, 2020a;b)), taking advantage of interpolation in latent space as a (learned) proxy for exploration in the “space of shapes”. We show that this flexibility comes at a direct cost of other desirable proprieties. In particular, we show that latentencoded neural implicits scale poorly as a representation for individual shapes both at training and inference time. Existing latent-encoded neural implicits are sensitive to the distribution of training data: while they may perform well for large datasets of a limited subclass of shapes (e.g., “jet airplanes”), we show that training fails with more general 3D shape datasets. Even within a class, existing methods rely on canonical orientation alignment (see Figure 2) in order to alleviate some of this difficulty – such orientation are notably (and notoriously) not present in 3D shapes captured or authored in the wild and, as a result, latent-encoded neural implicits will fail to provide meaningful results for many real-world and practical shape datasets. Fitting latent-encoded neural implicits to each shape independently complicates shape space interpolation, rendering it difficult though not impossible (Sitzmann et al., 2020a). In contrast, weight-encoded neural implicits leverage the power of the neural network function space without the constraints imposed by the requirement of generalizing across shapes through latent sampling." }, { "heading": "2 METHOD", "text": "Neural implicits soared in popularity over the last year. While significant attention has been given to perfecting network architectures and loss functions in the context of latent-encoding and pointcloud reconstruction, there is relatively little consideration of the conversion process from 3D surface meshes to weight-encoded neural implicits (e.g., both Park et al. (2019) and Sitzmann et al. (2020b) consider this task briefly). We focus on identifying a setup to optimize weight-encoded neural implicits for arbitrary shapes robustly with a small number of parameters while achieving a high surface accuracy. Once successfully converted, we consider how the weight-encoded neural implicit representation compares to standard 3D model reduction techniques and how choosing this representation impacts downstream graphics and geometric modeling operations." }, { "heading": "2.1 SIGNED DISTANCE FIELD REGRESSION", "text": "In general, the value of an implicit function f away from its zero-isosurface can be arbitrary. In shape learning, many previous methods have considered occupancy where f(~x) outputs the likelihood of ~x being inside of a solid shape (and extract the surface as the 50%-isosurface) Mildenhall et al. (2020); Mescheder et al. (2019); Littwin & Wolf (2019); Chen & Zhang (2019); Maturana & Scherer (2015); Wang et al. (2018). We instead advocate that f should approximate the signed distance field (SDF) induced by a given solid shape. 
Learning properties aside (see, e.g., (Park et al., 2019)), SDFs are more immediately useful in graphics and geometry processing applications.\nGiven a surface S = ∂V of a volumetric solid V ⊂ R3, the signed distance field gS : R3 → R induced by S is a continuous function of space that outputs the distance of a query point ~x ∈ R3 modulated by ±1 depending on whether ~x is inside or outside of the solid:\ngS(~x) = signS(~x) min ~p∈S ‖~x− ~p‖, where signS(~x) = { −1 if ~x ∈ V , 1 otherwise.\n(3)\nOur goal is to regress a feed-forward network fθ to approximate the SDF of a given surface S: fθ(~x) ≈ gS(~x). (4)\nIf successfully trained, the weights θ ∈ Rm encode a neural implicit representing S." }, { "heading": "2.1.1 ARCHITECTURE", "text": "Our proposed architecture is a feed-forward fully connected network with N layers, of hidden size H . Each hidden layer has ReLU non-linearities, while the output layer is activated by tanh.\nIncreasing the depth and width of this network will generally improve accuracy but at the cost of increasing the memory footprint and, for example, the time required to render the surface. The\nweight-encoded neural implicit’s rendered in Figures 2, 4, and 8 all share a common architecture of just 8 fully connected layers with a hidden size of 32 (resulting in just 7553 weights, or 59 kB in memory). Through experimentation on a subset of 1000 meshes from Thingi10k (Zhou & Jacobson, 2016), we find that this configuration yields a good balance between reconstruction accuracy, rendering speed, and memory impact (Figure 1). While maintaining acceptable surface quality, our default architecture has a 99% reduction in number of parameters and 93% speed up in “time to render first frame” compared to the default weight-encoding architecture of (Park et al., 2019).\nExcited by the recent work exploring methods to overcome an MLP’s bias to learn low frequency signals faster, we performed experiments using both positional encodings (Tancik et al., 2020) and SIREN activations (Sitzmann et al., 2020b). Both perform well when the network architecture is sufficiently wide (e.g., H > 64), but introduce surface noise with our more compact architecture. See Appendix A.3 for detailed experimental setup and findings.\nBy increasing N and H , our network could in theory (Hornik et al., 1989) learn to emulate any arbitrary topology shape with infinite precision. In reality, like any representation, there are tradeoffs. The network complexity can be increased over our base configuration for smaller surface reconstruction error, or decreased for faster rendering speeds depending on the application. A sample of geometries produced at a number of configurations can be seen in Figure 1." }, { "heading": "2.1.2 INTEGRATED LOSS → IMPORTANCE SAMPLING", "text": "Particularly choices of pointwise loss functions have been well explored by previous papers (Park et al., 2019; Atzmon & Lipman, 2020a;b; Gropp et al., 2020; Sitzmann et al., 2020b), in our experiments we find that a simple absolute difference |fθ( x) − gS( x)| works well. Defining the total loss after the fact via ad hoc sampling (near-)surface sample process (Park et al., 2019; Atzmon & Lipman, 2020a;b) leaves an unclear notion whether the total loss can be expressed as an integral and hides possibly unwanted bias. 
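To make the base configuration of Section 2.1.1 concrete, the following is a minimal tf.keras sketch (an illustration, not our exact training code; the helper name make_sdf_net is ours, and the Adam optimizer with 10−4 learning rate and pointwise L1 loss restate choices from Sections 2.1.2 and 3):

import tensorflow as tf

# Minimal sketch of the base architecture: one 3->32 input layer plus seven
# 32->32 hidden layers (all ReLU) and a 32->1 tanh output, which totals the
# 7553 trainable weights quoted above.
def make_sdf_net(hidden=32, depth=8):
    net = tf.keras.Sequential()
    net.add(tf.keras.layers.Dense(hidden, activation='relu', input_shape=(3,)))
    for _ in range(depth - 1):
        net.add(tf.keras.layers.Dense(hidden, activation='relu'))
    net.add(tf.keras.layers.Dense(1, activation='tanh'))
    return net

model = make_sdf_net()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss='mae')  # pointwise L1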
We focus instead on how to integrate this pointwise loss over space.\nSampling based on mesh vertices (Littwin & Wolf, 2019; Sitzmann et al., 2020b) reduces accuracy in the middle of triangle edges and faces and introduces bias near regions of the mesh (inset: Vertex) with denser vertex distributions regardless of the geometric complexity or saliency of the region.\nVertex Surface Ourshigh density\nlow density\nSimilarly, sampling from Gaussians centered on the surface Park et al. (2019); Chen & Zhang (2019); Atzmon & Lipman (2020a;b) will place over emphasis in regions of high curvature, in thin solid/void regions (inset: Surface).\nIn contrast to ad hoc samplings, we define the total loss directly as an integral over space,\nL(θ) =\n∫\nR3 w( x) |fθ( x)− gS( x)| d x, (5)\nwhere w : R3 → R≥0 is a non-negative weighting function with finite integral over R3. Methods which randomly sample within a bounding box around a given shape (Mescheder et al., 2019; Tancik et al., 2020) can be understood as choosing w to be the characteristic function of the box. As Park et al. (2019) already observe, this is wasteful if we care most that f is accurate near the shape’s surface (i.e., where gS = 0).\nWe achieve this directly — without yet invoking sampling — by choosing w exponentially as distance to S grows, specifically: w( x) = e−β|gS( x)|, (6) where β ≥ 0 can be adjusted from uniform sampling (β = 0) to β → ∞ for surface-only sampling. Attempting to sample space and measure the integrand of Eq. (5) directly leads to many samples having little to no numerical effect during training. For example, if β = 30 and we consider a point unit distance away from the surface, the weighting term itself closes in on machine double precision w ≈ 9e − 14. By resisting the urge to prematurely sample until after we have written our total loss function as an integral, we can instead apply importance sampling (Kahn & Harris, 1951) to construct a proportional approximation:\nL(θ) ≈ ∑\nx∈Dw\n|fθ( x)− gS( x)|, (7)\nwhere Dw is a distribution over R3 with probabilities proportional to w. We sample from Dw in practice via a simple subset rejection strategy. Starting with a large (e.g., 10M) pool of uniform samples within a loose bounding sphere around the shape, we re-sample (with replacement) a smaller (e.g., 1M) subset with probability according to w. Further improvements may be possible by incorporating advanced sampling patterns à la Xu et al. (2020).\nCompared to uniform sampling, weighting by our choice of w leads to faster convergence and reduced surface reconstruction when validating against a subset of 1000 geometries from Thingi10k (96 epochs with surface error of 0.00231). Compared to the sampling of Park et al. (2019), we match convergence speed (86 epochs each) and demonstrate a ≈ 5% improvement in surface error.\nGround truth Standard sampling Region-based sampling\nbias points\nPerhaps the most valuable property of our importance sampling scheme to be its flexibility.\nOur method has effectively removed all unintended bias present in previous approaches, and enables complete user control on intended bias to the sampling process. The importance metric, w( x), can be modified to explicitly bias importance toward regions of high curvature, minimum feature size (emulating the hidden bias of Park et al. (2019)), or near user annotations (see inset where w( x) is additionally scaled according to user selection). 
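To make the sampling procedure concrete, a minimal NumPy sketch of the pool-and-reject construction of Dw follows (illustrative only; the callable signed_distance standing in for gS — in practice a BVH-backed query, see Section 2.2 — and the function name are assumptions):

import numpy as np

# Draw a large uniform pool inside a loose bounding sphere, then resample a
# subset (with replacement) with probability proportional to w = exp(-beta |g_S|).
def importance_sample(signed_distance, n_pool=10_000_000, n_keep=1_000_000,
                      beta=30.0, radius=1.0, rng=np.random.default_rng(0)):
    pool = rng.normal(size=(n_pool, 3))
    pool *= radius * rng.uniform(size=(n_pool, 1)) ** (1 / 3) \
            / np.linalg.norm(pool, axis=1, keepdims=True)   # uniform in the ball
    w = np.exp(-beta * np.abs(signed_distance(pool)))        # Eq. (6)
    idx = rng.choice(n_pool, size=n_keep, replace=True, p=w / w.sum())
    return pool[idx]                                         # samples from D_w, Eq. (7)

Because the geometry enters only through w, swapping in a modified importance metric leaves the loop above unchanged.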
This flexibility allows for greater use of the network’s capacity on areas important to the user, without increasing overall network complexity or radically changing the sampling protocol." }, { "heading": "2.2 ROBUST LOSS FUNCTION FOR MESHES IN THE WILD", "text": "The input S should be the boundary of a solid region V ⊂ R3; that is, a closed, consistently oriented, non-self-intersecting surface. Ignoring “two-sided” meshes that are not intended to represent the boundary of a solid shape (e.g., clothing), many if not most meshes found online which intend to represent a solid shape would not qualify these strict pre-conditions. Zhou & Jacobson (2016)\nOriginal geometry with slice plane\nVisual hull reconstruction\nWinding number reconstruction\nMessy input mesh Unsigned distance field Winding number field Robust signed distance field observe that nearly 50% of Thingi10k’s solid models for 3D printing fail one criteria or another. The failure point in terms of our equations so far is the definition of the signing function sign( x) in Eq. (3) which relies on determining whether a point x lies inside V . To determine insideness, previous approaches either require watertight inputs , use error-prone voxel flood-filling (Mescheder et al., 2019) or use inaccurate visual hulls as a proxy (Park et al., 2019) (see inset where visual hill signing can be shown to ”close off” internal structure. Virtox (left) under CC BY. ). Alternatively, Atzmon & Lipman (2020a;b) advocate for a loss function based on unsigned distances.\nThis introduces unnecessary initialization and convergence issues, that can be avoided if we assume that the input mesh intentionally oriented to enclose a solid region (as is the case for nearly all of Thingi10k), but may suffer from open boundaries, self-intersections, non-manifold elements, etc. Under these assumptions, the generalized winding number (Jacobson et al., 2013) computes correct insideness for solid meshes and gracefully degrades to a fractional value for messy input shapes (see inset). Using the tree-based fast winding numbers of Barill et al. (2018) and a bounding volume hierarchy for (unsigned) distances, we can construct our 1M-point sample set efficiently and optimize weights θ for even the most problematic meshes (see inset) in an average of 90 seconds per shape." }, { "heading": "2.3 EFFICIENT VISUALIZATION", "text": "Our weight-encoded neural implicit representation can be treated as its classical counterpart (SDF) and rendered efficiently using sphere-tracing (Hart, 1996). Sphere tracing is a common technique for rendering implicit fields where rays are initialized in the image plane and iteratively “marched” along by a step size equal to the signed distance function value at the current location. The ray is declared to have hit the surface when sufficiently close (< ). For more details, see Morgan McGuire’s comprehensive notes at casual-effects.com.\nWe trivially adapt traditional sphere-tracing by initializing the starting position of each ray to be its first (if any) intersection with the similarity transformed unit sphere, since all weight-encoded neural implicits are normalized to lay within. As rays of the image will converge different times, we employ a dynamic batching method that composes batches of points for inference based on a mask buffer which tracks rays that have converged to the surface or reached the maximum number of steps. Local shading requires the surface normal at the hit point. 
For SDFs, the unit normal vector is immediately revealed as the spatial gradient (i.e., ∂fθ/∂ x). This can be computed by finite differences or back propagation through the network." }, { "heading": "3 IMPLEMENTATION AND RESULTS", "text": "We implement weight-encoded neural implicit networks in Tensorflow (Abadi et al. (2015)) with point sampling and mesh processing implemented in libigl (Jacobson et al. (2016)). We train our model for up to 102 epochs and allow early stopping for quickly converging geometries. We use the ADAM optimizer (Kingma & Ba (2014)) with a fixed learning rate of 10−4. These settings generalized well across a wide range of geometries (see Figures 4 and 5)." }, { "heading": "3.1 SURFACE VISUALIZATION AND CSG", "text": "We implement sphere-marching visualization and shading kernels in CUDA, using CUTLASS (Kerr et al. (2018)) linear algebra libraries for efficient matrix multiplication at inference-time.\nWe achieve an average display frame rate of 34 Hz – for the large subset of the Thingi10k dataset we visualize – when rendering a single neural implicit at 512 × 512 resolution on an Nvidia P100 GPU. This a significant performance improvement over previous learnt implicit inference and display pipelines, attributed in large part to our compact representation. Liu et al. (2020) present a specialized renderer capable of a 1 Hz display rate, however at the price of many conservative optimizations: these include overstepping along all rays by a factor of 50%, increasing the convergence criteria (early stopping), and implementing a coarse-to-fine display strategy. While these additional optimizations could further improve our rendering speed (at the cost of reduced visual quality), we opt to rely on a simpler (and very efficient) standard sphere-marching SDF renderer.\nIndeed, as our representation is a learnt representation of the SDF, we also inherit other importantNeural implicit ge metric modeling benefits of traditional implicit function representations. Weight-encoded neural implicits admit robust shape manipulation and modification using constructive solid geometry operations (CSG) – by directly modifying the inferred distance values (see inset and accompanying video). Weight-encoded neural implicits admit SIMD evaluation and, given their compactness, many neural implicits can be rendered in parallel at interactive rates on modern GPUs." }, { "heading": "3.2 STABILITY AND SCALE", "text": "Training deep neural networks on large geometric datasets can be cumbersome and time consuming. For our weight-encoded neural implicit representation to be effective, we must be able to convert any 3D shape into its weight-encoded form in a reasonable amount of time. Due to our relatively simple\nbase network architecture (8 layers of 32 neurons each) we find that we can overfit our model to any 3D shape in 90 seconds, on average. As this requires only 59 kB of memory, we can train many models/shapes concurrently on modern GPUs without approaching any practical memory limitations – this ease of training is uncommon to other learning-based shape representations. Converting the entirety of the 10,000 models in the Thingi10k dataset Zhou & Jacobson (2016) on an Nvidia Titan RTX only took 16 hours on a single GPU, or four hours on four Nvidia Titan RTX cards.\nConverting the Thingi10k dataset from mesh format to weightencoded neural implicit format reduces the overall storage from 38.85 GB to 590 MB – a 1:66 compression rate. While a DeepSDF network Park et al. 
(2019) trained on the same dataset could compress this dataset to an impressive 7 MB footprint, the latent-decoded geometries it produces are of comparably lower quality. This comparison is representative, as Thingi10k is a real-world mesh dataset of objects obtained “in the wild”. The dataset neither contains geometries aligned to a common frame of reference nor comprises objects nearing no semblance of inter-class categorization. These two properties make it difficult for any latent-encoded neural implicit network to converge to a reasonable result during training.\nWe further support these claims using two experiments. First, we attempt to train DeepSDF on the Thingi10k dataset, and second we experiment with DeepSDF’s ability to reconstruct shapes with slight perturbations from the shapenet (Chang et al., 2015) common shape orientation. Here, DeepSDF does not converge on the 10,000 model Thingi10k dataset, producing incoherent re-\nconstructions when exploring the latent space of shapes it has learned. Moreover, if we further limit DeepSDF to training with a single class of objects, it is not able to reconstruct features on the tails of the inter-class distribution (inset, right). Secondly, we evaluate DeepSDF’s ability to reconstruct geometries not aligned to the common orientation. Here, we retain single-class DeepSDF training and reconstruct the same input shape at orientations differing from the default (Figure 2). This test validates latent-encoding’s reliance on having consistently aligned datasets, immediately precluding their use with large, real-world datasets." }, { "heading": "3.3 REPRESENTATION COMPACTNESS", "text": "All of the shapes in Figure 4 were rendered with weight-encoded neural implicits generated using our base network architecture, resulting in a total of 7553 weights for each shape’s implicit function. At just 59 kB of memory we find that our lightweight representation can capture complex geometric topologies at high resolution compared to uniform signed distance grids or adaptively decimated meshes with similar memory footprints.\nThe comparisons in Figure 8 use geometry converted to a weight-encoded neural implicit in our base configuration, visualized next to the rendered result of a uniformly sampled SDF grid with 203 samples as well as with the original mesh adaptively decimated (Garland & Heckbert, 1997) down to 7600 floats (i.e., vertex and face data). Compared to decimated meshes (our baseline non-uniform format), we observe that weight-encoded neural implicits have similar surface\nquality but with smoother reconstructions due to the continuous (versus piecewise linear) nature of the implicit. Compared to SDFs stored on a grid (our baseline uniform format) we observe far better quality at equal memory. Furthermore, we notice that our approach better captures high frequency surface detail compared to both these representations, often producing results that more closely match the curvature of the original shape.\nWe measure our method’s robustness by converting the Thingi10k ((Zhou & Jacobson, 2016)) dataset and measuring the average surface error (1 / N ∑N\ni=1 |fθ(pi)|) and training loss. We report mean training loss for errors between the true and predicted SDF values at points sampled using our importance metric (Section 2.1.2). This surface error is the sum of errors at points along the shape’s 0-isocontour. These metrics measure both the error at the surface and within the shape’s bounding volume. 
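As a point of reference, the surface-error metric reduces to a few lines (a sketch; model and the 10^5 mesh-sampled surface_points are assumed to be given):

import numpy as np

# Mean |f_theta(p_i)| over points sampled on the ground-truth surface; a
# perfect SDF fit would evaluate to exactly zero at every such point.
def surface_error(model, surface_points):
    return float(np.mean(np.abs(model.predict(surface_points))))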
Errors within the bounding volume decrease rendering performance and/or lead to hole artifacts in the shape during visualization. Surface errors are more evident after meshing the implicit SDF using, i.e., marching cubes. We sample 105 surface points when measuring surface error, and compute loss against a training set of 1M points. We visualize results on the entire Thingi10k dataset in Figure 5. We find that, at our base configuration, 93% of the 105 Thingi10k shapes reach a surface error below 0.003, and no model exceeds 0.01 (worst case of 0.0097; see Fig. 6)." }, { "heading": "4 LIMITATIONS AND FUTURE WORK", "text": "Our default architecture will fail to satisfyingly approximate very topologically or geometrically complex shapes. While increasing the size of the network will generally alleviate this (see Figure 6), it would be interesting to consider cascading or adaptively sized networks. Our L1 loss function encourages the network to match the values of a shape’s signed distance field,\nbut not necessarily its derivatives (cf. Gropp et al. (2020); Sitzmann et al. (2020b)). True SDFs satisfy an Eikonal equation (|∂g/d x| = 1) and this property is sometimes important for downstream tasks. For future work, we would like to investigate whether Eikonal satisfaction can be ensured exactly by construction. With respect to single-shape accuracy, latent-encodings work well in specialized scenarios (e.g., large-networks trained on canonically aligned specialized classes). With respect to shape-space learning, latent-encodings lie in a simpler continuous space than weights, which suffer from transposition and reordering non-injectivities (i.e., multiple weight vectors represent the same implicit). Nevertheless, weight-encodings allow us to faithfully prepare large diverse datasets of ’real-world’ shapes into a vectorizeable representation. We have shown this is simply not possible with existing latent-encodings. We include the full Thingi10k dataset converted to weightencoded neural implicits vectors as a data release2. This vectorized data is ripe for meta-learning future work. Indeed, concurrent work is already exploring this direction Sitzmann et al. (2020a). We hope our consideration of weight-encoded neural implicits as a first-class shape representation encourages their use in computer graphics, geometry processing, machine learning, and beyond.\n2https://github.com/u2ni/ICLR2021" }, { "heading": "A APPENDIX", "text": "A.1 THE WEIGHT-ENCODED NEURAL IMPLICIT FILE FORMAT\nOur compact weight-encoded neural implicit is designed to be effortlessly consumed and integrated into existing graphics and geometry processing pipelines. For each trained model, the chosen architecture and similarity transformaton matrix (since all geometries are normalized to the unit sphere) are written as the first bytes before encoding the learned weights θ into an HDF5 format file.\nFor a fixed architecture, the instructions to evaluate the estimated SDF is the same for any point and any shape. This SIMD property allows multiple geometries to be evaluated in parallel. The fixed storage profiles and memory layout of our learned implicit functions provide consistent query and rendering speeds. We store our model weights using the HDF5 format. This allows easy integration into Tensorflow (below) which can load our model natively. 
We additionally support the loading of arbitrary weight-encoded neural implicits through the "High Five" HDF5 C++ library (https://github.com/BlueBrain/HighFive) for rendering and meshing.

import tensorflow as tf
import numpy as np

# load model "key" dictating architecture. SIMD.
sdfModel = tf.keras.models.model_from_json(open('key.json').read())

# load specific weights for Stanford bunny geometry
sdfModel.load_weights('bunny.h5')

# generate 128x128x128 grid for SDF queries
K = np.linspace(-1.0, 1.0, 128)
grid = np.array([[x, y, z] for x in K for y in K for z in K])

# infer SDF at each point
S = sdfModel.predict(grid)

A.2 ERROR DRIVEN CONVERSION
We fix the architecture during the Thingi10k dataset conversion, resulting in a constant and compact memory footprint. If, however, maintaining a target surface reconstruction quality is of more importance than a fixed memory cost, we can instead shift to an error-driven surface fitting approach (much like classical approaches (Ohtake et al., 2005)), scaling network architecture complexity based on the input geometry. As each generated weight-encoded neural implicit encodes its own architecture, such an approach results in smaller architectures for simpler geometries and larger ones for topologically-complex geometries. We visualize the effect of error-driven optimization in Figure 6, where we perform a simple grid search until reaching a user-desired surface error threshold.
Based on our conversion of the Thingi10k dataset, we find that a majority of models are well represented using our base configuration (Fig. 5) – if desired, geometries that fall within the tails of the complexity distribution can be retrained with larger architectures, again until we reach a desired surface fidelity. This decision can be further informed by whether SIMD and fixed memory access patterns are beneficial to the underlying application.
A.3 SIREN AND FOURIER FEATURES
In an effort to improve the reconstruction quality of our weight-encoded neural implicits, we explored recent work focused on improving an MLP's ability to represent high frequency signals. We experimented with three methods: namely, SIREN activations (Sitzmann et al., 2020b), positional encoding (Mildenhall et al., 2020), and Fourier features (Tancik et al., 2020). Each of these approaches has led to impressive results for high-fidelity reconstructions of 3D surfaces, mitigating the known problem that MLPs learn low frequency signals faster (Rahaman et al., 2019).
Mildenhall et al. (2020) define their positional encodings as,
γ(p) = (sin(2^0 πp), cos(2^0 πp), ..., sin(2^(L−1) πp), cos(2^(L−1) πp)) (8)
where γ is a mapping from R into the higher dimensional space R^(2L).
Tancik et al. (2020) expand on this approach with random Gaussian features, yielding the mapping function,
λ(p) = (cos(2πBp), sin(2πBp)) (9)
where each entry in B ∈ R^(m×d) is sampled from N(0, σ^2), and σ is left as a hyperparameter specific to each problem.
We evaluate both of these approaches by mapping each axis (x, y, z) of our sampled points to the higher dimensional space. We find that when the network architecture is of sufficient width these mappings work exceptionally well. We evaluated γ with various L configurations ranging from 4 to 10.
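For reference, a minimal per-axis implementation of the mapping γ from Eq. (8) might read as follows (a sketch in our notation; the function name is illustrative):

import numpy as np

# Map each coordinate of each point through L sine/cosine frequency bands.
def positional_encode(p, L=10):
    # p: (n, 3) query points; returns (n, 3 * 2L) encoded features
    freqs = (2.0 ** np.arange(L)) * np.pi           # 2^0 pi, ..., 2^(L-1) pi
    angles = p[:, :, None] * freqs                  # (n, 3, L)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(p.shape[0], -1)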
Unfortunately, we find that our lightweight (and intentionally underparameterized) architecture struggles to learn from the augmented input signal. We visualize the effect of positional encodings when L = 10 in Figure 7. Similarly, we see drastic degradation of quality when employing λ to map to a default embedding size of 256 (not shown, as we were unable to march). These approaches are clearly practical methods for reconstructing high-fidelity surfaces, but with our focus on minimizing the number of parameters, the cost of mapping to a higher-dimensional input is too high.
Our experimental setup for evaluating the periodic activation of Sitzmann et al. (2020b) consisted of modifying an existing tensorflow (Abadi et al., 2015) implementation to accept our spatial queries as input and signed distances as targets. We train the SIREN model for 200 epochs with a learning rate of 5e−5 and the same loss as our own configuration. Interestingly, we find that the SIREN model produces smoother approximations of the armadillo's surface (see Figure 7) but lacks fine detail. Once again, when increasing our model complexity to just 8 layers of 64 hidden units, we start to see the benefits of the periodic activation, yielding much better approximations of the surface than our ReLU activation. For our base configuration of just 7553 parameters we choose to continue using ReLU activation, but where high-fidelity weight-encoded neural implicits are required, SIREN should be employed.
A.4 REPRESENTATION COMPACTNESS" } ]
2020
WEIGHT-ENCODED NEURAL IMPLICIT 3D SHAPES
SP:d4831b759e850c4a630024c55aa6ccd957d337e1
[ "The paper proposes a network that operates on features of graphs that are embedded in a d-dim Euclidean space. The paper considers equivariance to a group G that is the direct product of permutations of N points and Euclidean transformations. The features they consider are tensor products of the N-dimensional natural representations of permutations and the d-dimensional standard representation of O(d). From the coordinates, an “isometric adjacency matrix” is created, which is such a tensor. This matrix is combined in various G-equivariant ways with the features and then linearly combined with learnable weights to create new features. These operations are interleaved with non-linearities to form the network. The authors compare to several graph network methods and show competitive performance on several tasks." ]
Graphs are one of the most important data structures for representing pairwise relations between objects. Specifically, a graph embedded in a Euclidean space is essential to solving real problems, such as physical simulations. A crucial requirement for applying graphs in Euclidean spaces to physical simulations is learning and inferring the isometric transformation invariant and equivariant features in a computationally efficient manner. In this paper, we propose a set of transformation invariant and equivariant models based on graph convolutional networks, called IsoGCNs. We demonstrate that the proposed model has a competitive performance compared to state-of-the-art methods on tasks related to geometrical and physical simulation data. Moreover, the proposed model can scale up to graphs with 1M vertices and conduct an inference faster than a conventional finite element analysis, which the existing equivariant models cannot achieve.
[ { "affiliations": [], "name": "Masanobu Horie" }, { "affiliations": [], "name": "Naoki Morita" } ]
[ { "authors": [ "Eman Ahmed", "Alexandre Saint", "Abd El Rahman Shabayek", "Kseniya Cherenkova", "Rig Das", "Gleb Gusev", "Djamila Aouada", "Bjorn Ottersten" ], "title": "A survey on deep learning advances on different 3d data representations", "venue": "arXiv preprint arXiv:1808.01462,", "year": 2018 }, { "authors": [ "Ferran Alet", "Adarsh Keshav Jeewajee", "Maria Bauza Villalonga", "Alberto Rodriguez", "Tomas LozanoPerez", "Leslie Kaelbling" ], "title": "Graph element networks: adaptive, structured computation and memory", "venue": null, "year": 2019 }, { "authors": [ "Igor I Baskin", "Vladimir A Palyulin", "Nikolai S Zefirov" ], "title": "A neural device for searching direct correlations between structures and properties of chemical compounds", "venue": "Journal of chemical information and computer sciences,", "year": 1997 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Kai-Hung Chang", "Chin-Yi Cheng" ], "title": "Learning to simulate and design for structural engineering", "venue": "arXiv preprint arXiv:2003.09103,", "year": 2020 }, { "authors": [ "Ming Chen", "Zhewei Wei", "Zengfeng Huang", "Bolin Ding", "Yaliang Li" ], "title": "Simple and deep graph convolutional networks", "venue": "arXiv preprint arXiv:2007.02133,", "year": 2020 }, { "authors": [ "Wei-Lin Chiang", "Xuanqing Liu", "Si Si", "Yang Li", "Samy Bengio", "Cho-Jui Hsieh" ], "title": "Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Taco Cohen", "Max Welling" ], "title": "Group equivariant convolutional networks", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Taco S Cohen", "Mario Geiger", "Jonas Köhler", "Max Welling" ], "title": "Spherical cnns", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Taco S Cohen", "Maurice Weiler", "Berkay Kicanaoglu", "Max Welling" ], "title": "Gauge equivariant convolutional networks and the icosahedral cnn", "venue": null, "year": 2019 }, { "authors": [ "Matthias Fey", "Jan E. 
Lenssen" ], "title": "Fast graph representation learning with PyTorch Geometric", "venue": "In ICLR Workshop on Representation Learning on Graphs and Manifolds,", "year": 2019 }, { "authors": [ "Fabian Fuchs", "Daniel Worrall", "Volker Fischer", "Max Welling" ], "title": "Se (3)-transformers: 3d rototranslation equivariant attention networks", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Christophe Geuzaine", "Jean-François Remacle" ], "title": "Gmsh: a three-dimensional finite element mesh generator with built-in pre- and post-processing facilities", "venue": "International Journal for Numerical Methods in Engineering,", "year": 2009 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Marco Gori", "Gabriele Monfardini", "Franco Scarselli" ], "title": "A new model for learning in graph domains", "venue": "In Proceedings", "year": 2005 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Yu Ihara", "Gaku Hashimoto", "Hiroshi Okuda" ], "title": "Web-based integrated cloud cae platform for largescale finite element analysis", "venue": "Mechanical Engineering Letters,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Johannes Klicpera", "Janek Groß", "Stephan Günnemann" ], "title": "Directional message passing for molecular graphs", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Sebastian Koch", "Albert Matveev", "Zhongshi Jiang", "Francis Williams", "Alexey Artemov", "Evgeny Burnaev", "Marc Alexa", "Denis Zorin", "Daniele Panozzo" ], "title": "Abc: A big cad model dataset for geometric deep learning", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Risi Kondor" ], "title": "N-body networks: a covariant hierarchical neural network architecture for learning atomic potentials", "venue": "arXiv preprint arXiv:1803.01588,", "year": 2018 }, { "authors": [ "Haggai Maron", "Heli Ben-Hamu", "Nadav Shamir", "Yaron Lipman" ], "title": "Invariant and equivariant graph networks", "venue": "arXiv preprint arXiv:1812.09902,", "year": 2018 }, { "authors": [ "Naoki Morita", "Kazuo Yonekura", "Ichiro Yasuzumi", "Mitsuyoshi Tsunori", "Gaku Hashimoto", "Hiroshi Okuda" ], "title": "Development of 3× 3 dof blocking structural elements to enhance the computational intensity of iterative linear solver", "venue": "Mechanical Engineering Letters,", "year": 2016 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In Proceedings of the 27th international conference on machine learning", "year": 2010 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Nicolas Heess", "Jost Tobias Springenberg", "Josh Merel", "Martin Riedmiller", "Raia Hadsell", "Peter Battaglia" ], 
"title": "Graph networks as learnable physics engines for inference and control", "venue": "arXiv preprint arXiv:1806.01242,", "year": 2018 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Victor Bapst", "Kyle Cranmer", "Peter Battaglia" ], "title": "Hamiltonian graph networks with ode integrators", "venue": "arXiv preprint arXiv:1909.12790,", "year": 2019 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Jonathan Godwin", "Tobias Pfaff", "Rex Ying", "Jure Leskovec", "Peter W Battaglia" ], "title": "Learning to simulate complex physics with graph networks", "venue": "arXiv preprint arXiv:2002.09405,", "year": 2020 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2008 }, { "authors": [ "Alessandro Sperduti", "Antonina Starita" ], "title": "Supervised neural networks for the classification of structures", "venue": "IEEE Transactions on Neural Networks,", "year": 1997 }, { "authors": [ "Blair Swartz", "Burton Wendroff" ], "title": "Generalized finite-difference schemes", "venue": "Mathematics of Computation,", "year": 1969 }, { "authors": [ "Tasuku Tamai", "Seiichi Koshizuka" ], "title": "Least squares moving particle semi-implicit method", "venue": "Computational Particle Mechanics,", "year": 2014 }, { "authors": [ "Nathaniel Thomas", "Tess Smidt", "Steven Kearnes", "Lusann Yang", "Li Li", "Kai Kohlhoff", "Patrick Riley" ], "title": "Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds", "venue": "arXiv preprint arXiv:1802.08219,", "year": 2018 }, { "authors": [ "Maurice Weiler", "Mario Geiger", "Max Welling", "Wouter Boomsma", "Taco S Cohen" ], "title": "3d steerable cnns: Learning rotationally equivariant features in volumetric data", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Felix Wu", "Amauri Souza", "Tianyi Zhang", "Christopher Fifty", "Tao Yu", "Kilian Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "arXiv preprint arXiv:1810.00826,", "year": 2018 }, { "authors": [ "Jiaxuan You", "Rex Ying", "Jure Leskovec" ], "title": "Position-aware graph neural networks", "venue": "arXiv preprint arXiv:1906.04817,", "year": 2019 }, { "authors": [], "title": "rank-0 tensor (scalar) at the ith vertex. Let us assume a partial derivative model of a rank-0 tensor φ at the ith vertex regarding the kth axis (∂φ/∂xk)i ∈ R", "venue": null, "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph-structured data embedded in Euclidean spaces can be utilized in many different fields such as object detection, structural chemistry analysis, and physical simulations. Graph neural networks (GNNs) have been introduced to deal with such data. The crucial properties of GNNs include permutation invariance and equivariance. Besides permutations, isometric transformation invariance and equivariance must be addressed when considering graphs in Euclidean spaces because many properties of objects in the Euclidean space do not change under translation and rotation. Due to such invariance and equivariance, 1) the interpretation of the model is facilitated; 2) the output of the model is stabilized and predictable; and 3) the training is rendered efficient by eliminating the necessity of data augmentation as discussed in the literature (Thomas et al., 2018; Weiler et al.," }, { "heading": "2018; Fuchs et al., 2020).", "text": "Isometric transformation invariance and equivariance are inevitable, especially when applied to physical simulations, because every physical quantity and physical law is either invariant or equivariant to such a transformation. Another essential requirement for such applications is computational efficiency because the primary objective of learning a physical simulation is to replace a computationally expensive simulation method with a faster machine learning model.\nIn the present paper, we propose IsoGCNs, a set of simple yet powerful models that provide computationally-efficient isometric transformation invariance and equivariance based on graph convolutional networks (GCNs) (Kipf & Welling, 2017). Specifically, by simply tweaking the definition of an adjacency matrix, the proposed model can realize isometric transformation invariance. Because the proposed approach relies on graphs, it can deal with the complex shapes that are usually presented using mesh or point cloud data structures. Besides, a specific form of the IsoGCN layer can be regarded as a spatial differential operator that is essential for describing physical laws. In addition, we have shown that the proposed approach is computationally efficient in terms of processing graphs\nwith up to 1M vertices that are often presented in real physical simulations. Moreover, the proposed model exhibited faster inference compared to a conventional finite element analysis approach at the same level of accuracy. Therefore, an IsoGCN can suitably replace physical simulations regarding its power to express physical laws and faster, scalable computation. The corresponding implementation and the dataset are available online1.\nThe main contributions of the present paper can be summarized as follows:\n• We construct isometric invariant and equivariant GCNs, called IsoGCNs for the specified input and output tensor ranks.\n• We demonstrate that an IsoGCN model enjoys competitive performance against state-ofthe-art baseline models on the considered tasks related to physical simulations.\n• We confirm that IsoGCNs are scalable to graphs with 1M vertices and achieve inference considerably faster than conventional finite element analysis." }, { "heading": "2 RELATED WORK", "text": "Graph neural networks. The concept of a GNN was first proposed by Baskin et al. (1997); Sperduti & Starita (1997) and then improved by (Gori et al., 2005; Scarselli et al., 2008). 
Although many variants of GNNs have been proposed, these models have been unified under the concept of message passing neural networks (Gilmer et al., 2017). Generally, message passing is computed with nonlinear neural networks, which can incur a tremendous computational cost. In contrast, the GCN developed by Kipf & Welling (2017) is a considerable simplification of a GNN that uses a linear message passing scheme expressed as
H_out = σ(Â H_in W), (1)
where H_in (H_out) is an input (output) feature of the lth layer, Â is a renormalized adjacency matrix with self-loops, and W is a trainable weight. A GCN, among the variants of GNNs, is essential to the present study because the proposed model is based on GCNs for computational efficiency.
Invariant and equivariant neural networks. A function f : X → Y is said to be equivariant to a group G when f(g · x) = g · f(x) for all g ∈ G and x ∈ X, assuming that the group G acts on both X and Y. In particular, when f(g · x) = f(x), f is said to be invariant to the group G. Group equivariant convolutional neural networks were first proposed by Cohen & Welling (2016) for discrete groups. Subsequent studies have extended such networks to continuous groups (Cohen et al., 2018), three-dimensional data (Weiler et al., 2018), and general manifolds (Cohen et al., 2019). These methods are based on CNNs; thus, they cannot handle mesh or point cloud data structures as is. Specifically, 3D steerable CNNs (Weiler et al., 2018) use voxels (regular grids), which, though relatively easy to handle, are not efficient because they represent both occupied and non-occupied parts of an object (Ahmed et al., 2018). In addition, a voxelized object tends to lose the smoothness of its shape, which can lead to drastically different behavior in a physical simulation, as typically observed in structural analysis and computational fluid dynamics.
Thomas et al. (2018); Kondor (2018) discussed how to provide rotation equivariance to point clouds. Specifically, the tensor field network (TFN) (Thomas et al., 2018) is a point cloud based rotation and translation equivariant neural network, the layer of which can be written as
H̃^(l)_{out,i} = w^{ll} H̃^(l)_{in,i} + Σ_{k≥0} Σ_{j≠i} W^{lk}(x_j − x_i) H̃^(k)_{in,j}, (2)
W^{lk}(x) = Σ_{J=|k−l|}^{k+l} φ^{lk}_J(‖x‖) Σ_{m=−J}^{J} Y_{Jm}(x/‖x‖) Q^{lk}_{Jm}, (3)
where H̃^(l)_{in,i} (H̃^(l)_{out,i}) is a type-l input (output) feature at the ith vertex, φ^{lk}_J : R≥0 → R is a trainable function, Y_{Jm} is the mth component of the Jth spherical harmonics, and Q^{lk}_{Jm} is the Clebsch-Gordan coefficient. The SE(3)-Transformer (Fuchs et al., 2020) is a variant of the TFN with self-attention. These models achieve high expressibility based on spherical harmonics and message passing with nonlinear neural networks. However, for this reason, considerable computational resources
are required. In contrast, the present study allows a significant reduction in the computational costs because it eliminates spherical harmonics and nonlinear message passing. From this perspective, IsoGCNs are also regarded as a simplification of the TFN, as seen in equation 14.
1https://github.com/yellowshippo/isogcn-iclr2021
Physical simulations using GNNs. Several related studies, including those by Sanchez-Gonzalez et al. (2018; 2019); Alet et al. (2019); Chang & Cheng (2020) focused on applying GNNs to learn physical simulations.
These approaches allowed the physical information to be introduced to GNNs; however, addressing isometric transformation equivariance was out of the scope of their research.\nIn the present study, we incorporate isometric transformation invariance and equivariance into GCNs, thereby, ensuring the stability of the training and inference under isometric transformation. Moreover, the proposed approach is efficient in processing large graphs with up to 1M vertices that have a sufficient number of degrees of freedom to express complex shapes." }, { "heading": "3 ISOMETRIC TRANSFORMATION INVARIANT AND EQUIVARIANT GRAPH CONVOLUTIONAL LAYERS", "text": "In this section, we discuss how to construct IsoGCN layers that correspond to the isometric invariant and equivariant GCN layers. To formulate a model, we assume that: 1) only attributes associated with vertices and not edges; and 2) graphs do not contain self-loops. Here, G = (V, E) denotes a graph and d denotes the dimension of a Euclidean space. In this paper, we refer to tensor as geometric tensors, and we consider a (discrete) rank-p tensor field H(p) ∈ R|V|×f×dp , where |V| denotes the number of vertices and f ∈ Z+ (Z+ denotes the positive integers). Here, f denotes the number of features (channels) of H(p), as shown in Figure 1 (a). With the indices, we denote H(p)i;g;k1k2...kp , where i permutes under the permutation of vertices and k1, . . . , kp refers to the Euclidean representation. Thus, under the permutation, π : H(p)i;g;k1k2...kp 7→ H (p) π(i);g;k1k2...kp , and under orthogonal\ntransformation, U : H(p)i;g;k1k2...kp 7→ ∑ l1,l2,...,lp Uk1l1Uk2l2 . . . UkplpH (p) i;g;l1l2...lp ." }, { "heading": "3.1 CONSTRUCTION OF AN ISOMETRIC ADJACENCY MATRIX", "text": "Before constructing an IsoGCN, an isometric adjacency matrix (IsoAM), which is at the core of the IsoGCN concept must be defined. The proof of each proposition can be found in Appendix B.\nAn IsoAM G ∈ R|V|×|V|×d is defined as: Rd 3 Gij;;: := ∑ k,l∈V,k 6=l Tijkl(xk − xl), (4)\nwhere Gij;;: is a slice in the spatial index of G, xi ∈ Rd is the position of the ith vertex (rank1 tensor), and Tijkl ∈ Rd×d is an untrainable transformation invariant and orthogonal transformation equivariant rank-2 tensor. Note that we denote Gij;;k to be consistent with the no-\ntation of H(p)i;g;k1k2...kp because i and j permutes under the vertex permutation and k represents the spatial index while the number of features is always 1. The IsoAM can be viewed as a weighted adjacency matrix for each direction and reflects spatial information while the usual weighted adjacency matrix cannot because a graph has only one adjacency matrix. If the size of the set {Gij;;: 6= 0}j is greater than or equal to d, then it can be deemed to be a frame, which is a generalization of a basis. For the simplest case, one can define Tijkl = δilδjkAijI (Figure 1 (b)), where δij is the Kronecker delta, A is the adjacency matrix of the graph, and I is the identity matrix that is the simplest rank-2 tensor. In another case, Tijkl can be determined from the geometry of a graph, as defined in equation 16. Nevertheless, in the bulk of this section, we retain Tijkl abstract to cover various forms of interaction, such as position-aware GNNs (You et al., 2019). Here, G is composed of only untrainable parameters and thus can be determined before training.\nProposition 3.1. 
IsoAM defined in equation 4 is both translation invariant and orthogonal transformation equivariant, i.e., for any isometric transformation ∀t ∈ R3,U ∈ O(d), T : x 7→ Ux+ t,\nT : Gij;;k 7→ ∑ l UklGij;;l. (5)\nBased on the definition of the GCN layer in the equation 1, let G ∗ H(0) ∈ R|V|×f×d denote the convolution between G and the rank-0 tensor field H(0) ∈ R|V|×f (f ∈ Z+) as follows:\n(G ∗ H(0))i;g;k := ∑ j Gij;;kH (0) j;g;. (6)\nWith a rank-1 tensor field H(1) ∈ R|V|×f×d, let G H(1) ∈ R|V|×f and G G ∈ R|V|×|V| denote the contractions which are defined as follows:\n(G H(1))i;g; := ∑ j,k Gij;;kH (1) j;g;k, (G G)il;; := ∑ j,k Gij;;kGjl;k. (7)\nThe contraction of IsoAMs G G can be interpreted as the inner product of each component in the IsoAMs. Thus, the subsequent proposition follows. Proposition 3.2. The contraction of IsoAMs G G is isometric transformation invariant, i.e., for any isometric transformation ∀t ∈ R3,U ∈ O(d), T : x 7→ Ux + t, G G 7→ G G.\nWith a rank-p tensor field H(p) ∈ R|V|×f×dp , let G⊗H(p) ∈ R|V|×f×d1+p . and G⊗G ∈ R|V|×|V|×d2 denote the tensor products defined as follows:\n(G⊗ H(p))i;g;km1m2...mp := ∑ j Gij;;kH (p) j;g;m1m2...mp , (8)\n(G⊗G)il;;k1k2 := ∑ j Gij;;k1Gjl;;k2 . (9)\nThe tensor product of IsoAMs G⊗G can be interpreted as the tensor product of each of the IsoAMs components. Thus, the subsequent proposition follows: Proposition 3.3. The tensor product of the IsoAMs G⊗G is isometric transformation equivariant in terms of the rank-2 tensor, i.e., for any isometric transformation ∀t ∈ R3,U ∈ O(d), T : x 7→ Ux + t, and ∀i, j ∈ 1, . . . , |V|, (G⊗G)ij;;k1k2 7→ Uk1l1Uk2l2(G⊗G)ij;;l1l2 .\nThis proposition is easily generalized to the tensors of higher ranks by defining the pth tensor power of G as follows: ⊗0 G = 1, ⊗1 G = G, and ⊗p G = ⊗p−1 G⊗G. Namely, ⊗p G is isometric transformation equivariant in terms of rank-p tensor. Therefore, one can see that (\n⊗p G)⊗ H(q) = ( ⊗p−1 G) ⊗ (G ⊗ H(q)). Moreover, the convolution can be generalized for ⊗p G and the rank-0 tensor field H(0) ∈ R|V|×f as follows:[( p⊗\nG ) ∗ H(0) ] i;g;k1k2...kp = ∑ j ( p⊗ G ) ij;;k1k2...kp H (0) j;g;. (10)" }, { "heading": "The contraction can be generalized for", "text": "⊗p G and the rank-q tensor field H(q) ∈ R|V|×f×dq (p ≥ q) as specified below:[( p⊗\nG ) H(q) ] i;g;k1k2...kp−q = ∑ j,m1,m2,...,mq ( p⊗ G ) ij;;k1k2...kp−qm1m2...mq H(q)j;g;m1m2...mq .\n(11) For the case p < q, the contraction can be defined similarly." }, { "heading": "3.2 CONSTRUCTION OF ISOGCN", "text": "Using the operations defined above, we can construct IsoGCN layers, which take the tensor field of any rank as input, and output the tensor field of any rank, which can differ from those of the input." }, { "heading": "In addition, one can show that these layers are also equivariant under the vertex permutation, as", "text": "discussed in Maron et al. (2018)." }, { "heading": "3.2.1 ISOMETRIC TRANSFORMATION INVARIANT LAYER", "text": "" }, { "heading": "As can be seen in Proposition 3.1, the contraction of IsoAMs is isometric transformation invariant. 
Therefore, for the isometric transformation invariant layer with a rank-0 input tensor field", "text": "f : R|V|×fin 3 H(0)in 7→ H (0) out ∈ R|V|×fout (fin, fout ∈ Z+), the activation function σ, and the\ntrainable parameter matrix W ∈ Rfin×fout can be constructed as H(0)out = σ ( (G G)H(0)in W ) .\nBy defining L := G G ∈ R|V|×|V|, it can be simplified as H(0)out = σ ( LH (0) in W ) , which has the\nsame form as a GCN (equation 1), with the exception that  is replaced with L.\nAn isometric transformation invariant layer with the rank-p input tensor field H(p)in ∈ R|V|×fin×d p can be formulated as H(0)out = Fp→0(H (p) in ) = σ ([⊗p G H(p)in ]W) . If p = 1, such approaches utilize the inner products of the vectors in Rd, these operations correspond to the extractions of a relative distance and an angle of each pair of vertices, which are employed in Klicpera et al. (2020)." }, { "heading": "3.2.2 ISOMETRIC TRANSFORMATION EQUIVARIANT LAYER", "text": "To construct an isometric transformation equivariant layer, one can use linear transformation, convolution and tensor product to the input tensors. If both the input and the output tensor ranks are greater than 0, one can apply neither nonlinear activation nor bias addition because these operations will cause an inappropriate distortion of the isometry because isometric transformation does not commute with them in general. However, a conversion that uses only a linear transformation, convolution, and tensor product does not have nonlinearity, which limits the predictive performance of the model. To add nonlinearity to such a conversion, we can first convert the input tensors to rank-0 ones, apply nonlinear activations, and then multiply them to the higher rank tensors.\nThe nonlinear isometric transformation equivariant layer with the rank-m input tensor field H(m)in ∈ R|V|×fin×dm and the rank-l (m ≤ l) output tensor H(l)out ∈ R|V|×fout×d l can be defined as:\nH(l)out = Fm→0 ( H(m)in ) × Fm→l ( H(m)in ) , Fm→l ( H(m)in ) =\n[ l−m⊗\nG ] ⊗ H(m)in Wml, (12)\nwhere × denotes multiplication with broadcasting and Wml ∈ Rfin×fout are trainable weight matrices multiplied in the feature direction. If m = 0, we regard G⊗ H(0) as G ∗ H(0). If m = l, one can add the residual connection (He et al., 2016) in equation 12. If m > l,\nH(l)out = Fm→0 ( H(m)in ) × Fm→l ( H(m)in ) , Fm→l ( H(m)in ) =\n[ m−l⊗\nG ] H(m)in Wml. (13)\nIn general, the nonlinear isometric transformation equivariant layer with the rank-0 to rank-M input tensor field {H(m)in }Mm=0 and the rank-l output tensor field H (l) out can be defined as:\nH(l)out = H (l) in W + M∑ m=0 fgather ({ Fk→0(H (k) in ) }M k=0 ) × Fm→l ( H(m)in ) , (14)\nwhere fgather denotes a function such as summation, product and concatenation in the feature direction. One can see that this layer is similar to that in the TFN (equation 2), while there are no spherical harmonics and trainable message passing." }, { "heading": "To be exact, the output of the layer defined above is translation invariant. To output translation equivariant variables such as the vertex positions after deformation (which change accordingly with", "text": "the translation of the input graph), one can first define the reference vertex position xref for each graph, then compute the translation invariant output using equation 14, and finally, add xref to the output. For more detailed information on IsoGCN modeling, see Appendix D." 
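To make these compositions concrete, a minimal dense NumPy sketch of the convolution (equation 6), the contraction (equation 7), the invariant layer, and one instantiation of equation 12 with m = 0, l = 1, and ReLU as σ follows (illustrative only; G is a precomputed IsoAM of shape (|V|, |V|, d), H0 has shape (|V|, f), and the weight shapes (f_in, f_out) are assumptions):

import numpy as np

def conv(G, H0):                      # equation 6: (G * H0)_{i;g;k}
    return np.einsum('ijk,jg->igk', G, H0)

def contract(G, H1):                  # equation 7: (G . H1)_{i;g}
    return np.einsum('ijk,jgk->ig', G, H1)

def invariant_layer(G, H0, W):        # H_out = sigma(L H0 W) with L = G . G
    L = np.einsum('ijk,jlk->il', G, G)
    return np.maximum(L @ H0 @ W, 0.0)

def rank0_to_rank1(G, H0, W0, W1):    # nonlinear invariant gate x equivariant part
    gate = np.maximum(np.einsum('ijk,jlk->il', G, G) @ H0 @ W0, 0.0)  # (|V|, f_out)
    return gate[:, :, None] * conv(G, H0 @ W1)                        # (|V|, f_out, d)

Since the gate is isometric transformation invariant and conv rotates only through the spatial index of G, the product remains equivariant.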
}, { "heading": "3.3 EXAMPLE OF ISOAM", "text": "The IsoGCN G is defined in a general form for the propositions to work with various classes of graph. In this section, we concretize the concept of the IsoAM to apply an IsoGCN\nto mesh structured data. Here, a mesh is regarded as a graph regarding the points in the mesh as vertices of the graph and assuming two vertices are connected when they share the same cell. A concrete instance of IsoAM D̃,D ∈ R|V|×|V|×d is defined as follows:\nTable 1: Correspondence between the differential operators and the expressions using the IsoAM D̃.\nDifferential op. Expression Gradient D̃ ∗ H(0) Divergence D̃ H(1) Laplacian D̃ D̃H(0) Jacobian D̃⊗ H(1) Hessian D̃⊗ D̃ ∗ H(0)\nD̃ij;;k = Dij;;k − δij ∑ l Dil;;k, (15)\nDij;;: = M−1i xj − xi ‖xj − xi‖2 wijAij(m), (16)\nMi = ∑ l xl − xi ‖xl − xi‖ ⊗ xl − xi‖xl − xi‖ wilAil(m), (17)\nwhere R|V|×|V| 3 A(m) := min (∑mk=1 Ak, 1) is an adjacency matrix up to m hops and wij ∈ R is an untrainable weight between the ith and jth vertices that is determined depending on the tasks2. By regarding Tijkl = δilδjkM −1 i wijAij(m)/‖xj − xi‖2 in equation 4, one can see that D is qualified as an IsoAM. Because a linear combination of IsoAMs is also an IsoAM, D̃ is an IsoAM. Thus, they provide both translation invariance and orthogonal transformation equivariance. D̃ can be obtained only from the mesh geometry information, thus can be computed in the preprocessing step.\nHere, D̃ is designed such that it corresponds to the gradient operator model used in physical simulations (Tamai & Koshizuka, 2014; Swartz & Wendroff, 1969). As presented in Table 1 and Appendix C, D̃ is closely related to many differential operators, such as the gradient, divergence, Laplacian, Jacobian, and Hessian. Therefore, the considered IsoAM plays an essential role in constructing neural network models that are capable of learning differential equations." }, { "heading": "4 EXPERIMENT", "text": "To test the applicability of the proposed model, we composed the following two datasets: 1) a differential operator dataset of grid meshes; and 2) an anisotropic nonlinear heat equation dataset of meshes generated from CAD data. In this section, we discuss our machine learning model, the definition of the problem, and the results for each dataset.\nUsing D̃ defined in Section 3.3, we constructed a neural network model considering an encodeprocess-decode configuration (Battaglia et al., 2018). The encoder and decoder were comprised of component-wise MLPs and tensor operations. For each task, we tested m = 2, 5 in equation 16 to investigate the effect of the number of hops considered. In addition to the GCN (Kipf & Welling, 2017), we chose GIN (Xu et al., 2018), SGCN (Wu et al., 2019), Cluster-GCN (Chiang et al., 2019), and GCNII (Chen et al., 2020) as GCN variant baseline models. For the equivariant models, we chose the TFN (Thomas et al., 2018) and SE(3)-Transformer (Fuchs et al., 2020) as the baseline. We implemented these models using PyTorch 1.6.0 (Paszke et al., 2019) and PyTorch Geometric 1.6.1 (Fey & Lenssen, 2019). For both the TFN and SE(3)-Transformer, we used implementation of Fuchs et al. (2020) 3 because the computation of the TFN is considerably faster than the original implementation, as claimed in Fuchs et al. (2020). For each experiment, we minimized the mean squared loss using the Adam optimizer (Kingma & Ba, 2014). The corresponding implementation and the dataset will be made available online. 
The details of the experiments can be found in Appendix E and F." }, { "heading": "4.1 DIFFERENTIAL OPERATOR DATASET", "text": "To demonstrate the expressive power of IsoGCNs, we created a dataset to learn the differential operators. We first generated a pseudo-2D grid mesh randomly, with only one cell in the Z direction and 10 to 100 cells in the X and Y directions. We then generated scalar fields on the grid meshes and analytically calculated the gradient, Laplacian, and Hessian fields. We generated 100 samples for each of the train, validation, and test datasets. For simplicity, we set w_ij = 1 in equation 16 for all (i, j) ∈ E. To compare the performance with the GCN models, we simply replaced an IsoGCN layer with a GCN or GCN-variant layer while keeping the number of hops m the same to enable a fair comparison. We adjusted the hyperparameters of the equivariant models to ensure that the number of parameters in each was almost the same as that in the IsoGCN model. For more details regarding the model architecture, see Appendix E. We conducted the experiments using the following settings: 1) inputting the scalar field and predicting the gradient field (rank-0 → rank-1 tensor); 2) inputting the scalar field and predicting the Hessian field (rank-0 → rank-2 tensor); 3) inputting the gradient field and predicting the Laplacian field (rank-1 → rank-0 tensor); and 4) inputting the gradient field and predicting the Hessian field (rank-1 → rank-2 tensor). Figure 2 and Table 2 present a visualization and a comparison of predictive performance, respectively. The results show that the IsoGCN outperforms the other GCN models in all settings. This is because the IsoGCN model has information on the relative positions of adjacent vertices and thus understands the direction of the gradient, whereas the other GCN models cannot distinguish where the adjacencies are, making it nearly impossible to predict the gradient directions. Adding the vertex positions to the input features of the other GCN models improved their performance; however, as the vertex position is not a translation invariant feature, it could degrade the predictive performance of the models. Thus, we did not input x as a vertex feature to the IsoGCN model or the other equivariant models, to retain their isometric transformation invariant and equivariant natures. IsoGCNs perform competitively against the other equivariant models with shorter inference time, as shown in Table 7. As mentioned in Section 3.3, D̃ corresponds to the gradient operator, which is now confirmed in practice.

2M_i is invertible when the number of independent vectors in {x_l − x_i}_l is greater than or equal to the space dimension d, which is true for common meshes, e.g., a solid mesh in 3D Euclidean space.
3https://github.com/FabianFuchsML/se3-transformer-public" }, { "heading": "4.2 ANISOTROPIC NONLINEAR HEAT EQUATION DATASET", "text": "To apply the proposed model to a real problem, we adopted the anisotropic nonlinear heat equation. We considered the task of predicting the time evolution of the temperature field based on the initial temperature field, material property, and mesh geometry information as inputs.
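The encode-process-decode configuration used throughout these experiments can be sketched schematically as follows. The sizes, depth, and exact wiring here are illustrative assumptions, not the paper's configuration (see Figures 4 and 6 for the actual architectures); G again stands for a dense IsoAM tensor.

```python
import torch
import torch.nn as nn

class IsoGCNRegressor(nn.Module):
    """Encode (pointwise MLP) -> IsoAM tensor operations -> decode (pointwise MLP)."""
    def __init__(self, f_in, f_hidden, f_out):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(f_in, f_hidden), nn.Tanh(),
                                     nn.Linear(f_hidden, f_hidden))
        # No bias on the weight applied to the rank-1 field, to keep equivariance.
        self.w_mid = nn.Linear(f_hidden, f_hidden, bias=False)
        self.decoder = nn.Sequential(nn.Linear(f_hidden, f_hidden), nn.Tanh(),
                                     nn.Linear(f_hidden, f_out))

    def forward(self, G, h):                               # G: (n, n, d), h: (n, f_in)
        h = self.encoder(h)                                # component-wise, rank-0
        v = torch.einsum('ijk,jf->ifk', G, h)              # G * H: rank-0 -> rank-1
        v = self.w_mid(v.transpose(1, 2)).transpose(1, 2)  # mix features only
        h = torch.einsum('ijk,jfk->if', G, v)              # G . H: rank-1 -> rank-0
        return self.decoder(h)                             # e.g. a Laplacian-like target

n, d = 8, 3
model = IsoGCNRegressor(f_in=1, f_hidden=16, f_out=1)
out = model(torch.randn(n, n, d), torch.randn(n, 1))       # -> shape (n, 1)
```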
We randomly selected 82 CAD shapes from the first 200 shapes of the ABC dataset (Koch et al., 2019), generated first-order tetrahedral meshes using the mesh generator program Gmsh (Geuzaine & Remacle, 2009), randomly set the initial temperature and the anisotropic thermal conductivity, and finally conducted a finite element analysis (FEA) using the FEA program FrontISTR4 (Morita et al., 2016; Ihara et al., 2017).

4https://github.com/FrontISTR/FrontISTR. We applied a private update to FrontISTR to deal with the anisotropic heat problem, which will also be made available online.

For this task, we set w_ij = V_j^effective / V_i^effective, where V_i^effective denotes the effective volume of the ith vertex (equation 46). Similarly to the differential operator dataset, we tested the numbers of hops m = 2, 5. However, because we put four IsoAM operations in one model, the number of hops visible from the model is 8 (m = 2) or 20 (m = 5). As with the differential operator dataset, we replaced an IsoGCN layer accordingly for the GCN or GCN-variant models. In the case of m = 2, we reduced the number of parameters for each of the equivariant models to fewer than in the IsoGCN model because they exceeded the memory of the GPU (an NVIDIA Tesla V100 with 32 GiB memory) with the same number of parameters. In the case of m = 5, neither the TFN nor the SE(3)-Transformer fits into the memory of the GPU, even with the number of parameters equal to 10. For more details about the dataset and the model, see Appendix F.

Figure 3 and Table 3 present the results of the qualitative and quantitative comparisons on the test dataset. The IsoGCN demonstrably outperforms all other baseline models. Moreover, owing to the computationally efficient isometric transformation invariant nature of IsoGCNs, it also achieved high prediction performance on meshes with significantly larger graphs than those considered in the training dataset. The IsoGCN can scale up to 1M vertices, which is practical and considerably greater than that reported in Sanchez-Gonzalez et al. (2020). Therefore, we conclude that IsoGCN models can be trained on relatively small meshes5 to save training time and then used for inference on larger meshes without significant performance deterioration.

5However, it should also be sufficiently large to express sample shapes and fields.

Table 4 reports the preprocessing and inference computation time using the equivariant models with m = 2 as the number of hops and FEA using FrontISTR 5.0.0. We varied the time step (∆t =
Therefore, IsoGCN must be the first choice to learn physical simulations because of its computational efficiency as well as isometric transformation invariance and equivariance. Our demonstrations were conducted on the mesh structured dataset based on the FEA results. However, we expect IsoGCNs to be applied to various domains,\nsuch as object detection, molecular property prediction, and physical simulations using particles." }, { "heading": "ACKNOWLEDGMENTS", "text": "The authors gratefully acknowledge Takanori Maehara for his helpful advice and NVIDIA for hardware donations. This work was supported by JSPS KAKENHI Grant Number 19H01098." }, { "heading": "Eman Ahmed, Alexandre Saint, Abd El Rahman Shabayek, Kseniya Cherenkova, Rig Das, Gleb", "text": "Gusev, Djamila Aouada, and Bjorn Ottersten. A survey on deep learning advances on different 3d data representations. arXiv preprint arXiv:1808.01462, 2018.\nFerran Alet, Adarsh Keshav Jeewajee, Maria Bauza Villalonga, Alberto Rodriguez, Tomas LozanoPerez, and Leslie Kaelbling. Graph element networks: adaptive, structured computation and memory. In ICML, 2019.\nIgor I Baskin, Vladimir A Palyulin, and Nikolai S Zefirov. A neural device for searching direct correlations between structures and properties of chemical compounds. Journal of chemical information and computer sciences, 37(4):715–721, 1997." }, { "heading": "Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi,", "text": "Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.\nKai-Hung Chang and Chin-Yi Cheng. Learning to simulate and design for structural engineering. arXiv preprint arXiv:2003.09103, 2020.\nMing Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. arXiv preprint arXiv:2007.02133, 2020." }, { "heading": "Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. Cluster-gcn: An", "text": "efficient algorithm for training deep and large graph convolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 257–266, 2019.\nTaco Cohen and Max Welling. Group equivariant convolutional networks. In International conference on machine learning, pp. 2990–2999, 2016.\nTaco S Cohen, Mario Geiger, Jonas Köhler, and Max Welling. Spherical cnns. In ICLR, 2018.\nTaco S Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equivariant convolutional networks and the icosahedral cnn. ICML, 2019.\nMatthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.\nFabian Fuchs, Daniel Worrall, Volker Fischer, and Max Welling. Se (3)-transformers: 3d rototranslation equivariant attention networks. Advances in Neural Information Processing Systems, 33, 2020.\nChristophe Geuzaine and Jean-François Remacle. Gmsh: a three-dimensional finite element mesh generator with built-in pre- and post-processing facilities. International Journal for Numerical Methods in Engineering, 79(11):1309–1331, 2009." }, { "heading": "Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural", "text": "message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1263–1272. JMLR. 
org, 2017.\nMarco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005., volume 2, pp. 729–734. IEEE, 2005.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.\nYu Ihara, Gaku Hashimoto, and Hiroshi Okuda. Web-based integrated cloud cae platform for largescale finite element analysis. Mechanical Engineering Letters, 3:17–00520, 2017.\nDiederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.\nThomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.\nJohannes Klicpera, Janek Groß, and Stephan Günnemann. Directional message passing for molecular graphs. In ICLR, 2020.\nSebastian Koch, Albert Matveev, Zhongshi Jiang, Francis Williams, Alexey Artemov, Evgeny Burnaev, Marc Alexa, Denis Zorin, and Daniele Panozzo. Abc: A big cad model dataset for geometric deep learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.\nRisi Kondor. N-body networks: a covariant hierarchical neural network architecture for learning atomic potentials. arXiv preprint arXiv:1803.01588, 2018.\nHaggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. arXiv preprint arXiv:1812.09902, 2018.\nNaoki Morita, Kazuo Yonekura, Ichiro Yasuzumi, Mitsuyoshi Tsunori, Gaku Hashimoto, and Hiroshi Okuda. Development of 3× 3 dof blocking structural elements to enhance the computational intensity of iterative linear solver. Mechanical Engineering Letters, 2:16–00082, 2016.\nVinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pp. 807–814, 2010." }, { "heading": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor", "text": "Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 8024–8035. Curran Associates, Inc., 2019." }, { "heading": "Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin Riedmiller,", "text": "Raia Hadsell, and Peter Battaglia. Graph networks as learnable physics engines for inference and control. arXiv preprint arXiv:1806.01242, 2018." }, { "heading": "Alvaro Sanchez-Gonzalez, Victor Bapst, Kyle Cranmer, and Peter Battaglia. Hamiltonian graph", "text": "networks with ode integrators. arXiv preprint arXiv:1909.12790, 2019.\nAlvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, and Peter W Battaglia. Learning to simulate complex physics with graph networks. arXiv preprint arXiv:2002.09405, 2020." }, { "heading": "Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini.", "text": "The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008.\nAlessandro Sperduti and Antonina Starita. 
Supervised neural networks for the classification of structures. IEEE Transactions on Neural Networks, 8(3):714–735, 1997.\nBlair Swartz and Burton Wendroff. Generalized finite-difference schemes. Mathematics of Computation, 23(105):37–49, 1969.\nTasuku Tamai and Seiichi Koshizuka. Least squares moving particle semi-implicit method. Computational Particle Mechanics, 1(3):277–305, 2014." }, { "heading": "Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick", "text": "Riley. Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. arXiv preprint arXiv:1802.08219, 2018." }, { "heading": "Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco S Cohen. 3d steerable", "text": "cnns: Learning rotationally equivariant features in volumetric data. In NeurIPS, pp. 10381–10392, 2018.\nFelix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Simplifying graph convolutional networks. In ICML, pp. 6861–6871. PMLR, 2019.\nKeyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.\nJiaxuan You, Rex Ying, and Jure Leskovec. Position-aware graph neural networks. arXiv preprint arXiv:1906.04817, 2019." }, { "heading": "A NOTATION", "text": "G A graph V A vertex set |V| The number of vertices E An edge set Z+ The positive integers d The dimension of the Euclidean space\nxi The position of the ith vertex\nxik Element k of xi G ∈ R|V|×|V|×d The isometric adjacency matrix (IsoAM) (equation 4) Gij;;: ∈ Rd Slice of G in the spatial index (equation 4) Gij;;k ∈ R Element (i, j, k) of G H(p) ∈ R|V|×f×dp A rank-p tensor field tensor (f, p ∈ Z+) H(p)i;g;k1k2...kp Element (i; g; k1, k2, . . . , kp) of H\n(p). i refers to the permutation representation, k1, . . . kp refer to the Euclidean representation, and g denotes the feature index (See section 3).[\np⊗ G ] ∗ H(0) Convolution of the pth power of G and rank-0 tensor field H(0) (equa-\ntion 6, equation 10)[ p⊗\nG ] H(q) Contraction of the pth power of G and rank-q tensor fields (equation 7,\nequation 11)[ p⊗\nG ] ⊗ H(q) Tensor product of the pth power of G and rank-q tensor fields H(q)\n(equation 8)\nH(p)in The rank-p input tensor field of the considered layer H(p)out The rank-p output tensor field of the considered layer\nσ The activation function\nW The trainable parameter matrix\nA ∈ R|V|×|V| An adjacency matrix δij The Kronecker delta\nV effectivei The effective volume of the ith vertex (equation 46)\nV meani The mean volume of the ith vertex (equation 47) D̃ ∈ R|V|×|V| A concrete instance of IsoAM (equation 15)" }, { "heading": "B PROOFS OF PROPOSITIONS", "text": "In this section, we present the proofs of the propositions described in Section 3. Let R3 3 g(xl,xk) = (xk − xl). Note that G is expressed using g(xi,xj) as Gij;;: =∑ k,l∈V,k 6=l Tijklg(xl,xk)." }, { "heading": "B.1 PROOF OF PROPOSITION 3.1", "text": "Proof. First, we demonstrate the invariance with respect to the translation with ∀t ∈ Rd. g(xi,xj) is transformed invariantly as follows under translation:\ng(xi + t,xj + t) = [xj + t− (xi + t)] = (xj − xi) = g(xi,xj). (18)\nBy definition, Tijkl is also translation invariant. Thus,∑ k,l∈V,k 6=l Tijklg(xl + t,xk + t) = ∑ k,l∈V,k 6=l Tijklg(xl,xk)\n= Gij;;:. (19)\nWe then show an equivariance regarding the orthogonal transformation with ∀U ∈ O(d). g(xi,xj) is transformed as follows by orthogonal transformation:\ng(Uxi,Uxj) = Uxj −Uxi = U(xj − xi) = Ug(xi,xj). 
(20)\nBy definition, Tijkl is transformed to UTijklU−1 by orthogonal transformation. Thus,∑ k,l∈V,k 6=l UTijklU −1g(Uxl,Uxk) = ∑ k,l∈V,k 6=l UTijklU −1Ug(xl,xk)\n= UGij;;:. (21)\nTherefore, G is both translation invariant and an orthogonal transformation equivariant." }, { "heading": "B.2 PROOF OF PROPOSITION 3.2", "text": "Proof. Here, G G is translation invariant because G is translation invariant. We prove rotation invariance under an orthogonal transformation ∀U ∈ O(n). In addition, G G is transformed under U as follows:∑\nj,k\nGij;;kGjl;;k 7→ ∑\nj,k,m,n\nUkmGij;;mUknGjl;;n\n= ∑\nj,k,m,n\nUkmUknGij;;mGjl;;n\n= ∑\nj,k,m,n\nUTmkUknGij;;mGjl;;n\n= ∑ j,m,n δmnGij;;mGjl;;n (∵ property of the orthogonal matrix)\n= ∑ j Gij;;mGjl;;m\n= ∑ j,k Gij;;kGjl;;k. (∵ Change the dummy index m→ k) (22)\nTherefore, G G is isometric transformation invariant." }, { "heading": "B.3 PROOF OF PROPOSITION 3.3", "text": "Proof. G⊗G is transformed under ∀U ∈ O(n) as follows:\n∑ j Gij;;kGjl;;m 7→ ∑ n,o UknGij;;nUmoGjl;;o\n= ∑ n,o UknGij;;nGjl;;oUTom. (23)\nBy regarding Gij;;nGjl;;o as one matrix Hno, it follows the coordinate transformation of rank-2 tensor UHUT for each i, j, and l." }, { "heading": "C PHYSICAL INTUITION OF D̃", "text": "In this section, we discuss the connection between the concrete IsoAM example D̃ and the differential operators such as the gradient, divergence, the Laplacian, the Jacobian, and the Hessian operators.\nLet φi ∈ R denote a rank-0 tensor (scalar) at the ith vertex. Let us assume a partial derivative model of a rank-0 tensor φ at the ith vertex regarding the kth axis (∂φ/∂xk)i ∈ R (k ∈ {1, . . . , d}), that is based on the gradient model in the least squares moving particle semi-implicit method (Tamai & Koshizuka, 2014).\n( ∂φ\n∂xk ) i :=M−1i ∑ j φj − φi ‖xj − xi‖ xjk − xik ‖xj − xi‖ wijAij(m) (24)\n= ∑ j Dijk(φj − φi), (25)\nMi = ∑ l xl − xi ‖xl − xi‖ ⊗ xl − xi‖xl − xi‖ wilAil(m). (26)\nAlthough one could define wij as a function of the distance ‖xj − xi‖, wij was kept constant with respect to the distance required to maintain the simplicity of the model with fewer hyperparameters." }, { "heading": "C.1 GRADIENT", "text": "D̃ can be viewed as a Laplacian matrix based on D; however, D̃ ∗ H(0) can be interpreted as the gradient within the Euclidean space. Let ∇ H(0) ∈ R|V|×f×d be an approximation of the gradient of H(0). Using equation 25, the gradient model can be expressed as follows:\n( ∇ H(0) ) i;g;k = ∂H(0)i;g; ∂xk\n(27)\n= Dijk(H (0) j;g; − H (0) i;g;). (28)\nUsing this gradient model, we can confirm that (D̃ ∗ H(0))i;g;k = (∇ H(0))i;glk because\n( D̃ ∗ H(0) ) i;g;k = ∑ j D̃ij;;kH (0) j;g; (29)\n= ∑ j (Dij;;k − δij ∑ l Dil;;k)H (0) j;g;\n= ∑ j Dij;;kH (0) j;g; − ∑ j,l δijDil;;kH (0) j;g;\n= ∑ j Dij;;kH (0) j;g; − ∑ l Dil;;kH (0) i;g;\n= ∑ j Dij;;kH (0) j;g; − ∑ j Dij;;kH (0) i;g; (∵ Change the dummy index l→ j)\n= ∑ j Dij;;k(H (0) j;g; − H (0) i;g;)\n= ( ∇ H(0) ) i;g;k . (30)\nTherefore, D̃∗ can be interpreted as the gradient operator within a Euclidean space." }, { "heading": "C.2 DIVERGENCE", "text": "We show that D̃ H(1) corresponds to the divergence. Using D, the divergence model ∇ · H(1) ∈ R|V|×f is expressed as follows:\n( ∇ · H(1) ) i;g; = (∑ k ∂ H(1) ∂xk ) i;g;\n(31)\n= ∑ j,k Dij;;k(H (1) j;g;k − H (1) i;g;k). (32)\nThen, D̃ H(1) is\n(D̃ H(1))i;g; = ∑ j,k D̃ij;;kH (1) i;g;k\n= ∑ j,k\n( Dij;;k − δij\n∑ l D ) H(1)i;g;k\n= ∑ j,k Dij;;kH (1) j;g;k − ∑ l,k Dil;;kH (1) i;g;k\n= ∑ j,k Dij;;k(H (1) j;g;k − H (1) i;g;k) (∵ Change the dummy index l→ j)\n= (∇ · H(1))i;g;. 
(33)" }, { "heading": "C.3 LAPLACIAN OPERATOR", "text": "We prove that D̃ D̃ corresponds to the Laplacian operator within a Euclidean space.\nUsing equation 25, the Laplacian model∇2 H(0) ∈ R|V|×f can be expressed as follows:\n( ∇2 H(0) ) i;g; := ∑ k [ ∂ ∂xk ( ∂H ∂xk ) i ] i;g;\n= ∑ j,k Dij;;k [( ∂H ∂xk ) j;g; − ( ∂H ∂xk ) i;g; ]\n= ∑ j,k Dij;;k [∑ l Djl;;k(H (0) l;g; − H (0) j;g;)− ∑ l Dil;;k(H (0) l;g; − H (0) i;g;) ] = ∑ j,k,l Dij;;k(Djl;;k − Dil;;k)(H(0)l;g; − H (0) j;g;). (34)\nThen, (D̃ D̃)H(0) is ((D̃ D̃)H(0))i;g; = ∑ j,k,l D̃ij;;kD̃jl;;kH (0) l;g;\n= ∑ j,k,l\n( Dij;;k − δij\n∑ m Dim;;k\n)( Djl;;k − δjl\n∑ n Djn;;k ) H(0)l;g;\n= ∑ j,k,l Dij;;kDjl;;kH (0) l;g; − ∑ j,k,n Dij;;kDjn;;kH (0) j;g;\n− ∑ k,l,m Dim;;kDil;;kH (0) l;g; + ∑ k,m,n Dim;;kDin;;kH (0) i;g;\n= ∑ j,k,l Dij;;kDjl;;kH (0) l;g; − ∑ j,k,n Dij;;kDjn;;kH (0) j;g;\n− ∑ k,l,j Dij;;kDil;;kH (0) l;g; + ∑ k,j,n Dij;;kDin;;kH (0) i;g;\n(∵ Change the dummy index m→ j for the third and fourth terms)\n= ∑ j,k,l Dij;;k(Djl;;k − Dil;;k)(H(0)l;g; − H (0) j;g;)\n(∵ Change the dummy index n→ l for the second and fourth terms)\n= ( ∇2 H(0) ) i;g; . (35)" }, { "heading": "C.4 JACOBIAN AND HESSIAN OPERATORS", "text": "Considering a similar discussion, we can show the following dependencies. For the Jacobian model, J [H(1)] ∈ R|V|×f×d×d,\n( J [H(1)] ) i;g;kl =\n( ∂ H(1)\n∂xl ) i;g;k\n(36)\n= ∑ j Dij;;l(H (1) j;g;k − H (1) i;g;k) (37)\n= (D̃ ⊗ H(1))i;g;lk. (38)\nFor the Hessian model, Hess[H(0)] ∈ R|V|×f×d×d,( Hess[H(0)] ) i;g;kl = ( ∂\n∂xk\n∂\n∂xl H(0) ) i;g;\n(39)\n= ∑ j,m Dij;;k[Djm;;l(H(0)m;g; − H(0)l;g;)− Dim;;l(H(0)m;g; − H (0) i;g;)] (40)\n= [ (D̃⊗ D̃) ∗ H(0) ] i;g;kl . (41)\nD ISOGCN MODELING DETAILS" }, { "heading": "To achieve isometric transformation invariance and equivariance, there are several rules to follow.", "text": "Here, we describe the desired focus when constructing an IsoGCN model. In this section, a rank-p tensor denotes a tensor the rank of which is p ≥ 1 and σ denotes a nonlinear activation function. W is a trainable weight matrix and b is a trainable bias." }, { "heading": "D.1 ACTIVATION AND BIAS", "text": "" }, { "heading": "As the nonlinear activation function is not isometric transformation equivariant, nonlinear activation", "text": "to rank-p tensors cannot be applied, while one can apply any activation to rank-0 tensors. In addition, adding bias is also not isometric transformation equivariant, one cannot add bias when performing an affine transformation to rank-p tensors. Again, one can add bias to rank-0 tensors.\nThus, for instance, if one converts from rank-0 tensors H(0) to rank-1 tensors using IsoAM G, G∗σ(H(0)W+b) and (G∗σ(H(0)))W are isometric equivariant functions, however (G∗H(0))W+b and σ ( (G ∗ σ(H(0)))W ) are not due to the bias and the nonlinear activation, respectively. Like-\nwise, regarding a conversion from rank-1 tensors H(1) to rank-0 tensors, σ ( (G H(1))W + b ) and\nσ ( G (H(1)W ) )\nare isometric transformation invariant functions; however, G (H(1)W + b) and (G σ(H(1)))W + b are not. To convert rank-p tensors to rank-q tensors (q ≥ 1), one can apply neither bias nor nonlinear activation. To add nonlinearity to such a conversion, we can multiply the converted rank-0 tensors σ(( ⊗p G H(p))W + b) with the input tensors H(p) or the output tensors H(q)." 
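These rules can be checked numerically. In the toy sketch below (dense arrays; the orthogonal action on an IsoAM G is G ↦ UG per Proposition 3.1), applying tanh directly to a rank-1 field breaks equivariance, while the rank-0 gating pattern of D.1 preserves it, since the gate is isometric transformation invariant by Proposition 3.2.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, f = 5, 3, 4
G = rng.normal(size=(n, n, d))                   # dense stand-in for an IsoAM
H0 = rng.normal(size=(n, f))                     # rank-0 feature field
U, _ = np.linalg.qr(rng.normal(size=(d, d)))     # random orthogonal matrix

rot = lambda V: np.einsum('kl,ifl->ifk', U, V)   # rotate a rank-1 field
conv = lambda G, H: np.einsum('ijk,jf->ifk', G, H)
contract = lambda G, V: np.einsum('ijk,jfk->if', G, V)

G_rot = np.einsum('kl,ijl->ijk', U, G)           # G transforms like a rank-1 tensor

V, V_rot = conv(G, H0), conv(G_rot, H0)
assert np.allclose(V_rot, rot(V))                # linear map: equivariant

# Direct activation on a rank-1 field destroys equivariance:
assert not np.allclose(np.tanh(V_rot), rot(np.tanh(V)))

# Rank-0 gating keeps it: the gate is invariant under the rotation.
gate, gate_rot = np.tanh(contract(G, V)), np.tanh(contract(G_rot, V_rot))
assert np.allclose(gate, gate_rot)
assert np.allclose(gate_rot[:, :, None] * V_rot, rot(gate[:, :, None] * V))
```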
}, { "heading": "D.2 PREPROCESSING OF INPUT FEATURE", "text": "Similarly to the discussion regarding the biases, we have to take care of the preprocessing of rank-p tensors to retain isometric transformation invariance because adding a constant array and component-wise scaling could distort the tensors, resulting in broken isometric transformation equivariance.\nFor instance, H(p)/Stdall [ H(p) ] is a valid transformation to retain isometric transformation equiv-\nariance, assuming Stdall [ H(p) ] ∈ R is a standard deviation of all components of H(p). However,\nconversions such as H(p)/Stdcomponent [ H(p) ] and H(p) − Mean [ H(p) ] are not isometric trans-\nformation equivariant, assuming that Stdcomponent [ H(p) ] ∈ Rdp is a component-wise standard\ndeviation." }, { "heading": "D.3 SCALING", "text": "Because the concrete instance of IsoAM D̃ corresponds to the differential operator, the scale of the output after operations regarding D̃ can be huge. Thus, we rescale D̃ using the scaling factor\n[ Meansample,i(D̃2ii;;1 + D̃ 2 ii;;2 + D̃ 2 ii;;3) ]1/2 , where Meansample,i denotes the mean over the samples\nand vertices.\nD.4 IMPLEMENTATION\nBecause an adjacency matrix A is usually a sparse matrix for a regular mesh, A(m) in equation 16 is also a sparse matrix for a sufficiently small m. Thus, we can leverage sparse matrix multiplication in the IsoGCN computation. This is one major reason why IsoGCNs can compute rapidly. If the multiplication (tensor product or contraction) of IsoAMs must be computed multiple times the associative property of the IsoAM can be utilized." }, { "heading": "For instance, it is apparent that", "text": "[⊗k G]∗H(0) = G⊗(G⊗. . . (G∗H(0))). Assuming that the number\nof nonzero elements in A(m) equals n and H(0) ∈ R|V|×f , then the computational complexity of the right-hand side is O(n|V|fdk). This is an exponential order regarding d. However, d and k are usually small numbers (typically d = 3 and k ≤ 4). Therefore one can compute an IsoGCN layer with a realistic spatial dimension d and tensor rank k fast and memory efficiently. In our implementation, both a sparse matrix operation and associative property are utilized to realize fast computation." }, { "heading": "E.1 MODEL ARCHITECTURES", "text": "" }, { "heading": "E EXPERIMENT DETAILS: DIFFERENTIAL OPERATOR DATASET", "text": "Figure 4 represents the IsoGCN model used for the differential operator dataset. We used the tanh activation function as a nonlinear activation function because we expect the target temperature field to be smooth. Therefore, we avoid using non-differentiable activation functions such as the rectified linear unit (ReLU) (Nair & Hinton, 2010). For GCN and its variants, we simply replaced the IsoGCN layers with the corresponding ones. We stacked m (= 2, 5) layers for GCN, GIN, GCNII, and Cluster-GCN. We used an m hop adjacency matrix for SGCN.\nFor the TFN and SE(3)-Transformer, we set the hyperparameters to have almost the same number of parameters as in the IsoGCN model. The settings of the hyperparameters are shown in Table 5." }, { "heading": "E.2 RESULT DETAILS", "text": "Table 6 represents the detailed comparison of training results. The results show that an IsoGCN outperforms other GCN models for all settings. Compared to other equivariant models, IsoGCN has competitive performance compared to equivariant models with shorter inference time as shown in Table 7. 
Therefore, the proposed model has strong expressive power for representing spatial differential operators while requiring fewer computational resources than the TFN and SE(3)-Transformer." }, { "heading": "F EXPERIMENT DETAILS: ANISOTROPIC NONLINEAR HEAT EQUATION DATASET", "text": "" }, { "heading": "F.1 DATASET", "text": "The purpose of the experiment was to solve the anisotropic nonlinear heat diffusion under an adiabatic boundary condition. The governing equation is defined as follows:

Ω ⊂ R³, (42)
∂T(x, t)/∂t = ∇ · C(T(x, t)) ∇T(x, t), in Ω, (43)
T(x, t = 0) = T_0.0(x), in Ω, (44)
∇T(x, t)|_{x = x_b} · n(x_b) = 0, on ∂Ω, (45)

where T is the temperature field, T_0.0 is the initial temperature field, C ∈ R^{d × d} is an anisotropic diffusion tensor, and n(x_b) is the normal vector at x_b ∈ ∂Ω. Here, C depends on the temperature; thus, the equation is nonlinear. We randomly generated C(T = −1) to be a positive semidefinite symmetric tensor with eigenvalues varying from 0.0 to 0.02. We then defined a linear temperature dependency whose slope is −C(T = −1)/4. The function of the anisotropic diffusion tensor is uniform for each sample. The task is defined as predicting the temperature field at t = 0.2, 0.4, 0.6, 0.8, 1.0 (T_0.2, T_0.4, T_0.6, T_0.8, T_1.0) from the given initial temperature field, material property, and mesh geometry. However, the performance is evaluated only with T_1.0 to focus on the predictive performance; we inserted the other output features to stabilize training. Accordingly, the diffusion number of this problem is C∆t/(∆x)² ≃ 10.04, assuming ∆x ≃ 10.0⁻³.

Figure 5 represents the process of generating the dataset. We generated up to 9 FEA results for each CAD shape. To avoid data leakage in terms of the CAD shapes, we first split them into training, validation, and test datasets, and then applied the following process.

Using one CAD shape, we generated up to three meshes using clscale (a control parameter of the mesh characteristic lengths) = 0.20, 0.25, and 0.30. To facilitate the training process, we scaled the meshes to fit into a cube with an edge length equal to 1.

Using one mesh, we randomly generated three initial conditions using a Fourier series of the 2nd to 10th orders. We then applied an FEA to each initial condition, with the material property determined randomly as described above. We applied an implicit method to solve the time evolution and a direct method to solve the linear equations. The FEA time step ∆t was set to 0.01.

During this process, some of the meshes or FEA results may not have been available due to excessive computation time or non-convergence. Therefore, the size of the dataset was not exactly equal to the number of shapes multiplied by 9.
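For intuition about the learning target, the following toy sketch advances equation 43 with explicit Euler steps using the Table 1 discretizations (gradient, then flux, then divergence) on a tiny cube graph. The diffusion-tensor recipe follows the description above; the graph, the weights (w_ij = 1), and the time step are toy assumptions, and the paper's FEA ground truth uses an implicit solver instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Anisotropic diffusion tensor, following F.1: C(T = -1) is a random positive
# semidefinite symmetric tensor with eigenvalues in [0, 0.02], and C depends
# linearly on temperature with slope -C(T = -1) / 4.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
C_ref = Q @ np.diag(rng.uniform(0.0, 0.02, size=3)) @ Q.T
C_of = lambda T: C_ref * (1.0 - (T + 1.0) / 4.0)

# D-tilde on the 8 corners of a unit cube (same toy builder as in Section 3.3).
x = np.array(np.meshgrid([0., 1.], [0., 1.], [0., 1.], indexing='ij')).reshape(3, -1).T
dist_all = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
A = np.isclose(dist_all, 1.0).astype(float)
n = len(x)
D = np.zeros((n, n, 3))
for i in range(n):
    diff = x - x[i]
    dist = np.where(A[i] > 0, dist_all[i], 1.0)
    u = diff / dist[:, None]
    M = np.einsum('l,lk,lm->km', A[i], u, u)
    D[i] = (np.linalg.inv(M) @ (diff / dist[:, None] ** 2).T).T * A[i][:, None]
    D[i, i] -= D[i].sum(axis=0)

T = rng.uniform(-1.0, 1.0, size=n)                 # random initial temperature
dt = 0.01
for _ in range(20):                                # explicit Euler: dT/dt = div(C grad T)
    grad_T = np.einsum('ijk,j->ik', D, T)          # Table 1: gradient  = D-tilde * T
    flux = np.einsum('ikl,il->ik', np.stack([C_of(t) for t in T]), grad_T)
    T = T + dt * np.einsum('ijk,jk->i', D, flux)   # Table 1: divergence = D-tilde . flux
```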
Finally, we obtained 439 FEA results for the training dataset, 143 FEA results for the validation dataset, and 140 FEA results for the test dataset.

[Figure: mathematical labels embedded as image data in the source are omitted here.]" }, { "heading": "F.2 INPUT FEATURES", "text": "To express the geometry information, we extracted the effective volume of the ith vertex, V_i^effective, and the mean volume of the ith vertex, V_i^mean,
which are defined as follows:

V_i^effective = Σ_{e ∈ N_i^e} (1/4) V_e, (46)
V_i^mean = Σ_{e ∈ N_i^e} V_e / |N_i^e|, (47)

where N_i^e is the set of elements including the ith vertex. For the GCN and its variant models, we tested several combinations of the input vertex features T_0.0, C, V^effective, V^mean, and x (Table 9). For the IsoGCN model, the inputs were T_0.0, C, V^effective, and V^mean." }, { "heading": "F.3 MODEL ARCHITECTURES", "text": "Figure 6 represents the IsoGCN model used for the anisotropic nonlinear heat equation dataset. We used the tanh activation function as the nonlinear activation because we expect the target temperature field to be smooth; therefore, we avoided non-differentiable activation functions such as the rectified linear unit (ReLU) (Nair & Hinton, 2010). Although the model looks complicated, one propagation block corresponds to the first-order Taylor expansion T(t + ∆t) ≃ ∇ · C∇T(t) + T(t), because the propagation block is expressed as D̃ ⊙ [C MLP(T) (D̃ ∗ T)] + T, where T denotes the rank-0 tensor input to the propagation block. By stacking this propagation block p times, we can approximate the pth-order Taylor expansion of the anisotropic nonlinear heat equation.

For the GCN and its variants, we simply replaced the IsoGCN layers with the corresponding ones. We stacked m (= 2, 5) layers for GCN, GIN, GCNII, and Cluster-GCN. We used an m-hop adjacency matrix for SGCN.

For the TFN and SE(3)-Transformer, we set the hyperparameters to give as many parameters as would fit on the GPU, because the TFN and SE(3)-Transformer with almost the same number of parameters as the IsoGCN did not fit on the GPU we used (an NVIDIA Tesla V100 with 32 GiB memory). The settings of the hyperparameters are shown in Table 8." }, { "heading": "F.4 RESULT DETAILS", "text": "Table 9 shows a detailed comparison of the training results. The inclusion of x in the input features of the baseline models did not improve performance. In addition, if x is included in the input features, a loss of generalization capacity for shapes larger than those in the training dataset may result, as the model then extrapolates. The proposed model achieved the best performance among the models considered. Therefore, we conclude that the essential features regarding the mesh shapes are included in D̃. Moreover, the IsoGCN can scale up to meshes with 1M vertices, as shown in Figure 7." } ]
2021
ISOMETRIC TRANSFORMATION INVARIANT AND EQUIVARIANT GRAPH CONVOLUTIONAL NETWORKS
SP:41a9a0e893ccd973ebf57ca7f99b9b6f22e8d339
[ "The paper studies the controlled sequence generation problem based on pretrained language models, i.e., controlling a generic pretrained LM to satisfy certain constraints, e.g., removing certain biases in language models. Specifically, the paper proposes a distributional view and imposes constraints based on collective statistical properties. The problem is formalized as a constraint satisfaction problem, minimizing a divergence objective. The paper proposes to use KL-Adaptive DPG algorithm for approximating the optimal energy-based model distribution. Experiments were conducted over both pointwise constraints and distributional constraints, showing the effectiveness of the model over the compared baselines." ]
We propose a Distributional Approach for addressing Controlled Text Generation from pre-trained Language Models (LMs). This approach permits specifying, in a single formal framework, both “pointwise” and “distributional” constraints over the target LM — to our knowledge, the first model with such generality — while minimizing KL divergence from the initial LM distribution. The optimal target distribution is then uniquely determined as an explicit EBM (Energy-Based Model) representation. From that optimal representation we then train a target controlled Autoregressive LM through an adaptive distributional variant of Policy Gradient. We conduct a first set of experiments over pointwise constraints showing the advantages of our approach over a set of baselines, in terms of obtaining a controlled LM balancing constraint satisfaction with divergence from the initial LM. We then perform experiments over distributional constraints, a unique feature of our approach, demonstrating its potential as a remedy to the problem of Bias in Language Models. Through an ablation study, we show the effectiveness of our adaptive technique for obtaining faster convergence.
[ { "affiliations": [], "name": "Muhammad Khalifa" }, { "affiliations": [], "name": "Hady Elsahar" }, { "affiliations": [], "name": "Marc Dymetman" } ]
[ { "authors": [ "Sun-ichi Amari", "Hiroshi Nagaoka" ], "title": "Methods of Information Geometry", "venue": null, "year": 2000 }, { "authors": [ "Daniel Andor", "Chris Alberti", "David Weiss", "Aliaksei Severyn", "Alessandro Presta", "Kuzman Ganchev", "Slav Petrov", "Michael Collins" ], "title": "Globally Normalized Transition-Based", "venue": "Neural Networks", "year": 2016 }, { "authors": [ "Dzmitry Bahdanau", "Philemon Brakel", "Kelvin Xu", "Anirudh Goyal", "Ryan Lowe", "Joelle Pineau", "Aaron C. Courville", "Yoshua Bengio" ], "title": "An actor-critic algorithm for sequence prediction", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "A. Bakhtin", "Y. Deng", "S. Gross", "Myle Ott", "Marc’Aurelio Ranzato", "Arthur Szlam" ], "title": "Energy-based models for text", "venue": "ArXiv, abs/2004.10188,", "year": 2020 }, { "authors": [ "David Belanger", "Andrew McCallum" ], "title": "Structured prediction energy networks", "venue": "In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48,", "year": 2016 }, { "authors": [ "Emily M. Bender", "Timnit Gebru", "Angelina McMillan-Major", "Shmargaret Shmitchell" ], "title": "On the dangers of stochastic parrots: Can language models be too big", "venue": "In Proceedings of FAccT 2021,", "year": 2021 }, { "authors": [ "Su Lin Blodgett", "Solon Barocas", "Hal Daumé III", "Hanna Wallach" ], "title": "Language (technology) is power: A critical survey of “bias", "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5454–5476,", "year": 2020 }, { "authors": [ "Shikha Bordia", "Samuel R. Bowman" ], "title": "Identifying and reducing gender bias in word-level language models", "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2019 }, { "authors": [ "Radford", "Ilya Sutskever", "Dario Amodei" ], "title": "Language models are few-shot learners", "venue": "CoRR, abs/2005.14165,", "year": 2020 }, { "authors": [ "Massimo Caccia", "Lucas Caccia", "William Fedus", "Hugo Larochelle", "Joelle Pineau", "Laurent Charlin" ], "title": "Language gans falling short", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "George Casella", "Christian P Robert", "Martin T Wells" ], "title": "Generalized accept-reject sampling schemes. In A Festschrift for Herman Rubin, pp. 342–347", "venue": "Institute of Mathematical Statistics,", "year": 2004 }, { "authors": [ "Eric Chu", "Peter J. Liu" ], "title": "Meansum: A neural model for unsupervised multi-document abstractive summarization", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "I. Csiszar" ], "title": "I-Divergence Geometry of Probability Distributions and Minimization Problems", "venue": "Ann. Probab., 3(1):146–158,", "year": 1975 }, { "authors": [ "I. Csiszár" ], "title": "Maxent, mathematics, and information theory", "venue": "Maximum Entropy and Bayesian Methods, pp. 35–50,", "year": 1996 }, { "authors": [ "Imre Csiszár", "Paul C. Shields" ], "title": "Information theory and statistics: A tutorial", "venue": "Commun. Inf. 
Theory,", "year": 2004 }, { "authors": [ "Yuntian Deng", "Anton Bakhtin", "Myle Ott", "Arthur Szlam", "Marc’Aurelio Ranzato" ], "title": "Residual energy-based models for text generation", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Eduardo Graells-Garrido", "Mounia Lalmas", "Filippo Menczer" ], "title": "First women, second sex: Gender bias in wikipedia", "venue": "Proceedings of the 26th ACM Conference on Hypertext & Social Media,", "year": 2015 }, { "authors": [ "Geoffrey E. Hinton" ], "title": "Training products of experts by minimizing contrastive divergence", "venue": "Neural Comput.,", "year": 2002 }, { "authors": [ "Ari Holtzman", "Jan Buys", "Maxwell Forbes", "Antoine Bosselut", "David Golub", "Yejin Choi" ], "title": "Learning to write with cooperative discriminators", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Ari Holtzman", "Jan Buys", "Li Du", "Maxwell Forbes", "Yejin Choi" ], "title": "The curious case of neural text degeneration", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Natasha Jaques", "Shixiang Gu", "Dzmitry Bahdanau", "José Miguel Hernández-Lobato", "Richard E. Turner", "Douglas Eck" ], "title": "Sequence tutor: Conservative fine-tuning of sequence generation models with kl-control", "venue": "Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Natasha Jaques", "Asma Ghandeharioun", "Judy Hanwen Shen", "Craig Ferguson", "Àgata Lapedriza", "Noah Jones", "Shixiang Gu", "Rosalind W. Picard" ], "title": "Way off-policy batch deep reinforcement learning of implicit human preferences in dialog", "venue": "URL http://arxiv.org/abs/1907.00456", "year": 1907 }, { "authors": [ "Nitish Shirish Keskar", "Bryan McCann", "Lav R. Varshney", "Caiming Xiong", "Richard Socher" ], "title": "CTRL: A conditional transformer language model for controllable generation", "venue": "CoRR, abs/1909.05858,", "year": 2019 }, { "authors": [ "Taesup Kim", "Yoshua Bengio" ], "title": "Deep directed generative models with energy-based probability estimation", "venue": "CoRR, abs/1606.03439,", "year": 2016 }, { "authors": [ "Matt J. 
Kusner", "José Miguel Hernández-Lobato" ], "title": "GANS for sequences of discrete elements with the gumbel-softmax distribution", "venue": "CoRR, abs/1611.04051,", "year": 2016 }, { "authors": [ "Rémi Lebret", "David Grangier", "Michael Auli" ], "title": "Neural text generation from structured data with application to the biography domain", "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Yann LeCun", "Sumit Chopra", "Raia Hadsell", "Marc’Aurelio Ranzato", "Fu Jie Huang" ], "title": "A Tutorial on Energy-Based Learning", "venue": null, "year": 2006 }, { "authors": [ "Jiwei Li", "Michel Galley", "Chris Brockett", "Jianfeng Gao", "Bill Dolan" ], "title": "A diversity-promoting objective function for neural conversation models", "venue": "In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2016 }, { "authors": [ "Jiwei Li", "Will Monroe", "Alan Ritter", "Dan Jurafsky", "Michel Galley", "Jianfeng Gao" ], "title": "Deep reinforcement learning for dialogue generation", "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Juncen Li", "Robin Jia", "He He", "Percy Liang" ], "title": "Delete, retrieve, generate: a simple approach to sentiment and style transfer", "venue": "Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Chia-Wei Liu", "Ryan Lowe", "Iulian Serban", "Michael Noseworthy", "Laurent Charlin", "Joelle Pineau" ], "title": "How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation", "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Siqi Liu", "Zhenhai Zhu", "Ning Ye", "Sergio Guadarrama", "Kevin Murphy" ], "title": "Optimization of image description metrics using policy gradient methods", "venue": "CoRR, abs/1612.00370,", "year": 2016 }, { "authors": [ "Moin Nadeem", "Anna Bethke", "Siva Reddy" ], "title": "Stereoset: Measuring stereotypical bias in pretrained language models", "venue": "CoRR, abs/2004.09456,", "year": 2020 }, { "authors": [ "Frank Nielsen" ], "title": "An elementary introduction to information", "venue": "geometry. CoRR,", "year": 2018 }, { "authors": [ "Art B. Owen" ], "title": "Importance Sampling", "venue": "Tetiana Parshakova,", "year": 2013 }, { "authors": [ "Tetiana Parshakova", "Jean-Marc Andreoli", "Marc Dymetman" ], "title": "Distributional Reinforcement Learning For Energy-Based Sequential Models. CoRR, 2019b. URL https://arxiv.org/ abs/1912.08517", "venue": null, "year": 1912 }, { "authors": [ "Ramakanth Pasunuru", "Mohit Bansal" ], "title": "Reinforced video captioning with entailment rewards", "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Romain Paulus", "Caiming Xiong", "Richard Socher" ], "title": "A deep reinforced model for abstractive summarization", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Marcelo O.R. Prates", "Pedro H.C. Avelar", "Luı́s C. 
Lamb" ], "title": "Assessing gender bias in machine translation: a case study with google translate", "venue": "Neural Computing and Applications,", "year": 2020 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Marc’Aurelio Ranzato", "Y-Lan Boureau", "Sumit Chopra", "Yann LeCun" ], "title": "A unified energy-based framework for unsupervised learning", "venue": "Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics,", "year": 2007 }, { "authors": [ "Marc’Aurelio Ranzato", "Sumit Chopra", "Michael Auli", "Wojciech Zaremba" ], "title": "Sequence level training with recurrent neural networks", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Christian P. Robert", "George Casella" ], "title": "Monte Carlo Statistical Methods (Springer Texts in Statistics)", "venue": null, "year": 2005 }, { "authors": [ "Ronald Rosenfeld", "Stanley F. Chen", "Xiaojin Zhu" ], "title": "Whole-sentence exponential language models: A vehicle for linguistic-statistical integration", "venue": "Computers, Speech and Language,", "year": 2001 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017", "venue": null, "year": 2017 }, { "authors": [ "Abigail See", "Stephen Roller", "Douwe Kiela", "Jason Weston" ], "title": "What makes a good conversation? how controllable attributes affect human judgments", "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2019 }, { "authors": [ "Emily Sheng", "Kai-Wei Chang", "Premkumar Natarajan", "Nanyun Peng" ], "title": "The woman worked as a babysitter: On biases in language generation", "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing,", "year": 2019 }, { "authors": [ "Emily Sheng", "Kai-Wei Chang", "Premkumar Natarajan", "Nanyun Peng" ], "title": "The woman worked as a babysitter: On biases in language generation", "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing,", "year": 2019 }, { "authors": [ "Emily Sheng", "Kai-Wei Chang", "Premkumar Natarajan", "Nanyun Peng" ], "title": "Towards controllable biases in language generation", "venue": "CoRR, abs/2005.00268,", "year": 2020 }, { "authors": [ "Rakshith Shetty", "Marcus Rohrbach", "Lisa Anne Hendricks", "Mario Fritz", "Bernt Schiele" ], "title": "Speaking the same language: Matching machine to human captions by adversarial training", "venue": "In IEEE International Conference on Computer Vision, ICCV 2017,", "year": 2017 }, { "authors": [ "Gabriel Stanovsky", "Noah A. Smith", "Luke Zettlemoyer" ], "title": "Evaluating gender bias in machine translation", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Pradyumna Tambwekar", "Murtaza Dhuliawala", "Lara J. Martin", "Animesh Mehta", "Brent Harrison", "Mark O. 
Riedl" ], "title": "Controllable neural story plot generation via reward shaping", "venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Lifu Tu", "Richard Yuanzhe Pang", "Sam Wiseman", "Kevin Gimpel" ], "title": "Engine: Energy-based inference networks for non-autoregressive machine", "venue": "translation. ArXiv,", "year": 2020 }, { "authors": [ "Eric Wallace", "Shi Feng", "Nikhil Kandpal", "Matt Gardner", "Sameer Singh" ], "title": "Universal adversarial triggers for attacking and analyzing NLP", "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing,", "year": 2019 }, { "authors": [ "Ronald J. Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Mach. Learn.,", "year": 1992 }, { "authors": [ "Ronald J. Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "In Machine Learning,", "year": 1992 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rémi Louf", "Morgan Funtowicz", "Jamie Brew" ], "title": "Huggingface’s transformers: State-of-the-art natural language processing", "venue": "URL http://arxiv.org/abs/1910.03771", "year": 1910 }, { "authors": [ "Zichao Yang", "Zhiting Hu", "Chris Dyer", "Eric P Xing", "Taylor Berg-Kirkpatrick" ], "title": "Unsupervised text style transfer using language models as discriminators", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Yaoming Zhu", "Sidi Lu", "Lei Zheng", "Jiaxian Guo", "Weinan Zhang", "Jun Wang", "Yong Yu" ], "title": "Texygen: A benchmarking platform for text generation models", "venue": "The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval,", "year": 2018 }, { "authors": [ "Daniel M. Ziegler", "Nisan Stiennon", "Jeffrey Wu", "Tom B. Brown", "Alec Radford", "Dario Amodei", "Paul Christiano", "Geoffrey Irving" ], "title": "Fine-tuning language models from human", "venue": "URL http://arxiv.org/abs/1909.08593", "year": 1909 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural language models, such as GPT-2/3 (Radford et al., 2019; Brown et al., 2020a), pretrained on huge amounts of text, have become pre-eminent in NLP, producing texts of unprecedented quality. In this paper, we are concerned with the problem of controlling a generic pretrained LM in order to satisfy certain desiderata. For instance, we may want to avoid toxic content; prevent certain demographic biases; or steer generations towards a certain topic or style. Prior work, taking inspiration from Reinforcement Learning (RL), has aimed at inducing autoregressive models to optimize global objectives using task specific rewards such as BLEU and ROUGE for Machine Translation and Summarization (Ranzato et al., 2016; Bahdanau et al., 2017), or hand crafted rewards (Li et al., 2016b; Tambwekar et al., 2019) to improve certain a priori desirable features.\nHowever, such an optimization process is not infallible; Liu et al. (2016a) noted that it often leads to “degeneration”, producing poor examples that improve the average reward but forgo coherence and fluency. This degeneration is often diagnosed as an effect of deviating too much from the original pretrained LM during optimization. Consequently, prior work has regarded proximity to the pretrained model as a prescription for sample quality. This view is most prominent in open-domain generation where no gold references are available for fine-tuning, making the pretrained LM itself the yardstick for fluency. Jaques et al. (2017); Ziegler et al. (2019) propose a conservative fine-tuning approach moderated by a KL penalty between the trained policy and the original LM, discouraging large deviations. A KL penalty was also used by Dathathri et al. (2020), this time in a plug-and-play rather than a fine-tuning context. However, the authors show that balancing policy deviations from the original LM while also satisfying the control conditions is delicate. To combat degeneration they had to combine the KL penalty with post-norm fusion, reranking, and early-stopping procedures.\n∗Equal Contributions. †Work done during an internship at NAVER Labs Europe. 1Code available on https://github.com/naver/gdc\nMost of the existing work on Controlled Generation has taken what we refer to as a “pointwise” view, namely focusing on the quality of each individual output, a view that is encouraged by the standard RL goal of maximizing rewards computed at the individual level. Such techniques are incapable of enforcing “distributional” conditions, where some collective statistical properties are desired over the set of all generations.\nDistributional control is key to solving the problem of social biases in LMs trained on large, uncurated Web corpora. Those LMs - dubbed “Stochastic Parrots” in (Bender et al., 2021) - tend to encode hegemonic biases that are harmful to marginalized populations. There has been a large body of work analysing these distributional biases (Blodgett et al., 2020; Stanovsky et al., 2019; Prates et al., 2020; Sheng et al., 2019a; Brown et al., 2020b). However, applying distributional control on pretrained models is still an understudied problem. Sheng et al. (2020) introduce a method relying on adversarial triggers (Wallace et al., 2019); this method does not de-bias the whole distribution but only obtains non-biased continuations of given prompts. 
Bordia & Bowman (2019) introduce a regularization term for reducing gender bias when training a language model from scratch (as opposed to de-biasing a pretrained model).2\nIn this work, we present our Generation with Distributional Control (GDC) approach, in which we formalize the problem of controlled text generation as a constraint satisfaction problem over the probability distribution p representing the desired target LM. Namely, we require the expectations (“moments”) relative to p of certain output features to have specific values; this permits for instance to condition all outputs to speak about sports (a pointwise constraint), and 50% of them to mention female characters (a distributional constraint). Additionally, we require p to have a minimal KL divergence DKL(p, a) from the original pretrained LM a. This has the effect that p now inherits favorable linguistic qualities from a. As we will explain, this formulation is a generalization of the Maximum Entropy Principle and leads to a unique solution P (x). P (x) is an unnormalized distribution, aka an Energy-Based Model (EBM) (Hinton, 2002; LeCun et al., 2006; Bakhtin et al., 2020), of which p(x) = 1/Z P (x) is the normalized version, where Z .= ∑ x P (x) is the partition function of P .\nComputing the EBM representation P is a crucial step, as it fully determines the optimal distribution p we are looking for. However, it is not the end of the story, because the representation thus obtained does not enable us to directly sample from p, an essential property of any LM.3 To this end, we introduce KL-adaptive DPG (Distributional Policy Gradient), a variant of an algorithm recently proposed in (Parshakova et al., 2019b). We train the policy πθ to approximate p in an adaptive way, by speeding up the next round of approximations based on approximations previously obtained. At the end of this process, we obtain a final πθ, our target LM, on which we can estimate diverse metrics, including DKL(p, πθ), measuring the approximation quality of πθ relative to the optimal p, and DKL(πθ, a), measuring the divergence of πθ relative to the original LM a.\nThis two-step approach differs from much research in NLP-oriented work with EBMs, which tends to use EBM representations inside the training loops of neural networks, blurring different dimensions of the problem. By contrast — similarly to Parshakova et al. (2019a;b) in a different context — we clearly decouple the relatively simple problem of determining a “pivot” optimal EBM from the more difficult problem of exploiting this EBM at inference time, Such decoupling is valuable, because it permits to better diagnose the important challenges to focus on.\nOverall, our contributions can be summarized as follows:\n1. We introduce a Distributional View for controlled text generation formalized as a constraint satisfaction problem combined with a divergence minimization objective, providing a single framework both for “distributional” constraints (collective statistical requirements) and for “pointwise” constraints (hard requirements on each individual) (§2.1). To our knowledge, this is the first framework with such generality for controlled text generation.\n2. We show how these constraints lead to an optimal EBM for the target model (§2.2), propose the KL-Adaptive DPG algorithm for approximating the optimal EBM distribution by\n2Additional Related Work is provided in §E. We use §A, §B ... to refer to sections in the Appendix. 
3One possible sampling approach here would be to employ MCMC techniques, such as Metropolis-\nHastings (Robert & Casella, 2005). These come with theoretical convergence guarantees in the limit but in practice convergence can be very difficult to assess, and furthermore, obtaining samples can be extremely slow.\nan autoregressive policy (§2.3), and show the effectiveness of this adaptive technique for obtaining faster convergence (§B.2).\n3. We conduct experiments in a number of pointwise and distributional conditions, assessing results in terms of divergence from GPT-2, fluency and diversity, with better performance than strong baselines. The distributional experiments show the potential of our approach as a remedy to the current and important problem of bias in pretrained language models, providing a novel direction for addressing it (§3)." }, { "heading": "2 FORMALIZATION", "text": "We denote byX the set of all sequences x of bounded length Lmax, by a the initial pretrained model and by p the desired target model. The probabilities of x according to each model are a(x) and p(x). Our approach consists in expressing our desiderata through constraints on the desired values µ̄i of the expectations (aka moments) µi . = Ex∼p φi(x) of certain predefined real-valued feature functions φi(x), for i ∈ {1, . . . , k}. To illustrate, the previous example can be expressed by using two binary features, φ1(x) = 1 iff x is classified as speaking about sports, φ2(x) = 1 iff x mentions a female character. Then our “moment constraints” take the following form: µ1 = Ex∼p φ1(x) = 1.0, µ2 = Ex∼p φ2(x) = 0.5. The first (pointwise) constraint implies that each individual x has to speak about sports (otherwise µ1 could not reach its maximum value 1.0), the second (distributional) constraint that 50% of the x’s have to mention a female character.4\nLet C be the set of all distributions c over X that satisfy the moment constraints. We then propose to specify p as a distribution respecting the constraints, but also minimizing KL divergence from a:\np . = arg min\nc∈C DKL(c, a), (1)\nEquation (1) is a generalization of the Maximum Entropy Principle of Jaynes (1957), which corresponds to the limit case where a is the uniform u distribution over X , noting that minimizing DKL(c, u) is equivalent to maximizing the entropy of c under the constraints — in other words, trying to find the least “specific” distribution satisfying the constraints." }, { "heading": "2.1 CONSTRAINTS, INFORMATION GEOMETRY, EXPONENTIAL FAMILIES", "text": "To recap our formal approach, we have a finite setX , a distribution a overX s.t. a(x) > 0,∀x ∈ X , and real functions φ1, ..., φk overX . We specify moment constraints µi = µ̄i on distributions c over X , where µi . = Ex∼c φi(x) and the µ̄i’s are given targets; the set of distributions satisfying these constraints is denoted by C. Our Problem is to find a p such that p = arg minc∈C DKL(c, a). We follow Csiszár & Shields (2004) on this question, a problem that is at the core of the field of Information Geometry (Nielsen, 2018; Amari & Nagaoka, 2000). Under the assumption that C 6= ∅, they prove the following result (also see §A.1):\n4This example uses only binary features, but real-valued features can also be used, for instance scores returned by a soft classifier.\nTheorem 1 (A) There exists a unique solution p to the problem above, obtained as p(x) ∝ P (x) where P is in exponential family form:\nP (x) = a(x) 1[x ∈ XC ] e ∑ i λiφi(x). 
(2)\nIn other words p(x) = 1/Z P (x), with Z = ∑ x∈X P (x); P is an unnormalized distribution, i.e. an EBM. Here XC = {x ∈ X| ∃c ∈ C s.t. c(x) > 0} is the “support set” associated with C. The λi’s are real numbers called the natural parameters associated with the moments µi.\n(B) p can be approximated to arbitrary precision by distributions p of the form:\np (x) ∝ a(x) e ∑ i λ ,iφi(x) (3)\nfor appropriate real values of the λ ,i.\n(C) p satisfies the Pythagorean Identity: DKL(c, a) = DKL(c, p) +DKL(p, a),∀c ∈ C (see Fig 1).\nThe advantage of this version of the connection between Generalized Maximum Entropy and Exponential Families is its generality, which distinguishes it from other presentations, and which makes it ideal for unified application to pointwise, distributional or hybrid constraints.\nIn the special case of only pointwise constraints, of the form Ex∼cφi(x) = 1.0, i ∈ [1, k], with φi(x) ∈ {0, 1}, let’s define the predicate b(x) to be 1 iff x satisfies all the constraints. Then, using the (A) form of the result, it is an easy exercise (see §A.2) to prove that XC = {x ∈ X| b(x) = 1} and that one has p(x) ∝ a(x)b(x). In this case P (x) = a(x)b(x) is a very simple EBM that does not involve an exponential part; this is the EBM form that we use for experiments involving only pointwise constraints.\nIn the general case where some constraints are distributional, the determination ofXC is not as direct, and we prefer to use the approximation provided by (B), which permits a generic implementation. With only distributional constraints, an exact solution is typically obtained with finite λ’s. With hybrid constraints, some of the λ’s may tend to infinite (positive or negative) values but thresholding them suffices to get a good approximation.\n2.2 FROM MOMENT CONSTRAINTS TO EBM Algorithm 1 Computing λ Input: a, features φ, imposed moments µ̄ 1: sample a batch x1, . . . , xN from a 2: for each j ∈ [1, N ]: wj(λ)← eλ·φ(xj)\n3: µ̂(λ)← ∑N j=1 wj(λ) φ(xj)∑N\nj=1 wj(λ)\n4: solve by SGD: arg minλ ||µ̄− µ̂(λ)||22 Output: parameter vector λ\nLet’s now consider a set of desired moment constraints µ̄.5 In the general case (i.e., when some constraints are distributional), we use Theorem 1.(B), which says that the desired energy-based model P can be approximated arbitrarily closely in the following form:\nP (x) . = a(x)eλ·φ(x). (4)\nThis EBM defines the desired normalized distribution p(x) .= P (x)Z , where Z . = ∑ x P (x). What is left is to learn appropriate values for the parameter vector λ s.t.:\nEx∼pφ(x) ' µ̄. (5)\nWe address this problem through Algorithm 1. First, we sample a large number N of sequences x1 . . . xj . . . xN from a. On line 2, we define “importance weights” wj(λ) . =\nP (xj) a(xj) =\nexp 〈λ,φ(xj)〉. On line 3, we then use SNIS (Self Normalized Importance Sampling) (Kim & Bengio, 2016; Parshakova et al., 2019a) to estimate µ(λ) .= Ex∼pφ(x). SNIS consists in computing:\nµ̂(λ) = ∑N j=1 wj(λ) φ(xj)∑N\nj=1 wj(λ) , (6)\n5Boldface φ and µ represents vectors of real values (features and moments).\nand it can be shown that µ̂(λ) ' µ(λ), with convergence in the limit (Owen, 2013). Note that the estimate µ̂(λ) is obtained not as a single number, but as a parametric function of the variable λ. We want to find λ such that µ̂(λ) = µ̄, a question that we handle on line 4 by performing an SGD optimization over the objective min ||µ̄− µ̂(λ)||22.6\nAt the end of this process, we obtain an estimated value for the parameter vector λ, and a representation P (x) = a(x) exp 〈λ,φ(x)〉. 
While a(x) is a normalized distribution by construction, the introduction of the second factor loses this normalization property, making P (x) an EBM.7 8" }, { "heading": "2.3 FROM EBM TO AUTOREGRESSIVE POLICY", "text": "Algorithm 2 KL-Adaptive DPG Input: P , initial policy q 1: πθ ← q 2: for each iteration do 3: for each episode do 4: sample x from q(·) 5: θ ← θ+α(θ) P (x)\nq(x) ∇θ log πθ(x)\n6: if DKL(p||πθ) < DKL(p||q) then 7: q ← πθ\nOutput: πθ\nThe EBM representation just obtained for P defines the optimal p = Z−1P unambiguously, a crucial intermediate step in the solution of our problem. From it we can immediately compute ratios of the form p(x)/p(x′) for two sequences x, x′, but without knowing Z, we cannot compute p(x) and, even with such a knowledge, we cannot produce samples from p.\nThis problem is typical of EBMs at large: they provide a rich and flexible mechanism for specifying models, but they leave a gap between representation and exploitation. A range of techniques, from sophisticated MCMC approaches (especially for continuous models in vision) to contrastive learning techniques, have been developed for bridging this gap.\nOne technique that is suitable for our objective here, namely sampling from a sequential EBM that includes an autoregressive component a(x), is the DPG (“Distributional Policy Gradient”) algorithm (Parshakova et al., 2019b).\nThe objective of DPG is to obtain an autoregressive policy πθ that approximates p, where approximation is formalized in terms of making the cross-entropy CE(p, πθ) = − ∑ x p(x) log πθ(x) as small as possible.9 DPG exploits the fact that, for any “proposal” distribution q whose support contains the support of p, we have\n∇θCE(p, πθ) = −∇θEx∼p log πθ(x) = −Ex∼p∇θ log πθ(x) = −Ex∼q p(x)\nq(x) ∇θ log πθ(x)\nwhere the last equality is an instance of importance sampling. Our “KL-adaptive” version of DPG is shown in (Algorithm 2). We start from an input EBM P , along with an initial policy q which is a proxy to p; in our case we take q = a. During an iteration (think minibatch or set of minibatches), we sample a number of sequences from q, do an SGD update of θ (line 5), where P is used instead of p (noting that they only differ by a multiplicative constant), and where α(θ) is a learning rate. The efficiency of the algorithm is related to how close the proposal q is to the target p,10 The algorithm is adaptive in the sense that it modifies q periodically to take advantage of the evolving approximations πθ. On line 6, we we test whether the current πθ is closer\n6µ(λ) can approximate µ̄ arbitrarily closely, and we know from SNIS theory that with increasing N , µ̂(λ) will become arbitrarily close to µ(λ). In our experiments we stop the SGD optimization when ||µ̄− µ̂(λ)||22 becomes smaller than 0.01.\n7The class of Energy-Based Models (EBMs) (LeCun et al., 2006) is much larger than the exponential family models we are considering in this paper. An EBM P (x) is just any unnormalized distribution over an input space X , in other words a mapping P from X to the non-negative reals. The terminology comes from physics, and corresponds to writing P (x) in the form P (x) = e−E(x), E being called the “energy” associated with x.\n8A question was raised by an anonymous reviewer about the viability of adding new constraints incrementally. The answer is yes, more details provided in the Appendix, §A.3.\n9This is equivalent to minimizing DKL(p, πθ) = CE(p, πθ)−H(p). 
10In the limit where q were equal to p, the algorithm would be identical to standard supervised training,\nexcept that samples would be obtained directly from the underlying process p rather than a training set of samples.\nthan q to p in terms of KL-divergence, and if so we update q to πθ on line 7.11 §B.2 provides an ablation study showing the effectiveness of this adaptive step for obtaining faster convergence." }, { "heading": "3 EXPERIMENTS, RESULTS, AND EVALUATION", "text": "In this section we describe our evaluation methodology and perform experiments on pointwise constraints (§3.2) and on distributional and hybrid constraints (§3.3). The Appendix contains a detailed view of evaluation (§H), comparison with extra baselines (§D.2), and an ablation study (§B.2)." }, { "heading": "3.1 EVALUATION METRICS", "text": "The main metrics we report are: (1) Ex∼πθφi(x), assessing the ability of πθ to reach the expectation goal on the i-th constraint, (2) DKL(p||πθ), the forward KL divergence from the optimal distribution (which should be as close to 0 as possible), (3) DKL(πθ||a), the reverse KL divergence from the original GPT-2; for details on the estimation of these metrics see §B.1. Previous work has mostly focused on the diversity of each individual output using Dist-1,2,3 scores (Li et al., 2016a) to measure repetitions within a single generated sequence. However, the shortcomings in terms of sample diversity, of optimization techniques when training generative models for text, has recently been documented in (Caccia et al., 2020). So additionally, we report SelfBLEU-3,4,5 (Zhu et al., 2018) to measure repetitions at a distributional level across the whole set of generated samples, and also provide a token/type frequency analysis (see Fig. 4 and §H.4). Note that KL divergence from the original GPT-2 also implicitly captures sample diversity: a distribution that focuses all its probability mass on a few sequences typically displays high divergence from GPT-2. Implementation details and hyper-parameters are available in the Appendix (§ F)." }, { "heading": "3.2 POINTWISE CONSTRAINTS EXPERIMENTS", "text": "Pointwise constraints are of the form Epφi(x) = 1, with φi a binary feature. Contrarily to distributional constraints, they can be directly associated with a “reward”, namely φi itself. RL-inspired baselines can then be introduced naturally, and this is what we do here.\nSingle-Word constraints: Here we constrain the presence of a specific word w in the generated text i.e. φ(x) = 1 iff w appears in the sequence x. We use 9 single-word constraints of different rarity levels: “US” (original frequency: 7·10−3), “China” (4·10−3), “Canada” (2·10−3), “amazing” (1·10−3), “Paris” (5·10−4), “restaurant” (6·10−4), “amusing” (6·10−5), “Vampire” (9·10−5), “Wikileaks” (8·10−5). Word-list constraints: We use 4 different word lists among those proposed in (Dathathri et al., 2020), covering the following topics: “kitchen”, “fantasy”, “politics”, and “computers”. We set φl(x) = 1 if x contains at least one one word from the word list l. Classifier-based constraints: We use pre-trained classifiers from (Dathathri et al., 2020), which consist of a linear head on top of GPT-2. We select 4 classes and define corresponding pointwise constraints: “very positive”, “positive”, “very negative” and “Clickbait”. See §F for details on constraint computations. Baselines: We compare our method GDC to three baselines: (1) REINFORCE (Williams, 1992b), using the reward φ(x), i.e. 
trying to maximize Eπθφ(x); (2) REINFORCEP(x) : Reinforce again, but now using the reward P (x) based on our energy model P , i.e. maximizing EπθP (x); this baseline starts from the same optimal EBM P representation as GDC but with a standard optimization objective rather than a distributional one; in other words, while GDC tries to get a similar sampling distribution to p, this baseline tries to get sequences of maximal probability p(x). (3) ZIEGLER (Ziegler et al., 2019): an approach relying on the RL Proximal Policy Optimization (PPO) algorithm (Schulman et al., 2017) and which tries to maximize the objective Eπθφ(x)− βDKL(πθ, a), which interpolates the reward φ(x) with a KL-divergence penalty from the pretrained model, but where the goal is not explicitly to satisfy a constraint; for a geometric illustration of the differences with\n11In the original DPG, the superiority test is done on the basis of the log-likelihood on a validation set. Here we are in the more demanding situation where no validation set is available. To directly estimate the KL divergence from p (line 6), we exploit the identity DKL(p‖π) = − logZ + 1/Z Ex∼q(x) P (x)q(x) log P (x) π(x)\n. See §B.1 for derivations and a comparison with using Total Variation Distance (TVD) for assessing divergence.\nGDC see §D.1. §D.2 provides a comparison of GDC with two additional baselines.\nResults: Figure 2 shows the evolution of the metrics over training steps, aggregated across the 9 + 4 + 4 = 17 experiments. We observe the following: the baseline REINFORCE , which does not have any explicit link in its objective to the pretrained GPT-2, converges very early in the training, reaching a maximum value of Eπθφ(x) at the expense of a very large deviation from the original GPT-2. High values ofDKL(πθ|a), are translated into low Dist-1 and very high Self-BLEU-5 indicating degeneration and lack of diversity. REINFORCEP(x) maximizes the energy model P by peaking on a few sequences only; this can yield high values of EπθP (x), at the expense of low sample diversity as demonstrated in the highest values of SELF-BLEU-5 scores among baselines.12\nIn the case of ZIEGLER we can see a positive effect of the interpolation factor β between the reward and the KL penalty in the objective function. In the aggregated experiments reported here, the reward is slightly better than with GDC, but with inferior diversity scores (see also Fig. 4, showing that GDC produces richer vocabulary), and the stability is much worse (a detailed view of each experiment is provided in §H, showing more clearly the instability of this baseline). A complementary evaluation is provided by Figure 3, focusing on the ability of πθ to\nconverge to the optimal distribution p. We see that GDC is superior to all baselines in terms of DKL(p‖πθ) and also much more stable. In summary, in these experiments, we see that with GDC the constraint expectation Eπθφ(x) smoothly increases while πθ maintains the lowest divergence from GPT-2, becomes closest to the optimal p, and has the best diversity scores overall. On the other hand, we also note that at the point where we stop training (30K steps), the average over experiments of Eπθφ(x), while still increasing, does not reach 100%, an issue that we discuss at the end of the paper (§4)." }, { "heading": "3.3 DISTRIBUTIONAL AND HYBRID CONSTRAINTS EXPERIMENTS", "text": "As formalized in §2, GDC permits to define pointwise and distributional constraints as well as any mix between them. 
This unique feature makes it very suitable to remedy biases that the text generation model may have, a problem identified in several previous works (Sheng et al., 2019b).\n12The difference with REINFORCE makes sense if one observes that φ(x) can be maximized on many sequences, while P (x) tries to maximize a(x) · φ(x), which is typically maximized on only one sequence.\nReps φ(x)\nGDC 1 1 “Thank you all for the service this site gives me , ” he said. ... 1 1 This book is incredibly rich , entertaining , and extremely enjoyable... REINFORCE 1 1 Featuring the highest quality performance performance performance... 1 1 This beautiful beautiful quality production quality high quality... 1 1 High quality performance high quality performance product ... REINFORCE P(x) 10k 1 Thank you for supporting the journalism that our community needs! ... ZIEGLER 4418 1 Thank you for supporting the journalism that our community needs! ... 3560 1 Be the first to know. No one covers what is happening in our...\nWe employ GDC to balance gender and profession distributions across biographies generated by a GPT-2 model fine-tuned on Wikipedia Biographies (Lebret et al., 2016) (henceforth GPT-2bio) (§G gives additional details). The bias in GPT-2bio is significant: we calculated that this model generates only around 7% female biographies. It also displays a large imbalance between professions related to “Science” (1.5%), “Art” (10.0%), “Business” (10.9%) and “Sports” (19.5%).\nExperiment 1: Single Distributional Constraint We use the distributional constraint Ex∼pφfemale(x) = 0.5; GDC is able to reduce the bias of GPT-2bio to obtain 35.6% female biographies rather than only 7.4% (see Fig. 2 for this experiment and the next ones). Experiment 2: Multiple Distributional Constraints We then test our framework with several distributional constraints of different values and control directions. We specify four distributional constraints all at once with the goal of increasing the expectations of “science” and “art” to 40% and decreasing those of “sports” and “business” to 10%. GDC is able to increase the expectations of the first two professions respectively from 1.5% to 20.3% and from 10 to 31.6% and to decrease those of “business” and “sports” respectively from 10.9% to 10.2% and from 19.5% to 11.9%, reaching expectations close to the desired ones for all features using a single training method. Experiments 3,4,5,6: Hybrid Constraints Here we want to de-bias the model as in the previous case but we single out biographies of scientists, artists, etc. Formally, our requirements become Ex∼pφprofession(x) = 1.0, a pointwise constraint, and Ex∼pφfemale(x) = 0.5, a distributional constraint. In those 4 hybrid experiments we can clearly see that GDC can address both pointwise and distributional constraints increasing each simultaneously with just the right amount to reach the desired expectations. Appendix §G further elaborates Fig. 2 (convergence curves)." }, { "heading": "4 DISCUSSION", "text": "Our approach to controlled text generation is distinguished by its breadth — the first one to handle distributional along with pointwise constraints, with applications to the important problem of Bias in pretrained LMs — and by the transparency of the supporting formalism. It decouples the training objective along two different dimensions. The first consists in solving the initial constraints specification, and leads through a direct algorithm to an optimal solution in EBM format. 
The second, where the real computational difficulty lies, consists in approximating this EBM with an autoregressive policy for use at inference time. Sampling from an EBM is an important, hard, and well-identified challenge in the literature. Our approach there consists in proposing a KL-adaptive version of the DPG algorithm, which exploits ascertained improvements of the trained policy to speed up convergence. This is an effective method for rare events, as we show in an ablation study (§B.2). In the case of pointwise constraints, where comparisons with baselines can be done, our experiments show the\nmethod’s superiority in satisfying the constraints while avoiding degeneration. Reaching close to 100% samples meeting the constraints, can sometimes be obtained in these baselines, but only at a severe cost in terms of quality and sample diversity. Of course, if we do not care about such aspects, obtaining 100% constraint satisfaction is trivial: just generate one sentence satisfying the pointwise constraint!\nOur method does not suffer from degeneration, but our end policies still generate a number of samples not satisfying the constraints. A possibility, left for future work, might consist in filling the moderate residual gap with MCMC techniques, which would be guaranteed to reach our optimal p in the limit. We do not go this route here, but conduct an experiment (see §C) to better understand the nature of the problem. In the simple case of a single-word constraint (x includes “amazing”), we sample directly 1M samples from GPT-2 and keep the roughly 5K samples containing amazing (a variant of rejection sampling, taking two processing days). We then do a standard supervised fine-tuning of GPT-2 with these samples, stopping training when the CE validation loss starts to increase, and observe that this model exhibits a worse constraint satisfaction rate than ours. This experiment does not mean that a much larger fine-tuning dataset, obtained in this slow, non-adaptive way, would not reach better statistics, but it raises doubts about the ability of the GPT-2 architecture to fine-tune over such a non-standard constraint as containing a given word somewhere in its output.\nOverall, we believe that the proposed decomposition into two sub-problems is a methodological advantage compared to most other works, which directly aim at training a policy with the goal of improving certain evaluation metrics, but without clearly defining what qualifies as an optimal solution. The computational challenge of fully bridging the gap between the optimal EBM and an efficient sampling engine remains, and we hope that the formalism we propose, along with initial applications and experimental validations, will motivate further research along these lines." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank the anonymous reviewers for their insightful feedback that helped enhancing the final version of this manuscript. We also thank Germán Kruszewski, Laurent Besacier, Matthias Gallé and Christopher Dance for providing technical feedback on this work and proof-reading the manuscript, as well as Tetiana Parshakova and Jean-Marc Andreoli for their work on the original versions of the SNIS and DPG algorithms." }, { "heading": "Appendix", "text": "" }, { "heading": "A DETAILS ON FORMALIZATION (§2)", "text": "" }, { "heading": "A.1 COMMENTS ON THEOREM 1", "text": "Our statement of Theorem 1 is actually a reformulation of two results in section 3 of Csiszár & Shields (2004). 
Our property (A) is a simple notational transposition of their Remark 3.1 (p. 444). Property (C) is the Pythagorean Identity in their Theorem 3.2 (p. 442). Property (B) reformulates the last part of the same Theorem “... and in general L ∩ cl(EQ) = {P ∗}” in terms of a limit of a sequence of distributions.\nNote: Csiszár & Shields (2004) assume a finite X here, but generalizations to infinite (countable and/or continuous) X spaces are possible, see (Csiszar, 1975)." }, { "heading": "A.2 THE CASE OF POINTWISE CONSTRAINTS IN §2.2", "text": "In the case of purely pointwise constraints, if b(x) = 1, then the distribution c = δx is in C, hence x ∈ XC . Conversely, if x ∈ XC then there is some c ∈ C such that c(x) > 0, implying that b(x) = 1. Hence XC = {x ∈ X| b(x) = 1}. Thus, in equation (2), P (x) = a(x)b(x) exp ∑ i λiφi(x); but for b(x) 6= 0, φi(x) = 1, so the exponential factor is a constant, which proves that P ′(x) = a(x)b(x) is proportional to P (x), and therefore p(x) ∝ P ′(x).\nA.3 INCREMENTALLY ADDING NEW CONSTRAINTS\nAn interesting question13 is whether the process explained in §2 can be made incremental: if one has already computed a p and a πθ relative to a certain number of constraints, can one add a new constraint without restarting the whole process from scratch? The answer is yes, and here we provide some formal elements to understand why." }, { "heading": "A.3.1 TRANSITIVITY PROPERTY OF GENERALIZED MAXENT", "text": "According to (Csiszár, 1996), the Generalized MaxEnt of sections §2.1 and §2.2 has the “Transitivity property”. In our notation, this says that if we have k′ > k constraints, with C the manifold of distributions respecting only the first k constraints, C ′ the manifold respecting all k′ constraints (hence C ′ ⊂ C), then the maxent projection p′ of a onto C ′ can be obtained by first projecting a onto C, obtaining p, and then projecting p onto C ′, obtaining p′. In particular, the k lambdas associated with p can be directly reused as the first lambdas of the k′ lambda’s associated with p′.\n(Csiszár, 1996) gives only a minimal proof sketch, but it is instructive to provide the details, as we do now, because the proof is a neat illustration of the power of information geometry for problems of the kind we consider. The proof, illustrated in Figure 5, is very similar to one of the proofs for the transitivity of the orthogonal projection in Euclidean geometry.\n13raised by an anonymous reviewer of our ICLR submission.\nProof. In the Figure, p is the information projection (Csiszar’s terminology for the Generalized Maxent) of a onto C, as before. Let’s define r to be the projection of p onto C ′. We need to prove that r is identical to the projection p′ of a onto C ′. We consider an arbitrary distribution c′ in C ′, and apply the Pythagorean Identity of Theorem 1 three times. Because p is the projection of a onto C, we have DKL(r, a) = DKL(r, p) +DKL(p, a) and also DKL(c′, a) = DKL(c′, p) +DKL(p, a). Because r is the projection of p onto C ′, we have DKL(c′, p) = DKL(c′, r) + DKL(r, p), hence DKL(c\n′, p) ≥ DKL(r, p). Putting these three facts together, we find that DKL(c′, a) ≥ DKL(r, a). As c′ is an arbitrary point of C ′, this proves that r is the projection of a onto C ′, in other words, r = p′." }, { "heading": "A.3.2 TRANSITIVITY AND AUTOREGRESSIVE POLICY", "text": "Due to the Transitivity property, when calculating the EBM representation, it is possible to start from p without re-fitting p′ from scratch. 
However the move from EBM to autoregressive policy of §2.3 remains to be discussed. The question now is the following. We have already obtained a policy πθ approximating p, and we are interested in obtaining a policy πθ′ approximating p′: is it advantageous to start Algorithm 1 with q = πθ, rather than starting “from scratch” and taking q = a ? Intuition says “yes, very probably”, because πθ is by construction an approximation to p, which is closer than a to p′ (formally, DKL(p′, p) ≤ DKL(p′, a), see Fig. 5, where p′ = r). Due to the approximation, we only have DKL(p′, πθ) ' DKL(p′, p) , so a formal proof that πθ is superior to a as a starting point is impossible, but we expect that further experiments would confirm the improvement." }, { "heading": "B MORE ON ADAPTIVITY", "text": "" }, { "heading": "B.1 DETAILS ON KL-ADAPTIVITY", "text": "In this section we provide details on the comparison step in our KL-Adaptive version of the DPG Algorithm, introduced in section 2. We want to assess whether the current πθ is closer than q to p, and if the test is positive, we set πθ as the new proposal, hoping to make the proposal more effective for importance sampling.\nThere are several ways to compute similarity between distributions, two of the most popular ones being on the one hand KL-divergence and on the other hand Total Variation Distance (TVD) — where TVD(p||p′) .= 1/2 ∑ x |p(x) − p′(x)| — which is often used in probability and MCMC theory.14 Calculation of these metrics relative to p is not straightforward since the distribution p ∝ P is only implicitly represented by the unnormalized EBM P , and we cannot easily obtain direct samples from p. In this section we describe a workaround.\n14Both metrics are equal to 0 only if the distributions are equal everywhere (in the case of discrete distributions, which are our focus here, otherwise almost everywhere). To our knowledge, there is no obvious best metrics to use when assessing a proposal in importance sampling, leading us to conduct an ablation experiments with both metrics (Appendix 2)\nGiven P and a proposal distribution q that we can sample from, using importance sampling (Owen, 2013), one can calculate the partition function Z as follows:\nZ = ∑ x P (x) = ∑ x q(x) P (x)/q(x)\n= Ex∼q(x) P (x)/q(x) (7)\nWe can then compute DKL(p||π) as:\nDKL(p||π) = ∑ x p(x) log p(x) π(x) = ∑ x p(x) log P (x) Zπ(x)\n= − logZ + ∑ x p(x) log P (x) π(x) = − logZ + ∑ x q(x) p(x) q(x) log P (x) π(x)\n= − logZ + 1/Z Ex∼q(x) P (x)\nq(x) log\nP (x) π(x) (8)\nSimilarly, for TVD(p||π):\nTVD(p||π) = 1/2 ∑ x |p(x)− π(x)|\n= 1/2 ∑ x q(x) ∣∣∣∣π(x)q(x) − p(x)q(x) ∣∣∣∣ = 1/2∑ x q(x) ∣∣∣∣π(x)q(x) − P (x)Z q(x) ∣∣∣∣\n= 1/2 Ex∼q(x) ∣∣∣∣π(x)q(x) − P (x)Z q(x) ∣∣∣∣ (9)\nIn §B.2 we run an ablation study to compare the use of DKL on line 6 of Algorithm 2) or its replacement by TVD.\nFor both metrics, we need an estimate of Z. The precision of this estimate depends on the sample size and the quality of the proposal distribution q. We calculate a moving average estimate ZMA of Z is used inside the estimations of DKL(p‖πθ) and DKL(p‖q) (Algorithm 3, lines 7 and 8). ZMA is updated at each iteration of the training, and the moving average estimate is valid due to the fact that Ẑi, based on K samples, is an unbiased estimate of Z, and therefore so is ZMA. 
In this way, the estimate benefits from all the samples being produced during the course of the training; and also because the proposal distribution q evolves and gets closer to the target distribution p, the quality of the estimates of both DKL(p||πθ) and ZMA through importance sampling increases (equation 7). A similar approach is taken in the case of TVD (not shown).\nAlgorithm 3 KL-Adaptive DPG (detailed) Input: P , initial policy q\n1: πθ ← q\n2: ZMA ← 0 . Initialize Moving Average estimate of Z\n3: for each iteration i do\n4: for each step k ∈ [1,K] do\n5: sample xk from q(·)\n6: θ ← θ + α(θ) P (xk) q(xk) ∇θ log πθ(xk)\n7: Ẑi ← K−1 ∑ k P (xk)/q(xk) . Estimate on the K samples\n8: ZMA ← i∗ZMA+Ẑii+1 . Update moving average estimate of Z 9: D̂KL(p||πθ)← − logZMA + (K ZMA)−1 ∑ k P (xk) q(xk) log P (xk) πθ(xk) . Estimate on the K samples\n10: D̂KL(p||q)← − logZMA + (K ZMA)−1 ∑ k P (xk) q(xk) log P (xk) q(xk)\n. Estimate on the K samples\n11: if D̂KL(p||πθ) < D̂KL(p||q) then" }, { "heading": "12: q ← πθ", "text": "Output: πθ" }, { "heading": "B.2 ABLATION ON ADAPTIVITY", "text": "Here we run an ablation experiment on the adaptivity step of KL-Adaptive DPG (§2). We compare three variants of our proposed method: DPG-KLD, which uses KL divergence from the target distribution p to measure the quality of the trained policy πθ i.e. if DKL(p‖πθ) < DKL(p‖q) we update the proposal distribution q ← πθ. DPG-TVD is similar but with the total variation distance instead (TVD). In non-Adaptive the initial proposal q is kept fixed during training.\nWe run 3 point-wise experiments with single word constraints of three rarity levels in the original GPT-2 distribution, namely: “Vampire” (1/104),“Paris” (1/103),“US” (1/102) .For each we use 3 different seeds and train for 10k gradient updates.\nFigure 6 shows training trends of the three ablations. We find a significant difference in convergence speed in favour of the adaptive methods. The efficiency gap between Adaptive and non-Adaptive methods becomes larger the more rare the constraints are. i.e. the proposal distribution q starting point is very far from the target distribution p, as the efficiency of the DPG algorithm is related to how close the proposal q is to the target p. When q is continuously adapted, the proposal distribution becomes closer to p and the training becomes efficient regardless of how far the initial proposal distribution is from p. We observe similar convergence rates for DPG-KLD and DPG-TVD." }, { "heading": "C CAN STANDARD SUPERVISION FULLY SATISFY THE CONSTRAINTS?", "text": "In this section, we try to better understand potential difficulties of autoregressive models to fully satisfy constraints such as the ones illustrated in our pointwise experiments.\nTo this end, we consider whether a standard fully supervised fine-tuning of GPT-2 can achieve that objective while keeping a minimal distance from the initial model. To answer the question, we carry out an experiment where we fine-tune GPT-2 on a collection of samples satisfying the desired constraint. Our goal here is to investigate whether GPT-2 can fully satisfy the constraint without overfitting the fine-tuning data, since overfitting (memorizing) the training data basically means high KL-divergence from the initial model.\nFor this experiment, we choose a single-word constraint with the word “amazing”. 
We start by sampling 1M sequences from GPT-2 small — a process that took us roughly 48 hours — and keeping only the ones containing “amazing” (this filtration process can be seen as a variant of rejection sampling (Casella et al., 2004)). We end up with a total of 4600 samples out of which we use 500 for validation and the rest for fine-tuning.\nFigure 7 shows evolution of both validation loss and constraint satisfaction Eφ(x) on samples generated from the model during fine-tuning. Interestingly, the lowest validation loss corresponds to only Eφ(x) ≈ 0.56. Higher values of Eφ(x) correspond to higher validation loss i.e. to overfitting. This result suggests a relationship between training a policy reaching 100% and overfitting the training data. This hints at the difficulty of strictly imposing certain types of constraints on pre-trained language models without moving far away from the initial model.15\n15Note how very difficult the job would be in the extreme case of a constraint was based on a hash-based predicate filtering on average one sentence out of two." }, { "heading": "D MORE COMPARISONS", "text": "D.1 ILLUSTRATION COMPARING GDC, REINFORCE, AND ZIEGLER\nThe figure below illustrates the difference between GDC, the RL-based REINFORCE and ZIEGLER baselines for a pointwise constraint. The main points to note are: (1) REINFORCE is trying to find a distribution pR maximizing r(x) (meaning that pR lies on the C manifold), but this pR is free to land anywhere on this manifold, and (2) ZIEGLER is trying to find a distribution pZ that interpolates (with a weight β) between a high average r(x) and the KL divergence from a; unless β = 0, in which case we are back to REINFORCE, pZ does not satisfy the constraint and falls outside of the manifold." }, { "heading": "D.2 COMPARISON AGAINST FURTHER BASELINES", "text": "Here we compare GDC to other baselines, namely Plug and Play (PPLM) (Dathathri et al., 2020) and CTRL (Keskar et al., 2019) for sentiment control. PPLM works by updating the hidden states of GPT-2 for a given prefix in order to derive the generation towards the desired attributes. Unlike GDC, PPLM needs a prefix to perform its hidden-state updates. Thus, our approach is more general in the sense that any prefix can be used on the trained model at test time, rather than requiring prefix-specifc fine-tuning. CTRL is a large-scale language model (1.63 billion parameters and ~14x larger than GPT-2 small) based on control codes for steering text style and content. For the purpose of generating positive/negative sentiments using CTRL, we use its positive/negative reviews control codes as done in (Dathathri et al., 2020). The control codes used are “Reviews Rating: 5.0” and “Reviews Rating: 1.0” for positive and negative sentiment control, respectively. We use five different prefixes (or prompts) and generate 100 continuations given each prefix obtaining a total of 500 samples. It is worth noting that GDC is trained in the same way as described in the main text, i.e. without any knowledge of prefixes, and that we only use prefixes at test time with the saved checkpoint. The five prefixes used come from (Dathathri et al., 2020): “The chicken ”, “The potato ”, “The lake ”, “The pizza ”, and “The horse ”.\nWe use the same sampling parameters across all approaches by setting the temperature T = 1.0, using top-k sampling with k = 10, and removing the repetition penalty used in CTRL (Keskar et al., 2019). 
However, we notice that CTRL does not work well with higher T values (apparent in the\nsamples in Table 3), therefore we report also CTRL evaluation with lower temperature T = 0.5 and a repetition penalty λrep = 1.2 as reported in their paper.\nAs metrics, we use sentiment class expectation Eφ(x), the perplexity according to an external GPT2 small architecture as in (Li et al., 2018), and the diversity metrics introduced in section §3.1. We average all these metrics across the 500 continuations generated. Table 3 shows the results for positive and negative sentiment control experiments. As shown, GDC is able to achieve better positive/negative sentiment with lower perplexity than both PPLM and CTRL. As for diversity, GDC achieves comparable diversity to the other two approaches and even outperforms PPLM on the Distn metrics in the positive sentiment task.\nTable 4 shows sample continuations from all three approaches. Clearly, PPLM and CTRL exhibit some form of degeneration and repetition in many of the continuations (highlighted in light red), which is reflected in their very high perplexity score compared to GDC, which produces much more natural text with minimum repetitions without requiring a repetition penalty as CTRL.\nIt is also worth noting here that CTRL (and other control code methods) is very much limited in terms of its applications. For instance, to generate positive/negative sentiment text as we do in this experiment, we are required to use the ‘‘Reviews Rating...’’ control code, using control codes outside of those CTRL was fine-tuned on leads to very bad generations. This, in turn, restricts the generated text to positive/negative reviews although we may desire different types of positive/negative text (e.g. news reports). We can observe this effect16 in some of the samples in Table 4 such as “The chicken we just ordered from Amazon.com...” and “The pizza works no matter what settings you use it on.\n16With lower temperatures, this behaviour becomes even worse and CTRL mostly generates reviews.\nRepetitions are highlighted in light red. As shown, PPLM and CTRL produce more repetitions compared to GDC." }, { "heading": "E RELATED WORK EXTENDED", "text": "Optimizing global rewards for Text Generation There is a large reinforcement learning inspired literature about steering an autoregressive sequential model towards optimizing some global reward over the generated text. This includes REINFORCE (Williams, 1992a) for Machine translation (MT) Ranzato et al. (2016), actor critic for Abstractive Summarization (Paulus et al., 2018), Imageto-Text Liu et al. (2016b), Dialogue Generation Li et al. (2016b), and Video Captioning (Pasunuru & Bansal, 2017). With respect to rewards, some approaches for Machine Translation and Summarization (Ranzato et al., 2016; Bahdanau et al., 2017) directly optimize end task rewards such as BLEU and ROUGE at training time to compensate for the mismatch between the perplexity-based training of the initial model and the evaluation metrics used at test time. Some others use heuristic rewards as in (Li et al., 2016b; Tambwekar et al., 2019), in order to improve certain a priori desirable features of generated stories or dialogues. Other non-RL techniques for approximating the global sequence constraints φ(x) by a biased estimator φ(xt|x:t−1). These techniques usually referred to as weighted decoding Holtzman et al. (2018); See et al. 
(2019) this however still requires a heavy search procedure and this biased estimation of sequences that satisfy the global constraint compromises fluency and coherence. Continuous approximation using the Gumbel Softmax was developed for the training of Variational Autoencoders but several works have implemented it for natural language generation Shetty et al. (2017); Chu & Liu (2019); Kusner & Hernández-Lobato (2016).\nCompeting Degeneration in Controlled Text Generation When using such approaches, one needs to take care of not forgetting too much of the original LM policy (“degeneration”): Liu et al. (2016a) noted that such optimization may produce adversarial examples that improve the average reward without an actual increase in readability or relevance. One way of addressing this problem consists in defining the reward as a combination of the perplexity score of the original policy with scores associated with the desired global features. Wu et al. (2016); Paulus et al. (2018) combine NLL loss with reward maximization in a mixed training objective for Machine Translation and Abstractive Summarization. Yang et al. (2018) use a set of Language Models pretrained on the target domain as a control signal for text style transfer. As a proxy to perplexity, Holtzman et al. (2018) design hand-crafted rewards using a set of discriminators to ensure the quality of generated text in open-ended text generation. Liu et al. (2016a), however, show that defining a combination reward accounting for text fluency is highly non-trivial and the results of directly optimizing it cannot be fully trusted.\nKL Divergence penalty Another approach relied on penalizing too large deviations of the trained policy relative to the original policy. Jaques et al. (2017; 2019) propose a conservative fine-tuning approach with a KL penalty between the trained policy and the original auto-regressive model. This penalty acts as a regularizer to the optimization process that prevents the trained policy from deviating too much from the original policy. Ziegler et al. (2019) follow a similar approach for fine tuning a language model based on human preferences, in this case a proximal policy algorithm (Schulman et al., 2017) is used to maximize the combined reward. PPLM (Dathathri et al., 2020), this time in a plug-and-play rather than a fine-tuning context, also use KL divergence to penalize deviations from the initial policy.\nPointwise vs. Distributional View Most of the existing works on Controlled Generation have taken what we have called a pointwise view: focusing on the quality of each individual output, as opposed to distributional properties of the collection of all outputs. And in fact, the standard objective of RL is to optimize a pointwise reward. 
Even when policy-gradient methods do consider distributions over outputs, they only do as a tool towards producing maximal rewards; and in fact, it is a side effect of the limited capacity of the policy networks that such distributions do not peak on a single output, as would be the optimal outcome in cases of real-valued rewards with no ties.17 By contrast to this usual optimization “intent”, our own intent here is explicitly distributional, and the policies we are looking for are not simply tools towards maximizing scores, but actual objectives in their own right.\n17In which cases the distribution q maximizing Ex∼qR(x) would be q = δx∗ for x∗ = arg maxxR(x).\nSuch a change of perspective might be argued against in the case of conditional seq2seq problems, such as Machine Translation, where focusing on a single good output for a given input makes sense, but is clearly in-adapted when focusing on language models where sample diversity is a requirement.\nEnergy Based Models for Text Energy-Based Models (EBMs) (Hinton, 2002; LeCun et al., 2006; Ranzato et al., 2007) are learning frameworks that attracted a lot of attention several decades ago.18 There has been a recent surge of interest in these types of models across a variety of fields. Some early NLP-related EBM research is concerned with neural-based sequence labelling problems (e.g. tagging) exploiting the global sequence (Andor et al., 2016; Belanger & McCallum, 2016). Some current applications to text generation include Parshakova et al. (2019a) and Deng et al. (2020), who augment a standard autoregressive LM with an additional global factor in order to get a lower perplexity on the training data. Tu et al. (2020) propose an energy-based method to perform inference networks from pretrained Non-Autoregressive Machine Translation models. A recent survey of EBMs for text is provided in Bakhtin et al. (2020).\n18The early work on ”Whole sentence exponential models” by (Rosenfeld et al., 2001) — which only came to our attention when preparing the final version of this paper — can be considered as a form of EBM over texts. While it does not utilize neural networks, it does exploit, as we do, the exponential family in order to provide a global form of control over texts." }, { "heading": "F HYPERPARAMETERS AND TRAINING DETAILS", "text": "We implement GDC and all baselines using the PyTorch framework (Paszke et al., 2019). For all experiments we start from a pretrained GPT-2 small (117M parameters) obtained from the HuggingFace library (Wolf et al., 2019) and fine-tune for 3K gradient-update steps. Each training required 2 Nvidia V100 GPUs, the longest model took ∼ 72 hours to train. A list of the hyperparameters used for GDC and baselines is given in table 5. K refers to the number of gradient steps per iteration in Algorithm 2.\nN refers to the number of samples required and µtolerance to the minimum tolerated error ||µ̄ − µ̂(λ)||22 while optimizing λ, and λlearning is the SGD step size for updating λ in Algorithm 1. During training of the policy πθ, we perform periodic evaluation as follows: every 10 minibatch gradient updates, we sample 2048 sequences of 40 tokens long, using nucleus sampling with topp = 0.9 (Holtzman et al., 2020) and estimate diversity metrics on these samples. 
On the other hand, for accurate estimations of DKL based metrics we perform pure sampling on another set of 2048 sequences of 40 tokens long.\nFor word-lists in the pointwise experiments in section 3.2, we used the 4 word lists from the Plug and Play (Dathathri et al., 2020) repository19. As for the sentiment and clickbait classifiers, we used their pre-trained classifier heads over GPT-2 medium20.\nFor distributional and hybrid experiments, we fine-tune GPT-2 small (117M params) to produce biographies on a dataset of 700K Wikipedia biographies (Lebret et al., 2016) which we refer to as GPT-2bio. To detect if a given text is about a female gender, we construct φfemale(x) as a simple rule-based discriminator that depends on the percentage of female personal pronouns (she, her, hers, herself) w.r.t. all mentioned pronouns. We define four types of professions “Art”, “Science”, “Business and Politics”, and “Sports”. To detect them, we define a wordlist for each type as shown in table 6.\n19https://github.com/uber-research/PPLM/tree/master/paper code/wordlists 20https://github.com/uber-research/PPLM/tree/master/paper code/discrim models" }, { "heading": "G DISTRIBUTIONAL AND HYBRID CONTROL EXPERIMENTS FOR DEBIASING LANGUAGE MODELS", "text": "Large pretrained Language Models are often trained on uncurated data from the internet, where several demographics are severely underrepresented. One of those demographics is women, whose biographies make up only 18.58% of English Wikipedia’s biographies (Graells-Garrido et al., 2015). It is expected that such bias is transferred if not amplified by Language Models. Previous work has suggested associations of certain demographics with certain professions, sentiments and stereotypes (Sheng et al., 2019b; Brown et al., 2020b; Nadeem et al., 2020). This shows thaat Bias in LMs also shows up in different forms than just under-representation, and the task of debiasing LMs could require more a complex control method. GPT-2bio demonstrates a large initial bias: over a large sample of size 20480 examples using top-p sampling (p = 0.9), it generates only around 7% female biographies. and a large imbalance between profession types “Science” (1%), “Art” (10%), “Business&Politics” (10%) and “Sports” (20%).\nIn this set of experiments, we demonstrate the potential of GDC as flexible general framework that can control pretrained Language Models to impose pointwise, distributional constraints, or even a mix between them (hybrid constraints). We design a set of 6 experiments whose descriptions and results are displayed in the figures below. Generation examples are provided in Table 7." }, { "heading": "H EXTRA DETAILS ON POINTWISE EXPERIMENTS", "text": "H.1 APPROXIMATING THE DESIRED p DISTRIBUTION" }, { "heading": "H.3 POINTWISE CONSTRAINTS", "text": "" }, { "heading": "H.2 MORE DETAILS ON POINT-WISE CONSTRAINTS EXPERIMENTS", "text": "" }, { "heading": "H.4 TOKEN FREQUENCY ANALYSIS", "text": "To analyse in depth the effect of deviating much from the original GPT-2, for policies obtained from our method and each baseline, we obtain a large sample and filter to 4000 sequences that satisfy the imposed pointwise constraints for each of the 17 pointwise experiments explained in §3. 
Figures 35, 36 and 37 plot a token frequency analysis for each of the training methods.\nThe vanilla policy gradient baselines REINFORCE suffer from very low diversity of generations; in the examples shown in section H.5 we note strong degeneration, in which all generations are composed of a few repeated tokens.\nREINFORCEP(x) suffers from a token diversity issue. As noticed and confirmed by generated examples shown section H.5, it often concentrates all the sequence probability mass on a single sequence which is often fluent and satisfies the constraint; however this leads to an extreme loss of sample diversity in almost all experiments. This shows the usefulness of our proposed analysis — in addition to the self-BLEU metrics — for distinguishing diversity at the sequence level or at the distribution level. Similarly, ZIEGLER (Ziegler et al., 2019) often suffers from the same lack of sample diversity (5 out of the 17 experiments); GDC obtains the highest diversity amongst all baselines, as demonstrated by the long tail in the figures below. It is important to note here that low sample diversity is also captured by the KL deviation from the original GPT-2 model i.e. DKL(πθ‖a); GDC identifies the target distribution as the one which minimally deviates from the original policy while satisfying the constraints (p = arg minq∈C DKL(q, a)) is thus expected to preserve the high sample diversity of the original GPT-2.\nH.5 GENERATION EXAMPLES\noccurrence probability 1/104) highlighted in green. Tokens are highlighted with yellow with different intensities to indicate their overall frequencies in the generated corpus. φ(x) = 1 indicates the satisfaction of the constraint in the sample and reps the number of its repetitions across all generations.\noccurrence probability 1/104) highlighted in green. Tokens are highlighted with yellow with different intensities to indicate their overall frequencies in the generated corpus. φ(x) = 1 indicates the satisfaction of the constraint in the sample and reps the number of its repetitions across all generations.\noccurrence probability 1/104) highlighted in green. Tokens are highlighted with yellow with different intensities to indicate their overall frequencies in the generated corpus. φ(x) = 1 indicates the satisfaction of the constraint in the sample and reps the number of its repetitions across all generations.\noccurrence probability 1/103) highlighted in green. Tokens are highlighted with yellow with different intensities to indicate their overall frequencies in the generated corpus. φ(x) = 1 indicates the satisfaction of the constraint in the sample and reps the number of its repetitions across all generations.\n(with occurrence probability 1/103) highlighted in green. Tokens are highlighted with yellow with different intensities to indicate their overall frequencies in the generated corpus. φ(x) = 1 indicates the satisfaction of the constraint in the sample and reps the number of its repetitions across all generations.\noccurrence probability 1/103) highlighted in green. Tokens are highlighted with yellow with different intensities to indicate their overall frequencies in the generated corpus. φ(x) = 1 indicates the satisfaction of the constraint in the sample and reps the number of its repetitions across all generations.\noccurrence probability 1/103) highlighted in green. Tokens are highlighted with yellow with different intensities to indicate their overall frequencies in the generated corpus. 
H.5 GENERATION EXAMPLES\n[Tables of generation examples omitted. Shared caption: rare tokens (occurrence probability 1/10^4 to 1/10^2, depending on the experiment) are highlighted in green; tokens are highlighted in yellow with different intensities to indicate their overall frequencies in the generated corpus; φ(x) = 1 indicates the satisfaction of the constraint in the sample, and reps is the number of its repetitions across all generations.]" } ]
2021
null
SP:0961e5b8ac98e0d66b599c7b91bd636a75d07b35
[ "Problem: The paper introduces the problem of few-shot transfer when there is an extreme difference between the base task and the target task. The usual few-shot learning setup considers a representation that is trained on a large amount of labeled data. This base representation is then fine-tuned for the target task (that has a few examples, say 1 or 5 labeled examples per class). This strategy works well when the data distribution of the base and target task is similar. However, few-shot learners fail when the data distribution for the two domains are different (e.g., imagenet and crop-diseases) as shown by Guo et al., 2020. " ]
Most few-shot learning techniques are pre-trained on a large, labeled “base dataset”. In problem domains where such large labeled datasets are not available for pre-training (e.g., X-ray, satellite images), one must resort to pre-training in a different “source” problem domain (e.g., ImageNet), which can be very different from the desired target task. Traditional few-shot and transfer learning techniques fail in the presence of such extreme differences between the source and target tasks. In this paper, we present a simple and effective solution to tackle this extreme domain gap: self-training a source domain representation on unlabeled data from the target domain. We show that this improves one-shot performance on the target domain by 2.9 points on average on the challenging BSCD-FSL benchmark consisting of datasets from multiple domains. Our code is available at https://github.com/cpphoo/STARTUP.
[ { "affiliations": [], "name": "Cheng Perng Phoo" }, { "affiliations": [], "name": "Bharath Hariharan" } ]
[ { "authors": [ "Lars Buitinck", "Gilles Louppe", "Mathieu Blondel", "Fabian Pedregosa", "Andreas Mueller", "Olivier Grisel", "Vlad Niculae", "Peter Prettenhofer", "Alexandre Gramfort", "Jaques Grobler", "Robert Layton", "Jake VanderPlas", "Arnaud Joly", "Brian Holt", "Gaël Varoquaux" ], "title": "API design for machine learning software: experiences from the scikit-learn project", "venue": "In ECML PKDD Workshop: Languages for Data Mining and Machine Learning,", "year": 2013 }, { "authors": [ "Chaoqi Chen", "Weiping Xie", "Wenbing Huang", "Yu Rong", "Xinghao Ding", "Yue Huang", "Tingyang Xu", "Junzhou Huang" ], "title": "Progressive feature alignment for unsupervised domain adaptation", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "Proceedings of International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Wei-Yu Chen", "Yen-Cheng Liu", "Zsolt Kira", "Yu-Chiang Frank Wang", "Jia-Bin Huang" ], "title": "A closer look at few-shot classification", "venue": "Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Zitian Chen", "Yanwei Fu", "Yu-Xiong Wang", "Lin Ma", "Wei Liu", "Martial Hebert" ], "title": "Image deformation meta-networks for one-shot learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019c", "year": 2019 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Guneet S Dhillon", "Pratik Chaudhari", "Avinash Ravichandran", "Stefano Soatto" ], "title": "A baseline for few-shot image classification", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Chelsea Finn", "Kelvin Xu", "Sergey Levine" ], "title": "Probabilistic model-agnostic meta-learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Spyros Gidaris", "Nikos Komodakis" ], "title": "Dynamic few-shot visual learning without forgetting", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Spyros Gidaris", "Nikos Komodakis" ], "title": "Generating classification weights with gnn denoising autoencoders for few-shot learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Spyros Gidaris", "Andrei Bursuc", "Nikos Komodakis", "Patrick Pérez", "Matthieu Cord" ], "title": "Boosting fewshot visual learning with self-supervision", "venue": "In 
Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Yunhui Guo", "Noel CF Codella", "Leonid Karlinsky", "John R Smith", "Tajana Rosing", "Rogerio Feris" ], "title": "A new benchmark for evaluation of cross-domain few-shot learning", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Bharath Hariharan", "Ross Girshick" ], "title": "Low-shot visual recognition by shrinking and hallucinating features", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Judy Hoffman", "Eric Tzeng", "Taesung Park", "Jun-Yan Zhu", "Phillip Isola", "Kate Saenko", "Alexei Efros", "Trevor Darrell" ], "title": "Cycada: Cycle-consistent adversarial domain adaptation", "venue": "In International conference on machine learning,", "year": 2018 }, { "authors": [ "Ruibing Hou", "Hong Chang", "MA Bingpeng", "Shiguang Shan", "Xilin Chen" ], "title": "Cross attention network for few-shot classification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Alexander Kolesnikov", "Lucas Beyer", "Xiaohua Zhai", "Joan Puigcerver", "Jessica Yung", "Sylvain Gelly", "Neil Houlsby" ], "title": "Big transfer (bit): General visual representation learning", "venue": null, "year": 2020 }, { "authors": [ "Issam H Laradji", "Reza Babanezhad" ], "title": "M-adda: Unsupervised domain adaptation with deep metric learning", "venue": "In Domain Adaptation for Visual Understanding,", "year": 2020 }, { "authors": [ "Kwonjoon Lee", "Subhransu Maji", "Avinash Ravichandran", "Stefano Soatto" ], "title": "Meta-learning with differentiable convex optimization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Xinzhe Li", "Qianru Sun", "Yaoyao Liu", "Qin Zhou", "Shibao Zheng", "Tat-Seng Chua", "Bernt Schiele" ], "title": "Learning to self-train for semi-supervised few-shot classification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yanbin Liu", "Juho Lee", "Minseop Park", "Saehoon Kim", "Eunho Yang", "Sung Ju Hwang", "Yi Yang" ], "title": "Learning to propagate labels: Transductive propagation network for few-shot learning", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Mingsheng Long", "Zhangjie Cao", "Jianmin Wang", "Michael I Jordan" ], "title": "Conditional adversarial domain adaptation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", 
"year": 2008 }, { "authors": [ "Ke Mei", "Chuang Zhu", "Jiaqi Zou", "Shanghang Zhang" ], "title": "Instance adaptive self-training for unsupervised domain adaptation", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Ishan Misra", "Laurens van der Maaten" ], "title": "Self-supervised learning of pretext-invariant representations", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Jiquan Ngiam", "Daiyi Peng", "Vijay Vasudevan", "Simon Kornblith", "Quoc V Le", "Ruoming Pang" ], "title": "Domain adaptive transfer learning with specialist models", "venue": "arXiv preprint arXiv:1811.07056,", "year": 2018 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Boris Oreshkin", "Pau Rodrı́guez López", "Alexandre Lacoste" ], "title": "Tadam: Task dependent adaptive metric for improved few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hang Qi", "Matthew Brown", "David G Lowe" ], "title": "Low-shot learning with imprinted weights", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Mengye Ren", "Eleni Triantafillou", "Sachin Ravi", "Jake Snell", "Kevin Swersky", "Joshua B Tenenbaum", "Hugo Larochelle", "Richard S Zemel" ], "title": "Meta-learning for semi-supervised few-shot classification", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Pau Rodrı́guez", "Issam Laradji", "Alexandre Drouin", "Alexandre Lacoste" ], "title": "Embedding propagation: Smoother manifold for few-shot classification", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Andrei A Rusu", "Dushyant Rao", "Jakub Sygnowski", "Oriol Vinyals", "Razvan Pascanu", "Simon Osindero", "Raia Hadsell" ], "title": "Meta-learning with latent embedding optimization", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Jong-Chyi Su", "Subhransu Maji", "Bharath Hariharan" ], "title": "When does self-supervision improve fewshot learning", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Qianru Sun", "Yaoyao Liu", "Tat-Seng Chua", "Bernt Schiele" ], "title": "Meta-transfer learning for few-shot learning", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Flood Sung", "Yongxin Yang", "Li Zhang", "Tao Xiang", "Philip HS Torr", "Timothy M Hospedales" ], "title": "Learning to compare: Relation network for few-shot learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { 
"authors": [ "Yonglong Tian", "Yue Wang", "Dilip Krishnan", "Joshua B Tenenbaum", "Phillip Isola" ], "title": "Rethinking fewshot image classification: a good embedding is all you need", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Hung-Yu Tseng", "Hsin-Ying Lee", "Jia-Bin Huang", "Ming-Hsuan Yang" ], "title": "Cross-domain few-shot classification via learned feature-wise transformation", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Eric Tzeng", "Judy Hoffman", "Kate Saenko", "Trevor Darrell" ], "title": "Adversarial discriminative domain adaptation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Nguyen Xuan Vinh", "Julien Epps", "James Bailey" ], "title": "Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance", "venue": "The Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Bram Wallace", "Bharath Hariharan" ], "title": "Extending and analyzing self-supervised learning across domains", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2020 }, { "authors": [ "Mei Wang", "Weihong Deng" ], "title": "Deep visual domain adaptation: A survey", "venue": null, "year": 2018 }, { "authors": [ "Yan Wang", "Wei-Lun Chao", "Kilian Q Weinberger", "Laurens van der Maaten" ], "title": "Simpleshot: Revisiting nearest-neighbor classification for few-shot learning", "venue": null, "year": 1911 }, { "authors": [ "Yikai Wang", "Chengming Xu", "Chen Liu", "Li Zhang", "Yanwei Fu" ], "title": "Instance credibility inference for few-shot learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Yu-Xiong Wang", "Ross Girshick", "Martial Hebert", "Bharath Hariharan" ], "title": "Low-shot learning from imaginary data", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Garrett Wilson", "Diane J Cook" ], "title": "A survey of unsupervised deep domain adaptation", "venue": "ACM Transactions on Intelligent Systems and Technology (TIST),", "year": 2020 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via nonparametric instance discrimination", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Qizhe Xie", "Minh-Thang Luong", "Eduard Hovy", "Quoc V Le" ], "title": "Self-training with noisy student improves imagenet classification", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Ruijia Xu", "Guanbin Li", "Jihan Yang", "Liang Lin" ], "title": "Larger norm more transferable: An adaptive feature norm approach for unsupervised domain adaptation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "I Zeki Yalniz", "Hervé Jégou", "Kan Chen", "Manohar Paluri", "Dhruv Mahajan" ], "title": "Billion-scale semisupervised learning for image 
classification", "venue": null, "year": 1905 }, { "authors": [ "Zhongjie Yu", "Lin Chen", "Zhongwei Cheng", "Jiebo Luo" ], "title": "Transmatch: A transfer-learning scheme for semi-supervised few-shot learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Xiaohua Zhai", "Joan Puigcerver", "Alexander Kolesnikov", "Pierre Ruyssen", "Carlos Riquelme", "Mario Lucic", "Josip Djolonga", "Andre Susano Pinto", "Maxim Neumann", "Alexey Dosovitskiy" ], "title": "The visual task adaptation benchmark", "venue": null, "year": 1910 }, { "authors": [ "Qiming Zhang", "Jing Zhang", "Wei Liu", "Dacheng Tao" ], "title": "Category anchor-guided unsupervised domain adaptation for semantic segmentation", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros" ], "title": "Colorful image colorization", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Yang Zou", "Zhiding Yu", "BVK Kumar", "Jinsong Wang" ], "title": "Unsupervised domain adaptation for semantic segmentation via class-balanced self-training", "venue": "In Proceedings of the European conference on computer vision (ECCV),", "year": 2018 }, { "authors": [ "Yang Zou", "Zhiding Yu", "Xiaofeng Liu", "BVK Kumar", "Jinsong Wang" ], "title": "Confidence regularized self-training", "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Guo" ], "title": "SimCLR 2. We added the two-layer projection head on top of the embedding function φ. The temperature of NT-Xent is set to 1 since there is no validation set for BSCD-FSL for hyperparameter selection and we use a temperature of 1 when inferring the soft label of the unlabeled set. For the stochastic image augmentations for SimCLR, we use the augmentations defined for each novel dataset", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Despite progress in visual recognition, training recognition systems for new classes in novel domains requires thousands of labeled training images per class. For example, to train a recognition system for identifying crop types in satellite images, one would have to hire someone to go to the different locations on earth to get the labels of thousands of satellite images. The high cost of collecting annotations precludes many downstream applications.\nThis issue has motivated research on few-shot learners: systems that can rapidly learn novel classes from a few examples. However, most few-shot learners are trained on a large base dataset of classes from the same domain. This is a problem in many domains (such as medical imagery, satellite images), where no large labeled dataset of base classes exists. The only alternative is to train the fewshot learner on a different domain (a common choice is to use ImageNet). Unfortunately, few-shot learning techniques often assume that novel and base classes share modes of variation (Wang et al., 2018), class-distinctive features (Snell et al., 2017), or other inductive biases. These assumptions are broken when the difference between base and novel is as extreme as the difference between object classification in internet photos and pneumonia detection in X-ray images. As such, recent work has found that all few-shot learners fail in the face of such extreme task/domain differences, underperforming even naive transfer learning from ImageNet (Guo et al., 2020).\nAnother alternative comes to light when one considers that many of these problem domains have unlabeled data (e.g., undiagnosed X-ray images, or unlabeled satellite images). This suggests the possibility of using self-supervised techniques on this unlabeled data to produce a good feature representation, which can then be used to train linear classifiers for the target classification task using just a few labeled examples. Indeed, recent work has explored self-supervised learning on a variety of domains (Wallace & Hariharan, 2020). However, self-supervised learning starts tabula rasa, and as such requires extremely large amounts of unlabeled data (on the order of millions of images). With more practical unlabeled datasets, self-supervised techniques still struggle to outcompete naive ImageNet transfer (Wallace & Hariharan, 2020). We are thus faced with a conundrum: on the one hand, few-shot learning techniques fail to bridge the extreme differences between ImageNet and domains such as X-rays. On the other hand, self-supervised techniques fail when they ignore inductive biases from ImageNet. A sweet spot in the middle, if it exists, is elusive.\nIn this paper, we solve this conundrum by presenting a strategy that adapts feature representations trained on source tasks to extremely different target domains, so that target task classifiers can then be trained on the adapted representation with very little labeled data. Our key insight is that a pre-trained base classifier from the source domain, when applied to the target domain, induces a grouping of images on the target domain. This grouping captures what the pre-trained classifier thinks are similar or dissimilar in the target domain. Even though the classes of the pre-trained classifier are themselves irrelevant in the target domain, the induced notions of similarity and dissimilarity might still be relevant and informative. 
This induced notion of similarity is in contrast to current self-supervised techniques, which often function by considering each image as its own class, dissimilar from every other image in the dataset (Wu et al., 2018; Chen et al., 2020). We propose to train feature representations on the novel target domain to replicate this induced grouping. This approach produces a feature representation that is (a) adapted to the target domain, while (b) maintaining prior knowledge from the source task to the extent that it is relevant. A discerning reader might observe the similarity of this approach to self-training, except that our goal is to adapt the feature representation to the target domain, rather than improve the base classifier itself.\nWe call our approach “Self Training to Adapt Representations To Unseen Problems”, or STARTUP. On the recently released BSCD-FSL benchmark consisting of datasets from extremely different domains (Guo et al., 2020), we show that STARTUP provides significant gains (2.9 points on average) over the few-shot learning, transfer learning and self-supervision state of the art. To the best of our knowledge, ours is the first attempt to bridge such large task/domain gaps and successfully and consistently outperform naive transfer in cross-domain few-shot learning." }, { "heading": "2 PROBLEM SETUP", "text": "Our goal is to build learners for novel domains that can be quickly trained to recognize new classes when presented with very few labeled data points (“few-shot”). Formally, the target domain is defined by a set of data points (e.g. images) $X_N$, an unknown set of classes (or label space) $Y_N$, and a distribution $D_N$ over $X_N \times Y_N$. A “few-shot learning task” in this domain will consist of a set of classes $Y \subset Y_N$, a very small training set (“support”)\n\n$S = \{(x_i, y_i)\}_{i=1}^{n} \sim D_N^n, \quad y_i \in Y$\n\nand a small test set (“query”)\n\n$Q = \{x_i\}_{i=1}^{m} \sim D_N^m$\n\nWhen presented with such a few-shot learning task, the learner must rapidly learn the classes presented and accurately classify the query images.\nAs with prior few-shot learning work, we will assume that before being presented with few-shot learning tasks in the target domain, the learner has access to a large annotated dataset $D_B$ known as the base dataset. However, crucially unlike prior work on few-shot learning, we assume that this base dataset is drawn from a very different distribution. In fact, we assume that the base dataset is drawn from a completely disjoint image space $X_B$ and a disjoint set of classes $Y_B$:\n\n$D_B = \{(x_i, y_i)\}_{i=1}^{N_B} \subset X_B \times Y_B$\n\nwhere $X_B$ is the set of data (or the source domain) and $Y_B$ is the set of base classes. Because the base dataset is so different from the target domain, we introduce another difference vis-a-vis the conventional few-shot learning setup: the learner is given access to an additional unlabeled dataset from the target domain:\n\n$D_u = \{x_i\}_{i=1}^{N_u} \sim D_N^{N_u}$\n\nPut together, the learner will undergo two phases. In the representation learning phase, the learner will pre-train its representation on $D_B$ and $D_u$; it then goes into the evaluation phase, where it is presented with few-shot tasks from the target domain and must learn the novel classes (Figure 1).
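To make the task structure concrete, a minimal sketch of sampling one n-way k-shot task (support plus query) from a target-domain dataset grouped by class; all names are illustrative:

```python
import random

def sample_few_shot_task(dataset_by_class, n_way=5, k_shot=1, m_query=15):
    """Sample one n-way k-shot task: k support and m query examples for each
    of n randomly chosen target-domain classes."""
    classes = random.sample(list(dataset_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        examples = random.sample(dataset_by_class[cls], k_shot + m_query)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query
```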
" }, { "heading": "3 RELATED WORK", "text": "Few-shot Learning (FSL). This paper explores few-shot transfer, and as such the closest related work is on few-shot learning. Few-shot learning techniques are typically predicated on some degree of similarity between classes in the base dataset and novel classes. For example, they may assume that features that are discriminative for the base classes are also discriminative for the novel classes, suggesting a metric learning-based approach (Gidaris & Komodakis, 2018; Qi et al., 2018; Snell et al., 2017; Vinyals et al., 2016; Sung et al., 2018; Hou et al., 2019) or a transfer learning-based approach (Chen et al., 2019b; Wang et al., 2019; Kolesnikov et al., 2020; Tian et al., 2020). Alternatively, they may assume that model initializations that lead to rapid convergence on the base classes are also good initializations for the novel classes (Finn et al., 2017; 2018; Ravi & Larochelle, 2017; Nichol & Schulman; Rusu et al., 2019; Sun et al., 2019; Lee et al., 2019). Other methods assume that modes of intra-class variation are shared, suggesting the possibility of learned, class-agnostic augmentation policies (Hariharan & Girshick, 2017; Wang et al., 2018; Chen et al., 2019c). Somewhat related is the use of a class-agnostic parametric model that can “denoise” few-shot models, be they from the base or novel classes (Gidaris & Komodakis, 2018; 2019). In contrast to such strong assumptions of similarity between base and novel classes, this paper tackles few-shot learning problems where base and novel classes come from very different domains, a setting also called cross-domain few-shot learning.\nCross-domain Few-shot Classification (CD-FSL). When the domain gap between the base and novel dataset is large, recent work (Guo et al., 2020; Chen et al., 2019b) has shown that existing state-of-the-art few-shot learners fail to generalize. Tseng et al. (2020) attempt to address this problem by simulating cross-domain transfer during training. However, their approach assumes access to an equally diverse array of domains during training, and a much smaller domain gap at test time: for example, both base and novel datasets are from internet images. Another relevant work (Ngiam et al., 2018) seeks to build a domain-specific feature extractor by reweighting different classes of examples in the base dataset based on the target novel dataset, but that work only investigates transfer between similar domains (both source and target are internet images). Our paper tackles a more extreme domain gap. Another relevant benchmark for this problem is (Zhai et al., 2019), but it assumes access to more annotated examples (1k annotations) at test time than the usual FSL setup.\nFew-shot learning with unlabeled data. This paper uses unlabeled data from the target domain to bridge the domain gap. Semi-supervised few-shot learning (SS-FSL) (Ren et al., 2018; Li et al., 2019; Yu et al., 2020; Rodrı́guez et al., 2020; Wang et al., 2020) and transductive few-shot learning (T-FSL) (Liu et al., 2019; Dhillon et al., 2020; Hou et al., 2019; Wang et al., 2020; Rodrı́guez et al., 2020) do use such unlabeled data, but only during evaluation, assuming that representations trained on the base dataset are good enough. In contrast, our approach leverages the unlabeled data during representation learning. The two are orthogonal innovations and can be combined.\nSelf-Training. Our approach is closely related to self-training, which has been shown to be effective for semi-supervised training and knowledge distillation. In self-training, a teacher model trained on the labeled data is used to label the unlabeled data, and another student model is trained on both the original labeled data and the unlabeled data labeled by the teacher. Xie et al. (2020) and Yalniz et al.
(2019) have shown that using self-training can improve ImageNet classification performance. Knowledge distillation (Hinton et al., 2015) is similar, but aims to compress a large teacher network by training a student network to mimic the predictions of the teacher network. A key difference between these and our work is that self-training / knowledge distillation focus on a single task of interest, i.e., there is no change in label space. Our approach is similar, but we are interested in transferring to novel domains with a wholly different label space: an unexplored scenario.\nDomain Adaptation. Transfer to new domains is also in the purview of domain adaptation (Tzeng et al., 2017; Hoffman et al., 2018; Long et al., 2018; Xu et al., 2019; Laradji & Babanezhad, 2020; Wang & Deng, 2018; Wilson & Cook, 2020), where the goal is to transfer knowledge from the label-abundant source domain to a target domain where only unlabeled data is available. In this realm, self-training has been extensively explored (Zou et al., 2018; Chen et al., 2019a; Zou et al., 2019; Zhang et al., 2019; Mei et al., 2020). However, a key assumption in domain adaptation is that the source domain and target domain share the same label space, which does not hold for FSL.\nSelf-supervised Learning. Learning from unlabeled data has seen a resurgence of interest with advances in self-supervised learning. Early self-supervised approaches were based on handcrafted “pretext tasks” such as solving jigsaw puzzles (Noroozi & Favaro, 2016), colorization (Zhang et al., 2016) or predicting rotation (Gidaris et al., 2018). A more recent (and better performing) line of self-supervised learning is contrastive learning (Wu et al., 2018; Misra & Maaten, 2020; He et al., 2020; Chen et al., 2020), which aims to learn representations by considering each image together with its augmentations as a separate class. While self-supervision has been shown to boost few-shot learning (Gidaris et al., 2019; Su et al., 2020), its utility in cases of large domain gaps between base and novel datasets has not been evaluated. Our work focuses on this challenging scenario." }, { "heading": "4 APPROACH", "text": "Consider a classification model $f_\theta = C \circ \phi$, where $\phi$ embeds input $x$ into $\mathbb{R}^d$ and $C$ is a (typically linear) classifier head that maps $\phi(x)$ to predicted probabilities $P(y|x)$; $\theta$ is the vector of all parameters. During representation learning, STARTUP performs the following three steps:\n1. Learn a teacher model $\theta_0$ on the base dataset $D_B$ by minimizing the cross entropy loss.\n2. Use the teacher model to construct a softly-labeled set $D_u^* = \{(x_i, \bar{y}_i)\}_{i=1}^{N_u}$ where\n$$\bar{y}_i = f_{\theta_0}(x_i) \quad \forall x_i \in D_u. \tag{1}$$\nNote that $\bar{y}_i$ is a probability distribution as described above.\n3. Learn a new student model $\theta^*$ on $D_B$ and $D_u^*$ by optimizing:\n$$\min_{\theta} \; \frac{1}{N_B} \sum_{(x_i, y_i) \in D_B} l_{CE}(f_\theta(x_i), y_i) + \frac{1}{N_u} \sum_{(x_j, \bar{y}_j) \in D_u^*} l_{KL}(f_\theta(x_j), \bar{y}_j) + l_{unlabeled}(D_u) \tag{2}$$\nwhere $l_{CE}$ is the cross entropy loss, $l_{KL}$ is the KL divergence, and $l_{unlabeled}$ is any unsupervised/self-supervised loss function (see below).\nThe third term, $l_{unlabeled}$, is intended to help the learner extract additional useful knowledge specific to the target domain. We use a state-of-the-art self-supervised loss function based on contrastive learning: SimCLR (Chen et al., 2020). The SimCLR loss encourages two augmentations of the same image to be closer in feature space to each other than to other images in the batch.
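Putting the three terms of Eq. 2 together, a minimal PyTorch-style sketch of one student update; the teacher/student modules, the SimCLR term (abstracted as a stub), and all names are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def startup_student_step(student, teacher, base_batch, unlabeled_batch,
                         simclr_loss, optimizer):
    """One STARTUP student step: cross entropy on the base dataset + KL to the
    teacher's soft pseudo-labels on unlabeled target data + a self-supervised
    term (e.g. SimCLR) on the same unlabeled data."""
    x_b, y_b = base_batch              # labeled base-domain images
    x_u = unlabeled_batch              # unlabeled target-domain images

    # Term 1: standard cross entropy on the base dataset.
    loss_ce = F.cross_entropy(student(x_b), y_b)

    # Term 2: KL divergence to the teacher's (frozen) soft predictions (Eq. 1).
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x_u), dim=1)
    log_student = F.log_softmax(student(x_u), dim=1)
    loss_kl = F.kl_div(log_student, soft_targets, reduction="batchmean")

    # Term 3: any self-supervised loss on the unlabeled data (SimCLR here).
    loss_ssl = simclr_loss(x_u)

    loss = loss_ce + loss_kl + loss_ssl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```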
We refer the reader to Chen et al. (2020) for the detailed SimCLR loss formulation.\nThe first two terms are similar to those in prior self-training literature (Xie et al., 2020). However, while in prior self-training work the second term ($l_{KL}$) is thought to mainly introduce noise during training, we posit that $l_{KL}$ has a more substantial role to play here: it encourages the model to learn feature representations that emphasize the groupings induced by the pseudo-labels $\bar{y}_i$ on the target domain. We analyze this intuition in section 5.2.2." }, { "heading": "4.1 EVALUATION", "text": "STARTUP is agnostic to the inference method used during evaluation; any inference method that relies on a representation (Snell et al., 2017; Gidaris & Komodakis, 2018) can be used with STARTUP. For simplicity, and based on results reported by Guo et al. (2020), we freeze the representation $\phi$ after performing STARTUP, train a linear classifier on the support set, and evaluate the classifier on the query set." }, { "heading": "4.2 INITIALIZATION STRATEGIES", "text": "Xie et al. (2020) found that training the student from scratch sometimes yields better results for ImageNet classification. To investigate, we focused on a variant of STARTUP where the SimCLR loss is omitted and experimented with three different initialization strategies: from scratch (STARTUP-Rand (no SS)), from the teacher model (STARTUP-T (no SS)), and using the teacher’s embedding with a randomly initialized classifier (STARTUP (no SS)). We found no conclusive evidence that a single initialization strategy is superior to the others across different datasets (see Appendix A.4), but we observe that STARTUP (no SS) is either the best or the second best in all scenarios. As such, we opt to use the teacher’s embedding with a randomly initialized classifier as the default student initialization." }, { "heading": "5 EXPERIMENTS", "text": "We defer the implementation details to Appendix A.1." }, { "heading": "5.1 FEW-SHOT TRANSFER ACROSS DRASTICALLY DIFFERENT DOMAINS", "text": "Benchmark. We experiment with the challenging BSCD-FSL benchmark introduced in Guo et al. (2020). The base dataset in this benchmark is miniImageNet (Vinyals et al., 2016), which is an object recognition task on internet images. There are 4 novel datasets in the benchmark, none of which involve objects, and all of which come from a very different domain than internet images: CropDiseases (recognizing plant diseases in leaf images), EuroSAT (predicting land use from satellite images), ISIC2018 (identifying melanoma from images of skin lesions) and ChestX (diagnosing chest X-rays). Guo et al. found that state-of-the-art few-shot learners fail on this benchmark.\nTo construct our setup, we randomly sample 20% of the data from each novel dataset to form the respective unlabeled datasets $D_u$. We use the rest for sampling tasks for evaluation. Following Guo et al. (2020), we evaluate 5-way k-shot classification tasks (the support set consists of 5 classes and k examples per class) for k ∈ {1, 5} and report the mean and 95% confidence interval over 600 few-shot tasks (conclusions generalize to k ∈ {20, 50}; see Appendix A.2).\nBaselines. We compare to the techniques reported in Guo et al. (2020), which include most state-of-the-art approaches as well as a cross-domain few-shot technique (Tseng et al., 2020).
The top performing among these is naive Transfer, which simply trains a convolutional network to classify the base dataset, and uses the resulting representation to learn a linear classifier when faced with novel few-shot tasks. These techniques do not use the novel-domain unlabeled data.\nWe also compare to another baseline, SimCLR, which uses the novel-domain unlabeled data $D_u$ to train a representation using SimCLR (Chen et al., 2020), and then uses the resulting representation to learn linear classifiers for few-shot tasks. This builds upon state-of-the-art self-supervised techniques.\nTo compare to a baseline that uses both sources of data, we establish Transfer + SimCLR. This baseline is similar to the SimCLR baseline except that the embedding is initialized to Transfer’s embedding before SimCLR training.\nFollowing the benchmark, all methods use a ResNet-10 (He et al., 2016) unless otherwise stated." }, { "heading": "5.1.1 RESULTS", "text": "We present our main results on miniImageNet → BSCD-FSL in Table 1.\nSTARTUP vs few-shot learning techniques. STARTUP performs significantly better than all few-shot techniques on most datasets (except ChestX, where all methods are similar). Compared to the previous state of the art, Transfer, we observe an average improvement of 2.9 points in the 1-shot case. The improvement is particularly large on CropDisease, where STARTUP provides almost a 6 point increase for 1-shot classification. This improvement is significant given the simplicity of our approach, and given that all meta-learning techniques underperform this baseline.\nSTARTUP vs SimCLR. The SimCLR baseline in general tends to underperform naive transfer from miniImageNet, and consequently, STARTUP performs significantly better than SimCLR on ISIC and EuroSAT. The exception to this is CropDisease, where SimCLR produces a surprisingly good representation. We conjecture that the base embedding is not a good starting point for this dataset. However, we find that using SimCLR as an auxiliary loss to train the student (STARTUP vs STARTUP (no SS)) is beneficial.\nSTARTUP vs Transfer + SimCLR. STARTUP outperforms Transfer + SimCLR in most cases (except 5-shot on ChestX and 1-shot on ISIC). We stress that the strength of STARTUP comes not solely from SimCLR but from both self-training and SimCLR. This is especially evident on EuroSAT, since the STARTUP (no SS) variant outperforms both Transfer and Transfer + SimCLR.\nLarger and stronger teachers. To unpack the impact of teacher quality, we experiment with a larger network and transfer from the full ILSVRC 2012 dataset (Deng et al., 2009) to BSCD-FSL.\nIn particular, we use the publicly available pre-trained ResNet-18 (He et al., 2016) as a teacher and train a student via STARTUP. We compare this to a transfer baseline that uses the same network and ImageNet as the training set. The results can be found in Table 2. Surprisingly, larger, richer embeddings do not always transfer better, in contrast to the in-domain results reported by Hariharan & Girshick (2017). However, STARTUP is still useful in improving performance: the absolute improvement of STARTUP over Transfer remains about the same on most datasets, except EuroSAT and CropDisease, where larger improvements are observed." }, { "heading": "5.2 WHY SHOULD STARTUP WORK?", "text": "While it is clear that STARTUP helps improve few-shot transfer across extreme domain differences, it is not clear why or how it achieves this improvement.
Below, we look at a few possible hypotheses.\n5.2.1 HYPOTHESIS 1: STARTUP ADDS NOISE WHICH INCREASES ROBUSTNESS.\nXie et al. (2020) posit that self-training introduces noise when training the student and thus yields a more robust student. More robust students may learn more generalizable representations, and this may be what allows STARTUP to bridge the domain gap. Under this hypothesis, the function of the unlabeled data is only to add noise during training. This in turn suggests that STARTUP should yield improvements on the target tasks even if trained on unlabeled data from a different domain. To test this, we train a STARTUP ResNet-18 student on EuroSAT and ImageNet and evaluate it on CropDisease. This model yields a 5-way 1-shot performance of 70.40 ± 0.86 (88.78 ± 0.54 for 5-shot), significantly underperforming the naive Transfer baseline (Table 2; see Appendix A.7 for different combinations of unlabeled dataset and target dataset). This suggests that while the hypothesis is valid in conventional semi-supervised learning, it is incorrect in the cross-domain few-shot learning setup: the unlabeled data are not merely functioning as noise. Rather, STARTUP is learning inherent structure in the target domain that is useful for downstream classification. The question now becomes what inherent structure STARTUP is learning, which leads us to the next hypothesis.\n5.2.2 HYPOTHESIS 2: STARTUP ENHANCES TEACHER-INDUCED GROUPINGS\nThe teacher produces a meaningful grouping of the data from the target domain. The predictions made by the teacher essentially induce a grouping on the target domain. Even though the base label space and novel label space are disjoint, the groupings produced by the teacher might not be entirely irrelevant for the downstream classification task. To test this, we first assign each example in the novel datasets to its most probable class under the teacher (ResNet-18 trained on ImageNet). We then compute the adjusted mutual information (AMI) (Vinh et al., 2010) between the resulting grouping and the ground truth labels. AMI ranges from 0 for unrelated groupings to 1 for identical groupings. From Table 3, we see that on EuroSAT and CropDisease, there is quite a bit of agreement between the induced grouping and the ground truth labels. Interestingly, these are the two datasets where we observe the best transfer performance and the most improvement from STARTUP (Table 2), suggesting a correlation between this agreement and downstream classification performance.\nTable 3: Adjusted Mutual Information (AMI) of the grouping induced by the teacher and the ground truth labels. AMI ranges from 0 to 1, with higher values indicating more agreement.\nChestX: 0.0075 | ISIC: 0.0427 | EuroSAT: 0.3079 | CropDisease: 0.2969
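A minimal sketch of this AMI computation using scikit-learn (the array names are illustrative):

```python
from sklearn.metrics import adjusted_mutual_info_score

# teacher_probs: (N, num_base_classes) teacher predictions on a novel dataset;
# labels: (N,) ground-truth novel-class labels.
induced_grouping = teacher_probs.argmax(axis=1)   # most probable base class
ami = adjusted_mutual_info_score(labels, induced_grouping)
```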
STARTUP enhances the grouping induced by the teacher. Even though the groupings induced by the teacher can be meaningful, one could argue that those groupings are captured in the teacher model already, and that no further action to update the representation is necessary. However, we posit that STARTUP encourages the feature representations to emphasize the grouping. To verify this, we plot the t-SNE (Maaten & Hinton, 2008) embeddings of the data prior to and after STARTUP for the two datasets in Figure 2 (t-SNE plots of EuroSAT and CropDisease prior to and after STARTUP). From the t-SNE plots, we observe more separation after STARTUP, signifying a representation with stronger discriminability.\nPut together, this suggests that STARTUP works by (a) inducing a potentially meaningful grouping on the target domain data, and (b) training a representation that emphasizes this grouping." }, { "heading": "5.3 FEW-SHOT TRANSFER ACROSS SIMILAR DOMAINS", "text": "Is STARTUP still useful when the gap between the base and target domains is smaller? To answer this, we tested STARTUP on two popular within-domain few-shot learning benchmarks: miniImageNet (Vinyals et al., 2016) and tieredImageNet (Ren et al., 2018). For miniImageNet, we use 20% of the novel set as the unlabeled dataset and use the same teacher as in section 5.1. For tieredImageNet, we use a ResNet-12 (Oreshkin et al., 2018) as our model architecture and evaluate two different setups: tieredImageNet-less, which uses 10% of the novel set as unlabeled data (following Ren et al. (2018)), and tieredImageNet-more, which uses 50% of the novel set as unlabeled data. We follow the same evaluation protocols as in section 5.1.\nWe report the results in Table 4. We found that on miniImageNet, STARTUP and its variants neither help nor hurt in most cases (compared to Transfer), indicating that the representation is already well matched. On both variants of tieredImageNet, we found that STARTUP, with the right initialization, can in fact outperform Transfer. In particular, in the less-data case it is beneficial to initialize the student with the teacher model, whereas in the more-data case training the student from scratch is superior. In sum, these results show the potential of STARTUP variants to boost few-shot transfer even when the base and target domains are close.\nAdditional Ablation Studies: We conducted three additional ablation studies: (a) training the student with various amounts of unlabeled data, (b) training the student without the base dataset, and (c) using rotation as self-supervision instead of SimCLR in STARTUP. We show that STARTUP benefits from more unlabeled data (Appendix A.5), that training the student without the base dataset can hurt performance on certain datasets but not all (Appendix A.6), and that STARTUP (w/ Rotation) outperforms Transfer on certain datasets but underperforms its SimCLR counterparts (Appendix A.3)." }, { "heading": "6 CONCLUSION", "text": "We investigate the use of unlabeled data from novel target domains to mitigate the performance degradation of few-shot learners due to large domain/task differences. We introduce STARTUP, a simple yet effective approach that allows few-shot learners to adapt feature representations to the target domain while retaining the class grouping induced by the base classifier. We show that STARTUP outperforms prior art on extreme cross-domain few-shot transfer." }, { "heading": "7 ACKNOWLEDGEMENT", "text": "This work is funded by the DARPA LwLL program." }, { "heading": "A APPENDIX", "text": "A.1 IMPLEMENTATION DETAILS\nWe implemented STARTUP by modifying the publicly available implementation 1 of BSCD-FSL by Guo et al. (2020)." }, { "heading": "A.1.1 TRAINING THE TEACHER", "text": "1. MiniImageNet: We train the teacher model using the code provided in the BSCD-FSL benchmark. We keep everything the same except for increasing the batch size from 16 to 256.\n2. TieredImageNet: We use the same setup as for miniImageNet, except that we reduce the number of epochs to 90. We do not use any image augmentation for tieredImageNet.\n3.
ImageNet: We use the pretrained ResNet-18 available in PyTorch (Paszke et al., 2019)." }, { "heading": "A.1.2 TRAINING THE STUDENT", "text": "Optimization Details. Regardless of the base and novel datasets, the student model is trained for 1000 epochs, where an epoch is defined as a complete pass over the unlabeled data. We use a batch size of 256 on the unlabeled dataset and a batch size of 256 for the base dataset, if applicable. We use the SGD optimizer with momentum 0.9 and weight decay 1e-4. To pick a suitable starting learning rate, 10% of the unlabeled data and 5% of the labeled data (1% when using ImageNet as the base dataset) are set aside as our internal validation set. We pick the starting learning rate by training the student with starting learning rate lr ∈ {1e-1, 5e-2, 3e-2, 1e-2, 5e-3, 3e-3, 1e-3} for k epochs, where k is the smallest number of epochs that guarantees at least 50 updates to the model, and choosing the learning rate that yields the lowest loss on the validation set. We reduce the learning rate by a factor of 2 when the training loss has not decreased for 20 epochs. The model that achieves the lowest loss on the internal validation set throughout the 1000 epochs of training is picked as the final model.\nSimCLR. Our implementation of SimCLR’s loss function is based on a publicly available implementation of SimCLR 2. We add the two-layer projection head on top of the embedding function φ. The temperature of NT-Xent is set to 1, since there is no validation set for BSCD-FSL for hyperparameter selection, and we use a temperature of 1 when inferring the soft labels of the unlabeled set. For the stochastic image augmentations for SimCLR, we use the augmentations defined for each novel dataset in Guo et al. (2020). These augmentations include the commonly used randomly resized crops, color jittering, and random horizontal flips. For tieredImageNet and miniImageNet, we use the stochastic transformations implemented for the BSCD-FSL benchmark. We refer readers to the BSCD-FSL implementation for more details.\nWhen training the student on the base dataset, we use the augmentation used for training the teacher, for fair comparison. The batch size for SimCLR is set to 256." }, { "heading": "A.1.3 TRAINING LINEAR CLASSIFIER.", "text": "We use the implementation by BSCD-FSL, i.e., training the linear classifier with standard cross entropy and the SGD optimizer. The linear classifier is trained for 100 epochs with learning rate 0.01, momentum 0.9 and weight decay 1e-4." }, { "heading": "A.1.4 BASELINES", "text": "We use the same evaluation method: a linear classifier. Please see A.1.3 for classifier training.\nTransfer. This is implemented using the teacher model as the feature extractor. Please see A.1.1 for details.\nSimCLR. This is implemented similarly to the SimCLR loss described in A.1.2.\n1https://github.com/IBM/cdfsl-benchmark 2https://github.com/sthalles/SimCLR\nA.1.5 T-SNE\nWe use the publicly available scikit-learn implementation of t-SNE (Buitinck et al., 2013). We use the default parameters except for the perplexity, which we set to 50. To speed up the experiment, we randomly sampled 25% of the data used for sampling few-shot tasks (80% of the full dataset) and ran t-SNE on this subset.
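A minimal sketch of this t-SNE setup (the features array is assumed to hold the embeddings of the evaluation data):

```python
import numpy as np
from sklearn.manifold import TSNE

# features: (N, d) array of embeddings from the model under study.
rng = np.random.default_rng(0)
subset = rng.choice(len(features), size=len(features) // 4, replace=False)  # 25% subsample
embedded = TSNE(perplexity=50).fit_transform(features[subset])  # default params otherwise
```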
" }, { "heading": "A.2 FULL RESULTS ON BSCD-FSL", "text": "We present the results on miniImageNet → BSCD-FSL for shot = 1, 5, 20, 50 in Tables 5 and 6. In addition to STARTUP, we also report results on using the teacher model as the student initialization (STARTUP-T and STARTUP-T (no SS)) in these tables for reference. Results on ImageNet → BSCD-FSL can be found in Tables 7 and 8. The conclusions of section 5.1 still hold for higher shots in general." }, { "heading": "A.3 USING ROTATION FOR SELF-SUPERVISION.", "text": "We use rotation (Gidaris et al., 2018) instead of SimCLR in STARTUP and report the results in Tables 5 and 6. We observe that STARTUP (w/ Rotation) is able to outperform Transfer on CropDisease and EuroSAT, but generally underperforms its SimCLR counterparts.\nA.4 INITIALIZATION STRATEGIES FOR THE STUDENT\nWe investigate the impact of different initialization strategies for the student on STARTUP. For this experiment, we remove SimCLR from STARTUP and consider three initialization strategies for the student: from scratch (STARTUP-Rand (no SS)), from the teacher embedding with a randomly initialized classifier (STARTUP (no SS)), and from the teacher model (STARTUP-T (no SS)). We repeated the experiment of section 5.1 on miniImageNet → BSCD-FSL and report the results in Table 9. We found that no single initialization is superior to the others (for instance, random initialization is the best on CropDisease but the worst on ISIC); however, we did find that initializing the student with the teacher’s embedding and a randomly initialized classifier is either the best or second best in all scenarios, so we set that as our default initialization.\nA.5 IMPACT OF DIFFERENT AMOUNTS OF UNLABELED EXAMPLES\nSTARTUP uses unlabeled data to adapt feature representations to novel domains. As with all learning techniques, it should perform better with more unlabeled data. To investigate how the amount of unlabeled examples impacts STARTUP, we repeated the miniImageNet → ISIC experiments of section 5.1 with various amounts of unlabeled data (20% of the dataset (2003 examples) is set aside for evaluation). The verdict is clear: STARTUP benefits from more unlabeled data (Figure 3)." }, { "heading": "A.6 TRAINING THE STUDENT WITHOUT THE BASE DATASET", "text": "STARTUP requires joint training on both the base dataset and the target domain. But in many cases, the base dataset may not be available. Removing the cross entropy loss on the base dataset when training the student essentially boils down to a fine-tuning paradigm. For miniImageNet → BSCD-FSL (Table 10), we found no discernible difference on any dataset except ISIC, where we observe a significant degradation in 5-shot performance." }, { "heading": "A.7 STARTUP ON DIFFERENT UNLABELED DATA", "text": "We consider the ImageNet → CD-FSL experiment. We perform STARTUP on unlabeled data different from the target domain and present the results in Table 11. We found that it is crucial that the unlabeled data on which STARTUP is performed come from the target domain of interest." } ]
2021
SELF-TRAINING FOR FEW-SHOT TRANSFER ACROSS EXTREME TASK DIFFERENCES
SP:144d436a6cbb52de49b6934f3cc4fca95e480647
[ "The paper investigates the generative model which generalizes to new domain with limited samples. Authors firstly explore the current hot generative models: VAEs and GANs, and experimentally find that both VAEs and GANs fail to learn a model which generalizes well to novel domain. Interestingly, AutoEncoders exhibits effective performance of the generalizability to new domain. With the encouraging insight, authors further approach Augmentation-Interpolative AutoEncoders. Specially, the paper firstly augments the training sample to get the input pair, and extracts the latent feature by the sharing Encoder. A weighted sum of both features is conducted to form the mixed feature, which further is taken as input for the decoder to synthesize the output sample. Differently the paper performs the reconstruction loss between the output and the mixed input which sum the input pair with the same from to the one of the latent space. " ]
We aim to build image generation models that generalize to new domains from few examples. To this end, we first investigate the generalization properties of classic image generators, and discover that autoencoders generalize extremely well to new domains, even when trained on highly constrained data. We leverage this insight to produce a robust, unsupervised few-shot image generation algorithm, and introduce a novel training procedure based on recovering an image from data augmentations. Our Augmentation-Interpolative AutoEncoders synthesize realistic images of novel objects from only a few reference images, and outperform both prior interpolative models and supervised few-shot image generators. Our procedure is simple and lightweight, generalizes broadly, and requires no category labels or other supervision during training.
[]
[ { "authors": [ "Antreas Antoniou", "Amos Storkey", "Harrison Edwards" ], "title": "Data augmentation generative adversarial networks", "venue": "arXiv preprint arXiv:1711.04340,", "year": 2017 }, { "authors": [ "Sanjeev Arora", "Rong Ge", "Yingyu Liang", "Tengyu Ma", "Yi Zhang" ], "title": "Generalization and equilibrium in generative adversarial nets (gans)", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Sanjeev Arora", "Andrej Risteski", "Yi Zhang" ], "title": "Do gans learn the distribution? some theory and empirics", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Sergey Bartunov", "Dmitry Vetrov" ], "title": "Few-shot generative modelling with generative matching networks", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2018 }, { "authors": [ "David Bau", "Jun-Yan Zhu", "Jonas Wulff", "William Peebles", "Hendrik Strobelt", "Bolei Zhou", "Antonio Torralba" ], "title": "Seeing what a gan cannot generate", "venue": "In Proceedings of the International Conference Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Christopher Beckham", "Sina Honari", "Vikas Verma", "Alex M Lamb", "Farnoosh Ghadiri", "R Devon Hjelm", "Yoshua Bengio", "Chris Pal" ], "title": "On adversarial mixup resynthesis", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "David Berthelot", "Colin Raffel", "Aurko Roy", "Ian Goodfellow" ], "title": "Understanding and improving interpolation in autoencoders via an adversarial regularizer", "venue": "arXiv preprint arXiv:1807.07543,", "year": 2018 }, { "authors": [ "Piotr Bojanowski", "Armand Joulin", "David Lopez-Paz", "Arthur Szlam" ], "title": "Optimizing the latent space of generative networks", "venue": "arXiv preprint arXiv:1707.05776,", "year": 2017 }, { "authors": [ "Alican Bozkurt", "Babak Esmaeili", "Dana Brooks", "Jennifer Dy", "Jan-Willem Meent" ], "title": "Can vaes generate novel examples", "venue": "arXiv preprint arXiv:1812.09624,", "year": 2018 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Louis Clouâtre", "Marc Demers" ], "title": "Figr: Few-shot image generation with reptile", "venue": "arXiv preprint arXiv:1901.02199,", "year": 2019 }, { "authors": [ "Gregory Cohen", "Saeed Afshar", "Jonathan Tapson", "André van Schaik" ], "title": "Emnist: an extension of mnist to handwritten letters", "venue": "arXiv preprint arXiv:1702.05373,", "year": 2017 }, { "authors": [ "Harrison Edwards", "Amos Storkey" ], "title": "Towards a neural statistician", "venue": "arXiv preprint arXiv:1606.02185,", "year": 2016 }, { "authors": [ "Arnab Ghosh", "Viveka Kulharia", "Vinay P Namboodiri", "Philip HS Torr", "Puneet K Dokania" ], "title": "Multiagent diverse generative adversarial networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing 
Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Luke B Hewitt", "Maxwell I Nye", "Andreea Gane", "Tommi Jaakkola", "Joshua B Tenenbaum" ], "title": "The variational homoencoder: Learning to learn high capacity generative models from few examples", "venue": "arXiv preprint arXiv:1807.08919,", "year": 2018 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "Yedid Hoshen", "Ke Li", "Jitendra Malik" ], "title": "Non-adversarial image synthesis with generative latent nearest neighbors", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Wittawat Jitkrittum", "Patsorn Sangkloy", "Muhammad Waleed Gondal", "Amit Raj", "James Hays", "Bernhard Schölkopf" ], "title": "Kernel mean matching for content addressability of GANs", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "arXiv preprint arXiv:1710.10196,", "year": 2017 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Mark A Kramer" ], "title": "Nonlinear principal component analysis using autoassociative neural networks", "venue": "AIChE journal,", "year": 1991 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Brenden M Lake", "Ruslan Salakhutdinov", "Joshua B Tenenbaum" ], "title": "Human-level concept learning through probabilistic program induction", "venue": null, "year": 2015 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Ming-Yu Liu", "Xun Huang", "Arun Mallya", "Tero Karras", "Timo Aila", "Jaakko Lehtinen", "Jan Kautz" ], "title": "Few-shot unsupervised image-to-image translation", "venue": null, "year": 1905 }, { "authors": [ "Qi Mao", "Hsin-Ying Lee", "Hung-Yu Tseng", "Siwei Ma", "Ming-Hsuan Yang" ], "title": "Mode seeking generative adversarial networks for diverse image synthesis", "venue": "In Proceedings of the IEEE Conference on 
Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Xudong Mao", "Qing Li", "Haoran Xie", "Raymond Y.K. Lau", "Zhen Wang", "Stephen Paul Smolley" ], "title": "Least squares generative adversarial networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Jonathan Masci", "Ueli Meier", "Dan Cireşan", "Jürgen Schmidhuber" ], "title": "Stacked convolutional autoencoders for hierarchical feature extraction", "venue": "In International Conference on Artificial Neural Networks,", "year": 2011 }, { "authors": [ "Anh Nguyen", "Jeff Clune", "Yoshua Bengio", "Alexey Dosovitskiy", "Jason Yosinski" ], "title": "Plug & play generative networks: Conditional iterative generation of images in latent space", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Atsuhiro Noguchi", "Tatsuya Harada" ], "title": "Image generation from small datasets via batch statistics adaptation", "venue": "arXiv preprint arXiv:1904.01774,", "year": 2019 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Ali Razavi", "Aaron van den Oord", "Oriol Vinyals" ], "title": "Generating diverse high-fidelity images with vq-vae-2", "venue": "arXiv preprint arXiv:1906.00446,", "year": 2019 }, { "authors": [ "Tim Sainburg", "Marvin Thielk", "Brad Theilman", "Benjamin Migliori", "Timothy Gentner" ], "title": "Generative adversarial interpolative autoencoding: adversarial training on latent space interpolations encourage convex latent distributions", "venue": "arXiv preprint arXiv:1807.06650,", "year": 2018 }, { "authors": [ "Konstantin Shmelkov", "Cordelia Schmid", "Karteek Alahari" ], "title": "How good is my gan", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Yaxing Wang", "Chenshen Wu", "Luis Herranz", "Joost van de Weijer", "Abel Gonzalez-Garcia", "Bogdan Raducanu" ], "title": "Transferring gans: generating images from limited data", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Yaxing Wang", "Abel Gonzalez-Garcia", "David Berga", "Luis Herranz", "Fahad Shahbaz Khan", "Joost van de Weijer" ], "title": "Minegan: Effective knowledge transfer from gans to target domains with few images", "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Chenshen Wu", "Luis Herranz", "Xialei Liu", "Joost van de Weijer", "Bogdan Raducanu" ], "title": "Memory replay gans: Learning to generate new categories without forgetting", "venue": "In Advances In Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Han Zhang", "Ian Goodfellow", "Dimitris Metaxas", "Augustus Odena" ], "title": "Self-attention generative adversarial networks", "venue": "arXiv preprint arXiv:1805.08318,", "year": 2018 }, { "authors": [ "Hongyi Zhang", 
"Moustapha Cissé", "Yann N. Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Shengyu Zhao", "Zhijian Liu", "Ji Lin", "Jun-Yan Zhu", "Song Han" ], "title": "Differentiable augmentation for data-efficient gan training", "venue": "In Advances in neural information processing systems,", "year": 2020 }, { "authors": [ "Sainburg" ], "title": "2018), we found that the network learned interpolations near the seed", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Modern generative models can synthesize high-quality (Karras et al., 2019; Razavi et al., 2019; Zhang et al., 2018a), diverse (Ghosh et al., 2018; Mao et al., 2019; Razavi et al., 2019), and highresolution (Brock et al., 2018; Karras et al., 2017; 2019) images of any class, but only given a large training dataset for these classes (Creswell et al., 2017). This requirement of a large dataset is impractical in many scenarios. For example, an artist might want to use image generation to help create concept art of futuristic vehicles. Smartphone users may wish to animate a collection of selfies, or researchers training an image classifier might wish to generate augmented data for rare classes. These and other applications will require generative models capable of synthesizing images from a large, ever-growing set of object classes. We cannot rely on having hundreds of labeled images for all of them. Furthermore, most of them will likely be unknown at the time of training.\nWe therefore need generative models that can train on one set of image classes, and then generalize to a new class using only a small quantity of new images: few-shot image generation. Unfortunately, we find that the latest and greatest generative models cannot even represent novel classes in their latent space, let alone generate them on demand (Figure 1). Perhaps because of this generalization challenge, recent attempts at few-shot image generation rely on undesirable assumptions and compromises. They need impractically large labeled datasets of hundreds of classes (Edwards & Storkey, 2016), involve substantial computation at test time (Clouâtre & Demers, 2019), or are highly domain-specific, generalizing only across very similar classes (Jitkrittum et al., 2019).\nIn this paper, we introduce a strong, efficient, unsupervised baseline for few-shot image generation that avoids all the above compromises. We leverage the finding that although the latent spaces of powerful generative models, such as VAEs and GANs, do not generalize to new classes, the representations learned by autoencoders (AEs) generalize extremely well. The AEs can then be converted into generative models by training them to interpolate between seed images (Sainburg et al., 2018; Berthelot et al., 2018; Beckham et al., 2019). These Interpolative AutoEncoders (IntAEs) would seem a natural fit for few-shot image generation. Unfortunately, we also find that although IntAEs can reproduce images from novel classes, the ability to interpolate between them breaks down upon leaving the training domain. To remedy this, we introduce a new training method based on data augmentation, which produces smooth, meaningful interpolations in novel domains. We demonstrate on three different settings (handwritten characters, faces and general objects) that our Augmentation-Interpolative Autoencoder (AugIntAE) achieves simple, robust, highly general, and completely unsupervised few-shot image generation." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 GENERATIVE MODELING", "text": "AEs were originally intended for learned non-linear data compression, which could then be used for downstream tasks; the generator network was discarded (Kramer, 1991; Hinton & Salakhutdinov, 2006; Masci et al., 2011). VAEs do the opposite: by training the latent space toward a prior distribution, the encoder network can be discarded at test time instead. New images are sampled directly from the prior (Kingma & Welling, 2013). 
Subsequent models discard the encoder network entirely. GANs sample from a noise distribution and learn to generate images which fool a concurrently-trained real/fake image discriminator (Goodfellow et al., 2014). Bojanowski et al. (2017) and Hoshen et al. (2019) treat latent codes as learnable parameters directly, and train separate sampling procedures for synthesizing the novel images.
Recent work has also seen a return to AEs as conditional generators, by training reconstruction networks to interpolate smoothly between pairs or sets of seed images. This is accomplished by combining the reconstruction loss on seed images with an adversarial loss on the seed and interpolated images. Different forms of adversarial loss (Sainburg et al., 2018; Berthelot et al., 2018) and interpolation (Beckham et al., 2019) have been proposed.
While all of these approaches generate new images, it is unclear if any of them can generalize to novel domains. Some results suggest the opposite: a VAE sufficiently powerful to model the training data becomes incapable of producing anything else (Bozkurt et al., 2018)." }, { "heading": "2.2 FEW-SHOT IMAGE GENERATION", "text": "Current attempts at few-shot image generation span a wide range of approaches and models. Neural Statistician, an early attempt, is similar to the AE in that it is built for few-shot classification, and largely discards the generative capability (Edwards & Storkey, 2016). Generation-oriented iterations exist, but likewise depend on a large, varied, labelled dataset for training (Hewitt et al., 2018). Other approaches based on few-shot classification include generative matching networks (Bartunov & Vetrov, 2018) and adversarial meta-learning (Clouâtre & Demers, 2019). These models also depend on heavy supervision, and are fairly complicated, involving multiple networks and training procedures working in tandem - making them potentially difficult to train reliably in practice.
Separate work has approached few-shot image generation from the side of generative modeling. Wang et al. (2018), Noguchi & Harada (2019) and Wu et al. (2018) investigate the ability of GANs to handle domain adaptation via fine-tuning - thus requiring substantial computation, and more novel class examples than are available in the few-shot setting. Zhao et al. (2020) train GANs directly from few examples, though still more than are at hand for few-shot learning, and can be considered orthogonal work, as AugIntAE can serve as a useful pre-trained initialization. Antoniou et al. (2017) and Liu et al. (2019) use adversarial training to produce feed-forward few-shot generators. However, both models still depend on varied, labelled training data, and risk exhibiting the same problems as standard GANs: mode collapse and hyperparameter sensitivity (Arora et al., 2017; 2018).
Jitkrittum et al. (2019) introduce an algorithm for class-conditioning an unconditioned generative model. New images are produced by matching latent space batch statistics to real images from a single, possibly novel class. Nguyen et al. (2017) learn individual latent codes from a pretrained discriminator, while Wang et al. (2020) train a latent sampling network. These approaches have little to no evaluation on novel classes, and to what degree they generalize depends entirely on the pretrained image generator. They may also require substantial test-time computation. In contrast, AugIntAEs are lightweight, train robustly, and generalize broadly from completely unlabelled data." 
}, { "heading": "3 PROBLEM SETUP", "text": "Let X be a large, unlabelled collection of images depicting objects from a set of classes C. Let X ′ be a very small set of images - as few as two - belonging to a novel class c′ 6∈ C. Our goal is to train a network on X which, given X ′, generates images clearly belonging to c′. We refer to this as the network’s ability to generalize to new domains (note that this usage is distinct from “generalizing” to novel images in the same domain, a much simpler task). We cannot directly adapt the network to X ′ using SGD, as X ′ contains too few images to prevent overfitting.\nThis is an extremely difficult problem, since it is unclear if a neural network trained to model the data distribution in X can even represent images from a different distribution, let alone sample from it. Therefore, we first examine whether existing generative models can faithfully encode novel class images in latent space. We train a VAE and a WGAN-GP (Gulrajani et al., 2017) on MNIST handwritten digits (LeCun et al., 1998), as well as an encoder network that inverts the WGAN-GP as in Bau et al. (2019) (details in appendix). We then evaluate the ability of each generative model to recover particular images. Using the built-in VAE encoder and the WGAN-GP inversion network, we find that while both models can reconstruct training images (Fig. 2, top), the same approach fails on images from novel classes - in this case, EMNIST handwritten letters (Cohen et al., 2017). The outputs do not much resemble the inputs; crucial semantic information is lost (Fig. 2, bottom). To discount the possibility of sub-optimal encoders, we simulate an oracle encoder, refining the latent code parameters for each image directly via SGD. These reconstructions are not much better. Fig. 1 demonstrates a similar failure in a large, state-of-the-art pretrained GAN. This confirms prior findings (Bozkurt et al., 2018) that current generative approaches by default cannot even represent images from novel classes. Generating new novel class images is simply out of the question.\nWhy do sophisticated generative models fail to generalize? We argue this is largely by design. Generative models such as VAEs and GANs are trained to minimize the divergence between a prior distribution and a learned posterior distribution, where one or both are approximated by repeated sampling. VAEs push the learned latent posterior toward a Gaussian prior, while GANs map samples from the prior to a posterior distribution in image space. In both cases, latent vectors are repeatedly\nsampled and sent through the generator. Thus, by the time the generator is trained to convergence, and the posterior approaches the prior (or vice-versa), every region of the latent space feasible under the prior will have been mapped at some point to a training image - or, in the case of GANs, to an image indistinguishable from a training image. This means that a properly trained VAE or GAN cannot construct or reconstruct new object classes. If it could, then it would have been able to sample such images during training - which would mean it had not been properly trained at all." }, { "heading": "4 AUGMENTATION-INTERPOLATIVE AUTOENCODERS", "text": "AutoEncoders: As discussed above, minimizing the divergence between prior and posterior training distributions ensures good image synthesis, but poor generalization. The opposite could also hold: AEs do not enforce any latent distribution on the data posterior, and so might generalize well. 
More formally, given a network E that maps an image x to latent vector z, and a generator G mapping z back to image space, we refer to the function composition G(E(·)) as the autoencoder. E and G are trained jointly over X to minimize the pixel reconstruction error between x ∈ X and G(E(x)). The question of generalization becomes: to what degree does a trained AE maintain close proximity between x′ and G(E(x′)) for x′ which lies far from the data manifold of X?
By this measure, we find that our conjecture holds: AEs generalize surprisingly well. Examples are given in Fig. 3, demonstrating near-perfect generalization performance over three pairs of class-disjoint datasets: MNIST digits to EMNIST letters, Omniglot training alphabets to Omniglot testing alphabets (Lake et al., 2015), and CIFAR-10 to CIFAR-100 (Krizhevsky et al., 2009). We also quantitatively evaluate generalization (in terms of reconstruction error) between all the above pairs, as well as between Omniglot and MNIST/EMNIST, which includes domain shifts, e.g., stroke width (Table 1). Reconstruction quality is high across the board, especially given that the MNIST and CIFAR-10 networks learn only ten distinct classes! AEs exhibit very little overfitting to the training domain, learning a general mapping despite heavy class constraints.
It is possible that this generalization is a result of AEs simply learning an identity function. Fortunately, this is not the case: AEs learn clear image priors. We find that our trained AEs are much more effective at encoding real images than noise (see Fig. 3, right). We also find that low-frequency noise is encoded more faithfully than high-frequency noise - an analysis is provided in the appendix. The learned AE mapping, while general, is also nontrivial.
Interpolative AutoEncoders: The fact that AEs generalize suggests they are capable of acting as few-shot image generators for novel classes, given a method for sampling appropriate latent z vectors. One possibility is to interpolate between data points in latent space: every sampled point is a weighted sum of two or more real points. This allows us to produce novel combinations of seed images without changing the semantic content. Unfortunately, it is a known fact that AEs do not interpolate well, as shown in Fig. 5, row 2 (Berthelot et al., 2018). Prior work (Sainburg et al., 2018; Berthelot et al., 2018; Beckham et al., 2019) addresses this by applying an adversarial loss to the interpolated images, which works well in the training domain. However, we find the Interpolative AutoEncoder (IntAE) approach overly restrictive for our purposes: it constrains all interpolations between arbitrary image pairs to the training domain. For example, on MNIST, an IntAE must produce recognizable digits when interpolating between a 3 and a 4, a semantically unintuitive result. This makes the learning process harder and causes the model to learn interpolations that do not generalize. When it interpolates between letters (Fig. 5, row 3, left), we find that it does produce undesirable artifacts - that look like numbers!
Augmentation-Interpolative AutoEncoders: We introduce a novel training procedure for IntAEs to remove such artifacts while maintaining generalizability. Instead of optimizing interpolated images using only a GAN loss, we train the network to directly recover known, semantically interpolated images from the corresponding interpolated latent code. 
This accomplishes two things simultaneously: first, we only interpolate between images where the interpolation makes semantic sense, since we must know the interpolated image in advance. This simplifies the learning problem significantly. Second, the model is no longer constrained to the training manifold when interpolating arbitrary training image pairs. The network can now learn simpler, more direct interpolations that work well on both training and novel domains.
Formally, suppose we have a triplet of images A = f(ρ1), B = f(ρ2) and C = f(αρ1 + (1 − α)ρ2), where f is some unknown image generating process, ρ is a semantic variable, and α ∈ [0, 1]. Using this triplet, we train the interpolative AE to reconstruct C by decoding the interpolated latent code of A and B. Specifically, we train the encoder E and the generator G by minimizing:
Lrecon = ||C − G(αE(A) + (1 − α)E(B))||1 (1)
In practice, finding appropriate image triplets A, B, C in a dataset of independent images is difficult. Instead, we synthesize A, B, C using affine spatial transformations, and color jitter for 3-channel images. Given training image x, we randomly sample two sets of augmentation parameters (translation, rotation, hue, etc.). Applying each of these transformations independently to x yields A and B (for example, a 10° rotation and a −5° rotation). We then sample a weight α ∈ [0, 1] and compute a weighted average of the two transformations, which we apply to x to produce C (in our example, if α = 2/3, C represents a 5° rotation). The Augmentation-Interpolative AutoEncoder (AugIntAE) is then trained to recover C from the α-weighted interpolation between the latent embeddings for A and B. This corresponds to Equation 1.
We can also augment the model with the original IntAE losses: a reconstruction loss on A and B, and a GAN loss Ladv on the interpolated C. In practice, we found that the former did not noticeably affect performance, while the latter was helpful in reducing the blurriness of output images. Subsequent models include Ladv. The full procedure is displayed in Fig. 4.
At first glance, learning the space of affine and color transformations does not appear particularly helpful for an IntAE. Very few visual relationships in the real world can be captured by these transformations alone. However, we find that these learned interpolations act as a powerful regularizer on the latent space, allowing AugIntAE to smoothly capture far more interesting and difficult transformations as well, such as shape, lighting, and even 3D pose.
Few-shot generation: Once the AugIntAE is trained, we can sample novel images given only a set of seeds. Simply select a random pair of images, find their latent space embeddings, sample α ∈ [0, 1], and generate the image from the α-weighted mixture of embeddings. More sophisticated sampling techniques are possible, but left to future work." }, { "heading": "5 EXPERIMENTS", "text": "All encoder/generator networks use standard 4- to 6-layer GAN architectures. We employ shallow networks to illustrate that AugIntAE itself, not network power, is responsible for good performance.
We use two baseline models. The first is an AE trained without interpolation but with the auxiliary GAN loss. We use an LSGAN (Mao et al., 2017) for the discriminator. The second baseline is an IntAE representing prior work: seed images are reconstructed while interpolations are trained via GAN. The GAN loss is also applied to the seed image reconstructions. 
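Before detailing the discriminator choice, here is a minimal sketch of the AugIntAE update from Section 4 (Eq. 1 only, omitting the auxiliary GAN term), assuming PyTorch-style encoder and generator modules and the handwritten-character augmentation ranges from Appendix A.1. Names here are illustrative rather than from any released code, and the full model additionally L2-normalizes latents and interpolates via SLERP, which this sketch simplifies to linear interpolation.

```python
import random
import torch.nn.functional as F
import torchvision.transforms.functional as TF

# MNIST-scale augmentation ranges reported in Appendix A.1.
RANGES = {"angle": (-20, 20), "tx": (-4, 4), "ty": (-4, 4),
          "scale": (0.8, 1.2), "shear": (-6, 6)}

def sample_params():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}

def apply_affine(x, p):
    return TF.affine(x, angle=p["angle"], translate=[round(p["tx"]), round(p["ty"])],
                     scale=p["scale"], shear=[p["shear"]])

def augintae_step(x, encoder, generator, optimizer):
    # Two independent augmentations of the same image, and their alpha-mixture.
    p1, p2 = sample_params(), sample_params()
    alpha = 0.5 * (random.random() + random.random())  # midpoint-biased alpha (Appendix A.1)
    pc = {k: alpha * p1[k] + (1 - alpha) * p2[k] for k in RANGES}
    A, B, C = apply_affine(x, p1), apply_affine(x, p2), apply_affine(x, pc)
    # Recover the known interpolated image C from the interpolated latent code.
    z = alpha * encoder(A) + (1 - alpha) * encoder(B)
    loss = F.l1_loss(generator(z), C)  # L_recon of Eq. 1
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

In the full model, the LSGAN loss Ladv on the reconstruction of C would be added to this objective, weighted by the adaptive factor k described in Appendix A.1.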
The choice of LSGAN discriminator means that IntAE captures two prior models: it is an LSGAN instantiation of Beckham et al. (2019), and also a version of Berthelot et al. (2018) with discretized labels. We use the same data augmentation in all models and set parameters as large as possible without introducing significant visual artifacts. Training and evaluation details for all experiments are in the appendix." }, { "heading": "5.1 QUANTITATIVE RESULTS", "text": "We report quantitative scores for all results in Table 2. We examine four train/test dataset pairs: two handwritten character settings and two natural image settings. For each pair we report two metrics: FID score, which captures overall image quality, and test set classification rate, which captures the degree of generalization and semantic faithfulness to the target domain. For the latter metric, we train a separate classifier on the combined train and test datasets to distinguish training images from testing images. Generators that generalize well to the test domain should have higher rates of test-set classification, while those that do not generalize will produce images skewed toward the training domain. On both metrics we find that AugIntAE is superior to AE and IntAE baselines in all settings. We also include standard GAN models trained solely on the training/testing datasets as baseline/oracle models, respectively. Our FID scores generally approach but do not exceed the oracle scores. We now examine individual dataset results in detail." }, { "heading": "5.2 HANDWRITTEN CHARACTERS", "text": "Image quality: We evaluate the performance of AugIntAE on two handwritten character dataset pairs. One set of models trains on MNIST digits and evaluates on EMNIST letters, while the other transfers from the Omniglot training alphabets to the testing alphabets. The autoencoders in both cases use the 4-layer encoder/generator of InfoGAN (Chen et al., 2016), with latent dimensionality reduced to 32, in keeping with prior IntAE work (Beckham et al., 2019; Berthelot et al., 2018).\nFig. 5 shows example interpolations for these two contexts. AugIntAE produces better interpolations and removes visual artifacts, particularly discontinuities, present in the AE and IntAE images. These qualitative results are verified quantitatively in Table 2. AugIntAE, as measured by FID score (Heusel et al., 2017), outperforms all baselines. We also measure the semantic faithfulness of the interpolations to the test seed images, by training a separate classifier to distinguish training data from testing data. AugIntAE images are classified correctly more often than any baseline. These results hold not just for handwritten characters, but across all our testing regimes. In terms of both image quality and semantic fidelity, AugIntAEs are effective few-shot generators.\nGeneralizability: We compare AugIntAE as a few-shot generator to two additional baselines: Neural Statistician (Edwards & Storkey, 2016) and DAGAN (Antoniou et al., 2017). Both approaches require class labels on the training data, while ours does not. We train both models on MNIST and then attempt to synthesize new images from EMNIST. Fig. 6 makes clear that AugIntAEs generalize more broadly and are much less restricted by the narrow training domain. Neural Statistician and DAGAN overfit to the training classes, and generate number images instead of the desired letters.\nData hallucination: As a practical use case, we utilize AugIntAE for data augmentation. 
We train a series of classifiers on the letters in the “ByMerge” split of EMNIST (lower- and uppercase separated), each using a different interpolation strategy as data augmentation. Half of each training batch is augmented data, produced by interpolation between same-class images with the labels preserved and α = .5. The model without augmentation uses the same batch size but trains for twice as long. All autoencoders used for interpolation are trained on MNIST. All augmentations provide gains, as shown in Table 3, including pixel-level interpolation, a constrained form of MixUp (Zhang et al., 2018b). However, the largest gain is obtained by interpolating in the latent space of an AugIntAE.
Table 3: Interpolation as a form of data augmentation on EMNIST. Results are averaged over ten runs, with 95% confidence intervals. AugIntAE provides the most effective augmentation.
Augmentation | None | MixUp | AE | AugIntAE
Accuracy | 90.33 ± .06 | 90.66 ± .04 | 91.27 ± .04 | 91.50 ± .05
Figure 7: Ablation study on forms of data augmentation. Each row corresponds to a distinct set of 5 seed images (left); results are midpoint interpolations between seed pairs. Each individual form of augmentation improves over the AE baseline (fewer artifacts), but none approach the performance of using them all together (right). See, for example, the “S” in the top-right corner of each plot. Best viewed digitally.
Ablation: Affine augmentation encompasses a range of independent transformations (rotation, translation, scale, skew) and so it is worth examining to what degree each is necessary. We train four MNIST AugIntAE models using each independent transformation as the sole augmentation technique. Sample outputs are given in Fig. 7, along with AE and full AugIntAE baselines for comparison. We find that interpolative training using each form of augmentation improves over the AE baseline, but at the same time no individual augmentation approaches the performance of the full AugIntAE: the different augmentations act synergistically." }, { "heading": "5.3 CELEB-A", "text": "We extend our results to the more difficult domain of higher-resolution natural images. Our models train on the male faces from Celeb-A, scaled to 128×128 resolution, and generalize to female faces. Network architectures follow DCGAN (Radford et al., 2016) with an additional half-width layer (at the bottom/top of the encoder/generator, respectively) to handle the increased resolution. Latent dimensionality is 512, in keeping with prior IntAEs (Sainburg et al., 2018; Beckham et al., 2019).
The results, displayed in Fig. 8 and Table 2, are similar to our findings for handwritten characters. AEs generalize well, but produce visual artifacts during interpolation: unrealistic, transparent regions around the head. IntAE produces high-saturation color artifacts. Compared to these baselines, AugIntAE removes the artifacts and restores semantic meaning to the interpolation path, even when the interpolation is non-affine (as in Fig. 8).
One might wonder if interpolative sampling acts as a constraint on generated image variety. Note that the number of unique image pairs grows quadratically with the number of seed images, so that even with just six seeds AugIntAE can produce broad variety (Fig. 9). We conclude that AugIntAE is effective on higher-resolution color images, not just handwritten characters." 
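To make the seed-pair sampling just discussed concrete, the following sketch draws novel-class images from a trained AugIntAE given only a handful of seed images, following the procedure in Section 4. Function and variable names are illustrative assumptions, and the paper's actual interpolation is SLERP over L2-normalized embeddings rather than the linear mixture shown here.

```python
import torch

@torch.no_grad()
def few_shot_generate(seeds, encoder, generator, n_samples=64):
    """Generate novel-class images from a few seed images (Section 4)."""
    z = encoder(seeds)                                   # embed all seed images once
    samples = []
    for _ in range(n_samples):
        i, j = torch.randint(len(seeds), (2,)).tolist()  # random (possibly repeated) pair
        alpha = torch.rand(()).item()
        z_mix = alpha * z[i] + (1 - alpha) * z[j]        # alpha-weighted latent mixture
        samples.append(generator(z_mix.unsqueeze(0)))
    return torch.cat(samples)
```

Because every unordered pair of seeds defines a distinct interpolation path, the number of available paths grows quadratically in the number of seeds, which is why even six seeds yield broad variety in Fig. 9.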
}, { "heading": "5.4 CIFAR", "text": "Finally, we evaluate our model in an extremely challenging setting: unconstrained natural images, and classes with large intra-class variation. The latter property is especially challenging for interpolation based models (e.g., how to interpolate between a real bear and a teddy bear?). We train our models on CIFAR-10 and evaluate on novel CIFAR-100 classes. Network architecture uses DCGAN with 512 latent dimensions. Fig. 10 displays example outputs from AE, IntAE, and AugIntAE on novel CIFAR-100 data, and quantitative results are in Table 2. We conclude that AugIntAE is the most effective interpolative few-shot generator for natural images, though much work remains for the more difficult cases (see supplementary)." }, { "heading": "6 CONCLUSION", "text": "We introduce a powerful, lightweight, and label-free method for few-shot image generation. Building on the generalizability of AEs, we introduce a novel training procedure that leads to higher quality interpolations. The resulting AugIntAEs are robust, generalize far more broadly than prior few-shot generators, and are practically useful for downstream tasks." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 ARCHITECTURES AND TRAINING", "text": "Architectures: Nearly all network backbones follow either InfoGAN, if operating on handwritten characters, or DCGAN, if operating on natural images. Celeb-A networks include an additional halfwidth layer at the bottom of the encoder, and at the top of the generator, to account for increased image resolution. For convenience, we use the same network architecture for both AugIntAE encoders and auxiliary GAN discriminators. The only difference is that the discriminator has one output neuron in the final layer, while encoders have as many as there are latent dimensions.\nWe L2-normalize our latent features and perform interpolation via spherical linear interpolation (SLERP). Latent dimensionality is set to 32 for handwritten character models, which, given their 28× 28 resolution, amounts to an almost 25-fold reduction in dimensionality. Celeb-A dimensionality is set to 512, with a 128 × 128 resolution and three color channels, producing a reduction in dimensionality of almost two orders of magnitude. CIFAR images use the standard DCGAN resolution of 64× 64 and 512 latent dimensions for an almost 25-fold dimensionality reduction, similar to the handwritten character models.\nVAEs, WGAN-GPs, and classifiers all use the same architectures, plus necessary modifications to the number of input/output neurons. WGAN-GPs draw from the same latent dimensionality as the corresponding AE models, and receive L2-normalized noise samples rather than samples from a standard Gaussian. VAE encoders have output dimensionality twice the above, as they predict both a mean µ and variance γ for each coordinate. Classifiers have as many output neurons as there are classes in the given task (2 for train/test dataset classifiers, 37 for EMNIST letter classifiers) and end in a softmax layer.\nTraining: Unless stated otherwise, all networks trained on real images use the Adam optimizer with initial step size .001. Models train for 100 passes over the given dataset. Models with no adversarial component cut the learning rate by a factor of 10 at epochs 35 and 70.\nWe chose these hyperparameters without a validation set, using convergence on the training data as our sole criterion.\nGAN training: To stabilize learning, adversarial models do not change their learning rate. 
We also rescale all adversarial gradients so that their magnitudes match those of the reconstruction loss gradient. WGAN-GPs train for twice as long as other models, but update the generator only once per five discriminator updates.
On some datasets, we found that the auxiliary LSGAN discriminator could collapse and allow the generator to “win” the adversarial game. To prevent this, we introduce a scaling factor k ∈ [0, 1] that updates dynamically based on discriminator performance. Specifically:
LAE = Lrecon + k ∗ γ ∗ Ladv (2)
where
γ = ‖∇Lrecon‖2 / ‖∇Ladv‖2 (3)
calculated per-image, and k is adjusted with each gradient update according to the following rules:
k0 = 1 (4)
k̄ = kt − .001 ∗ (1 − (D(xreal) − D(xfake))) (5)
kt+1 = max(0, min(1, k̄)) (6)
This update scheme for k ensures that whenever the scores coming from the discriminator D for real and fake images are separated by less than 1 on average, k decreases. The generator then focuses more on the reconstruction task and becomes less competitive, allowing the discriminator network to “catch up” until the margin of separation is greater than 1 again. k then climbs back to 1, at which point the reconstruction and adversarial losses are equally weighted once more.
Augmentation Parameters: We sample data augmentation parameters ρ uniformly over predefined ranges of values. For handwritten character datasets, we sample rotations in the range [−20, 20], translations in the range [−4, 4], scaling in the range [.8, 1.2], and shear in the range [−6, 6]. These values were chosen heuristically: we picked the largest values that would not occlude or obscure the character.
Natural image AugIntAEs sample from half the range of rotation and skew, and double the range of translation (though because of the higher resolution, this comes out to much smaller displacement overall). These values were chosen so as not to introduce large regions of empty space into the image, while staying as large as possible. CIFAR images are symmetric-padded; Celeb-A images are center cropped and do not require padding. Additionally, we sample a range of color jitter parameters for AugIntAEs handling 3-channel RGB images. We sample brightness/contrast/saturation adjustments from the range [.75, 1.25] and hue from the range [−5%, +5%]. We again chose these values to be as large as possible without producing unrealistic or unparsable images.
Similar to Sainburg et al. (2018), we found that the network learned interpolations near the seed images more easily than near the midpoint of the interpolation path. We therefore biased our sampled α toward the midpoint by sampling the mean of two uniform random draws.
To ensure that our baselines are trained on the same data distribution as our AugIntAEs, we use the α-weighted interpolated image as the training image, even when the network does not use interpolative training. This only applies to AE and IntAE models." }, { "heading": "A.2 GAN INVERSION", "text": "Obtaining generalization results for a GAN involves inverting the generator, which we attempt via a combination of a learned inversion network and direct SGD optimization on the latent codes for each target image. For the PGAN (Fig. 1), we use a publicly available pretrained model provided by Facebook Research1. For the MNIST/EMNIST generalization experiments (Fig. 2) we use our own implementations, constructed and trained using the procedure described above. 
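A minimal sketch of this two-stage inversion follows, assuming a trained generator and inversion_net (names illustrative); the refinement settings match those reported below (1000 SGD steps, learning rate .01, no momentum, L1 pixel loss).

```python
import torch

def invert_gan(target, generator, inversion_net, steps=1000, lr=0.01):
    """Stage 1: learned inverse mapping; stage 2: direct SGD refinement of z."""
    with torch.no_grad():
        z = inversion_net(target)                     # initial latent estimate
    z = z.clone().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr, momentum=0.0)
    for _ in range(steps):
        opt.zero_grad()
        loss = (generator(z) - target).abs().mean()   # L1 pixel reconstruction loss
        loss.backward()
        opt.step()
    return z.detach()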
The learned inversion network for the PGAN generator uses the architecture described in Table 4, while the inversion network for the MNIST WGAN-GP uses the same encoder as in other experiments. Both networks are trained by sampling images from the generator, and attempt to reconstruct the corresponding latent code for each image using mean squared error. We use SGD with an initial step size of .0001 and momentum equal to .1. We train for 6400 iterations and cut the learning rate by a factor of 10 after every 1280 iterations.
The subsequent stage of inversion involves direct refinement of the latent codes provided by the inversion network via SGD in latent space. In both cases we use 1000 iterations of SGD, with a learning rate of .01 and no momentum. The loss is an L1 pixel reconstruction loss, resulting in the final images displayed in figures 1 and 2 of the main paper.
1https://pytorch.org/hub/facebookresearch_pytorch-gan-zoo_pgan" }, { "heading": "A.3 QUANTITATIVE EVALUATION", "text": "Fréchet Inception Distance (FID) (Heusel et al., 2017) compares the distributions of embeddings of real (pr(x)) and generated (pg(x)) images. Both these distributions are modeled as multidimensional Gaussians parameterized by their respective mean and covariance. The distance measure is defined between the two Gaussian distributions as:
d²((mr, Cr), (mg, Cg)) = ‖mr − mg‖² + Tr(Cr + Cg − 2(CrCg)^(1/2)) (7)
where (mr, Cr) and (mg, Cg) denote the mean and covariance of real and generated image distributions respectively (Shmelkov et al., 2018). We use a port of the official implementation of FID to PyTorch2. The default pool3 layer of the Inception-v3 network (Szegedy et al., 2016) is used to extract activations. We use 5k generated images to compute the FID scores, and sample 3 uniformly distributed points along each sampled interpolation path. To ensure domain fidelity of generated images, we use a binary classifier to distinguish whether the images come from the train distribution or the test set. Classifiers have the same architecture as other experiments, as described in Section A.1." }, { "heading": "A.4 NOISE RECONSTRUCTION", "text": "We analyze the ability of a trained AE to reconstruct uniform pixel noise of different frequencies. We simulate noise frequency by sampling noise at small resolutions and scaling up the resulting map to the desired image resolution (28 × 28 for handwritten characters, 64 × 64 or 128 × 128 for 3-channel color images). Fig. 11 plots the ability of trained AEs to reproduce noise patterns over given frequencies, averaged over 1000 trials. Networks have a clear low-frequency bias - though interestingly, the handwritten character datasets reach their minimum reconstruction error at a frequency level of 6-8, a possible manifestation of a learned bias toward penstroke thicknesses or certain-sized pockets of negative space associated with handwritten characters. Most tellingly, reconstruction error for novel images (dotted lines) is significantly lower than for noise of any frequency for handwritten character models, and most frequencies for natural image models. This suggests clearly that the network has learned a particular image distribution that is not reflected by uniform noise - an image prior.
It is also worth noting in what ways the AEs fail when reconstructing noise. Figs. 12 and 13 show reconstruction attempts for randomly sampled noise at the given frequencies. It is clear that the handwritten character models struggle to abandon a strong, learned penstroke prior. 
Natural image networks are better at encoding noise, but also demonstrate a clear failure mode at high frequencies, where they extract low-contrast, lower-frequency patterns and ignore the higher-frequency input. The Celeb-A model attempts to compensate for this by adding high-frequency ripple artifacts, visible also at some lower frequencies, probably reflecting a learned hair prior.
2https://github.com/mseitzer/pytorch-fid" }, { "heading": "A.5 FEW-SHOT GENERATION BASELINES", "text": "We re-implement DAGAN using the architectures from our other experiments, with hyperparameters the same as for WGAN-GP training. The sole difference is that the critic network now takes in a pair of images, so the number of input channels is 2 instead of 1. We implement the critic network as a WGAN-GP. Fig. 14 shows that the network converges nicely on the training data: it successfully produces novel images from the same class as the initial seed images.
Neural Statistician uses a much more complex network architecture, and involves many additional hyperparameters, making direct re-implementation difficult. Instead we run the publicly available code3 as-is, keeping all hyperparameters the same. We run the Omniglot experiment, and simply replace the Omniglot training dataset with MNIST. Fig. 14 shows that like DAGAN, the network converges nicely and produces the desired behavior on the training data. The failure of these approaches on EMNIST is thus truly a failure of generalization." }, { "heading": "A.6 ADDITIONAL ILLUSTRATIVE EXAMPLES", "text": "Selected sets of interpolated image pairs from each of our four dataset regimes, demonstrating that AugIntAE performs smooth and intuitive interpolations where AEs and IntAEs produce artifacts. Images are organized as in the paper, with rows in each cell corresponding to pixel fade, AE, IntAE, and AugIntAE. These are followed by four sets of randomly chosen pairs illustrating average-case performance, again one for each of our four dataset regimes. Begins on the following page.
3https://github.com/conormdurkan/neural-statistician" } ]
2020
null
SP:a7605f203e883bb5d782cd9e090cebff0cf504ef
[ "This paper applies tools from neuroscience to understand how language models integrate across time. The basic approach is to present a phrase, preceded by two different context phrases: one that is natural (i.e. the phrase that actually preceded it in the corpus) and one that is randomly selected. The authors then measure how long it takes for the unit activations to become similar for the two different contexts, which provides a measure for how long the context impacts the representation. They find that (1) timescales increase at later layers of the language model (2) that only a small fraction of units exhibit long timescales (3) that long/medium-timescale units appear to come in two forms which they try and characterize using graph-style analyses. " ]
In the human brain, sequences of language input are processed within a distributed and hierarchical architecture, in which higher stages of processing encode contextual information over longer timescales. In contrast, in recurrent neural networks which perform natural language processing, we know little about how the multiple timescales of contextual information are functionally organized. Therefore, we applied tools developed in neuroscience to map the “processing timescales” of individual units within a word-level LSTM language model. This timescale-mapping method assigned long timescales to units previously found to track long-range syntactic dependencies. Additionally, the mapping revealed a small subset of the network (less than 15% of units) with long timescales and whose function had not previously been explored. We next probed the functional organization of the network by examining the relationship between the processing timescale of units and their network connectivity. We identified two classes of long-timescale units: “controller” units composed a densely interconnected subnetwork and strongly projected to the rest of the network, while “integrator” units showed the longest timescales in the network, and expressed projection profiles closer to the mean projection profile. Ablating integrator and controller units affected model performance at different positions within a sentence, suggesting distinctive functions of these two sets of units. Finally, we tested the generalization of these results to a character-level LSTM model and models with different architectures. In summary, we demonstrated a model-free technique for mapping the timescale organization in recurrent neural networks, and we applied this method to reveal the timescale and functional organization of neural language models.1
[ { "affiliations": [], "name": "Hsiang-Yun Sherry Chien" }, { "affiliations": [], "name": "Jinhan Zhang" } ]
[ { "authors": [ "Christopher Baldassano", "Janice Chen", "Asieh Zadbood", "Jonathan W Pillow", "Uri Hasson", "Kenneth A Norman" ], "title": "Discovering event structure in continuous narrative perception", "venue": "and memory. Neuron,", "year": 2017 }, { "authors": [ "Alex T Baria", "A Mansour", "Lejian Huang", "Marwan N Baliki", "Guillermo A Cecchi", "M-Marsel Mesulam", "A Vania Apkarian" ], "title": "Linking human brain local activity fluctuations to structural and functional network", "venue": "architectures. Neuroimage,", "year": 2013 }, { "authors": [ "Vladimir Batagelj", "Matjaz Zaversnik" ], "title": "An o (m) algorithm for cores decomposition of networks", "venue": "arXiv preprint cs/0310049,", "year": 2003 }, { "authors": [ "Hsiang-Yun Sherry Chien", "Christopher J Honey" ], "title": "Constructing and forgetting temporal context in the human cerebral cortex", "venue": null, "year": 2020 }, { "authors": [ "Kyunghyun Cho", "Bart van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder–decoder for statistical machine translation", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Shi Gu", "Fabio Pasqualetti", "Matthew Cieslak", "Qawi K Telesford", "B Yu Alfred", "Ari E Kahn", "John D Medaglia", "Jean M Vettel", "Michael B Miller", "Scott T Grafton" ], "title": "Controllability of structural brain networks", "venue": "Nature communications,", "year": 2015 }, { "authors": [ "Kristina Gulordava", "Piotr Bojanowski", "Édouard Grave", "Tal Linzen", "Marco Baroni" ], "title": "Colorless green recurrent networks dream hierarchically", "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers),", "year": 2018 }, { "authors": [ "Patric Hagmann", "Leila Cammoun", "Xavier Gigandet", "Reto Meuli", "Christopher J Honey", "Van J Wedeen", "Olaf Sporns" ], "title": "Mapping the structural core of human cerebral cortex", "venue": "PLoS Biol,", "year": 2008 }, { "authors": [ "Michael Hahn", "Marco Baroni" ], "title": "Tabula nearly rasa: Probing the linguistic knowledge of characterlevel neural language models trained on unsegmented text", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Uri Hasson", "Eunice Yang", "Ignacio Vallines", "David J Heeger", "Nava Rubin" ], "title": "A hierarchy of temporal receptive windows in human cortex", "venue": "Journal of Neuroscience,", "year": 2008 }, { "authors": [ "Uri Hasson", "Janice Chen", "Christopher J Honey" ], "title": "Hierarchical process memory: memory as an integral component of information processing", "venue": "Trends in cognitive sciences,", "year": 2015 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Christopher J Honey", "Thomas Thesen", "Tobias H Donner", "Lauren J Silbert", "Chad E Carlson", "Orrin Devinsky", "Werner K Doyle", "Nava Rubin", "David J Heeger", "Uri Hasson" ], "title": "Slow cortical dynamics and the accumulation of information over long timescales", "venue": null, "year": 2012 }, { "authors": [ "Shailee Jain", "Alexander Huth" ], "title": "Incorporating Context into Language Encoding Models for fMRI", "venue": "Advances in Neural 
Information Processing Systems,", "year": 2018 }, { "authors": [ "Shailee Jain", "Amanda LeBel", "Alexander Huth" ], "title": "Improving language encoding of fMRI responses with transformers", "venue": "Annual Meeting of the Society for Neuroscience.,", "year": 2019 }, { "authors": [ "Urvashi Khandelwal", "He He", "Peng Qi", "Dan Jurafsky" ], "title": "Sharp nearby, fuzzy far away: How neural language models use context", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Yair Lakretz", "Germán Kruszewski", "Théo Desbordes", "Dieuwke Hupkes", "Stanislas Dehaene", "Marco Baroni" ], "title": "The emergence of number and syntax units in lstm language models", "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers),", "year": 2019 }, { "authors": [ "Yair Lakretz", "Stanislas Dehaene", "Jean-Rémi King" ], "title": "What limits our capacity to process nested long-range dependencies in sentence comprehension", "venue": null, "year": 2020 }, { "authors": [ "Yulia Lerner", "Christopher J Honey", "Lauren J Silbert", "Uri Hasson" ], "title": "Topographic mapping of a hierarchy of temporal receptive windows using a narrated story", "venue": "Journal of Neuroscience,", "year": 2011 }, { "authors": [ "Tal Linzen", "Emmanuel Dupoux", "Yoav Goldberg" ], "title": "Assessing the ability of lstms to learn syntaxsensitive dependencies", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2016 }, { "authors": [ "M-Marsel Mesulam" ], "title": "From sensation to cognition. Brain: a journal of neurology", "venue": null, "year": 1998 }, { "authors": [ "Caroline A Runyan", "Eugenio Piasini", "Stefano Panzeri", "Christopher D Harvey" ], "title": "Distinct timescales of population coding across cortex", "venue": "Nature,", "year": 2017 }, { "authors": [ "Jiang Xu", "Stefan Kemeny", "Grace Park", "Carol Frattali", "Allen Braun" ], "title": "Language in context: emergent features of word, sentence, and narrative", "venue": "comprehension. NeuroImage,", "year": 2005 } ]
[ { "heading": "1 INTRODUCTION", "text": "Language processing requires tracking information over multiple timescales. To be able to predict the final word “timescales” in the previous sentence, one must consider both the short-range context (e.g. the adjective “multiple”) and the long-range context (e.g. the subject “language processing”). How do humans and neural language models encode such multi-scale context information? Neuroscientists have developed methods to study how the human brain encodes information over multiple timescales during sequence processing. By parametrically varying the timescale of intact context, and measuring the resultant changes in the neural response, a series of studies (Lerner et al., 2011; Xu et al., 2005; Honey et al., 2012) showed that higher-order regions are more sensitive to longrange context change than lower-order sensory regions. These studies indicate the existence of a “hierarchy of processing timescales” in the human brain. More recently, Chien & Honey (2020) used a time-resolved method to investigate how the brain builds a shared representation, when two groups of people processed the same narrative segment preceded by different contexts. By directly mapping the time required for individual brain regions to converge on a shared representation in response to shared input, we confirmed that higher-order regions take longer to build a shared representation. Altogether, these and other lines of investigation suggest that sequence processing in the\n1The code and dataset to reproduce the experiment can be found at https://github.com/ sherrychien/LSTM_timescales\nbrain is supported by a distributed and hierarchical structure: sensory regions have short processing timescales and are primarily influenced by the current input and its short-range context, while higher-order cortical regions have longer timescales and track longer-range dependencies (Hasson et al., 2015; Honey et al., 2012; Chien & Honey, 2020; Lerner et al., 2011; Baldassano et al., 2017; Runyan et al., 2017; Fuster, 1997).\nHow are processing timescales organized within recurrent neural networks (RNNs) trained to perform natural language processing? Long short-term memory networks (LSTMs) (Hochreiter & Schmidhuber, 1997) have been widely investigated in terms of their ability to successfully solve sequential prediction tasks. However, long-range dependencies have usually been studied with respect to a particular linguistic function (e.g. subject-verb number agreement, Linzen et al. 2016; Gulordava et al. 2018; Lakretz et al. 2019), and there has been less attention on the broader question of how sensitivity to prior context – broadly construed – is functionally organized within these RNNs. Therefore, drawing on prior work in the neuroscience literature, here we demonstrate a model-free approach to mapping processing timescale in RNNs. We focused on existing language models that were trained to predict upcoming tokens at the word level (Gulordava et al., 2018) and at the character level (Hahn & Baroni, 2019). The timescale organization of these two models both revealed that the higher layers of LSTM language models contained a small subset of units which exhibit long-range sequence dependencies; this subset includes previously reported units (e.g. 
a “syntax” unit, Lakretz et al., 2019) as well as previously unreported units.
After mapping the timescales of individual units, we asked: does the processing timescale of each unit in the network relate to its functional role, as measured by its connectivity? The question is motivated by neuroscience studies which have shown that in the human brain, higher-degree nodes tend to exhibit slower dynamics and longer context dependence than lower-degree nodes (Baria et al., 2013). More generally, the primate brain exhibits a core-periphery structure in which a relatively small number of “higher order” and high-degree regions (in the prefrontal cortex, in default-mode regions and in so-called “limbic” zones) maintain a large number of connections with one another, and exert a powerful influence over large-scale cortical dynamics (Hagmann et al., 2008; Mesulam, 1998; Gu et al., 2015). Inspired by the relationships between timescales and network structure in the brain, we set out to test corresponding hypotheses in RNNs: (1) Do units with longer timescales tend to have higher degree in neural language models? and (2) Do neural language models also exhibit a “core network” composed of functionally influential high-degree units? Using an exploratory network-theoretic approach, we found that units with longer timescales tend to have more projections to other units. Furthermore, we identified a set of medium-to-long timescale “controller” units which exhibit distinct and strong projections to control the state of other units, and a set of long-timescale “integrator units” which showed influence on predicting words where the long context is relevant. In summary, these findings advance our understanding of the timescale distribution and functional organization of LSTM language models, and provide a method for identifying important units representing long-range contextual information in RNNs." }, { "heading": "2 RELATED WORK", "text": "Linguistic Context in LSTMs. How do LSTMs encode linguistic context at multiple timescales? Prior work suggested that the units sensitive to information that requires long-range dependencies are sparse. By ablating one unit at a time, Lakretz et al. (2019) found two units that encode information required for processing long-range subject-verb number agreement (one for singular and one for plural information encoding). They further identified several long-range “syntax units” whose activation was associated with syntactic tree-depth. Overall, Lakretz et al. (2019) suggests that a sparse subset of units tracks long-range dependencies related to subject-verb agreement and syntax. If this pattern is general – i.e. if there are very few nodes tracking long-range dependencies in general – this may limit the capacity of the models to process long sentences with high complexity, for reasons similar to those that may limit human sentence processing (Lakretz et al., 2020). To test whether long-range nodes are sparse in general, we require a model-free approach for mapping the context dependencies of every unit in the language network.
Whole-network context dependence. Previous work by Khandelwal et al. (2018) investigated the duration of prior context that LSTM language models use to support word prediction. Context-dependence was measured by permuting the order of words preceding the preserved context, and observing the increase in model perplexity when the preserved context gets shorter. Khandelwal et al. 
(2018) found that up to 200 word-tokens of prior context were relevant to the model perplexity, but that the precise ordering of words only mattered within the most recent 50 tokens. The token-based context-permutation method employed in this study was analogous to the approach used to measure context-dependence in human brain responses to movies (Hasson et al., 2008) and to auditory narratives (Lerner et al., 2011).
Inspired by the findings of Khandelwal et al. (2018) and Lakretz et al. (2019), in the present study we set out to map the context-dependence across all of the individual units in the LSTM model. This enabled us to relate the timescales to the effects of node-specific ablation and the network architecture itself. In addition, our context manipulations included both context-swapping (substituting alternative meaningful contexts) and context-shuffling (permuting the words in the prior context to disrupt inter-word structure), which allowed us to better understand how individual words and syntactically structured word-sequences contribute to the context representation of individual hidden units." }, { "heading": "3 METHODS", "text": "" }, { "heading": "3.1 LANGUAGE MODELS AND CORPUS", "text": "We evaluated the internal representations generated by a pre-trained word-level LSTM language model (WLSTM, Gulordava et al., 2018) as well as a pre-trained character-level LSTM model (CLSTM, Hahn & Baroni, 2019) as they processed sentences sampled from the 427804-word (1965719-character) novel corpus: Anna Karenina by Leo Tolstoy (Tolstoy, 2016), translated from Russian to English by Constance Garnett.
For the WLSTM, we used the model made available by Gulordava et al. (2018). The WLSTM has a 650-dimensional embedding layer, two 650-dimensional hidden layers and an output layer with vocabulary size 50,000. The model was trained and tested on Wikipedia sentences and was not fine-tuned to the novel corpus. Therefore, we only used sentences with low perplexity from the novel in our main timescale analysis. We performed the same analysis using the Wikipedia test set from Gulordava et al. (2018) and obtained similar results (See Section 5.3, Figure A.4A, Appendix A.2.1). For the CLSTM, we used the model made available by Hahn & Baroni (2019). The CLSTM has a 200-dimensional embedding layer, three 1024-dimensional hidden layers and an output layer with vocabulary size 63. The model was trained on Wikipedia data with all characters lower-cased and whitespace removed. We tested the model with sentences sampled from Anna Karenina, as with the WLSTM model, and we obtained bits-per-character (BPC) similar to what Hahn & Baroni (2019) reported in their original work." }, { "heading": "3.2 TEMPORAL CONTEXT CONSTRUCTION PARADIGM", "text": "In order to determine the processing timescales of cell state vectors and individual units, we modified the “temporal context construction” method developed by Chien & Honey (2020). Thus, the internal representations of the model were compared across two conditions: (1) the Intact Context condition and (2) the Random Context condition. In both conditions, the model was processing the same shared sequence of words (for example, segment B), but the preceding context differed across the two conditions. In the Intact Context condition, the model processed segment B (the shared segment) preceded by segment A, which was the actual preceding context from the original text.
In the current study, for example, segments A and B are connected by “, and” within long sentences from the novel corpus (Figure 1A), to ensure the temporal dependencies between A and B. In the Random Context condition, however, the model processed the same shared input (segment B), but the context was replaced by segment X, which was a randomly sampled context segment from the rest of the corpus. Segment X was therefore not usually coherently related to segment B. For the WLSTM timescale analysis, we chose long sentences in the Intact Context condition that satisfied the following constraints: (1) mean perplexity across all words in the sentence < 200, (2) the shared segment was longer than 25 words, and (3) the context segment was longer than 10 words. 77 sentences were included as trials in our analyses. In the Random Context condition, we preserved the same shared segments and randomly sampled 30 context segments (each longer than 10 words) from other parts of the novel. For the CLSTM timescale analysis, we used the same 77 long sentences in the Intact Context condition, and randomly sampled 25 context segments (with length > 33 characters) for the Random Context condition.
In brief, the model is processing the same input (the shared segment) with different preceding context (the intact vs. random context). We can now measure the context dependence of individual units by examining how the cell state activations differ between the two conditions, while the network is processing the shared segments with identical input. Any difference in internal representations must arise from the context manipulation, since the current input is the same. A decrease in activation difference over time implies that the units exposed to the Intact context and Random context start to build a similar representation as they process the shared input. For a long-timescale unit, whose current state is dependent on information in the far-preceding context, we will see that the activation difference is preserved across contexts (Figure 1B, green curve), even while the unit is processing the shared input. On the other hand, for a short-timescale unit whose activation is driven largely by the current input, we will see that the activation difference drops quickly (Figure 1B, red curve) as the unit processes the shared input." }, { "heading": "4 HIERARCHICAL ORGANIZATION OF TIMESCALES ACROSS LAYERS", "text": "Do higher levels of the LSTM model exhibit greater context-dependence? Lakretz et al. (2019) observed that long-range functional units were more common in higher layers, and in general, higher levels of hierarchical language models exhibit longer-range context-dependence (Jain et al., 2019; Jain & Huth, 2018). Therefore, to validate our stimuli and the sensitivity of our methods, we first compared the processing timescales of different hidden layers in both of the LSTMs, by correlating the cell state vectors, column by column, between the Intact condition and Random condition.
We found that both layers showed near-zero correlation when processing the different contexts, and the correlation increased as they began to process the shared input. In the WLSTM, the correlation increased more slowly for second-level cell state vectors than for first-level cell state vectors. Thus, the second-level cell state representation is more sensitive to the differing context than the first level. Similarly, for the CLSTM model, the third-level cell state exhibited longer-lasting context sensitivity than lower levels (Figure 2).
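To make the paradigm concrete, the sketch below shows one way to compute the per-unit activation-difference curves for a single trial, assuming a standard PyTorch `nn.LSTM` and an embedding layer. The function names and the single random-context sample (rather than the 25-30 sampled contexts used above) are illustrative simplifications, not the authors' released code.

```python
import torch

def cell_states(lstm, embed, token_ids):
    """Run the LSTM over a token sequence, returning the top-layer
    cell state after each step, shape (seq_len, hidden_size)."""
    states, h = [], None  # h=None lets PyTorch zero-initialize (h0, c0)
    for tok in token_ids:
        x = embed(torch.tensor([[tok]]))       # (seq=1, batch=1, emb_dim)
        _, h = lstm(x, h)                      # h = (h_n, c_n)
        states.append(h[1][-1, 0].detach())    # c_n of the top layer
    return torch.stack(states)

def activation_difference(lstm, embed, context_a, context_x, shared):
    """Per-unit |Intact - Random| activation on the shared segment.
    In the paper this is averaged over many sampled random contexts."""
    intact = cell_states(lstm, embed, context_a + shared)
    random = cell_states(lstm, embed, context_x + shared)
    n = len(shared)                            # positions t = 0 ... n-1
    return (intact[-n:] - random[-n:]).abs()   # (n, hidden_size)
```

Because the two runs share the last `n` input tokens, any difference in the returned matrix reflects only the context manipulation, which is exactly the quantity plotted in Figure 1B.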
This observation of longer context-dependence in higher stages of processing is consistent with prior machine learning analyses (Lakretz et al., 2019; Jain & Huth, 2018) and is also analogous to what is seen in the human brain (Hasson et al., 2015; Chien & Honey, 2020; Lerner et al., 2011; Jain et al., 2019). Based on the finding of longer context dependence in higher layers, we examined single units among the highest-level hidden units, i.e. the second level of the WLSTM (n=650) and the third level of the CLSTM (n=1024)." }, { "heading": "5 PROCESSING TIMESCALES OF INDIVIDUAL UNITS WITHIN LSTM LAYERS", "text": "" }, { "heading": "5.1 QUANTIFYING SINGLE UNIT TIMESCALES", "text": "We examined the absolute single-unit activation difference when processing the shared segments preceded by different contexts. As expected, most of the hidden units showed different activation when the input tokens were different (i.e. while processing the non-shared context in the Intact Context and Random Context conditions). However, once the shared input tokens begin (at t = 0) the Intact-Random activation differences drop (Figure A.1A, A.1B).
We used the rate at which the curves drop to quantify the processing timescale, as this is a measure of how quickly the responses align across different context conditions. To quantify the timescale of individual units, we fit the activation difference curves with a logistic function:
$$Y(x) = \frac{L}{1 + e^{-k(x - x_0)}} + d \quad (1)$$
As shown in Figure A.1A and Figure A.1B, the logistic function fit the raw activation difference curves. We then computed the “timescale” of each unit as the time-to-half-maximum of the logistic decay. In particular, for the WLSTM we used the activation difference $Y(0)$ at the beginning of the shared segment, and at the end of the shared segment $Y(24)$ ($Y(79)$ for the CLSTM), to calculate the time-to-half-maximum of unit $i$ as:
$$\text{timescale}_i = \left\lceil Y^{-1}\!\left(\frac{Y_i(0) - Y_i(24)}{2}\right) \right\rceil \quad (2)$$
where the inverse function $Y^{-1}(y)$ identifies the largest integer $t$ for which $Y(t) < y$. We included 635 units in the WLSTM and 1012 units in the CLSTM for further analysis, after excluding the units which could not be accurately fit by a logistic function (See Appendix A.1)." }, { "heading": "5.2 DISTRIBUTION OF UNIT TIMESCALES IN LSTM LANGUAGE MODELS", "text": "The results showed that of the 635 WLSTM units whose processing timescale we mapped, approximately 70% of the units were insensitive to long-range context (processing timescale < 3 words): their activation difference dropped immediately at onset of the shared segment. In contrast, only approximately 13% of the units had timescales > 7 words (Figure A.2A). Figure 3A shows the absolute activation difference of all units in the WLSTM sorted by timescale (long to short). Some of the longer-timescale units continued to exhibit a large activation difference even when processing the shared segments for more than 20 tokens.
As we were testing the same word-level LSTM previously studied by Lakretz et al. (2019), we began by examining the timescales of hidden-state units that were already known to be involved in processing context-dependent language information: a “singular number unit” 988, a “plural number unit” 776, and a “syntax unit” 1150. We found that, compared to other units, both “number” units had medium timescales (∼3 words, ranked 129 of 635 units), while the “syntax” unit had a long timescale (∼7 words, ranked 64 of 635 units) (Figure A.1).
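Equations 1 and 2 can be implemented with an off-the-shelf curve fitter. The sketch below assumes SciPy and one unit's activation-difference curve as produced above; the initial parameter guesses are illustrative rather than the authors' settings, and the half-maximum crossing is located on the fitted curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, L, k, x0, d):
    # Eq. 1: Y(x) = L / (1 + exp(-k (x - x0))) + d
    return L / (1.0 + np.exp(-k * (x - x0))) + d

def unit_timescale(diff_curve, t_end=24):
    """Fit Eq. 1 to one unit's curve, then return the time at which the
    fitted curve decays past (Y(0) - Y(t_end)) / 2, approximating Eq. 2."""
    x = np.arange(len(diff_curve))
    t_end = min(t_end, len(diff_curve) - 1)
    p0 = [diff_curve[0], -1.0, 5.0, diff_curve[-1]]   # rough starting point
    (L, k, x0, d), _ = curve_fit(logistic, x, diff_curve, p0=p0, maxfev=10000)
    y = logistic(x, L, k, x0, d)
    target = (y[0] - y[t_end]) / 2.0
    below = np.nonzero(y < target)[0]   # first index falling below the target
    return int(below[0]) if below.size else t_end
```

Units for which `curve_fit` fails to converge, or whose fitted curve never crosses the target, would be excluded, mirroring the exclusion criteria of Appendix A.1.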
We repeated the timescale mapping in the CLSTM model, and again identified a small subset of long-timescale units (Figure 3B, Figure A.2B). Although there were overall more units in the CLSTM, over 63% of the units were insensitive to the context (timescale < 3 characters). Fewer than 15% of the units exhibited a timescale > 10 characters, and the unit with the longest timescale only dropped to its half-maximum activation difference after 50 characters of shared input." }, { "heading": "5.3 TIMESCALE VARIATION ACROSS DATASETS AND CONTEXT CONDITIONS", "text": "To ensure that the timescales we measured were robust across datasets, we conducted the same analysis on the WLSTM using the Wikipedia testing dataset used in Gulordava et al. (2018). The mapped timescales were highly correlated (r=0.82, p<0.001) across the Anna Karenina dataset and the Wikipedia dataset (Appendix A.2.1, Figure A.4A).
Similarly, to confirm that the timescales measured were not specific to our testing using the “, and” conjunction point, we also measured timescales at an alternative segmentation point, and found that the timescales were largely preserved (r=0.83, p<0.001), notwithstanding a small set of notable exceptions (Appendix A.2.2, Figure A.4B).
Although we measured the timescales of context dependence using “token distance”, these measures are not invariant to changes in the “syntactic distance”. For example, if one were to replace a comma with a “full stop”, then the token distance would be unaltered but the syntactic distance could be greatly altered. Indeed, we found that most units showed little context dependence when the preceding context segment ended with a “full stop”, which served as a clear signal for the end of a sentence (Appendix A.2.3, Figure A.4C).
Finally, we examined whether the contextual information retained by the language models (and the associated timescale measurement) was sensitive to linguistic structure in the context, or whether it was primarily driven simply by the presence or absence of individual words. To this end, we generated text for the Random Context condition by shuffling the order of words from the Intact segment. We found that while the presence of individual words did play an important role in determining the context representations (and thus the timescales), several units showed a longer timescale when the prior context was composed of coherently structured language (Appendix A.2.4, Figure A.4D)." }, { "heading": "6 CONNECTIVITY OF MEDIUM- TO LONG-TIMESCALE UNITS IN LSTMS", "text": "Having mapped the timescales of each processing unit, we next asked: how does the processing timescale of a unit relate to its functional role within the network? More specifically, are units with longer timescales also units with high degree in the connectivity network? To answer these questions, we analyzed (1) the projection strength of each unit and (2) the similarity of the overall projection pattern (hidden-to-gates) across different units. The projection patterns were defined using the direct weight projections from one hidden unit at time $t$ to the input and forget gates of other hidden units at time $t+1$.
In LSTMs, the amount of contextual ($c_{t-1}$) and input ($\tilde{c}_t$) information stored in the cell state ($c_t$) is determined by the forget gate ($f_t$) and input gate ($i_t$) activations (Eq. 3); and the activations of the gates $i_t$ and $f_t$ are determined by the current input at time $t$ and the hidden units at time $t-1$ through the weight matrices $U$ and $W$ (Eq.
4, 5).
$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \quad (3)$$
$$i_t = \sigma(U_i x_t + W_i h_{t-1} + b_i) \quad (4)$$
$$f_t = \sigma(U_f x_t + W_f h_{t-1} + b_f) \quad (5)$$
Here, we were interested in understanding how the contextual information over different timescales is projected from the hidden units to the input and forget gates of other units, and further influences the update of cell states. Thus, we analyzed the network connectivity focusing on the weight matrices $W_i$ and $W_f$ within the highest layer of the WLSTM or CLSTM." }, { "heading": "6.1 STRONG PROJECTIONS FROM LONG-TIMESCALE HIDDEN UNITS TO GATE UNITS", "text": "Units with longer processing timescales made a larger number of strong projections (|z-score| > 5, Appendix A.3) to the input and forget gates of other units in both the WLSTM (r=0.31, p<0.001, Figure 4A) and CLSTM models (r=0.24, p<0.001, Figure A.5A). Furthermore, we found that the “syntax” unit (Unit 1150) reported by Lakretz et al. (2019) in the WLSTM model possessed the largest number of strong projections to the input and forget gates of all other units, and the major recipients from Unit 1150 were units with medium to long timescales (Figure 4B)." }, { "heading": "6.2 IDENTIFY CONTROLLER UNITS IN LSTM LANGUAGE MODELS", "text": "The presence of strong projections from the “syntax” unit to other long-timescale units motivated us to further explore whether high-degree, long-timescale units in the LSTM also densely interconnect to form a “core network”, perhaps analogous to what is seen in the brain (Hagmann et al., 2008; Mesulam, 1998; Baria et al., 2013). If so, this set of units may have an especially important role in controlling how prior context is updated and how it is used to gate current processing, analogous to the controller system in the brain (Gu et al., 2015). To identify these putative “controller units”, we binarized the network by identifying the top 258 projection weights from the weight matrices (see Appendix A.3), which provided the edges for a network analysis. We then used k-core analysis (Batagelj & Zaversnik, 2003) to identify the “main network core” (the core with the largest degree) of the network (Figure A.3). At the maximal k = 5, the k-core analysis yielded a set of densely interconnected nodes, composed of many long-timescale and medium-timescale units (Figure A.3; also labeled in red in Figure 4A). We (tentatively) refer to this set as the “controller” set of the network. Performing the same k-core analyses on the CLSTM model, we observed that the main core network was again composed of disproportionately many medium- and long-timescale “controller” units (Figure A.5A)." }, { "heading": "6.3 DISTINCTIVE ROLES OF LONG-TIMESCALE CONTROLLER AND INTEGRATOR UNITS", "text": "We used multi-dimensional scaling (MDS) to visualize the similarity of projection patterns across LSTM units. We recovered a 2-dimensional MDS embedding, in which the inter-unit distances were defined based on the similarity of their hidden-to-gate projection patterns (i.e., similarity of values in the unthresholded LSTM weight matrices $W_i$ and $W_f$). We visualized the MDS solution as a graph structure, in which each node is a unit, and the edges reflect connection properties of that unit. Figure 4D shows the resulting 2-D space, with units color-coded by their timescale.
“Controller units” (labeled on Figure 4D) were positioned around the periphery of the MDS space, suggesting that these units expressed projection patterns that were distinct from other “controller” units and also from the rest of the network.
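To make the connectivity analysis of Sections 6.1 and 6.2 concrete, the sketch below extracts the hidden-to-input-gate and hidden-to-forget-gate blocks of a PyTorch LSTM layer, counts strong projections per source unit following Appendix A.3, and extracts the main k-core with networkx. The layer index, z-threshold, and edge count echo values reported in the text, but this is an illustration of the procedure, not the released analysis pipeline (whose weight-matrix orientation may differ).

```python
import numpy as np
import networkx as nx

def hidden_to_gate_blocks(lstm, layer=1):
    # PyTorch stacks W_hh as [W_hi; W_hf; W_hg; W_ho], each (H, H);
    # in this convention columns index source hidden units, rows index gates.
    W_hh = getattr(lstm, f"weight_hh_l{layer}").detach().numpy()
    H = W_hh.shape[1]
    return np.concatenate([W_hh[:H], W_hh[H:2 * H]], axis=0)  # (2H, H)

def strong_projection_counts(W, z_thresh=5.0):
    # z-score each unit's concatenated hidden-to-gate vector (one column)
    z = (W - W.mean(axis=0)) / W.std(axis=0)
    return (np.abs(z) > z_thresh).sum(axis=0)  # strong projections per unit

def main_core(W, n_edges=258):
    # binarize: keep the top-n_edges |weights| as edges, take the max k-core
    thresh = np.sort(np.abs(W).ravel())[-n_edges]
    rows, cols = np.nonzero(np.abs(W) >= thresh)
    H = W.shape[1]
    G = nx.Graph()
    G.add_edges_from((r % H, c) for r, c in zip(rows, cols))  # gate row -> unit
    G.remove_edges_from(nx.selfloop_edges(G))  # core_number forbids self-loops
    k_max = max(nx.core_number(G).values())
    return sorted(nx.k_core(G, k_max).nodes())
```

Correlating `strong_projection_counts` against the per-unit timescales then reproduces the kind of timescale-degree relationship reported above, and the returned `main_core` corresponds to the putative controller set.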
In contrast, we observed several long-timescale units positioned in the center of the MDS space, suggesting that the projection patterns of these units were similar to the mean projection pattern. We refer to this more MDS-central set as the “integrator units” (labeled in green in Figure 4A). Similar to the WLSTM, the projection patterns of the “controller units” in the CLSTM were distinct from other units in the network, according to the MDS results (Figure A.5C). However, we did not observe “integrator units” positioned in the center of the MDS space of the CLSTM.
Are the “controller” and “integrator” units particularly important for the model's ability to predict the next token? To test the functional importance of these subsets of units, we conducted group ablation analyses (See Appendix A.4). Ablating controller units reduced the accuracy of token prediction overall, while ablating integrator units only reduced prediction accuracy for the last words of the sentences (Figure 4C). The results confirm that the putative controller and integrator nodes are functionally significant, with distinctive roles in the WLSTM language model.
Finally, to test the generalization of the timescale and connectivity analyses to a different model architecture, we conducted preliminary analyses on a Gated Recurrent Unit (GRU) language model (Cho et al., 2014) and another word-level LSTM model with a smaller hidden size (100 units) per layer. The models were trained using similar parameter settings as in Gulordava et al. (2018) until they converged, without any model-specific optimization. We found a similar sparsity of long-timescale units in both models, but did not observe the same relationship between timescales and connectivity (Appendix A.5; A.6; Figure A.7; A.8; A.9; A.10)." }, { "heading": "7 DISCUSSION", "text": "We demonstrated a new method for mapping the timescale organization in recurrent neural language models. Using this method, we mapped the timescale distributions of units within word-level and character-level LSTM language models, and identified a small set of units with long timescales. We then used network analyses to understand the relationship between the timescale of a unit and its connectivity profile, and we distinguished two subsets of long-timescale units with seemingly distinctive functions. Altogether, we proposed methods combining timescale and connectivity analyses for discovering the timescale and functional organization in language models.
The units with longer processing timescales included some units whose role in long-range language dependencies had already been established (Lakretz et al., 2019), but almost all of the long-timescale units are of unknown function. The timescale mapping procedure described here provides a model-free method for identifying nodes necessary for long-range linguistic and discursive processes (e.g. tracking whether a series of words constitutes an assertion or a question). Future studies of these neural language models could focus on the specific linguistic information tracked by the long-timescale units, especially the “controller” units which control the information flow of other units in the network.
The current study measured unit timescales using a simple token distance, and so the method may be applied to understanding recurrent neural nets beyond language models.
It will be insightful for future studies to investigate whether the processing timescales characterized via token distance are comparable to those measured using functional measures, such as syntactic distance. Relatedly, while we explored the timescale variance under several context conditions, a more thorough investigation will be needed to examine how the timescales of individual units may vary at different positions within a sentence, both in terms of token location and syntactic location.
Processing timescales may exhibit an analogous hierarchical organization in LSTMs and in the human cerebral cortex: in both cases, a subset of nodes with high degree and high inter-connectivity express unusually long timescales. More detailed testing of this apparent correspondence is required, however, because units within an LSTM layer are not spatially embedded and constrained as in biological brains, and thus the LSTM units do not express a spatially graded timescale topography." }, { "heading": "ACKNOWLEDGMENTS", "text": "C.J.H. and H-Y.S.C. gratefully acknowledge the support of the National Institute of Mental Health (grant R01MH119099)." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 UNITS EXCLUDED FROM TIMESCALE ANALYSIS", "text": "We excluded 1 unit in the WLSTM model and 5 units in the CLSTM model which were not properly fit by the logistic function; we further excluded 14 units in the WLSTM model and 7 units in the CLSTM model which either did not show a non-zero activation difference before the shared segment started, or whose activation differences increased when they started to process the shared segment. After these exclusions, 635 units remained in the WLSTM and 1012 units remained in the CLSTM for further analysis." }, { "heading": "A.2 TIMESCALE ANALYSES ACROSS DIFFERENT DATASETS AND CONTEXT CONDITIONS", "text": "" }, { "heading": "A.2.1 WIKIPEDIA TEST DATASET", "text": "The Anna Karenina corpus used in the current study has a different linguistic structure from the Wikipedia corpus on which the WLSTM and CLSTM models were trained. Although we analyzed only the Anna Karenina sentences with low perplexity, it was important to test the robustness of our results across datasets. Thus, we mapped the timescale of each unit using the Wikipedia test set, as used by Gulordava et al. (2018). Specifically, we sampled 500 long sentences containing “, and” for the Intact Context condition. As before, we generated sentences by preceding the “shared input” segment (after the conjunction) with either the original prior context segment, or a randomly chosen prior context segment. As in the original analysis, we then replaced the context segment with 30 context segments randomly sampled from other parts of the test set to generate the Random Context condition. The mapped timescales using the Wikipedia test set were highly correlated with those from the novel corpus, suggesting the robustness of the unit timescales (Figure A.4A)." }, { "heading": "A.2.2 TIMESCALES MEASURED IN THE MIDDLE OF A SENTENCE", "text": "To examine how the timescales of individual units may vary across different positions in a sentence, we varied the location of the segmentation point. Instead of using the conjunction (“, and”) as the segmentation point, we chose an arbitrary segmentation point: the 15th token of a long sentence, to separate the context segment and shared input segment. In the Random Context condition, we replaced the context segment with the first 15 tokens from other sentences of the corpus.
We found that the unit timescales were highly correlated with those from the condition where we used the conjunction as the segmentation point, although several units shifted their timescales in either direction (Figure A.4B). This analysis was conducted using the Wikipedia test set." }, { "heading": "A.2.3 TIMESCALE RESET AT THE BEGINNING OF A SENTENCE", "text": "To examine if the timescales of individual units can flexibly reset at the beginning of a sentence, we conducted the same timescale analysis but using a “full stop” as the segmentation point instead of the conjunction “, and”. Thus, if the original test string was “The girl kicked the ball, and the boy caught it”, then the full-stop version of the test string would be “The girl kicked the ball. The boy caught it.” In this setting, the context segment and shared input segment in the Intact Context condition are two consecutive sentences. To ensure the temporal dependence between the context segment and shared input segment, we sampled 100 consecutive sentence pairs from the Anna Karenina corpus. Note that this is not possible using the Wikipedia test set from Gulordava et al. (2018), because that set is composed of unrelated sentences. The Random Context condition was generated by replacing the first sentence with randomly sampled sentences from other parts of the novel. We found that when using a “full stop” to segment context and shared input, most units in the network showed a timescale near 0, indicating near-zero dependence on the linguistic context from the text preceding the full stop (Figure A.4C). This suggests that the units in the LSTM tend to “reset” their context representation at the beginning of a sentence." }, { "heading": "A.2.4 CONTEXT REPRESENTATION SHAPED BY INDIVIDUAL WORDS", "text": "Inspired by the token-shuffling procedure of Khandelwal et al. (2018), we explored whether the context representations of individual units in the LSTM were shaped by individual words, rather than coherent sequences of words. For this analysis, instead of replacing the context with syntactically structured segments from other parts of the corpus, we generated the “random context” by shuffling the order of words within the context segment. We then mapped the unit timescales as before, by examining the unit activation difference as a function of the distance from the onset of shared input. Intriguingly, we found that most of the units showed similar timescales across the context-replacement and context-shuffling procedures (Figure A.4D). This suggests that the context representations in LSTMs largely depend on the presence of individual words in the context, rather than their appearance within coherent linguistic sequences. However, we did observe a subset of units (labeled in the Figure, and almost all long-timescale units) whose timescales were longer when context was replaced rather than shuffled. For this subset of units, the ability to maintain a representation of prior context over many tokens depends on that prior context being a coherent linguistic sequence. This subset of units is a promising target for future studies of syntactic representations in LSTMs." }, { "heading": "A.3 IDENTIFYING STRONG HIDDEN-TO-GATE PROJECTIONS", "text": "First, for each hidden unit, we concatenated the corresponding rows in the $W_{hi}$ and $W_{hf}$ matrices, to generate a single “hidden-to-gate” projection vector for that hidden unit. Next we z-scored the vector to get standardized projection values from that unit to all other units in the network.
Using |z-score| > 5 as the criterion, we identified a total of 258 “strong projections” from all hidden units to the input gate and forget gate in the WLSTM. The projection strength of each unit was then calculated based on its number of “strong projections” (Figure 4A). Although the criterion |z-score| > 5 was selected to better visualize the results in Figure 4, different criteria did not change the finding that units with longer timescales have more strong projections. For example, using |z-score| > 3 as the threshold we obtained corr(timescale, projections) = 0.30, p<0.001; with |z-score| > 4 we obtained corr(timescale, projections) = 0.35, p<0.001.
Next, we identified the edges corresponding to the top 258 magnitude weight-values within the combined $W_{hi}$ and $W_{hf}$ matrices. Together, these edges formed a “strong-projection network”. Finally, we used k-core analysis to identify the main core of the strong-projection network. This main core comprised our “controller units” (Figure A.3).
Using the same criteria and method, we identified a total of 390 “strong projections” from all hidden units to the input gate and forget gate in the CLSTM. We then extracted the top 390 weight values from the weight matrices to construct a “strong-projection network” and again identified the main core network, which comprised the “controller units” for the CLSTM model (Figure A.5A, A.5B)." }, { "heading": "A.4 ABLATION ANALYSES ON PUTATIVE CONTROLLER AND INTEGRATOR UNITS", "text": "To examine the non-trivial roles of the controller and integrator units identified in the word-level LSTM model, we performed a preliminary group ablation analysis to look at how ablating the controller units influences model performance on predicting the next token, relative to the ablation of a random set of units. Specifically, since long-timescale integrator units should have the greatest effect on predicting tokens in the later part of sentences (i.e., when more context is integrated), we examined the model performance on predicting tokens at two different positions: (1) all the tokens regardless of their positions in the sentences (“All tokens” condition), and (2) the last tokens of sentences (“Final tokens” condition).
We evaluated the effects of ablation on model performance by measuring the differences of probabilities (∆P) assigned to the target words (∆P = probability of target word in the ablated model minus probability of target word in the original model). Ablation effects for controller units (N=9) and integrator units (N=10) were compared against a baseline of ablating the same number of randomly-selected units from layer 2 of the LSTM (Figure 4C). We used the test corpus used by Gulordava et al. (2018) and measured the average performance of each model across 100 text-batches, randomly sampled from the Wikipedia test dataset. Each text-batch was composed of 1000 tokens starting at the beginning of a sentence.
In the “All tokens” condition, we calculated the ∆P for every token in the tested text, while in the “Final tokens” condition, we calculated ∆P only at the last token of every sentence (i.e. the token right before the full stop “.” of each sentence). We then averaged the ∆P in both conditions across text-batches to get a mean performance difference between the ablated model and the intact model.
Ablating controller units reduced the probabilities assigned to the target words, more so than ablating random units (Figure 4C, controller vs. random across 100 text batches: Cohen's d = -4.85, t = -34.28, p<0.001).
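The group ablation in Appendix A.4 can be sketched as follows: hidden and cell units of a chosen layer are clamped to zero at every timestep, and the probability assigned to each target word is compared against the intact model to obtain ∆P. The step-by-step mechanics and the unit indices one would pass in are illustrative choices, not necessarily the authors' implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def token_logprobs(lstm, embed, decoder, token_ids, ablate=(), layer=1):
    """Log-probability of each next token, with the given layer-`layer`
    units clamped to zero throughout the sequence (group ablation)."""
    h = c = None
    logps = []
    for t in range(len(token_ids) - 1):
        x = embed(torch.tensor([[token_ids[t]]]))
        out, (h, c) = lstm(x) if h is None else lstm(x, (h, c))
        for u in ablate:                 # zero the ablated units in place
            h[layer, 0, u] = 0.0
            c[layer, 0, u] = 0.0
        logp = F.log_softmax(decoder(h[-1]), dim=-1)
        logps.append(logp[0, token_ids[t + 1]].item())
    return logps

# Delta-P per target word (cf. A.4), comparing ablated vs. intact runs:
# delta_p = [math.exp(a) - math.exp(b) for a, b in zip(ablated, intact)]
```

Running this once with `ablate=()` and once with the controller (or integrator) indices, and averaging ∆P over text-batches, yields the comparison plotted in Figure 4C.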
In contrast, ablating integrator units reduced the probabilities less than ablating random units (integrator vs. random: Cohen's d = 2.50, t = 17.67, p<0.001). We hypothesized that the integrator units mostly influence the model's prediction performance for tokens where long-range information is especially relevant, such as in the later portions of clauses and sentences. Consistent with this, we found that, when we examined the ablation effects only for tokens in the final position of a sentence, ablating integrator units reduced the probabilities more than ablating random units (Cohen's d = -0.34, t = -2.41, p = 0.017). Interestingly, ablating controller units reduced the probability of sentence-final targets less than random units (Cohen's d = 0.67, t = 4.74, p<0.001).
In summary, these ablation results indicate a non-trivial functional role for the controller and integrator units, despite the fact that each subset of units is composed of only 10 amongst 650 total hidden units. Also, the putative controller and integrator sets appear to have distinctive roles within the WLSTM, with the controllers supporting accurate predictions overall, while the integrator units appear to boost accurate predictions at the end of sentences." }, { "heading": "A.5 MAPPING THE TIMESCALE ORGANIZATION IN A GRU LANGUAGE MODEL", "text": "" }, { "heading": "A.5.1 TRAINING", "text": "To explore whether the timescale mapping methods, and our findings, may generalize to other model architectures, we trained and studied a word-level GRU language model (Cho et al., 2014). As far as possible, we applied similar parameters in the GRU as were used for the LSTM by Gulordava et al. (2018): the same Wikipedia training corpus, the same loss function (i.e. cross-entropy loss), and the same hyperparameters except for a learning rate initialized to 0.1, which we found better for training the GRU. The GRU model also had two layers, with 650 hidden units in each layer.
We trained the GRU model for 30 epochs, at which point the GRU converged to a validation perplexity of 118.36. Note that since we adopted similar training settings as were used for training the LSTM model by Gulordava et al. without model-specific optimization, the perplexity is higher than that of the LSTM model reported in Gulordava et al. (2018) (perplexity = 52.1 on the English corpora, after training for 40 epochs and selecting the model with the lowest perplexity out of 68 combinations of different hyperparameters). We then analyzed the timescale of its hidden units using the same method as was used for analyzing the LSTMs, and using the test data derived from the training Wikipedia corpus." }, { "heading": "A.5.2 TIMESCALE ORGANIZATION OF A GRU MODEL", "text": "Similar to the LSTM model of Gulordava et al., the majority of the units in the GRU also showed shorter timescales. More specifically, we found: (1) the second layer of the GRU model was more sensitive to prior context than the first layer, as in the LSTM (Figure A.7A); (2) the distribution of timescales across units was similar in the GRU and LSTM, although the GRU showed a more right-skewed distribution with a larger proportion of short-timescale units (Figure A.7B, C)." }, { "heading": "A.5.3 TIMESCALE VERSUS NETWORK CONNECTIVITY IN A GRU MODEL", "text": "We also performed the timescale vs. network connectivity analyses on the GRU model.
Because the update of hidden states in a GRU is controlled by the reset and update gates, we measured the projection patterns of hidden units by analyzing the matrix of combined hidden-to-update-gate and hidden-to-reset-gate weights. In contrast to the LSTM models, hidden units in the GRU that we trained did not show a relationship between longer timescales and stronger hidden-to-gate projections (Figure A.8A). Moreover, when using k-core analysis to identify subsets of interconnected high-degree units, the core network in the GRU contained many units spanning long to short timescales. Interestingly, when we visualized the position of the k-core units in the MDS space, they tended to be located at the edge of the space, similar to what we found in the LSTM. This indicates that, as in the LSTM, the core units in the GRU have distinctive profiles, distant from one another and from other units in the network (Figure A.8B). However, we did not observe the pattern of “integrator units” in the GRU as in the LSTM.
These apparent similarities and differences between the LSTM and GRU are intriguing, but we emphasize that (1) the perplexity of this GRU model is much higher than that of the LSTM, due to the sub-optimal parameter settings, and that (2) comparing the LSTM and GRU connection patterns is not straightforward, as the overall distribution of weights is different. Further work will be required to determine comparable thresholds for “strong” projections and “high-degree units” in each case. As we noted in the manuscript and above, the connectivity results are exploratory; however, we believe that the GRU analysis demonstrates how these methods can be extended to map and compare the functional organization of language models of different architectures.
Finally, we note that when conducting the timescale analysis on an incompletely trained GRU model (trained ~10 epochs, validation perplexity ≈ 350), the timescale distribution was more right-skewed (Figure A.6B) than for the better-trained GRU (Figure A.7B). Altogether, these results suggest that the long-timescale units in the GRU were gradually formed during the training process." }, { "heading": "A.6 MAPPING THE TIMESCALE ORGANIZATION IN A WORD-LEVEL LSTM WITH DIFFERENT HIDDEN SIZE", "text": "To examine whether the number of hidden units in the model would affect the timescale organization in an LSTM, we trained another 2-layer word-level LSTM model with the same Wikipedia corpus and similar parameter settings as in Gulordava et al. (2018), but with only 100 hidden units in each layer. We called this model LSTM-100. We trained the model for 56 epochs until the model converged to a validation perplexity of 98.75, and conducted the same analysis as described in the main text to map the timescales of LSTM-100. Because LSTM-100 has fewer weight connections overall, we used |z-score| > 3 as the criterion to determine the “strong” hidden-to-gate projections for the connectivity analyses.
Regarding the timescale distribution in LSTM-100, we found that the results were similar to the 650-unit word-level LSTM model, in that: (1) the second layer of LSTM-100 showed more context sensitivity than the first layer, and (2) although it was difficult to quantitatively compare the unit-level timescale distribution between the LSTM-100 model and the LSTM with 650 units, they both contain a similarly small subset of long-timescale units (Figure A.9).
We did not observe a significant correlation between the unit timescale and the number of strong projections generated by each unit in the LSTM-100 model: the long-timescale units in LSTM-100 did not have more connections than short-timescale units. When visualizing the MDS space of connectivity similarity of LSTM-100, the “controller units” identified using the k-core analysis were located at the edge of the space, similar to the 650-unit LSTM model. Interestingly, we observed a subset of long-timescale units in the center of the MDS space, analogous to the “integrator units” found in the 650-unit LSTM model. Altogether, the pattern of “integrator units” might be a commonly evolved feature that is shared between LSTM model architectures, but not with GRU architectures." } ]
2021
TIMESCALE ORGANIZATION OF NEURAL LANGUAGE MODELS
SP:345a245503d9e3acaf695de66d73d9f4ff3eab83
[ "This paper focuses on the problem of generating sparse l2-adversarial examples in a white-box and surrogate/transfer setting. The authors consider “local attacks” – perturbing on a limited number of pixels while achieving high attack success rate. The main contribution of this work is to define the region to perturb using grad-cam based saliency maps to identify regions that have a greater impact on the classification decision. Having identified this region, the author use SGD to find the adversarial perturbations. The experimental results show that a high attack success rate can be achieved with this method. " ]
Recently, many advanced algorithms have been proposed to deal with the vulnerability of CNNs to adversarial examples. These algorithms focus on directly modifying global pixels with small perturbations, and some work involves modifying local pixels. However, global attacks suffer from perturbation redundancy, while local attacks are not effective. To overcome this challenge, we achieve a trade-off between the perturbation power and the number of perturbed pixels in this paper. The key idea is to find the feature contributive regions (FCRs) of the images. Furthermore, in order to create an adversarial example as similar as possible to the corresponding clean image, we redefine a loss function as the objective function of the optimization in this paper and then use a gradient descent optimization algorithm to find the efficient perturbations. Our comprehensive experiments demonstrate that the FCRs attack shows strong attack ability in both white-box and black-box settings on both the CIFAR-10 and ILSVRC-2012 datasets.
[]
[ { "authors": [ "Naveed Akhtar", "Ajmal Mian" ], "title": "Threat of adversarial attacks on deep learning in computer vision: A survey", "venue": "IEEE Access,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In 2017 ieee symposium on security and privacy (sp),", "year": 2017 }, { "authors": [ "Ting Deng", "Zhigang Zeng" ], "title": "Generate adversarial examples by spatially perturbing on the meaningful area", "venue": "Pattern Recognition Letters,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Jun Zhu", "Xiaolin Hu", "Jianguo Li" ], "title": "Boosting adversarial attacks with momentum", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Yoav Goldberg" ], "title": "Neural network methods for natural language processing", "venue": "Synthesis Lectures on Human Language Technologies,", "year": 2017 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Alain Hore", "Djemel Ziou" ], "title": "Image quality metrics: Psnr vs. ssim", "venue": "In 2010 20th international conference on pattern recognition,", "year": 2010 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Nina Narodytska", "Shiva Kasiviswanathan" ], "title": "Simple black-box adversarial attacks on deep neural networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW),", "year": 2017 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow" ], "title": "Transferability in machine learning: from phenomena to black-box attacks using adversarial samples", "venue": "arXiv preprint arXiv:1605.07277,", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Somesh Jha", "Matt Fredrikson", "Z Berkay Celik", "Ananthram Swami" ], "title": "The limitations of deep learning in adversarial settings", "venue": "IEEE European symposium on security and privacy (EuroS&P),", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow", "Somesh Jha", "Z Berkay Celik", "Ananthram Swami" ], "title": "Practical black-box attacks against machine learning", "venue": "In Proceedings of the 2017 ACM on Asia conference on computer and communications security,", "year": 2017 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Ramprasaath R Selvaraju", "Michael Cogswell", "Abhishek Das", "Ramakrishna Vedantam", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Jiawei Su", "Danilo Vasconcellos Vargas", "Kouichi Sakurai" ], "title": "One pixel attack for fooling deep neural networks", "venue": "IEEE Transactions on Evolutionary Computation,", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Yaniv Taigman", "Ming Yang", "Marc’Aurelio Ranzato", "Lior Wolf" ], "title": "Deepface: Closing the gap to human-level performance in face verification", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2014 }, { "authors": [ "Chaowei Xiao", "Jun-Yan Zhu", "Bo Li", "Warren He", "Mingyan Liu", "Dawn Song" ], "title": "Spatially transformed adversarial examples", "venue": "arXiv preprint arXiv:1801.02612,", "year": 2018 }, { "authors": [ "Cihang Xie", "Zhishuai Zhang", "Yuyin Zhou", "Song Bai", "Jianyu Wang", "Zhou Ren", "Alan L Yuille" ], "title": "Improving transferability of adversarial 
examples with input diversity", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Kaidi Xu", "Sijia Liu", "Pu Zhao", "Pin-Yu Chen", "Huan Zhang", "Quanfu Fan", "Deniz Erdogmus", "Yanzhi Wang", "Xue Lin" ], "title": "Structured adversarial attack: Towards general implementation and better interpretability", "venue": "arXiv preprint arXiv:1808.01664,", "year": 2018 }, { "authors": [ "Kaidi Xu", "Sijia Liu", "Gaoyuan Zhang", "Mengshu Sun", "Pu Zhao", "Quanfu Fan", "Chuang Gan", "Xue Lin" ], "title": "Interpreting adversarial examples by activation promotion and suppression", "venue": null, "year": 1904 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer", "venue": "arXiv preprint arXiv:1612.03928,", "year": 2016 }, { "authors": [ "Jianming Zhang", "Sarah Adel Bargal", "Zhe Lin", "Jonathan Brandt", "Xiaohui Shen", "Stan Sclaroff" ], "title": "Top-down neural attention by excitation backprop", "venue": "International Journal of Computer Vision,", "year": 2018 }, { "authors": [ "Yonggang Zhang", "Xinmei Tian", "Ya Li", "Xinchao Wang", "Dacheng Tao" ], "title": "Principal component adversarial example", "venue": "IEEE Transactions on Image Processing,", "year": 2020 }, { "authors": [ "Bolei Zhou", "Aditya Khosla", "Agata Lapedriza", "Aude Oliva", "Antonio Torralba" ], "title": "Learning deep features for discriminative localization", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "The development of deep learning technology has promoted the successful application of deep neural networks (DNNs) in various fields, such as image classification (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014), computer vision (He et al., 2016; Taigman et al., 2014), natural language processing (Devlin et al., 2018; Goldberg, 2017), etc. In particular, convolutional neural networks (CNNs), a typical DNNs, have shown excellent performance applied in image classification. However, many works have shown that CNNs are extremely vulnerable to adversarial examples (Szegedy et al., 2013). The adversarial example is crafted from clean example added by well-designed perturbations that are almost imperceptible to human vision, while can fool CNNs. Scholars have proposed a variety of methods to craft adversarial samples, such as L-BFGS (Szegedy et al., 2013), FGSM (Goodfellow et al., 2014), I-FGSM (Kurakin et al., 2016), PGD (Madry et al., 2017) and C&W (Carlini & Wagner, 2017). These attack strategies can successfully mislead CNNs to make incorrect predictions, restricting the application of CNNs in certain security-sensitive areas (such as autonomous driving, financial payments based on face recognition, etc.). Therefore, learning how to generate adversarial examples is of great significance.\nWe can categorize these attacks into two categories, i.e., the global attacks and the local attacks, according to the region added perturbations. The global attacks tempt to perturb all pixels of the clean image, such as FGSM (Goodfellow et al., 2014), PGD (Madry et al., 2017) and C&W (Carlini & Wagner, 2017); the local attacks only modify some pixels of the clean image, such as one-pixel attacks (Su et al., 2019) and JSMA (Papernot et al., 2016b). At present, the global attacks perturb all pixels on the whole image, which not only fail to destroy the feature contributive regions (the critical semantics of an image), but they also increase the degree of image distortion. We explain in detail in the experimental part. The local attacks seem to be able to solve this problem, but the current proposed local attacks don’t well realize that focus on undermining the image feature contributive regions. Papernot et al. (2016b) proposed a method of crafting adversarial example based on the Jacobian Saliency Map by constraining the `0 norm of the perturbations, which means that only a few pixels in the image are modified. However, this method has the disadvantage of over-modifying the value of the pixels, making the added perturbations easily perceptible by the naked eye, and its adversarial strength is weak (Akhtar & Mian, 2018). Su et al. (2019) proposed an extremely adversarial attack—one-pixel attack. One-pixel attack can fool CNNs by changing 1 to 5 pixels, but\nthis method is better for low-resolution images attack (such as CIFAR-10), and the attack success rate for high-resolution images will be greatly reduced (such as ImageNet), and the cost is very large `1 distortion (Xu et al., 2018).\nIn this paper, we propose a novel attack method to overcome the redundant perturbations of the global attacks and the poor strength of the proposed local attacks. Inspired by the work of CAM (Zhou et al., 2016) and Grad-CAM (Selvaraju et al., 2017), it is the most effective way to reduce image distortion, high efficiency and reduce computational complexity by adding perturbations to the critical semantics. 
As we all know, CNN is an end-to-end representation learning model, which starts from simple low-level features and combines them into abstract high-level features layer by layer. Thus, Grad-CAM (Selvaraju et al., 2017) uses the gradient information of the last convolutional layer as the metric to understand the decision of each neuron for target classification, and explains in a visual way that not all image pixels contribute to the model classification. Similarly, as shown in Figure 1, the red area is the main contributive area. Therefore, perturbing the image globally is not the most efficient strategy. We propose the FCRs attack strategy, which only adds perturbations in Feature Contributive Regions (FCRs) with the aim of generating sparse and more excellent perturbations. Especially, compared with existing local attacks, our proposed method perturbs continuous semantic regions rather than discrete pixels. In this work, we use Grad-CAM to locate regions that have a greater impact on the classification decision of CNNs. To ensure the similarity between the adversarial example and the corresponding clean image as much as possible, the objective function we optimize is the sum of the two parts of the function: the `2 norm of the perturbations and the loss function of the generated adversarial examples. We thus use the stochastic gradient descent optimization algorithm to find efficient perturbations. In order to avoid the situation where the perturbations do not update when the objective function tends to zero, we also introduce inverse temperature T under the inspiration of Hinton et al. (2015).\nCompared to previous work, the contributions of our work are summarized as follows:\n• We propose an attack via feature contributive regions (FCRs) for achieving a trade-off between the powerful attack and the small perturbations. More importantly, this work implements an effective local attack algorithm by redefining an objective function.\n• Specially, we novelly propose an inverse temperature T , which avoids the situation where the loss function of the generated adversarial example tends to be zero when the stochastic gradient descent optimization algorithm is used to find the perturbations.\n• Comprehensive experiments demonstrate that FCRs attack consistently outperforms stateof-the-art methods on the CIFAR-10 and ILSVRC-2012 datasets. In addition, we verify the importance of FCRs by dividing the original clean image into two parts (i.e., FCRs and Non-FCRs)." }, { "heading": "2 RELATED WORK", "text": "In many cases, the CNNs are vulnerable to adversarial attacks which have caused extensive research in academia. Szegedy et al. (2013) used the constrained L-BFGS algorithm to craft adversarial ex-\namples. L-BFGS attack has a high attack success rate, but the computational cost is also high (Narodytska & Kasiviswanathan, 2017). Therefore, Goodfellow et al. (2014) proposed FGSM, which can quickly generate adversarial examples but has a low attack success rate. Kurakin et al. (2016) proposed the Iterative attack method (I-FGSM) on the basis of FGSM and Madry et al. (2017) proposed PGD. Dong et al. (2018) proposed an iterative algorithm based on momentum (MI-FGSM) to improve the transferability of adversarial samples. Xie et al. (2019) combined the input diversity strategy with iterative attacks on I-FGSM and MI-FGSM to further improve the transferability of adversarial examples. 
The aforementioned attacks belong to the gradient attack family, and they destroy the semantic information of the whole image. Papernot et al. (2016b) proposed an attack method based on the Jacobian Saliency Map by minimizing the `0 norm of adversarial perturbations and used a greedy algorithm to find saliency pixels. However, this method has the problems of over-modifying pixels too much and weak attack intensity. Su et al. (2019) proposed an adversarial attack method based on the differential evolution algorithm. This method also focuses on the number of pixels to be modified, but does not limit the power of a single change, thus leading to very large `1 distortion (Xu et al., 2018). In this work, we expect to achieve a more effective attack that can be as successful as existing attacks but achieves a trade-off between the perturbation power and the number of perturbed pixels. We will show that the proposed FCRs attack is able to destroy the feature contribution regions that make attacks successful, but without incurring extra pixel-level perturbations.\nRelated to our work is Deng & Zeng (2019), who proposed a spatial transformed attack method based on attention mechanism. This work expands the stadv (Xiao et al., 2018) to A-stadv. The purpose of this work is to generate adversarial examples with less interference and less visible. The author only conducts experiments on the ImageNet dataset, and does not discuss the black-box attack effect of this method. But while verifying that many pixel-level perturbations are redundant, our work proposes a new algorithm to craft perturbations, and demonstrates its white-box and blackbox attack effects on the CIFAR-10 and ILSVRC2012 datasets. In addition, Xu et al. (2019) used CAM to explain adversarial perturbations but their target is not to generate adversarial examples, but to understand and interpret adversarial examples. Zhang et al. (2020) proposed a target-free method to generate adversarial examples via principal component analysis and made adversarial examples relate to the data manifold, but their experiment showed that the performances of their method were not always better than FGS and C&W. Here we pay more attention to the feature contribution regions and finally, we achieve a trade-off between the powerful attack and the number of perturbed pixels." }, { "heading": "3 METHODOLOGY", "text": "Inspired by “attention mechanism” (Zagoruyko & Komodakis, 2016), we believe the classifier’s performance is greatly affected by some specific feature regions that is termed as feature contributive regions (FCRs) in this paper. This intuition is also confirmed by Deng & Zeng (2019) proposed Astadv which is an attention based on spatial transformed adversarial example. Therefore, if we find FCRs and add perturbations to them, it will be more effective to fool the classifier with fewer perturbations than previous methods. Our idea is to divide an image into two semantic parts: FCRs and Non-FCRs and then perturbs feature contributive regions. The result of fewer perturbations ensures maximumly adversarial effects on local regions of clean images." }, { "heading": "3.1 NOTIONS", "text": "Deep neural networks (DNNs): A DNN can be expressed as a high-dimensional approximation function: f(X, θ) : Rm → Rn, whereX ∈ Rm is the input variable, Y ∈ Rn is the true class,X and θ represents the model parameters. 
In this work, we focus on a specific class of DNNs, convolutional neural networks (CNNs), which are typically comprised of convolutional layers with some method of periodic downsampling (either pooling or strided convolutions). Here, we define the Logits layer, i.e., the layer before the softmax layer of the CNN (the penultimate layer): Y_j = w_j^T A, j = 1, 2, . . . , C, where w_j^T is the j-th row of the weight matrix and A is the input vector of the Logits layer, obtained by a mapping X ↦ A. The softmax function can then be expressed as S_j = exp(Y_j) / ∑_{i=1}^{C} exp(Y_i), and the full model can be expressed as f(X) = S(w_j^T A). Given an input X, the predicted class of X is Ŷ = argmax_{j=1,...,C} f(X)_j. The goal of model training is to minimize the cross-entropy loss function:\nJ = − ∑_{j=1}^{C} Y_j log S_j = − log S_j   (1)\nwhere Y is a 1 × C one-hot vector: exactly one value is 1 (corresponding to the true label) and the other C − 1 values are 0. For N input-label pairs (X_i, Y_i), the cross-entropy loss of the model can be expressed as:\nJ = − (1/N) ∑_{i=1}^{N} ∑_{j=1}^{C} Y_j log S_j = − (1/N) ∑_{i=1}^{N} log S_j   (2)\nAdversarial examples: An adversarial example can be represented as X′ = X + δ, where δ is the perturbation. Normally, the perturbation δ is constrained by the ℓ0, ℓ2 or ℓ∞ norm, that is, ‖X′ − X‖_p ≤ ε. For untargeted attacks, we only need to find an X′ satisfying Y′ = argmax_j f(X′)_j with Y′ ≠ Y, without specifying into which class the example will be misclassified; for targeted attacks, we specify a target class Y* ≠ Y, so that the target model not only misclassifies the example but also classifies it into the specified class. In general, targeted attacks are more difficult than untargeted attacks." }, { "heading": "3.2 FEATURE CONTRIBUTIVE REGIONS (FCRS)", "text": "FCRs refer to the regions in an image that are critical for model prediction. We can utilize Grad-CAM (Selvaraju et al., 2017), CAM (Zhou et al., 2016) and c-MWP (Zhang et al., 2018) to observe FCRs. However, compared with CAM and c-MWP, Grad-CAM is not restricted to a specific CNN architecture. In addition, it generates better quantitative and qualitative results with less computation. We therefore use Grad-CAM to search for FCRs in our work.\nSuppose the input image X is forward propagated through the CNN, and the last convolutional layer outputs the high-level feature map A of the image, where A^k ∈ R^{u×v} represents the activation of the k-th convolution kernel with size u × v. After passing through a fully connected layer FC, A yields the score vector Y (also called logits) over the classes, where Y^C represents the logit value of the C-th class. We compute the gradient of Y^C with respect to A^k, i.e., ∂Y^C/∂A^k, to measure the importance of the k-th convolution kernel to the classification of the C-th class. Furthermore, we adopt a global average pooling operation to calculate the weight λ_k^C of the k-th convolution kernel:\nλ_k^C = (1/Z) ∑_i ∑_j ∂Y^C / ∂A^k_ij   (3)\nwhere Z = u × v and A^k_ij is the activation at cell (i, j) of the k-th convolution kernel. We use the weights λ_k^C to perform a weighted summation of the A^k and obtain the feature activation map ∑_k λ_k^C A^k for the C-th class. 
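As a concrete illustration of Eq. (3) and the weighted summation just described, the following short Python sketch computes the weights λ_k^C and the raw class activation map. It assumes the activations A^k and the gradients ∂Y^C/∂A^k have already been captured (e.g., via forward/backward hooks); all function and variable names are ours rather than the authors'. The ReLU reactivation and thresholding steps follow in Eqs. (4)-(5) below.

import torch

def gradcam_weights_and_map(acts, grads):
    # acts, grads: tensors of shape (K, u, v) holding A^k and dY^C/dA^k
    weights = grads.mean(dim=(1, 2), keepdim=True)  # Eq. (3): global average pooling over (i, j)
    cam = (weights * acts).sum(dim=0)               # weighted summation: sum_k lambda_k^C * A^k
    return weights.flatten(), cam                   # per-kernel weights and the raw (u, v) map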
Considering that only the positive values in ∑_k λ_k^C A^k have a positive effect on the final classification result, the weighted result is passed through ReLU to remove the influence of negative values, and the activation map of the C-th class is obtained:\nL_X = ReLU(∑_k λ_k^C A^k)   (4)\nWe can visualize L_X as a heatmap (e.g., Figure 1), in which the red area marks the feature contribution regions (FCRs) for the C-th class.\nSince the FCRs are usually irregular, we introduce a masking mechanism to locate them. Formally, the mask is a 0-1 matrix with the same size as the input image. An element of mask_X equal to 0 indicates that the corresponding pixel is not in the FCRs; conversely, an element equal to 1 indicates that the corresponding pixel is in the FCRs. Thus, we can obtain the FCRs of the image simply by applying the Hadamard product between the mask and the image. To obtain the mask, a simple threshold mechanism can be utilized:\nmask_X = 1 if L_X ≥ t, and 0 otherwise   (5)\nwhere t is a threshold and L_X is the C-th class activation map of the input image X. Our proposed method uses mask_X to determine where the perturbations δ_FCR are added." }, { "heading": "3.3 GENERATE PERTURBATIONS FOR FCRS", "text": "We now turn to our approach for generating adversarial perturbations. To begin, we rely on the initial formulation of adversarial perturbations (Goodfellow et al., 2014) and formally define the problem as follows:\nmin_δ ‖δ‖_p   (6)\ns.t. f(X + δ) ≠ y, X + δ ∈ [0, 1]^m,\nwhere ‖ · ‖_p is the norm that constrains the perturbation δ; the commonly used p-norms are ℓ0, ℓ2 and ℓ∞. X is fixed, and the goal is to find the minimal δ that fools the CNN.\nOur method differs in that it only perturbs the FCRs, so we formulate the problem as follows:\nmin_{δ_FCR} ‖δ_FCR‖_p   (7)\ns.t. f(X + δ_FCR) ≠ y, X + δ_FCR ∈ [0, 1]^m.\nHowever, the exact and direct minimization of ‖δ_FCR‖_p is difficult for existing algorithms, as the constraint f(X + δ_FCR) ≠ y is highly non-linear. Therefore, we approximate the problem in a different form that is better suited for optimization. We define an objective function F whose maximization drives f(X + δ_FCR) ≠ y. This objective function consists of two parts: (1) a loss function for generating adversarial examples, and (2) an ℓ2 regularization function that limits the perturbations. In theory, the ℓ0 and ℓ∞ norms could also serve as the regularization function. However, the ℓ0 norm is non-differentiable and cannot be used with standard gradient descent. In addition, the ℓ∞ norm only focuses on the largest value in δ_FCR, so it easily oscillates between two suboptimal solutions during gradient descent (Carlini & Wagner, 2017). Therefore, we use the ℓ2 norm of the perturbation δ_FCR as the distance metric and define the objective function as:\nF = β · 1/‖δ_FCR‖_2 + J(f_θ(X + δ_FCR), Y)   (8)\nwhere β is a hyper-parameter that controls the degree of distortion. For the clean image X, our goal is to find the δ_FCR that maximizes the objective function F while the model misclassifies:\nmax_{δ_FCR} F   (9)\ns.t. X + δ_FCR ∈ [0, 1]^m. Since maximizing F and minimizing 1/F are equivalent, we obtain the following optimization problem:\nmin_{δ_FCR} 1/F   (10)\ns.t. X + δ_FCR ∈ [0, 1]^m. We then use the stochastic gradient descent (SGD) algorithm to solve for δ_FCR. 
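The optimization of Eqs. (8)-(10) can be sketched compactly as follows. This is a minimal PyTorch sketch under the assumption that model returns logits, x is a 1×C×H×W image in [0, 1], y is its label tensor, and mask comes from Eq. (5); the inverse temperature refinement introduced next is omitted here, and all names are ours. The explicit gradient update rule is derived next.

import torch
import torch.nn.functional as F

def fcr_attack(model, x, y, mask, n_iter=30, lr=0.1, beta=1.0):
    delta = torch.randn_like(x) * 0.01 * mask            # random init restricted to the FCRs
    for _ in range(n_iter):
        delta.requires_grad_(True)
        logits = model((x + delta).clamp(0.0, 1.0))      # keep X + delta in [0, 1]^m
        j_adv = F.cross_entropy(logits, y)               # J(f_theta(X + delta), Y)
        f_obj = beta / (delta.norm(p=2) + 1e-8) + j_adv  # Eq. (8)
        grad, = torch.autograd.grad(1.0 / f_obj, delta)  # descend 1/F, Eq. (10)
        delta = ((delta - lr * grad) * mask).detach()    # masked gradient step
    return delta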
The gradient of 1/F with respect to δ_FCR is ∇_{δ_FCR}(1/F), and it is used to update δ_FCR iteratively:\nδ_FCR = (δ_FCR − ∇_{δ_FCR}(1/F) × LR) ⊙ mask_X   (11)\nwhere LR is a hyper-parameter equivalent to the learning rate and ⊙ denotes the Hadamard product.\nFirst, we generate a random perturbation δ_FCR and obtain the initial adversarial example X′ = X + δ_FCR. From Eq. (1) we know that when S_j → 1, J_adv → 0. Let P = β(1/‖δ_FCR‖_2) and J_adv = J(f_θ(X + δ_FCR), Y). When J_adv → 0 we have 1/F = 1/P and ∇_{δ_FCR}(1/F) = ∇_{δ_FCR}(1/P), so continuing to update δ_FCR with SGD will not make J_adv larger. To avoid this situation, we borrow the distillation idea and introduce the hyper-parameter T (T ≤ 1). Applying T offsets J_adv by log T, and Eq. (1) becomes:\nJ^T_adv = − log(S_j / T)   (12)\nThus our objective function is modified to:\nF = β · 1/‖δ_FCR‖_2 + J^T_adv   (13)\nAlgorithm 1 Generate Adversarial Examples via FCRs\nInput: a clean image X; the number of iterations N; the learning rate LR; the degree of distortion β; the threshold t; the inverse temperature T\nOutput: δ_FCR\n1: initialize δ_FCR // K is the number of feature maps in the last convolutional layer\n2: λ_k^C ← (1/Z) ∑_p ∑_q ∂Y^C/∂A^k_pq, k = 1, . . . , K\n3: L_X ← ReLU(∑_k λ_k^C A^k)\n4: mask_X ← 1 if L_X ≥ t, 0 otherwise // get the FCRs\n5: for i = 1, . . . , N do\n6: X′ ← X + δ_FCR\n7: 1/F ← 1/(P + J^T_adv)\n8: δ_FCR ← (δ_FCR − ∇_{δ_FCR}(1/F) × LR) ⊙ mask_X // update δ_FCR\n9: end for" }, { "heading": "4 EXPERIMENTS", "text": "We verify the method proposed in Section 3 experimentally: (1) FCRs are an important basis for the final classification decision; (2) the FCRs attack produces fewer perturbations and reduces the pixel search space; (3) we report white-box and black-box attack results, showing that the FCRs attack has powerful white-box attack capability and high transferability." }, { "heading": "4.1 EXPERIMENT SETUP", "text": "Datasets and Models: We validate our method on two benchmark datasets: CIFAR-10 (Krizhevsky et al., 2009) and ILSVRC2012 (Russakovsky et al., 2015). CIFAR-10 consists of 60,000 images of size 32×32 in 10 categories, each with 6,000 images; 50,000 images are used for training and 10,000 for testing. The ILSVRC2012 image classification dataset contains 1.2 million images from 1,000 categories, and 50,000 images are used as the validation set. There is no point in attacking images that are already misclassified, so the images we use to generate adversarial examples are all correctly classified by all models. We use VGG (Simonyan & Zisserman, 2014) and ResNet (He et al., 2016) series models on the two datasets.\nEvaluation indicators: The evaluation indicators in this article are the attack success rate (ASR), the image quality assessment index peak signal-to-noise ratio (PSNR) (Hore & Ziou, 2010), and the ℓ2 distortion of the perturbations. Ideally, we want to conduct stronger attacks with smaller perturbations, i.e., obtain a higher PSNR and a smaller ℓ2 distortion." }, { "heading": "4.2 VALIDATE THE IMPORTANCE OF FCRS", "text": "This section uses VGG and ResNet model structures on CIFAR-10 to further show that FCRs are the basis for model classification. We divide each image into FCRs and Non-FCRs using the mask of Eq. (5) (Figure 2(a)). The accuracy on the FCRs input is 85% and above, whereas the accuracy on the Non-FCRs input is very low (Figure 2(b)). 
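This FCRs / Non-FCRs accuracy measurement can be sketched as a small evaluation loop (an illustrative sketch; the names, and the mask_fn helper assumed to produce the Eq. (5) mask per image, are ours):

import torch

@torch.no_grad()
def split_accuracy(model, loader, mask_fn):
    hit_fcr = hit_non = total = 0
    for x, y in loader:
        m = mask_fn(x)                                               # per-image FCRs mask
        hit_fcr += (model(x * m).argmax(1) == y).sum().item()        # FCRs-only input
        hit_non += (model(x * (1 - m)).argmax(1) == y).sum().item()  # Non-FCRs input
        total += y.numel()
    return hit_fcr / total, hit_non / total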
The experimental results show that FCRs carry the semantics most relevant to the model's decision-making and are the areas that contribute positively to model classification.\nTo show that global attacks produce less effective perturbations and that adding perturbations to FCRs is the most efficient strategy, we modify the FGSM algorithm and add different perturbations to FCRs and Non-FCRs (Appendix A). We conduct experiments on CIFAR-10, adding different perturbations to FCRs and Non-FCRs for comparison; the results are summarized in Appendix A. Clearly, perturbations restricted to the FCRs hardly reduce the attack success rate; that is to say, the FCRs are the best areas in which to optimize perturbations in the optimization landscape." }, { "heading": "4.3 FCRS ATTACK", "text": "We generate adversarial examples on the two datasets under the white-box setting. Table 1 shows the classification accuracy on clean test data and the ASR of the adversarial examples generated by the FCRs attack on different models. Figure 3 shows the perturbations and adversarial examples generated by the global attacks and by the FCRs attack, randomly selected from the examples that were successfully attacked. It can be seen that the FCRs attack not only confines perturbations to the FCRs but also produces adversarial examples that are very close to the corresponding clean images, whereas the images produced by global attacks are heavily distorted. Under the same ℓ2 distortion constraint, the ASR of PGD is 74.33% and 56.50% on the two datasets, and the ASR of C&W is 72.11% and 45.00%. In contrast, the FCRs attack retains powerful attack performance even though it only attacks the local semantics." }, { "heading": "4.4 COMPARISON WITH OTHER METHODS", "text": "Table 2 reports the ASR, PSNR, and ℓ2 distortion of the different attack methods (we report the average difference between the adversarial examples and the clean images). The FCRs attack not only generates small perturbations (smaller ℓ2 distortion) but also has powerful attack performance (higher ASR), and the crafted adversarial examples are more similar to the original images (larger PSNR). Specifically, the distortion of C&W is the worst: its ℓ2 distortion is the largest on both datasets, and its PSNR is the smallest. Since JSMA and the one-pixel attack are both local attacks, we compare against these two methods. On CIFAR-10, JSMA performs worse than the FCRs attack (ASR: 90.33% vs 100.00%), and its ℓ2 distortion is very large. On ILSVRC2012, our method outperforms it on all metrics. For the one-pixel attack we choose to attack 5 pixels. On CIFAR-10, the one-pixel attack not only has a large ℓ2 distortion but also poor attack performance. On ILSVRC2012, although the ℓ2 distortion of the one-pixel attack is the smallest, its attack success rate is only 40.56%, and we observe that it requires a large amount of memory during the experiments. We thus conclude that restricting the attack to local semantics does not reduce the performance of the FCRs attack." }, { "heading": "4.5 BLACK-BOX ATTACK", "text": "In this section, we explore a more challenging black-box scenario in which the attacker first specifies a substitute for the black-box model, and then generates a set of adversarial examples that successfully attack the substitute model. 
Normally, this set of adversarial examples is considered to have strong transferability; that is, examples that mislead the substitute model will also mislead the target model (Papernot et al., 2016a). The underlying assumption is that highly transferable adversarial examples achieve similar attack performance on many different target models (Papernot et al., 2017). We can therefore expect transferable adversarial examples to reduce the accuracy of the substitute model and, at the same time, the accuracy of the target model, resulting in strong black-box attack capability. To demonstrate the black-box attack capability of the FCRs attack, we conduct black-box attack experiments on different target models and datasets. As shown in Tables 4 and 5, the adversarial examples generated by the FCRs attack are more transferable in most cases." }, { "heading": "5 CONCLUSIONS", "text": "This work explores generating perturbations via feature contribution regions and provides evidence that attacking the local semantics is the most effective strategy. As our theory and experiments show, we have devised a stronger attack method. We conduct extensive experiments on the CIFAR-10 and ILSVRC2012 datasets. The results show that the FCRs attack is much stronger than existing global attacks (such as PGD and C&W) and local attacks (such as JSMA and One-Pixel), and attacks based on feature contribution regions may also provide a new perspective for future research on better defense methods." }, { "heading": "B ANALYSIS OF HYPER-PARAMETERS", "text": "Iteration Times N and Inverse Temperature T: N and T are the dominant hyper-parameters of the proposed algorithm, and here we explore their effects on the ASR. We observe that both N and T have a positive effect on the ASR (Figures 4(a), 4(b)). As N and T increase, the ASR also tends to increase. When N = 30, the ASR of the FCRs attack reaches 100% on both datasets. The ASR increases fastest from N = 1 to N = 5, and then grows slowly towards 100%. With more iterations, our objective function is more likely to find the global optimum and avoid falling into a local one. Increasing T also clearly leads to a higher ASR, because it makes J^T_adv and the regularization term P smaller, which keeps our objective function 1/F decreasing and makes it easier to find the optimal solution with stochastic gradient descent. It should be noted that the best results are achieved at T = 0.05, especially on the ILSVRC2012 dataset, and the attack effect may be reduced if T increases further.\nThreshold t: The threshold t is also a dominant hyper-parameter, whose value directly determines the size of mask_X, i.e., the extent of the region to which perturbations are added. Specifically, we vary t while keeping the other parameters fixed and observe the influence of t on the ASR and the ℓ0 norm of the perturbations. When t = 0, the ASR reaches 100% on both datasets, while ℓ0 is 2903 and 198402, respectively. As the threshold t increases, the perturbed region shrinks; correspondingly, the ℓ0 norm decreases approximately linearly. At t = 0.5, the norms drop to 1529 and 24026 on the two datasets, respectively, about 1/2 and 1/10 of their values at t = 0. 
However, the ASR does not drop dramatically: it decreases by only 0.7% on CIFAR-10 and 5.07% on ILSVRC2012 (Figures 5(a), 5(b)). In the experiments of this paper, we set the threshold t = 0.2 on both datasets." }, { "heading": "C THE RESULTS OF BLACK-BOX ATTACK", "text": "" } ]
2,020
null
SP:652a231a924a97e438595264ea869986e40d45a7
[ "In this paper a novel top-down control network is introduced for multi-task learning. Different from the traditional bottom-up attention models, the authors introduce a top-down module to modify the activation of recognition network based on different tasks. Specifically,the proposed module consists of three identical networks, which are BU1, TD, BU2 streams. Given the input, the BU1 is firstly trained, and then the TD streams is trained by assigning the specific labels. After that, the BU2 is updated with the top-down parameters. Experimental results demonstrate the effectiveness of proposed model." ]
As the range of tasks performed by a general vision system expands, executing multiple tasks accurately and efficiently in a single network has become an important and still open problem. Recent computer vision approaches address this problem by branching networks, or by a channel-wise modulation of the network feature-maps with task specific vectors. We present a novel architecture that uses a dedicated top-down control network to modify the activation of all the units in the main recognition network in a manner that depends on the selected task, image content, and spatial location. We show the effectiveness of our scheme by achieving significantly better results than alternative state-of-the-art approaches on four datasets. We further demonstrate our advantages in terms of task selectivity, scaling the number of tasks and interpretability. Code is supplied in the Supplementary material and will be publicly available.
[]
[ { "authors": [ "Hakan Bilen", "Andrea Vedaldi" ], "title": "Integrated perception with recurrent multi-task neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Chunshui Cao", "Xianming Liu", "Yi Yang", "Yinan Yu", "Jiang Wang", "Zilei Wang", "Yongzhen Huang", "Liang Wang", "Chang Huang", "Wei Xu" ], "title": "Look and think twice: Capturing top-down visual attention with feedback convolutional neural networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp. 2956–2964,", "year": 2015 }, { "authors": [ "Joao Carreira", "Pulkit Agrawal", "Katerina Fragkiadaki", "Jitendra Malik" ], "title": "Human pose estimation with iterative error feedback", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Rich Caruana" ], "title": "Multitask learning", "venue": "Machine learning,", "year": 1997 }, { "authors": [ "Zhao Chen", "Vijay Badrinarayanan", "Chen-Yu Lee", "Andrew Rabinovich" ], "title": "Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks", "venue": "arXiv preprint arXiv:1711.02257,", "year": 2017 }, { "authors": [ "Brian Cheung", "Alex Terekhov", "Yubei Chen", "Pulkit Agrawal", "Bruno Olshausen" ], "title": "Superposition of many models into one", "venue": "arXiv preprint arXiv:1902.05522,", "year": 2019 }, { "authors": [ "Ronan Collobert", "Jason Weston" ], "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "Adam Gazzaley", "Anna C Nobre" ], "title": "Top-down modulation: bridging selective attention and working memory", "venue": "Trends in cognitive sciences,", "year": 2012 }, { "authors": [ "Charles D Gilbert", "Mariano Sigman" ], "title": "Brain states: top-down influences in sensory processing", "venue": null, "year": 2007 }, { "authors": [ "Kazuma Hashimoto", "Caiming Xiong", "Yoshimasa Tsuruoka", "Richard Socher" ], "title": "A joint many-task model: Growing a neural network for multiple nlp tasks", "venue": "arXiv preprint arXiv:1611.01587,", "year": 2016 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask r-cnn", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Joseph B Hopfinger", "Michael H Buonocore", "George R Mangun" ], "title": "The neural mechanisms of top-down attentional control", "venue": "Nature neuroscience,", "year": 2000 }, { "authors": [ "Justin Johnson", "Bharath Hariharan", "Laurens van der Maaten", "Li Fei-Fei", "C Lawrence Zitnick", "Ross Girshick" ], "title": "Clevr: A diagnostic dataset for compositional language and elementary visual reasoning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Alex Kendall", "Yarin Gal", "Roberto Cipolla" ], "title": "Multi-task learning using uncertainty to weigh losses for scene geometry and semantics", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Alexander Kirillov", "Kaiming He", "Ross Girshick", "Carsten Rother", "Piotr Dollár" ], "title": "Panoptic segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 
}, { "authors": [ "Iasonas Kokkinos" ], "title": "Ubernet: Training a universal convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Victor AF Lamme", "Hans Super", "Henk Spekreijse" ], "title": "Feedforward, horizontal, and feedback processing in the visual cortex", "venue": "Current opinion in neurobiology,", "year": 1998 }, { "authors": [ "Shikun Liu", "Edward Johns", "Andrew J Davison" ], "title": "End-to-end multi-task learning with attention", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Kevis-Kokitsi Maninis", "Ilija Radosavovic", "Iasonas Kokkinos" ], "title": "Attentive single-tasking of multiple tasks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ishan Misra", "Abhinav Shrivastava", "Abhinav Gupta", "Martial Hebert" ], "title": "Cross-stitch networks for multi-task learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Alejandro Newell", "Kaiyu Yang", "Jia Deng" ], "title": "Stacked hourglass networks for human pose estimation", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Valentin Piëch", "Wu Li", "George N Reeke", "Charles D Gilbert" ], "title": "Network model of top-down influences on local gain and contextual interactions in visual cortex", "venue": "Proceedings of the National Academy of Sciences,", "year": 2013 }, { "authors": [ "Joseph Redmon", "Ali Farhadi" ], "title": "Yolo9000: better, faster, stronger", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Sara Sabour", "Nicholas Frosst", "Geoffrey E Hinton" ], "title": "Dynamic routing between capsules", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Deepak Babu Sam", "R Venkatesh Babu" ], "title": "Top-down feedback for crowd counting convolutional neural network", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Ozan Sener", "Vladlen Koltun" ], "title": "Multi-task learning as multi-objective optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Gjorgji Strezoski", "Nanne van Noord", "Marcel Worring" ], "title": "Many task learning with task routing", "venue": "arXiv preprint arXiv:1903.12117,", "year": 2019 }, { "authors": [ "P. Welinder", "S. Branson", "T. Mita", "C. Wah", "F. Schroff", "S. Belongie", "P. 
Perona" ], "title": "Caltech-UCSD Birds 200", "venue": "Technical Report CNS-TR-2010-001, California Institute of Technology,", "year": 2010 }, { "authors": [ "Theodore P Zanto", "Michael T Rubens", "Jacob Bollinger", "Adam Gazzaley" ], "title": "Top-down modulation of visual feature processing: the role of the inferior frontal junction", "venue": null, "year": 2010 }, { "authors": [ "Xiangyun Zhao", "Haoxiang Li", "Xiaohui Shen", "Xiaodan Liang", "Ying Wu" ], "title": "A modulation module for multi-task learning with applications in image retrieval", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The goal of multi-task learning is to improve the learning efficiency and increase the prediction accuracy of multiple tasks learned and performed in a shared network.\nIn recent years, several types of architectures have been proposed to combine multiple tasks training and evaluation. Most current schemes assume task-specific branches, on top of a shared backbone (Figure 1a) and use a weighted sum of tasks losses for training (Chen et al., 2017; Sener & Koltun, 2018). Having a shared representation is more efficient from the standpoint of memory and sample complexity (Zhao et al., 2018), but the performance of such schemes is highly dependent on the relative losses weights that cannot be easily determined without a “trial and error” search phase (Kendall et al., 2018).\nAnother type of architecture (Zhao et al., 2018; Strezoski et al., 2019) uses task-specific vectors to modulate the feature-maps along a feed-forward network, in a channel-wise manner (Figure 1b). Channel-wise modulation based architecture has been shown to decrease the destructive interference between conflicting gradients of different tasks (Zhao et al., 2018) and allowed Strezoski et al. (2019) to scale the number of tasks without changing the network. Here, both training and evaluation use the single tasking paradigm: executing one task at a time, rather than getting responses to all the tasks in a single forward pass. Executing one task at a time is also possible by integrating task-specific modules along the network (Maninis et al., 2019). A limitation of using task-specific modules (Maninis et al., 2019) or of using a fixed number of branches (Strezoski et al., 2019), is that it may become difficult to add additional tasks at a later time during the system life-time.\nWe propose a new type of architecture with no branching, which performs a single task at a time with no task-specific modules. Our model is trained to perform a set of tasks ({ti}Ti=1) one task at a time. The model receives two inputs: the input image, and a learned vector that specifies the selected task tk to perform. It is constructed from two main parts (Figure 1c): a main recognition network that is common to all tasks, termed below BU2 (BU for bottom-up), and a control network that modifies the feature-maps along BU2 in a manner that will compute a close approximation to the selected task tk. As detailed below, the control network itself is built from two components (Figure 1d): a top-down (TD) network that receives as inputs both a task vector as well as image information from a bottom-up stream termed BU1 (Figure 1d). As a result, the TD stream combines task information with image information, to control the individual units of the feature-maps along BU2. The modification of units\nactivity in BU2 therefore depends on the task to perform, the spatial location, and the image content extracted by BU1. As shown later, the task control by our approach becomes highly efficient in the sense that the recognition network becomes tuned with high specificity to the selected task tk.\nOur contributions are as follow:\na. Our new architecture is the first to modulate a multi-task network as a function of the task, location (spatial-aware) and image content (image-aware). All this is achieved by a top-down stream propagating task, image and location information to lower levels of the bottom-up network.\nb. 
Our scheme provides scalability with the number of tasks (no additional modules / branches per task) and interpretability (localization of relevant objects at the end of the top-down stream).\nc. We show significantly better results than other state-of-the-art methods on four datasets: Multi-MNIST (Sener & Koltun, 2018), CLEVR (Johnson et al., 2017), CELEB-A (Liu et al., 2015) and CUB-200 (Welinder et al., 2010). Advantages are shown in both accuracy and effective learning.\nd. We introduce a new measure of task specificity, crucial for multi-tasking, and show the high task-selectivity of our scheme compared with alternatives." }, { "heading": "2 RELATED WORK", "text": "Our work draws ideas from the following research lines:\nMultiple Task Learning (MTL) Multi-task learning has been used in machine learning well before the revival of deep networks (Caruana, 1997). The success of deep neural networks in the performance of single tasks (e.g., in classification, detection and segmentation) has revived the interest of the computer vision community in the subject (Kokkinos, 2017; He et al., 2017; Redmon & Farhadi, 2017). Although our primary application area is computer vision, multi-task learning also has many applications in other fields, such as natural language processing (Hashimoto et al., 2016; Collobert & Weston, 2008), and even across modalities (Bilen & Vedaldi, 2016).\nOver the years, several types of architectures have been proposed in computer vision to combine the training and evaluation of multiple tasks. Early works used several copies of the base network (one per task), with connections between them to pass useful information between the tasks (Misra et al., 2016; Rusu et al., 2016). These works do not share computations and cannot scale with the number of tasks. More recent architectures, which are common practice these days, assume task-specific branches on top of a shared backbone, and use a weighted sum of losses to train them. The joint learning of several tasks has proven beneficial in several cases (He et al., 2017), but can also decrease the accuracy of some of the tasks due to limited network capacity, the presence of uncorrelated gradients from the different tasks and different rates of learning (Kirillov et al., 2019). A naive implementation of multi-task learning requires careful calibration of the relative losses of the different tasks. To address these problems several methods have been proposed: ‘Grad norm’ (Chen
Modulation vectors have been further used in Strezoski et al. (2019) for a recognition application, in Cheung et al. (2019) for continual learning applications and in Zhao et al. (2018) for a retrieval application and proved to decrease the destructive interference between tasks and the effect of catastrophic forgetting.\nOur design, in contrast, does not use a multi-branch architecture, nor task-specific modules. Our network is fully-shared between the different tasks. Compared to Zhao et al. (2018), we modulate the feature-maps in the recognition network both channel-wise and spatial-wise, also depending on the specific image at hand.\nTop-Down Modulation Networks Neuroscience research provides evidence for a top-down context, feedback and lateral processing in the primate visual pathway (Gazzaley & Nobre, 2012; Gilbert & Sigman, 2007; Lamme et al., 1998; Hopfinger et al., 2000; Piëch et al., 2013; Zanto et al., 2010) where top-down signals modulate the neural activity of neurons in lower-order sensory or motor areas based on the current goals. This may involve enhancement of task-relevant representations or suppression for task-irrelevant representations. This mechanism underlies humans ability to focus attention on task-relevant stimuli and ignore irrelevant distractions (Hopfinger et al., 2000; Piëch et al., 2013; Zanto et al., 2010).\nIn this work, consistent with this general scheme, we suggest a model that uses top-down modulation in the scope of multi-task learning. Top down modulation networks with feedback, implemented as conv-nets, have been suggested by the computer vision community for some high level tasks (e.g., re-classification (Cao et al., 2015), keypoints detection (Carreira et al., 2016; Newell et al., 2016), crowd counting (Sam & Babu, 2018), curriculum learning (Zamir et al., 2017), etc.) and here we apply them to multi-task learning applications." }, { "heading": "3 APPROACH", "text": "A schematic illustration of our network is shown in Figures 1c and 2a, and explained further below. A control network is used in this scheme to control each of the units in the main recognition network (BU2) given the task and the current image. In practice, this scheme is implemented by using three separate sub-networks (two of them identical) with lateral inter-connections. We next describe the network architecture and implementation in detail.\nOverall structure and information flow Our model contains three essentially identical subnetworks (BU1, TD, BU2), with added lateral connections between them. The networks BU1, BU2 share identical weights. The network receives two inputs: an input image (to both BU1, BU2), and a task specification provided at the top of the TD network by a one-hot vector selecting one of k possible tasks. The processing flows sequentially through BU1, TD (in a top-down direction), BU2, and the final output is produced at the top of BU2. In some cases, discussed below, we used an additional output (object locations) at the bottom of TD. During the sequential processing BU1 first creates an initial image representation. The TD creates a learned representation of the selected task that converts the one-hot vector to a new form (task embedding), and propagates it down the layers. On the way down the TD stream also extracts relevant image information from the BU1 representation via the BU1-TD lateral connections. 
Finally, the TD stream controls the BU2 network, to apply the selected task to the input image, via the TD-BU2 lateral connections.\nBottom-Up streams The BU streams use a standard backbone (such as ResNet, VGG, LeNet, etc.), which is usually subdivided into several stages followed by one or more fully-connected layers, including the final classifier. The lateral connections between streams are placed at the end of each stage, connecting tensors of the same sizes and allowing element-wise modifications.\nTop-down stream The TD stream we use (unless stated otherwise) is a replica of the BU stream in terms of the number of layers, type of layers (convolutional / residual) and number of channels in each layer. The downsampling layers (used in the BU stream) are replaced with upsampling layers (nearest neighbour interpolation layers). This design allows us to immediately extend any given BU backbone to our scheme, and it gives good results in our comparison, but the optimal TD structure is subject to future studies. The TD stream has two inputs: the selected task at its top, and inputs from BU1 via lateral connections. The selected task is usually specified by a one-hot vector, which is transformed, via learnable weights, into a learned task-representation tensor (called the task embedding, ‘Emb’ in Figure 2a) that serves as an input to the TD stream.\nLateral connections For the lateral connections, we experimented extensively with different types, and based on the results we selected two types of connections: one for BU1-to-TD, which is additive, and a second for TD-to-BU2, which is multiplicative, shown in Figures 2b and 2c. More details and ablation studies of the lateral connections are given in the Supplementary.\nAuxiliary losses The use of three sub-networks (BU1, TD, BU2) suggests the natural use of auxiliary losses at the end of the BU1 or TD streams. In the scope of multi-task learning, the TD auxiliary loss can be used to train the extraction of useful spatial information such as the detection of task-relevant objects. This issue is further discussed in Section 4.2, where we demonstrate the use of a localization loss on the last TD feature-map. We show that applying a localization loss allows us to obtain a task-dependent spatial map at inference time, helping interpretability by locating objects of interest.\nTraining & evaluation During training, the learning optimizes all the weights along the BU and TD streams, shared by all tasks, as well as the task-specific embedding parameters. Learning uses standard backpropagation, as the full model is end-to-end trainable.\nAt training time, the network is supplied with an input image and a selected task, drawn at random from the different tasks. During testing, the different tasks are applied sequentially to each test image.\nIn our implementation we used shared weights between the BU1 and BU2 streams. The main motivation for this design was to allow, in future applications, a multi-cycle use of the model, by using the BU and TD streams iteratively. With this broader goal in mind, our scheme can also be seen as an unfolded version of a BU-TD recurrent network for one and a half cycles, which is the minimal number of cycles that allows an image-aware modification process."
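As an illustration, the two lateral connection types can be written as small modules. This is a sketch only, assuming tensors of matching shapes at the end of each stage; the 1×1 convolutional projections are our own choice, as the paper's exact connection internals are described in the Supplementary.

import torch.nn as nn

class AdditiveLateral(nn.Module):          # BU1-to-TD: additive (Figure 2b)
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
    def forward(self, td, bu1):
        return td + self.proj(bu1)         # inject image information into the TD stream

class MultiplicativeLateral(nn.Module):    # TD-to-BU2: multiplicative (Figure 2c)
    def __init__(self, channels):
        super().__init__()
        self.gain = nn.Conv2d(channels, channels, kernel_size=1)
    def forward(self, bu2, td):
        return bu2 * self.gain(td)         # task-, location- and image-dependent gating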
}, { "heading": "4 EXPERIMENTS", "text": "We validated our approach on four datasets (Multi-MNIST, CLEVR, CELEB-A and CUB-200) with tasks ranging from low-level (e.g., colors) to higher-level (e.g., CLEVR configurations, facial features) recognition, and from simple (e.g., classification by location) to more complex tasks (e.g., classification by combined attributes and spatial relations). We compared our approach to alternatives in terms of prediction accuracy, scaling to a larger number of tasks, and task selectivity." }, { "heading": "4.1 DATASETS & TASKS", "text": "Multi-MNIST (Sabour et al., 2017) is a version of the MNIST dataset in which multiple MNIST images are placed on a grid with some spatial overlaps, previously used in Sener & Koltun (2018). We used 2x1, 2x2, 3x3 grids; several training examples are shown in Figure 1e. We used Multi-MNIST to test performance on two sets of tasks: (a) recognizing a digit at a selected location (‘by loc’, e.g., to recognize the digit in the upper-right location) and (b) recognizing a digit that is to the right of another digit (‘by ref’, e.g., to recognize the digit to the right of the digit ‘7’).\nCLEVR is a synthetic dataset, consisting of 70K training images and 15K validation images. The dataset includes images of simple 3D objects with multiple attributes (shape, size, color and material), together with corresponding (question-answer) pairs. We used the CLEVR dataset to test performance on sets of ‘by ref’ tasks, scaling the number of tasks up to 1645 with a fixed model size. We created multiple tasks by randomly choosing 40, 80, 160 and 1645 queries about an attribute of an object to the (left, right, up, down) of a referred object. An example task is: “What is the color of the object to the left of the metal cylinder?” (the metallic cylinder is the referred object).\nCELEB-A is a set of real-world celebrity face images, intensively used in the scope of MTL (e.g., Sener & Koltun (2018), Strezoski et al. (2019)) for attribute classification tasks. The dataset consists of 200K images with binary annotations on 40 face attributes related to expression, facial parts, etc.\nCUB-200 is a fine-grained recognition dataset that provides 11,788 bird images of 200 bird species, previously used in Strezoski et al. (2019). We used CUB-200 to test performance on real-world images with low-level features, and to demonstrate our use of interpretability. An example task is to recognize the color of the bird's crown.\nThe datasets and corresponding tasks are illustrated in Figure 1e and further discussed in the Supplementary material.\n4.2 IMPLEMENTATION DETAILS\nWe performed five randomly initialized training runs for each of our experiments, and present average accuracy and standard deviation.\nWe used LeNet, VGG-11, VGG-7 and ResNet-18 as our BU backbone architectures for the Multi-MNIST, CLEVR, CELEB-A and CUB-200 experiments, respectively. We used the Adam optimizer, and performed a learning rate search over {1e−5, 1e−4, 1e−3, 1e−2} on a small validation set; the main hyperparameters (number of epochs, learning rate and batch size) are shown in Table 1.\nWe trained the full model end-to-end, using a cross-entropy loss at the end of BU2. In some of the CLEVR and CUB-200 experiments we added an auxiliary loss at the end of the TD stream. The target in this case is a 224x224 mask, in which a single pixel, blurred by a Gaussian kernel (s.d. 3 pixels), marks the target location. 
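The target map just described can be sketched as follows (an illustrative Python sketch; the function name and the exact Gaussian implementation are ours):

import torch

def make_target_map(px, py, size=224, sigma=3.0):
    # a single annotated pixel (px, py), blurred by a Gaussian kernel (s.d. 3 pixels)
    ys, xs = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    d2 = (xs - px).float() ** 2 + (ys - py).float() ** 2
    g = torch.exp(-d2 / (2.0 * sigma ** 2))
    return g / g.sum()  # normalized, so it can serve as a soft target for the TD softmax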
Training one task at a time, we minimized the cross-entropy loss over the 224x224 image at the end of the TD softmax output (which encourages a small detected area), for each visible ground-truth annotated object or part. This auxiliary loss allows us, at inference, to create task-dependent spatial maps of detected objects; examples of interest are shown in Figure 4 and in the Supplementary material. For a fair comparison, we also trained another version of the channel modulation architecture with an additional regression loss, computed by an FC layer at the top of the network, using the same ground-truth annotations.\nDatabase details, the full architecture description, more hyperparameters and an analysis of the number of parameters in the architectures can be found in the Supplementary material." }, { "heading": "4.3 COMPARISONS:", "text": "We compared our method with existing alternatives (listed in Table 2): (i) a ‘Single task’ approach, where each task is performed by its own network, (ii) a ‘Multi-branched’ approach, where the sum of the individual losses is minimized (in ‘Uniform scaling’ the losses are equally weighted, whereas in ‘mult-obj-opt’ (Sener & Koltun, 2018), ‘kendall’ (Kendall et al., 2018) and ‘grad-norm’ (Chen et al., 2017) the weights are dynamically tuned) and (iii) a ‘Modulation’ approach, where channel-wise vectors modulate the recognition network in several of the net's stages (‘ch-mod’ (Zhao et al., 2018) uses learnable weights and ‘task-routing’ (Strezoski et al., 2019) uses constant binary weights). For a fair comparison with our approach, we placed the task embeddings (‘modulation’ approach) and the lateral connections from the TD (‘ControlNet, ours’ approach) at the same recognition network locations, followed by a single branch for final recognition (the original implementation of ‘task-routing’ uses multiple branches)." }, { "heading": "4.4 RESULTS", "text": "Table 2 summarizes our results on the Multi-MNIST experiment with 2, 4 and 9 digits. We show the average accuracy of all tasks based on 5 experiments for each row. The ‘#P’ column shows the number of parameters as a multiplier of the number of parameters in a standard LeNet architecture. Detailed results with standard deviations and numbers of parameters are specified in the Supplementary material. Our method achieves significantly better results than the other approaches, even compared with the single-task baseline. Scaling the number of tasks increases the accuracy gap almost without additional parameters. The ‘by ref’ test (10 tasks) proved to be more difficult (lower accuracies), and shows a significant gap between our approach and other methods. Extending the channel-modulation scheme to a wider backbone (‘ch-mod(extended)’ in the table, with 15- and 25-channel feature-maps), with roughly the same number of parameters as our scheme, maintains a large accuracy gap.\nOur quantitative results for the CLEVR, CELEB-A and CUB-200 experiments are summarized in Table 3. The ‘#P’ column shows the number of parameters as a multiplier of the number of parameters used in the corresponding BU architecture. Experiments that used localization ground-truth data are indicated with √ in the ‘loc’ column. The results show better accuracy of our scheme compared to all baselines. Table 3a shows that our results on CLEVR in the 40-task setting surpass other methods by a significant margin. 
Our advantage in the CELEB-A and CUB-200 experiments is smaller than in the former tests, possibly due to database biases (e.g., colors of different bird parts are highly correlated) and the diverse appearance of relevant attributes in these databases.\n4.5 EXPERIMENTS DISCUSSION\nWe discuss below additional aspects of the experiments and general conclusions.\nAccuracy vs. model size tradeoff Figure 3 demonstrates the average accuracy of the 9-class experiment as a function of the number of parameters in four types of architectures; top-left is better. Large markers in the figure correspond to the fixed architectures used in the experiments above. Within each family, parameters can be changed by changing (uniformly) the number of channels along the network.\nSmall markers correspond to modified network sizes: wider LeNet architectures (termed above ‘ch-mod(extended)’), or reduced TD implementations (using TD streams with 1, 4 or 6 channels along their feature-maps). Exact design choices are summarized in the Supplementary material. Our control network architectures correspond to the highest (red) curve in the plot, indicating higher performance for a similar number of parameters. A similar comparison for the CLEVR dataset is reported in the Supplementary material.\nScaling the number of tasks Table 2 above shows the accuracies of the Multi-MNIST experiment with the 2-, 4- and 9-task datasets. Note that as the number of digits increases, the task also increases in difficulty due to the overlap between digits. Increasing the number of tasks increases the accuracy gap compared with alternative models with a similar or even larger number of parameters.\nTable 6a below shows the accuracies when gradually increasing the number of tasks up to 1645 on the CLEVR dataset. Compared with alternatives, our results are scalable with the number of tasks, with a smaller decrease in performance for 1645 tasks. The uniform scaling approach cannot scale above 80 tasks due to memory restrictions, and did not take part in this experiment. Overall, the results show that our scheme deals better with performing multiple tasks under limited capacity; the performance gap increases with the number and complexity of the tasks, and the scheme benefits from the contribution of spatial information and image content.\nAdding Tasks A general question of interest in multi-task learning is whether the tasks need to be pre-defined, compared with the possibility of adding at least some tasks at a later stage to an already trained model. For this test we used an extension of the Multi-MNIST ‘by ref’ experiment. Specifically, our architecture was trained and evaluated on the defined ‘by ref’ tasks, excluding the task involving the digit ‘9’ (9 tasks in all). We then extended the embedding layer and trained it, while keeping the rest of the model fixed, on the new task examples. Table 4 shows the results. The obtained accuracy for the added digit-‘9’ task is 64.68%, while the mean accuracy of the other tasks remains unchanged (74.89%). The new task accuracy is lower than the mean, but shows significant learning compared with the pre-trained accuracy of 16.3%. The accuracy of all other tasks is unaffected (avoiding ‘catastrophic forgetting’) without requiring any further training of the previous tasks.\nTask heterogeneity Our model performs multiple tasks in the same network by applying activation modifications according to the task and to the image at hand. 
A question of interest is whether the set of tasks is limited to a homogeneous set, such as similar classification tasks (as in the Multi-MNIST experiment), or extends to more heterogeneous tasks. In particular, performing both recognition and pixel-labeling tasks in the same network using task selection schemes (e.g., channel modulation, task-routing, ours) has not been studied in the past.\nTable 5: Heterogeneous tasks: executing recognition and segmentation with / without task selection.\nModel | loc inst. | cls/seg inst. | 9 digits - by loc: # branches / CLS (Acc) / SEG (IOU) | 9 digits - by ref: # branches / CLS (Acc) / SEG (IOU)\nMulti-branched | × | × | 18 / 74.10 / 59.07 | 20 / 31.68 / 36.29\nControlNet | √ | × | 2 / 88.40 / 65.34 | 2 / 73.83 / 46.73\nch-mod | √ | √ | 2 / 76.67 / 61.16 | 2 / 46.96 / 38.97\nControlNet | √ | √ | 2 / 88.53 / 67.46 | 2 / 75.53 / 48.07\nWe extended our proposed architecture to perform both recognition tasks (producing class labels) and segmentation tasks (producing a spatial map), guided by the TD instruction provided to the network. The instruction is composed of two parts: selecting a digit, either by location or by reference (the ‘by loc’ and the ‘by ref’ vectors in the Multi-MNIST experiments above), and then applying either recognition or segmentation, using a two-slot one-hot vector (results in Table 5, row 4). We compared this task selection with three alternatives. The first includes no instruction, implemented with an individual branch for each task (top row, multi-branched architecture). The second performs both segmentation and recognition together, implemented with two simultaneously executed branches (second row in the table). The third is channel modulation (row 3). To deal with both recognition and segmentation tasks, we used two separate output branches (one producing one of 10 class labels, the other producing a 28x28 map) rather than a single one. The branches were used according to the task instruction: one at a time when a classification/segmentation instruction is given (rows 3, 4), or both branches simultaneously when no such instruction is provided (row 2). The results show that our model with the selected branch (row 4) performs better than all alternatives. A direct comparison with the alternative of using the two branches together, as in standard (un-instructed) branching models (second row), reveals that the selected branch improves in performance while, at the same time, the un-selected branch drops significantly in performance, by 10-30%.\nAblations and the use of image content We compared our scheme, which uses image information (via BU1) and performs full element-wise modification of the feature-maps (of BU2), to the channel modulation scheme (which performs channel-wise modulation with no image content information) with roughly the same number of parameters, shown in Table 6b, top row (ch-mod (extended)). We also tested two ablations of our network: one without the BU1 stream, removing the image-content contribution (TD, second row), and a second without the TD stream, where the task is supplied to both BU streams, concatenated to the image data (BU, third row; details in the Supplementary). We conducted the experiments on two sets of tasks applied to the 9-location MNIST: classification by location (‘by loc’) and classification by reference (‘by ref’), and on two sets of tasks used in the CLEVR dataset (40 and 1645 tasks), which are inherently ‘by ref’ tasks, using reference objects. 
Table 6b below shows that ControlNet achieves significantly better results than its ablations, with a gap that increases as the complexity or the number of tasks increases. The comparison between our model (4th row) and our ablated model (2nd row), which does not use the image content in the modification process, shows the significant advantage of using image content information. The addition of BU1 (which shares weights with BU2) improves the accuracy substantially with almost no additional parameters.\nTask Selectivity Task-selectivity is likely to be a crucial property of architectures that execute one task at a time (such as ch-mod, ours), in order to fully utilize the network resources for the selected task. We defined a task-selectivity measure by comparing the prediction accuracy of the model on the selected task vs. the non-selected tasks. To make this comparison, we trained readout heads to predict, from the final representation produced by the BU2 stream, not just the selected task, but all possible tasks. For example, in the 4-digit Multi-MNIST, we trained four readout branches on top of the final representation (trained to produce the selected location) to predict the digit identities at all four locations. We define task-selectivity by the ratio between the accuracy for the selected task (above chance level) and the average accuracy of the non-selected tasks (above chance level). Detailed results are shown in Figure 4a for the Multi-MNIST 4-task setting (top) and 9-task setting (bottom). The results show over 90% accuracy for the selected task branch (along the diagonal), and close to chance-level accuracies for all other branches. Figure 4b summarizes the results of the same experiment for the channel modulation architecture, which shows less selectivity. The corresponding selectivity indices for the 4-class case are 26.5 and 8.25, and for the 9-class case 37.23 and 7.97, for our model and channel-modulation respectively. The higher selectivity index of our method is likely to be related to the increased gap in performance with respect to alternative models in Tables 2 and 3. Results in the section on task heterogeneity further show that, even for heterogeneous tasks, task selection increases the performance of the selected task at the expense of the non-selected tasks.\nTask-dependent spatial maps Using a task-dependent localization loss at the end of the TD stream at training time allows us to obtain task-dependent spatial maps at inference time, helping interpretability of the result by locating intermediate objects of interest. Figures 4c and 4d demonstrate the location maps produced by our architecture at inference time. In both examples, the predicted mask/object is well localized (on the crown of the bird or on the object below the small metal object) and the attribute (color or shape) is correctly predicted. In the case of a wrong result, it is possible to examine whether the error in the attribute was associated with mis-detecting the relevant part. Additional examples of interest and failure cases are shown in the Supplementary material." }, { "heading": "5 SUMMARY", "text": "We described an architecture for multi-task learning, which is qualitatively different from previous models, primarily in its use of the BU1 stream and a TD convolutional stream, which controls the final BU2 stream as a function of the selected task, location and image content. 
We tested our network on four different datasets, showing improvements in accuracy compared with other schemes, scaling of the number of tasks with minimal effect on performance, and help with interpretability by pointing to relevant image locations. Comparisons show the higher task-selectivity of our scheme, which may explain at least in part its improved performance.\nMore generally, multiple-task learning algorithms are likely to become increasingly relevant, since general vision systems need to deal with a broad range of tasks, and executing them efficiently in a single network is still an open problem. Our task-dependent TD control network is a promising direction in this field in terms of accuracy and scalability. In future work we plan to adapt our architecture to a wider range of applications (e.g., scene understanding, image generation), to a wider range of architectures with higher capacity, and to examine the possible combination of a branching strategy with our TD task selection approach, extending our heterogeneous-tasks example. Our architecture is also potentially useful for problems such as online learning, domain adaptation and catastrophic forgetting, demonstrated in part by our example of adding a new task to an already trained model, and we plan to further explore these central problems in the future." } ]
2,020
null
SP:d00483a38437b6c706f04cc03b34cc593a3f7273
[ "Deep neural networks are known to be brittle, and can lead to dangerous consequences if left unverified. Forward reach set computation can be used as a basic primitive to verify properties of deep neural networks used in a robotic setting. There has been a rising interest in verifying larger neural networks used in safety-critical settings. " ]
To apply an algorithm in a sensitive domain it is important to understand the set of input values that result in specific decisions. Deep neural networks suffer from an inherent instability that makes this difficult: different outputs can arise from very similar inputs. We present a method to check that the decisions of a deep neural network are as intended by constructing the exact, analytical preimage of its predictions. Preimages generalize verification in the sense that they can be used to verify a wide class of properties, and answer much richer questions besides. We examine the functioning and failures of neural networks used in robotics, including an aircraft collision avoidance system, related to sequential decision making and extrapolation. Our method iterates backwards through the layers of piecewise linear deep neural networks. Uniquely, we compute all intermediate values that correspond to a prediction, propagating this calculation through layers using analytical formulae for layer preimages.
[]
[ { "authors": [ "David Avis" ], "title": "A Revised Implementation of the Reverse Search Vertex Enumeration Algorithm, pp. 177–198", "venue": "Birkhäuser Basel, Basel,", "year": 2000 }, { "authors": [ "C. Bradford Barber", "David P. Dobkin", "Hannu Huhdanpaa" ], "title": "The quickhull algorithm for convex hulls", "venue": "ACM Trans. Math. Softw.,", "year": 1996 }, { "authors": [ "Jens Behrmann", "Sören Dittmer", "Pascal Fernsel", "Peter Maaß. Analysis of Invariance", "Robustness via Invertibility of ReLU-Networks. arXiv e-prints", "Jun" ], "title": "URL http://arxiv", "venue": "org/abs/1806.09730.", "year": 2018 }, { "authors": [ "Alberto Bemporad", "Carlo Filippi", "Fabio D. Torrisi" ], "title": "Inner and outer approximations of polytopes using boxes", "venue": "Computational Geometry,", "year": 2004 }, { "authors": [ "Benno Büeler", "Andreas Enge", "Komei Fukuda" ], "title": "Exact Volume Computation for Polytopes: A Practical Study, pp. 131–154", "venue": "Birkhäuser Basel, Basel,", "year": 2000 }, { "authors": [ "Stefan Carlsson", "Hossein Azizpour", "Ali Sharif Razavian" ], "title": "The preimage of rectifier network activities. 2017", "venue": "URL https://openreview.net/pdf?id=HJcLcw9xg", "year": 2017 }, { "authors": [ "Komei Fukuda" ], "title": "Lecture: Polyhedral computation, 2014. URL https://inf.ethz.ch/ personal/fukudak/lect/pclect/notes2014/PolyComp2014.pdf", "venue": null, "year": 2014 }, { "authors": [ "Komei Fukuda", "Alain Prodon" ], "title": "Double description method revisited", "venue": "Combinatorics and Computer Science,", "year": 1996 }, { "authors": [ "Boris Hanin", "David Rolnick" ], "title": "Deep ReLU Networks Have Surprisingly Few Activation Patterns", "venue": "arXiv e-prints, Jun 2019", "year": 1906 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Kyle Julian", "Jessica Lopez", "Jeffrey Brush", "Michael Owen", "Mykel Kochenderfer" ], "title": "Policy compression for aircraft collision avoidance systems", "venue": "In 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC), pp. 1–10,", "year": 2016 }, { "authors": [ "Kyle D. Julian", "Mykel J. Kochenderfer" ], "title": "Guaranteeing safety for neural network-based aircraft collision avoidance systems", "venue": "IEEE/AIAA 38th Digital Avionics Systems Conference (DASC),", "year": 2019 }, { "authors": [ "Guy Katz", "Clark Barrett", "David L. Dill", "Kyle Julian", "Mykel J. Kochenderfer" ], "title": "Reluplex: An efficient smt solver for verifying deep neural networks", "venue": "In Rupak Majumdar and Viktor Kunčak (eds.), Computer Aided Verification. Springer International Publishing,", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A Method for Stochastic Optimization", "venue": "arXiv e-prints,", "year": 2014 }, { "authors": [ "Mykel J. Kochenderfer", "James P. 
Chryssanthacopoulos" ], "title": "Robust Airborne Collision Avoidance through Dynamic Programming", "venue": "Massachusetts Institute of Technology, Lincoln Laboratory, Project Report ATC-371,", "year": 2011 }, { "authors": [ "Thiago Serra", "Christian Tjandraatmadja", "Srikumar Ramalingam" ], "title": "Bounding and Counting Linear Regions of Deep Neural Networks", "venue": "arXiv e-prints,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna Estrach", "Dumitru Erhan", "Ian Goodfellow", "Robert Fergus" ], "title": "Intriguing properties of neural networks", "venue": "URL http://arxiv.org/abs/1312.6199. 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Vincent Tjeng", "Kai Xiao", "Russ Tedrake" ], "title": "Evaluating Robustness of Neural Networks with Mixed Integer Programming", "venue": "arXiv e-prints, Nov 2017. URL http://arxiv.org/abs/1711", "year": 2017 }, { "authors": [ "Eric Wong", "Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope. arXiv e-prints, Nov 2017", "venue": "URL http://arxiv.org/abs/1711", "year": 2017 }, { "authors": [ "Kai Y. Xiao", "Vincent Tjeng", "Nur Muhammad Shafiullah", "Aleksander Madry" ], "title": "Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability", "venue": "ICLR 2019,", "year": 2018 }, { "authors": [ "Xiaodong Yang", "Hoang-Dung Tran", "Weiming Xiang", "Taylor Johnson" ], "title": "Reachability Analysis for Feed-Forward Neural Networks using Face Lattices", "venue": "arXiv e-prints, art", "year": 2020 }, { "authors": [ "Hao Zhou", "Jose M. Alvarez", "Fatih Porikli" ], "title": "Less is more: Towards compact CNNs", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Brockman" ], "title": "The fitting procedure is based upon the Monte Carlo policy gradient vignette from the PyTorch project,1 with a considerably simplified control policy, consisting of DNN wih only five hidden units", "venue": null, "year": 2016 }, { "authors": [ "Julian" ], "title": "2016)’s mean-squared-error-based criterion with cross entropy loss to directly model", "venue": null, "year": 2016 }, { "authors": [ "Julian", "Kochenderfer" ], "title": "2019) we see a highly accurate network that exhibits unusual “islands", "venue": null, "year": 2019 }, { "authors": [], "title": "ADDITIONAL COMPUTATIONAL DETAILS Definition 1, as well as Lemma 3 and Lemma 4 worked with what is known as the H representation of a polytope. What Fukuda (2014) terms the “Minkowski-Weyl Theorem” states that there is an equivalent representation, as the Minkowski sum of (1) a convex combination", "venue": null, "year": 2014 }, { "authors": [ "Büeler" ], "title": "how” about this difficulty. In high dimensions analysis of volumes seems difficult. However, if we already plan to compute the V representation of the preimage – also a highly complex operation – we may be able to compute the volume “for free", "venue": null, "year": 2000 } ]
[ { "heading": null, "text": "To apply an algorithm in a sensitive domain it is important to understand the set of input values that result in specific decisions. Deep neural networks suffer from an inherent instability that makes this difficult: different outputs can arise from very similar inputs. We present a method to check that the decisions of a deep neural network are as intended by constructing the exact, analytical preimage of its predictions. Preimages generalize verification in the sense that they can be used to verify a wide class of properties, and answer much richer questions besides. We examine the functioning and failures of neural networks used in robotics, including an aircraft collision avoidance system, related to sequential decision making and extrapolation. Our method iterates backwards through the layers of piecewise linear deep neural networks. Uniquely, we compute all intermediate values that correspond to a prediction, propagating this calculation through layers using analytical formulae for layer preimages." }, { "heading": "1 INTRODUCTION", "text": "Folk wisdom holds that although deep neural networks (DNNs) can achieve excellent predictive accuracy, reasoning about their performance is difficult, even for experts. Our goal is to enable non-expert stakeholders, such as clinical health workers, investors, or military commanders, to build trust in a statistical model in high-stakes environments. To do this, we posit that decision makers want to understand a model in both directions: not only from inputs to outputs, but also starting from hypothetical outputs and understanding the inputs that lead to them.\nIn this paper, we develop an equivalent, but much simpler, representation of a certain class of DNN classifiers. This representation, which requires only basic numeracy to interact with productively, can be used by domain experts to build intuition and trust. We apply this method to a reinforcement learning agent trained to solve the cart-pole problem, and find that a DNN implementing a successful policy makes a particular type of mistake on 24% of the mass of the 1/8th of the state space for which we know the optimal action (Section 3.2). We also show how using the preimage in place of verification can yield a more efficient and interpretable end-to-end system for analyzing aircraft collision avoidance systems (Section 3.3)." }, { "heading": "1.1 PREVIOUS WORK", "text": "DNNs have the property that knowing the output tells us very little about the input it corresponds to. This is most apparent in image classifiers, where totally different outputs can arise from inputs that are visually indistinguishable (Szegedy et al. (2014)). We build upon the mathematical framework developed for verifying DNNs that grew out of a desire to prove the absence of adversarial examples, for example Tjeng et al. (2017) and Wong & Kolter (2017). However, we depart from these studies and, along with Katz et al. (2017), are more oriented towards small DNNs that map to and from low-dimensional spaces with considerable structure. These DNNs arise especially in systems which interoperate with the physical world, for example mapping measurements of positions and velocities to movements. Table 1 orients our work to the literature.\nWe have phrased verification in this unusual fashion to facilitate comparison with the other points.
Stated in the familiar application to image classifiers, X would be an epsilon ball around an input, and Y would be the halfspace where one coordinate is higher than all others.\nVerification ultimately amounts to a simple yes or no, and so answering higher-level questions typically requires many verifications: for example, Katz et al. (2017) describe a suite of 45 tests, and image classifiers often wish to verify the absence of adversarial examples around the entire training set. Yang et al. (2020) is an interesting extension to verification in that it computes the entire image of, say, an epsilon ball around a data point, and not just whether it intersects with a decision boundary.\nReasoning forward, about the outputs that can arise from inputs, is only half of the picture. Carlsson et al. (2017) and Behrmann et al. (2018) are oriented backwards: they attempt to reconstruct the inputs that result in an output. These related papers study the statistical invariances that nonlinear layers encode. Behrmann et al. (2018) examines the preimage of a single point through a single ReLU layer, analyzing stability via an approximation-based experiment. Carlsson et al. (2017) analyzes the preimage of a single point through the repeated application of a nonlinearity, purely theoretically. Our paper looks at the preimage of non-singleton subsets of the codomain, which is much more practically useful, and requires considerable extension to their approaches." }, { "heading": "2 METHOD", "text": "Our method is easily stated: build up the preimage of a DNN from the preimage of its layers, using simple analytical formulae. We start by developing some properties of the preimage operator, then we describe the class of sets that we compute the preimage of, and finally we discuss the class of DNNs that our algorithm addresses." }, { "heading": "2.1 PROPERTIES OF PREIMAGES", "text": "Lemma 1 shows how to build up the preimage of a DNN from the preimages of its constituent layers.\nLemma 1 (Preimage of composition is reversed composition of preimages). For functions f_j : R^{n_j} → R^{n_{j+1}},\n$$(f_{\ell+k} \circ f_{\ell+k-1} \circ \cdots \circ f_{\ell})^{-1} = f_{\ell}^{-1} \circ \cdots \circ f_{\ell+k-1}^{-1} \circ f_{\ell+k}^{-1}. \quad (1)$$\nSecondly, we mention an intuitive property of f^{−1} that is handy for building up the preimage of any set from the preimages of any partition of that set.\nLemma 2 (Preimage of union is union of preimages). f^{−1}(∪_{i=1}^N S_i) = ∪_{i=1}^N f^{−1}(S_i)." }, { "heading": "2.2 POLYTOPES", "text": "Our method is not applicable to arbitrary sets Y, but rather sets that, roughly, have piecewise linear boundaries. The basic building block of these sets are polytopes.\nDefinition 1 (Polytope). A polytope in R^n is a set that can be written as {x ∈ R^n : b − Ax ≥ 0} for some m ∈ N, b ∈ R^m, and A ∈ R^{m×n}.\nPut more simply: a polytope is the intersection of half-planes. Definition 1 does not require that polytopes be bounded, but polytopes are convex. Sets with linear boundaries, though they may be non-convex, can be decomposed into the union of polytopes. We term such sets region-unions, and the set of polytopes which comprise them, regions.\nDefinition 2 (Region and region-union). For N ∈ N, b_i ∈ R^{m_i}, A_i ∈ R^{m_i×n}, with m_i ∈ N, a region is\n{{x : b_i − A_i x ≥ 0} ; i = 1, . . . , N}. (2)\nA region-union is a set ∪_{r∈R} r for some region R.\nRegion-unions are interesting because the preimages of polytopes under piecewise linear functions are region-unions. However, we need to also keep information on how to form a region-union, hence the notion of a region.
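To make these definitions concrete, here is a minimal sketch (our own notation, not from any released code) of the H-representation objects used throughout: a polytope is a pair (A, b) encoding {x : b − Ax ≥ 0}, and a region is simply a list of such polytopes whose union is the region-union.

```python
import numpy as np

class Polytope:
    """H-representation: the set {x : b - A x >= 0}."""
    def __init__(self, A, b):
        self.A = np.asarray(A, dtype=float)
        self.b = np.asarray(b, dtype=float)

    def contains(self, x):
        return bool(np.all(self.b - self.A @ np.asarray(x, dtype=float) >= 0))

def region_union_contains(region, x):
    """A region is a list of Polytopes; membership in the union is an 'any'."""
    return any(p.contains(x) for p in region)

# The unit square [0, 1]^2 as an intersection of four half-planes:
square = Polytope(A=[[1, 0], [-1, 0], [0, 1], [0, -1]], b=[1, 0, 1, 0])
assert square.contains([0.5, 0.5]) and not square.contains([2.0, 0.5])
```

This representation is all that Lemmas 3 and 4 below operate on: both return new (A, b) pairs, or lists of them.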
It is trivial to observe that if R1 and R2 are regions, then R1 ∪ R2 is likewise a region, and correspondingly for region-unions." }, { "heading": "2.3 LINEAR AND RELU POLYTOPE PREIMAGES", "text": "In this section, we give formulae for the preimage of linear and ReLU functions, giving significant content to Lemma 1. The preimages of polytopes under linear mappings are polytopes:\nLemma 3 (Preimage of Linear layer).\n(x ↦ Wx + a)^{−1}({x : b − Ax ≥ 0}) = {x : (b − Aa) − AWx ≥ 0}. (3)\nReLU is a piecewise linear function, so if we carefully treat the portions of the domain on which it exhibits different behavior, we obtain a similar formulation for each:\nLemma 4 (Preimage of ReLU layer).\n$$\mathrm{ReLU}^{-1}(\{x : b - Ax \ge 0\}) = \bigcup_{\nu \in \{0,1\}^n} \{x : b - A\,\mathrm{diag}(\nu)\,x \ge 0,\ -\mathrm{diag}(1-\nu)\,x \ge 0,\ \mathrm{diag}(\nu)\,x \ge 0\}. \quad (4)$$\nTo understand Lemma 4, let s(x) be the vector given by s(x)_i = 1 if x_i ≥ 0 and zero otherwise. Then diag(s(x))x = ReLU(x). This expression separates x ↦ ReLU(x) into a pattern of signs over its coordinates and x itself. This means that once we restrict attention to a set on which the sign does not change, we can apply familiar linear algebra routines to compute the preimage set, akin to Lemma 3. The nonnegative values are denoted by ν ∈ {0, 1}^n in the above, and the set of x such that x_i ≥ 0 ⟺ ν_i = 1 is given by diag(ν)x ≥ 0. Similarly, x_i ≤ 0 ⟺ ν_i = 0 for i = 1, 2, . . . , n if and only if −diag(1 − ν)x ≥ 0. Equation 4 follows by partitioning R^n into the 2^n sets where each coordinate is nonnegative or not.\nComputing the preimage of a ReLU layer is unavoidably intractable at scale, though the problem exhibits considerable structure. We expect that it is possible to compute the preimage of networks of a similar scale to those that can be completely verified, such as small image-scale networks. Preimages are most insightful and useful when the inputs and outputs have definite interpretation – application areas where the need for massive networks is less." }, { "heading": "2.4 THE SUFFICIENCY OF LINEAR AND RELU LAYERS", "text": "In familiar terms a DNN classifier might consist of some “feature building” modules, say composed of alternating convolution and maxpooling, then flattened, and passed onto the prediction logic consisting of alternating linear and ReLU layers, possibly including dropout or batch normalization, and concluding with a softmax function to normalize the predictions to a probability distribution. Resnets (He et al. (2016)) do not strictly fit this pattern, but can be handled with similar reasoning (see Appendix B).\nHow do the results of Section 2.3 suffice to invert such DNNs? Firstly, under our convention that layers operate on flat tensors, flattening is superfluous. Next, dropout affects inference only through the weights – this layer can be omitted entirely in computing the preimage. Convolution is essentially linear. Maxpool is straightforwardly rewritten in terms of the ReLU and linear functions. {x : b − A softmax(x) ≥ 0} is not a polytope. However, if the classification alone suffices then the softmax layer can be elided entirely, since argmax_j x_j = argmax_j softmax(x)_j." }, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 TWO MOONS CLASSIFICATION", "text": "To cultivate some intuition about the preimage of a DNN we start by examining a classic test problem in nonlinear classification. We fit a DNN f : [−3,+3]^2 → R^2 consisting of two nonlinear layers with eight neurons each on an instance of the “two moons” dataset.
This data is shown in Figure 1a (further details of f and the data are in Section D.1). Figure 1b plots the corresponding logits, along with the sets to be inverted {x : x_1 ≶ x_2} ⊆ R^2. Figure 1c shows the corresponding preimages, with different hues of the same color corresponding to different sign patterns ν in Equation 4." }, { "heading": "3.2 CART-POLE REINFORCEMENT LEARNING AGENT", "text": "In the “cart pole” control problem a pole is balanced atop a cart which moves along a one-dimensional track (Figure 2). Gravity pulls the pole downward, the falling of the pole pushes the cart, and external movement of the cart pushes the pole in turn. The control problem is to keep the pole upright by accelerating the cart.\nIn the formulation of Brockman et al. (2016) the controller inputs are: the position of the cart, x, the velocity of the cart, ẋ, the angle of the pole from upright, θ, and the angular velocity of the pole, θ̇. Possible actions are to accelerate the cart in the positive or negative x direction. The reward environment encourages balancing by a unit reward per period before failure, where failure means that the pole is not sufficiently upright (θ ∉ [−π/15,+π/15]), or the cart not near enough the origin (x ∉ [−2.4,+2.4]). We have no prescribed limits for ẋ and θ̇, but via a methodology described in Section D.2.1, we interpret these states as taking values in [−3.0,+3.0] × [−3.5,+3.5].\nConsider a still cart and pole (ẋ = θ̇ = 0), with the cart left of zero (x ≤ 0) and the pole left of vertical (θ ≤ 0). Keeping x and θ near zero is preferable, since these are further from failure, so moving left will steady θ but worsen x. Nonzero velocities make this reasoning more complicated, but one configuration is unambiguous: if x ≤ 0, ẋ ≤ 0, θ ≥ 0, θ̇ ≥ 0, then pushing right is clearly the correct action. Figure 2 depicts a value in this orthant.\nFigure 3: Projection of subsets of the domain where the wrong action is taken, with the hue of the area being proportional to the volume of the wrong sets, divided by the volume of the projection.\nLet D_{+1} = (−∞, 0]^2 × [0,∞)^2, and correspondingly, let D_{−1} = [0,+∞)^2 × (−∞, 0]^2. We fit a one-hidden-layer neural network control function f : R^4 → R^2 using policy gradient reinforcement learning. Details of this calculation are in Section D.2. This agent achieves its goal of balancing the pole: in 1000 trials of 200 periods, (x, θ) remains uniformly in [−.75,+.75] × [−.05,+.05] with very low velocities. Nonetheless there are many states for which pushing right is clearly the correct action, but for which the DNN controller predicts −1: in the same simulation of 1000 trials of 200 steps, roughly 7% of actions performed by the agent fail this sanity check. This behavior is not a numerical fluke – it holds if we consider states only nonnegligibly interior to D_{+1} and D_{−1}, and also if we only count predictions that are made with probability greater than .51. One such pocket of counterintuitive behavior is\n[−2.399,−1.462] × [−2.922,−2.262] × [+1.798 × 10^{−8},+0.1067] × [+1.399,+1.728] ⊆ D_{+1} ∩ f^{−1}({x ∈ R^2 : x_1 > x_2}).\nWe find this box large – for example the first coordinate comprises almost 20% of that dimension of the state space. The size of this box is even more surprising because it is inscribed within a larger polytope (using the algorithm of Bemporad et al. (2004)) that has a volume about 40 times larger.
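Volume fractions of this kind can be sanity-checked cheaply by Monte Carlo before committing to the exact polytope computation. The sketch below is ours: the weights are random placeholders standing in for the trained policy (loading the fitted parameters would reproduce figures like the 7% above), and we adopt the convention that output index 0 means "push left", so the region x_1 > x_2 is the wrong prediction on D_{+1}.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer ReLU control network f: R^4 -> R^2, vectorized over states.
def policy_logits(S, W1, b1, W2, b2):
    return np.maximum(S @ W1.T + b1, 0.0) @ W2.T + b2

# Placeholder weights; in the real experiment these come from the fitted agent.
W1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)
W2, b2 = rng.normal(size=(2, 5)), rng.normal(size=2)

# Uniform samples over the bounded part of D_{+1}: x<=0, xdot<=0, theta>=0, thetadot>=0.
lo = np.array([-2.4, -3.0, 0.0, 0.0])
hi = np.array([0.0, 0.0, np.pi / 15, 3.5])
S = rng.uniform(lo, hi, size=(100_000, 4))

logits = policy_logits(S, W1, b1, W2, b2)
wrong = np.mean(logits[:, 0] > logits[:, 1])   # index 0 = "push left", wrong on D_{+1}
print(f"estimated wrong-action volume fraction: {wrong:.3f}")
```

Unlike the preimage, this estimate comes with Monte Carlo error and says nothing about the shape of the offending sets; the exact computation characterizes them as explicit polytopes.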
The total volume in R^4 of these sets is 3% of the state space volume, and thus 24% of the volume of D_{−1} ∪ D_{+1}. Figure 3 parses this surprising fact a bit further by plotting the projection of the four-dimensional domain onto the (x, θ) plane. The hue of the gray is proportional to the volume of the four-dimensional polytope divided by the volume of the two-dimensional projection, so darker areas mean more (ẋ, θ̇) mass that is wrong. Since the entirety of the second and fourth quadrants is grey at every (x, θ) ∈ [−2.4,+2.4] × [−π/15,+π/15], there are some (ẋ, θ̇) where the wrong action will be taken." }, { "heading": "3.3 COLLISION AVOIDANCE SYSTEMS", "text": "The final application shows how to use domain knowledge to anticipate dangerous behavior of a DNN in a complex modelling domain." }, { "heading": "3.3.1 BACKGROUND", "text": "Aircraft automated collision avoidance systems (ACAS) are navigational aids that use data on aircraft positions and velocities to issue guidance on evasive actions to prevent collisions with an intruding aircraft. The ACAS developed in Kochenderfer & Chryssanthacopoulos (2011) uses dynamic programming to formulate the optimal control of a partially observed Markov process, and issues advisories to optimize a criterion that penalizes near collisions and the raising of false or inconsistent warnings. Unfortunately, evaluating the policy function is too resource-intensive to run on certified avionics hardware. Small DNNs have been found to be adequate approximators that require little storage and can perform inference quickly. A downside of this approximation is that even accurate DNNs can give very wrong predictions on some inputs – Katz et al. (2017), for example, show that when another aircraft is nearby and approaching from the left, a DNN-based approximation need not advise the correct action of turning right aggressively.\nVerification can check that one-step behavior in a DNN-based ACAS behaves as intended. However, it cannot answer higher-level questions like “will a near-collision occur if this policy is followed?” The idea of Julian & Kochenderfer (2019) is to verify dynamic properties of such systems by combining single-step verification with worst-case assumptions about randomness in state transitions and (constrained) behavior of other aircraft.\n3.3.2 DISCRETIZE AND VERIFY: JULIAN & KOCHENDERFER (2019)\nIn Julian & Kochenderfer (2019), the state consists of the x and y distances between the two aircraft, and the angle of approach between them, ψ. The actions are five turning advisories: (1) “clear of conflict” (COC), (2) weak left [turn] (WL), (3) strong left (SL), (4) weak right (WR), and (5) strong right (SR). The initial condition is given by the boundary of the domain, where the distances between the aircraft are at their maxima. Transition dynamics are denoted by Ψ(a, S), a set-valued function which gives the set of states that are reachable from states in S under action a. Ψ encompasses both randomness in the transition, and behavior of the other aircraft. The change in (x, y) is controlled by the angle between the crafts, and the update to the angle is the difference between the turning of the two crafts, with some randomness. To compute the states that can arise under a policy, the idea is to begin from an initial set of states that are known to be reachable, and to iteratively append states that are reachable from any of those states, until a fixed point is reached.
U denotes the set of states that we wish to preclude.\nThis idea is formalized by Julian & Kochenderfer (2019) as Algorithm 1. Because multiple advisories will be issued whenever a cell straddles the decision boundary, the discretized algorithm will wrongly include some states as reachable, since a worst-case analysis needs to take account of all reachable states. Table 2 gives an indication of the magnitudes of overestimation, presenting how much of the state space will lead to multiple advisories under a simple discretization scheme.\nJulian & Kochenderfer (2019) do not use an equispaced grid, but the basic point – that discretization error cannot be made negligible – is an inescapable feature of this approach. And any false positives in a single-step decision function will be amplified in the dynamic analysis, as more reachable states at one point in time lead to even more reachable points at the next step, so a 1% overestimation at one step may be compounded to considerably more through the dynamics. Coincidentally, Julian & Kochenderfer (2019) are able to reach a usable solution, but are unable to guarantee the absence of near collisions under some realistic parameter configurations.\nNote how the cells can be traversed in any order. This is a simple way to see that this algorithm is not fully using the spatial structure of the problem. Next, we incorporate this knowledge.\nData: Maximum-distance set R_0, policy f, an “unsafe set” U, transition dynamics Ψ, encounter length T.\nResult: Guaranteed to not reach an unsafe state from R_0 under policy f?\ninitialization: t = 0, done = False; partition the state space into cells c ∈ C;\nwhile not done do\n  t = t + 1; R_t = ∅;\n  for c ∈ C such that c ∩ R_{t−1} ≠ ∅ do\n    for i such that f(c) ∩ {x : x_i ≥ x_j for j ≠ i} ≠ ∅ do\n      for c′ ∈ C such that c′ ∩ Ψ(i, c) ≠ ∅ do R_t ← R_t ∪ c′ end\n    end\n  end\n  done = (R_t == R_{t−1}) or (U ∩ R_t ≠ ∅) or (t > T).\nend\nReturn R_t ∩ U == ∅\nAlgorithm 1: Algorithm from Julian & Kochenderfer (2019) for computing whether an unsafe set U can be reached under a policy f beginning from R_0 under transition dynamics Ψ." }, { "heading": "3.3.3 OUR PREIMAGE-BASED ALTERNATIVE", "text": "Rather than looping first over the domain, then over actions at those points, Algorithm 2 loops over actions and, using the preimage, computes all reachable points under that action.\nData: R_0, f, U, Ψ, T.\nResult: Guaranteed to not reach an unsafe state from R_0 under policy f?\ninitialization: t = 0, done = False;\nfor i = 1, 2, . . . , n_L do\n  Ξ_i = f^{−1}({x : x_i ≥ x_j for j ≠ i})\nend\nwhile not done do\n  t = t + 1; R_t = ∅;\n  for i = 1, 2, . . . , n_L do R_t ← R_t ∪ Ψ(i, Ξ_i ∩ R_{t−1}); end\n  done = (R_t == R_{t−1}) or (U ∩ R_t ≠ ∅) or (t > T).\nend\nReturn U ∩ R_t == ∅.\nAlgorithm 2: Our preimage-based, exact algorithm for computing the dynamically reachable states in an ACAS.\nWhile Algorithm 2 is exact – it will never wrongly say that a state can be reached – the accuracy of Algorithm 1 is ultimately controlled by the number of cells, |C|. This is because it is necessary to perform n_L verifications for each reachable cell, and the number of reachable cells is proportional to |C|. Let V denote the cost of a verification. Verification is known to be NP-complete (Katz et al. (2017)), so V dominates all other calculations, such as computing intersections or evaluating Ψ(i, c). Thus, the computational cost of Algorithm 1 is O(|C| V n_L).
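The loop of Algorithm 2 is simple enough to state directly in code. The sketch below is ours: `preimages`, `transition`, `intersect`, and `is_same` are stand-ins for the analytical preimage computation, the set-valued dynamics Ψ, and polytope-level set operations described elsewhere in the paper; only the control flow is shown.

```python
def reachable_unsafe(R0, preimages, transition, intersect, is_same, unsafe, T):
    """Fixed-point reachability loop of Algorithm 2.

    preimages[i] is the region Xi_i = f^{-1}({x : x_i >= x_j for all j}),
    computed once, up front. A 'region' here is any object supporting the
    stand-in operations passed in (e.g., a list of polytopes).
    """
    R = R0
    for _ in range(T):
        R_next = []
        for i, xi in enumerate(preimages):
            active = intersect(xi, R)             # reachable states issued advisory i
            R_next.extend(transition(i, active))  # advance them under worst-case dynamics
        if intersect(unsafe, R_next):
            return True                           # an unsafe state is reachable
        if is_same(R_next, R):
            return False                          # fixed point: nothing new is reachable
        R = R_next
    return False                                  # encounter length exhausted, all safe
```

The cost structure described next follows directly: the preimages are computed once outside this loop, and everything inside it is polytope bookkeeping.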
Algorithm 2 must initially compute n_L preimages, which dominates the entire calculation; the remainder consists of relatively fast operations – applying the dynamics and computing intersections up to T times, for T a number around 40.\nLet P denote the cost of computing a preimage; then Algorithm 2 is O(P n_L). So whilst it dispenses with the need to solve O(|C|) verifications, it may be more intractable if P is significantly higher than V. Let the dimensions of the nonlinear layers in a DNN be n_{ℓ_i}; then, because in the worst case it is necessary to check each nonlinearity, each of which can independently be in a negative or positive configuration, V = O(2^{Σ_i n_{ℓ_i}}). Exact verification for even a single cell is impossible at present for large networks. We believe that preimages can be computed roughly (within a small constant factor) as easily as a verification – P = O(V). We are currently developing this conjecture formally; the idea is that, as shown in Lemma 4, each nonlinear layer ℓ_i generates up to 2^{n_{ℓ_i}} sets, the preimages of which must be computed through earlier layers.\nIn any case, as is true of any exponentially hard problem, the practical tractability of both P and V hinges importantly upon theoretical arguments showing that not all 2^n configurations of the nonlinearities of an n-dimensional layer can be achieved (Serra et al. (2017); Hanin & Rolnick (2019)), and clever implementations that take account of the structure of the problems (e.g. Tjeng et al. (2017); Katz et al. (2017)).\nThe distinction between the two algorithms is made clearer by examining an encounter plot such as Figure 4. Encounter plots are concise summarizations of the policy function, here depicting the possible advisories for a fixed angle of approach (which is here conveyed by the orientation of the red aircraft relative to the black). This figure, which replicates Figure 4 of Julian & Kochenderfer (2019), differs from it in a crucial respect: it depicts the analytically computed preimages of the five sets where each of the advisories is issued (details of the experiment are in Section D.3). The shaded areas arise from plotting polytopes, as in Algorithm 2. Julian & Kochenderfer (2019), on the other hand, produce such plots by evaluating the predictions of the network on a fine grid. The different manner in which the plots are produced is an exact analogue of the different way that the networks are summarized and analyzed through time." }, { "heading": "4 CONCLUSION", "text": "In many areas, safety and interpretability concerns inhibit the use of DNNs, because their use still requires a good deal of indirect experimentation and oversight to have confidence that a model will not act in an unintuitive way. This paper has proposed computing the preimage of the decisions of a DNN as an intuitive diagnostic that can help to anticipate problems and help domain experts gain trust in a DNN, even if they are unable to formally articulate what makes a DNN trustworthy. In order to do this, we developed the preimage of a DNN and presented an algorithm to compute it. We demonstrated the utility of the preimage to understand counterintuitive behavior from a cart-pole agent, and to more precisely characterize the set of states that would be reachable in an existing application of DNNs to aircraft automated collision avoidance systems." }, { "heading": "A PROOFS", "text": "A.1 PROOF OF LEMMA 1\nProof. Unroll Equation 1. Let S ⊆ R^{n_{ℓ+k}} be arbitrary.\n$$\begin{aligned} (f_{\ell+k} \circ f_{\ell+k-1} \circ \cdots \circ f_{\ell})^{-1}(S) &= \{x : (f_{\ell+k} \circ f_{\ell+k-1} \circ \cdots \circ f_{\ell})(x) \in S\} \\ &= \{x : f_{\ell+k}((f_{\ell+k-1} \circ f_{\ell+k-2} \circ \cdots \circ f_{\ell})(x)) \in S\} \\ &= \{x : (f_{\ell+k-1} \circ f_{\ell+k-2} \circ \cdots \circ f_{\ell})(x) \in f_{\ell+k}^{-1}(S)\} \\ &\;\;\vdots \\ &= \{x : (f_{\ell+1} \circ f_{\ell})(x) \in (f_{\ell+2}^{-1} \circ \cdots \circ f_{\ell+k-1}^{-1} \circ f_{\ell+k}^{-1})(S)\} \\ &= \{x : f_{\ell}(x) \in (f_{\ell+1}^{-1} \circ f_{\ell+2}^{-1} \circ \cdots \circ f_{\ell+k-1}^{-1} \circ f_{\ell+k}^{-1})(S)\} \\ &= (f_{\ell}^{-1} \circ \cdots \circ f_{\ell+k-1}^{-1} \circ f_{\ell+k}^{-1})(S). \end{aligned} \quad (5)$$\nA.2 PROOF OF LEMMA 2\nProof.\n$$x \in f^{-1}(\cup_{i=1}^{N} S_i) \iff f(x) \in \cup_{i=1}^{N} S_i \iff f(x) \in S_1 \text{ or } \ldots \text{ or } f(x) \in S_N \iff x \in f^{-1}(S_1) \text{ or } \ldots \text{ or } x \in f^{-1}(S_N) \iff x \in \cup_{i=1}^{N} f^{-1}(S_i).$$\nNote that an identical argument shows that f^{−1}(∩_{i=1}^N S_i) = ∩_{i=1}^N f^{−1}(S_i). This can be useful in some applications where S_i can be written as Ψ ∩ Ξ_i – writing ∪_i S_i as Ψ ∩ ∪_i Ξ_i may be more efficient." }, { "heading": "B THE INVERSE OF A RESIDUAL BLOCK", "text": "The key function in a residual block is\nx ↦ W_2 ReLU(W_1 x) + x.\nCombining arguments similar to Lemma 3 and Lemma 4, we have that\nLemma 5 (Preimage of residual block).\n$$\begin{aligned} (z \mapsto W_2\,\mathrm{ReLU}(W_1 z) + z)^{-1}(\{x : b - Ax \ge 0\}) &= \{x : b - A(W_2\,\mathrm{ReLU}(W_1 x) + x) \ge 0\} \\ &= \bigcup_{\nu \in \{0,1\}^n} \{x : b - A(W_2\,\mathrm{diag}(\nu)W_1 + I)x \ge 0,\ -\mathrm{diag}(1-\nu)W_1 x \ge 0,\ \mathrm{diag}(\nu)W_1 x \ge 0\}. \end{aligned} \quad (6)$$" }, { "heading": "C COLLECTING THIS ALL UP", "text": "Section 2.1, Section 2.2, Section 2.3, and Section 2.4 together give us a recipe for inverting a wide class of image sets (region-unions) for a wide class of DNNs (those which can be written as the composition of linear and ReLU functions). To summarize, the steps are:\n1. Put the network into “standard form”:\n(a) Embed any transformations that are “off” at inference time, such as dropout or batch normalization, into the weights.\n(b) Rewrite the network in flattened form, for example replacing 3 × 32 × 32 tensors by 3072 × 1 vectors. This is a convention to facilitate our polytope formulation.\n(c) Rewrite all transformations as compositions of linear and ReLU functions. For example, convolution and average pooling are linear functions. Maxpooling, hard tanh, and leaky ReLU can be written as the composition of linear and ReLU functions.\n2. Let f = f_L ∘ f_{L−1} ∘ . . . ∘ f_1 denote the network in this form.\n3. Let R_L = ∪_i Δ_i be the image set that we wish to invert, for example R_L = Δ_1 = {x : x_1 ≥ x_2} ⊆ R^2 in a binary classifier.\n4. Compute f_L^{−1}(Δ_i) for all i, using Lemma 3 or Lemma 4.\n5. Each term above is a region-union, thus ∪_i f_L^{−1}(Δ_i) is a region-union.\n6. By Lemma 2, R_{L−1} := f_L^{−1}(R_L) = ∪_i f_L^{−1}(Δ_i).\n7. R_{L−1} is a region-union, so apply the same argument to compute R_{L−2} := f_{L−1}^{−1}(R_{L−1}) = f_{L−1}^{−1}(f_L^{−1}(R_L)).\n8. Repeat for ℓ = L − 2, . . . , 1 to compute R_0 = f_1^{−1}(R_1) = . . . = (f_1^{−1} ∘ f_2^{−1} ∘ . . . ∘ f_L^{−1})(R_L).\n9. Appeal to Equation 1 to conclude that R_0 = f^{−1}(R_L)." }, { "heading": "D DETAILS OF EXPERIMENTS", "text": "D.1 SECTION 3.1\nThe dataset of 500 observations is generated using the scikit-learn function sklearn.datasets.make_moons with noise = .2.\nWeights are initialized according to a uniform (−1/√in_features, +1/√in_features) distribution (the PyTorch default), and training was run for 1000 epochs with a batch size of 128. Gradient steps were chosen by the Adam optimizer (Kingma & Ba (2014)) with a learning rate of 0.005 and (β_1, β_2) = (0.9, 0.999).\nD.2 SECTION 3.2\nOur experiment is based upon the “CartPole-v1” environment from Brockman et al. (2016).
The fitting procedure is based upon the Monte Carlo policy gradient vignette from the PyTorch project,1 with a considerably simplified control policy, consisting of a DNN with only five hidden units.\nD.2.1 VELOCITY MAGNITUDE\nAs far as we can tell, there is no single best methodology for computing limits on the velocities (ẋ, θ̇) in the cart-pole problem. In principle, very high velocities could be supported by the discretization scheme, but these are unlikely to be achieved by any feasible sequence of actions. If we restrict attention to limits described by actions, how should we characterize the set of actions we consider? For example, should we simply observe the behavior of some non-optimized agent? Should we deliberately construct agents to pursue velocity-maximizing strategies? Should we force agents to have the same initialization as that prescribed in the fitting?\nWe concluded that an interpretable baseline which gave quantitative bounds robust to details of parameterizations would be best. For this, we chose our limits on ẋ, θ̇ as the values that answer the question “how fast can the cart and pole be moving if we start from rest with the cart all the way to the right, and the pole all the way to the left, and continually push left until failure?”. This experiment is plotted in Figure 5, where we see that the implied limits are ±3.0 and ±3.5 for ẋ and θ̇, respectively.\n1https://github.com/pytorch/examples/blob/master/reinforcement_learning/reinforce.py\nIn addition to being easy to envision and interpret, a policy that pushes in a uniform direction is a natural boundary between benign “dumb” policies and those that more actively seek to exercise worst-case scenarios through some deliberately degenerate behavior.\nThese limits largely agree with the three other candidates we considered:\n• The same experiment, although constrained to obey the same initialization as in Brockman et al. (2016) (these limits are tighter, understandably: roughly 2.25 and 2.75, respectively).\n• A simple one-parameter agent which (starting from an initialization near the origin) seeks out high velocities by beginning to push in a uniform direction, then switches to the opposite direction.\n• The most extreme values emitted from a small (and hence exhibiting more erratic behavior early on in the fitting) DNN that is able to eventually achieve good performance.\nD.3 SECTION 3.3\nThe analysis presented in Section 3.3 is based entirely on data generated by Julian & Kochenderfer (2019)'s system, which formulates and solves dynamic programs to deliver lookup tables of optimal collision avoidance behavior in the same manner as the FAA's proprietary software. Our DNN modelling is somewhat different, however, and whilst we think that our results can be interpreted within their framework, in this section we detail the aspects of our analysis that differ from Julian & Kochenderfer (2019)'s.\nD.3.1 FITTING – OPTIMIZATION CRITERION\nThe first manner in which our approach is different is the fitting criterion: Julian & Kochenderfer (2019) issue advisories as a function of positions and velocities indirectly, by first fitting the continuation value of taking each action, and then choosing the action with the highest predicted continuation value. This oblique approach is understandable: this work is the continuation of an extended project to build (Kochenderfer & Chryssanthacopoulos (2011)) and compress the Q-table (Julian et al. (2016), Katz et al.
(2017)). And although the Q-values themselves have some interpretation, issuing advisories requires only knowing the greatest. We hypothesize that it is easier to solve a problem which recognizes an invariance of the prediction to any increasing transformation. And that is what we find – by replacing Julian et al. (2016)'s mean-squared-error-based criterion with cross-entropy loss to directly model the optimal decision, we are able to achieve better performance with smaller networks. One statement of the improvement is that Julian & Kochenderfer (2019) use five fully connected layers with 25 neurons each to achieve an accuracy of 97% to 98%. We are able to achieve comparable accuracy with a two-layer, 25-neuron fully connected network (a network of the same size targeting MSE loss attains an accuracy of only around 93%).\nWhy is anything less than complete fidelity to the Q-table acceptable in an approximation? The answer seems to be twofold. Firstly, the Q-table is itself not perfect, because of discretization artifacts: one can observe physically implausible sawtooth-like decision boundaries that arise from a coarse grid in the top plot of Julian & Kochenderfer (2019) Figure 4. The second is that accuracy alone does not capture the genuine metric of goodness; for example, in the bottom plot of Figure 4 of Julian & Kochenderfer (2019) we see a highly accurate network that exhibits unusual “islands” of SR completely encompassed by a region of WR, which are both not present in the ground truth and prescribe a conceptually wrong relationship (a pilot could be initially advised a strong right turn, then after some period of lessened danger have it downgraded to a weak right, only to have it re-upgraded to a strong right, even as the danger seemingly continues to lessen). The correct metric seems rather to be the plausibility of the prescribed encounter plot. These observations lead us to not worry too much about small differences in model accuracy, in favour of plausibility of the encounter plots.\nD.3.2 FITTING – SYMMETRY\nThe second manner in which our approach differs from Julian & Kochenderfer (2019) is in the domain being fitted. Julian & Kochenderfer (2019) fixed a lookup table over (x, y, ψ) ∈ [−56000,+56000]^2 × [−π,+π). However, if we let Q : R^3 → R^5 denote the Q-function as a function of the state s = (x, y, ψ), then the physics of the problem ensure that\n$$Q(T_i s) = T_o Q(s), \quad \text{where } T_i = \begin{pmatrix} +1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix} \text{ and } T_o = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix}.$$\nThis relationship clearly only works for a_prev = COC, but similar symmetries will exist more generally. Thus, strictly speaking, half of the lookup table is unneeded, and moreover it would seem wasteful to ask a network to learn (what we already know to be) the same thing twice. Thus, our method is to only fit f over (x, y, ψ) ∈ [−56000,+56000]^2 × [0,+π), and when needed to infer f(s) = T_o f(T_i s) for s = (x, y, ψ) with ψ < 0. In so doing, we halve the dataset size, but leave other data-fitting parameters unchanged.\nTo continue the analysis above describing comparable performance from smaller networks, exploiting symmetry enables us to achieve accuracy above 97% from a one-layer, 24-neuron network.
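The inference-time use of this symmetry is a one-liner. The sketch below is ours, with T_i and T_o taken directly from the reconstructed equation above (the row ordering of T_o follows that equation, which is itself an assumption recovered from a garbled source matrix); `f` stands for the network fitted on ψ ∈ [0, +π).

```python
import numpy as np

T_i = np.diag([+1.0, -1.0, -1.0])   # reflect the state: (x, y, psi) -> (x, -y, -psi)
T_o = np.array([[1, 0, 0, 0, 0],
                [0, 0, 1, 0, 0],
                [0, 1, 0, 0, 0],
                [0, 0, 0, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)  # permute the five advisory scores

def predict(f, s):
    """Evaluate advisory scores at any state, using the symmetry for psi < 0."""
    s = np.asarray(s, dtype=float)
    return f(s) if s[2] >= 0 else T_o @ f(T_i @ s)
```

The same trick halves the training set during fitting: states with negative ψ are reflected into the fitted half-domain rather than learned separately.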
For computational ease, Figure 4 is computed on a 16-neuron network that achieves about 96% accuracy.\nD.3.3 INVERSION – PROJECTION\nFigure 4 was formed by taking the fitted n_0 = 3 DNN, fixing ψ to a given value, and inverting the resultant n_0 = 2 DNN: if W_1, b_1 denote the first-layer weights and bias of the original DNN, the restricted network uses\n$$W_1' = W_1 \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad b_1' = b_1 + \psi W_1 \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.$$" }, { "heading": "E ADDITIONAL COMPUTATIONAL DETAILS", "text": "Definition 1, as well as Lemma 3 and Lemma 4, worked with what is known as the H representation of a polytope. What Fukuda (2014) terms the “Minkowski-Weyl Theorem” states that there is an equivalent representation, as the Minkowski sum of (1) a convex combination of vertices, and (2) a conic combination of some rays.\nDefinition 3 (Polytope (V representation)). A polytope in R^n is a set that can be written as\n$$\left\{ \sum_i \lambda_i v_i + \sum_j \nu_j r_j : \lambda_i \ge 0,\ \nu_j \ge 0,\ \sum_i \lambda_i = 1 \right\} \quad \text{for some } v_i \in \mathbb{R}^n,\ r_j \in \mathbb{R}^n.$$\nThe Minkowski-Weyl Theorem assures us that Definition 3 is equivalent to Definition 1.\nTo introspect on DNNs it is handy to have both the H and V representations of the preimage. Unfortunately, computing the V representation from the H representation is computationally challenging, both theoretically and practically. Analyzing formally the computational complexity of the problem is technical, and also complicated by a host of special cases, but roughly speaking:\n• Evidently the complexity of the problem depends not only on the size of the input, but also the output.\n• Small inputs can have large outputs. For example, a cube in d dimensions has an H representation with 2d rows, but 2^d vertices.\n• In general, there are no known algorithms that have polynomial time complexity in the input and output size.\n• For some polytopes, there are algorithms that have polynomial time and space complexity in the input and output. However, this is still exponentially large in the dimension of the polytopes, again because output size can be exponentially large in the input size.\nThe more precise statements of this summary can be found in Fukuda (2014), Section 9.\nOur problem may lie in some more easily-solved subclass of problems (though the fact that even simple geometric objects like cubes exhibit exponential growth of the output as a function of the input dimension makes this perhaps less likely). Our review of the theoretical literature was unsuccessful in this regard, and we observed empirically that the runtime of our calculation did rise at an exponential-like rate with the dimension of the input. Thus: we did not find the naïve approach of computing the H representation of the preimage, then computing from that the V representation, to be practical.\nHappily, when W is full rank, we can give a V representation corollary of Lemma 3.\nLemma 6.
Suppose that W is full rank, and let W† be its pseudoinverse and W⊥ be a basis for the nullspace (with kth column W⊥_k). Then, for λ_i ≥ 0, ν_j ≥ 0, Σ_i λ_i = 1:\n$$Wx + a = \sum_i v_i \lambda_i + \sum_j r_j \nu_j \iff x = \sum_i W^{\dagger}(v_i - a)\,\lambda_i + \sum_j (W^{\dagger} r_j)\,\nu_j + \sum_k W^{\perp}_k \gamma_k = \sum_i W^{\dagger}(v_i - a)\,\lambda_i + \sum_j (W^{\dagger} r_j)\,\nu_j + \sum_k W^{\perp}_k \gamma^{+}_k + \sum_k (-1 \times W^{\perp}_k)\,\gamma^{-}_k, \quad \text{where } \gamma^{-}_k \ge 0,\ \gamma^{+}_k \ge 0. \quad (7)$$\nThis is a V representation with vertices W†(v_i − a) and rays (W† r_j), W⊥_k, −1 × W⊥_k.\nChecking the rank of W, computing the pseudoinverse, and computing a basis for the nullspace are all quick and standard linear-algebraic routines.\nFor example, in our experiments we were unable to come anywhere near computing the V representation of an MNIST classifier (n_0 = 784) directly using standard software such as cdd (Fukuda & Prodon (1996)). However, for networks with n_{ℓ−1} ≥ n_ℓ for all ℓ ≥ 1 (which, in our simple fitting procedure, was sufficient to ensure that W would be full rank), it was quite possible using Lemma 6: (1) compute the V representation of the polytopes to be inverted (these will be ten-dimensional, and highly structured), then (2) apply Equation 7 iteratively backwards.\nE.1 POLYTOPE DIMENSION\nOne slightly subtle point is that the terms in Equation 4 overlap at the boundaries; for example, if b ≥ 0 then the origin is in fact contained in every term in the union. In this work, we only form full-dimensional polytopes, roughly those which have strictly positive n-dimensional volume. Removing lower-dimensional sets does not change the union, so this does not substantively change the preimage. A study which was more explicitly interested in the details of decision boundaries might not want to make such a restriction.\nWe mentioned in Section 2.4 that our analyses restrict attention to polytopes that have full dimension. Formally, the dimension of a polytope is the maximum number of affinely independent points in that polytope, minus one. And a full-dimensional polytope in R^d is one with dimension d. Geometrically, sets which are not full-dimensional lie on the boundary between sets, and if they have a nonempty preimage, it will lie on a boundary shared by another polytope, which does have full dimension.\nThis idea is made clearer by explaining how we check if a polytope has full dimension. Algorithm 8.4 from Section 8.3 of Fukuda (2014) states that {x : b − Ax ≥ 0} is full-dimensional iff the optimization problem\nmaximize κ subject to Ax + 1κ ≤ b, κ ≤ 1 (8)\nachieves a positive criterion. Here 1 is a conformable column vector of ones, representing the intuition that it is possible to loosen all inequality conditions by a strictly positive amount, meaning that there is some volume interior to the polytope. The ancillary condition that κ ≤ 1 is used to keep the problem well-conditioned.\nEmpirically, most sets in the region-union comprising Equation 4 are not full-dimensional, and as soon as we know that an element of a preimage region is not full-dimensional, we need not consider it anymore. Future work will use more careful analysis to detect analytically when sets must necessarily be less than full-dimensional, but for now we query each of the 2^n subsets of the region-union. Thus most of our computational time is spent in calculations of the form of Equation 8.
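This LP is easy to set up with any solver; the following sketch (ours, using scipy purely for illustration) implements the full-dimensionality test of Equation 8 for a polytope {x : b − Ax ≥ 0}.

```python
import numpy as np
from scipy.optimize import linprog

def is_full_dimensional(A, b, tol=1e-9):
    """Equation 8: maximize kappa s.t. A x + 1*kappa <= b, kappa <= 1.

    The polytope {x : b - A x >= 0} is full-dimensional iff the optimum is > 0.
    """
    m, n = A.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                   # maximize kappa == minimize -kappa
    A_ub = np.hstack([A, np.ones((m, 1))])         # columns: (x, kappa)
    res = linprog(c, A_ub=A_ub, b_ub=b,
                  bounds=[(None, None)] * n + [(None, 1.0)],  # kappa <= 1
                  method="highs")
    return res.success and -res.fun > tol

# The unit square is full-dimensional; the segment {x1 = 0, 0 <= x2 <= 1} is not.
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
print(is_full_dimensional(A, np.array([1., 0., 1., 0.])))  # True
print(is_full_dimensional(A, np.array([0., 0., 1., 0.])))  # False
```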
We tested both Mosek and Gurobi as software for solving linear programs such as Equation 8, and found Gurobi to be faster.\n| Hidden layer widths | Accuracy | # parameters | Storage (MB) | Time (s) |\n| 8 | 0.973 | 42 | 1.105 | 0.205 |\n| 12 | 0.975 | 62 | 35.256 | 2.907 |\n| 16 | 0.974 | 82 | 985.767 | 52.912 |\n| 4→4 | 0.972 | 42 | 0.260 | 0.269 |\n| 6→6 | 0.967 | 74 | 6.764 | 3.205 |\n| 8→8 | 0.971 | 114 | 42.089 | 26.256 |\n| 4→4→4 | 0.969 | 62 | 3.125 | 1.802 |\n| 6→6→6 | 0.968 | 116 | 279.138 | 117.248 |\n| 4→4→4→4 | 0.973 | 82 | 43.695 | 27.253 |\nE.2 POLYTOPE VOLUME\nIn low dimensions, computing the volume of a polytope from its V form is fast and space-efficient. In the analysis presented in Section 3.2, we used the Qhull software, which implements the Quickhull algorithm (Barber et al. (1996)), via scipy.spatial.ConvexHull. This computation scales poorly with dimension, however; for instance, in our experiments it stopped being usable around ten dimensions. Büeler et al. (2000) gives some “why” and “how” about this difficulty.\nIn high dimensions the analysis of volumes seems difficult. However, if we already plan to compute the V representation of the preimage – also a highly complex operation – we may be able to compute the volume “for free”. Avis (2000) shows how to compute the volume as a byproduct of his algorithm for converting between H and V representations (solving the vertex enumeration problem), and it seems quite plausible that the same ideas could be adapted to other algorithms that solve the vertex enumeration problem as well. We hope to investigate this further and possibly incorporate it into our software.\nE.3 CLOCK TIME TO SOLVE PROBLEMS OF VARYING SIZES\nIn this section, we give a sense of the computational difficulty of inverting a DNN, summarized in the table above. The general finding is that for simple problems, one-layer networks of around 16 neurons, two-layer networks of about 8 neurons apiece, or three-layer networks of about six neurons apiece are easily inverted on a low-powered laptop, but the rate of growth is empirically very fast. Certainly even the five-layer, 25-neurons-apiece networks employed in Julian & Kochenderfer (2019) are out of reach under the current implementation.\nWe do not use any multiprocessing, though the essential computation, checking polytope emptiness, is embarrassingly parallel.\nWe perform ten fittings and compute preimages per size, and report the average model accuracy, time to compute the preimage, and standard deviation of each. To get a sense of the space complexity, we also present the disk storage necessary to hold both the H and V forms of a complete preimage partition (pickled dense numpy arrays of float64s). Since we do not re-tune hyperparameters across runs, accuracy is solely indicative.\nAll timings were performed on a 1.6 GHz Dual-Core Intel Core i5 CPU.\nA more formal analysis of the complexity of the computation will follow in future work, as will speed improvements from a more sophisticated logic for handling empty preimage regions, along with further experimentation on “tricks” such as sophisticated regularization or clever initialization schemes that might enable greater modelling capacity without increasing the scale of the network (e.g. Zhou et al. (2016) or Xiao et al. (2018)).\nThe problem is as described in Section 3.1, with further detail given in Section D.1." } ]
2,020
null
SP:c00c16048e11229025e209fc0e547af1471dae90
[ "The paper builds on a branch of recent works that consider and analyse Variational autoencoders (VAE) from the viewpoint of data compression. This started from Alemi et al. 2018, where the authors consider and analyse the mutual information between data and latent codes, and culminates in Kato et al. 2020, where the authors consider models in which both the encoder and the decoder are assumed to be deterministic (isometric) mappings. The submitted paper aims at reconciling this type of model with standard VAEs by claiming to prove that VAEs can be obtained from the former by a non-linear component-wise scaling of the latent space. The authors claim, among other things, that this approach allows one to estimate the data probability density from the learned VAE model." ]
Variational autoencoder (VAE) estimates the posterior parameters (mean and variance) of latent variables corresponding to each input data point. While it is used for many tasks, the transparency of the model is still an underlying issue. This paper provides a quantitative understanding of VAE properties by interpreting VAE as a non-linearly scaled isometric embedding. According to rate-distortion theory, the optimal transform coding is achieved by using a PCA-like orthonormal transform where the transform space is isometric to the input. From this analogy, we show theoretically and experimentally that VAE can be mapped to an implicit isometric embedding with a scale factor derived from the posterior parameters. As a result, we can estimate the data probabilities in the input space from the prior, loss metrics, and corresponding posterior parameters. In addition, the quantitative importance of each latent variable can be evaluated like the eigenvalue of PCA.
[]
[ { "authors": [ "Alexander Alemi", "Ben Poole", "Ian Fischer", "Joshua Dillon", "Rif A. Saurous", "Kevin Murphy" ], "title": "Fixing a broken ELBO", "venue": "In Proceedings of the 35th International Conference on Machine Learning(ICML),", "year": 2018 }, { "authors": [ "Johannes Ballé", "Laparra Valero", "Simoncelli Eero P" ], "title": "Density modeling of images using a generalized normalization transformation", "venue": "In Proceedings of the 4t International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Johannes Ballé", "David Minnen", "Saurabh Singh", "Sung Jin Hwang", "Nick Johnston" ], "title": "Variational image compression with a scale hyperprior", "venue": "In Proceedings of the 6th International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Toby Berger (ed" ], "title": "Rate Distortion Theory: A Mathematical Basis for Data Compression", "venue": null, "year": 1971 }, { "authors": [ "Tian Qi Chen", "Xuechen Li", "Roger B Grosse", "David K Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Bin Dai", "David Wipf" ], "title": "Diagnosing and enhancing vae models", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Bin Dai", "Yu Wang", "John Aston", "Gang Hua", "David Wipf" ], "title": "Hidden talents of the variational autoencoder", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Vivek K Goyal" ], "title": "Theoretical foundations of transform coding", "venue": "IEEE Signal Processing Magazine,", "year": 2001 }, { "authors": [ "Qing Han", "Jia-Xing Hong" ], "title": "Isometric Embedding of Riemannian Manifolds in Euclidean Spaces", "venue": "American Mathematical Society,", "year": 2006 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-VAE: Learning basic visual concepts with a constrained variational framework", "venue": "In Proceedings of the 5th International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Sicong Huang", "Alireza Makhzani", "Yanshuai Cao", "Roger Grosse" ], "title": "Evaluating lossy compression rates of deep generative models", "venue": "In Proceedings of the 37th International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "F. Jin", "P. Fieguth", "L. Winger", "E. Jernigan" ], "title": "Adaptive wiener filtering of noisy images and image sequences", "venue": "In IEEE International Conference on Image Processing,", "year": 2003 }, { "authors": [ "Keizo Kato", "Zhing Zhou", "Tomotake Sasaki", "Akira Nakagawa" ], "title": "Rate-distortion optimization guided autoencoder for generative analysis", "venue": "In Proceedings of the 37th International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "In Proceedings of the 35th International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Diederik P. 
Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In Proceedings of the 2nd International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Rätsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "In Proceedings of the 36th International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Romain Lopez", "Jeffrey Regier", "Michael I Jordan", "Nir Yosef" ], "title": "Information constraints on autoencoding variational bayes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "William A. Pearlman", "Amir Said" ], "title": "Digital Signal Compression: Principles and Practice", "venue": null, "year": 2011 }, { "authors": [ "Kamisetty Ramamohan Rao", "Pat Yip (eds" ], "title": "The Transform and Data Compression Handbook", "venue": null, "year": 2000 }, { "authors": [ "Michal Rolı́nek", "Dominik Zietlow", "Georg Martius" ], "title": "Variational autoencoders pursue pca directions (by accident)", "venue": "In Proceedings of Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Gary J. Sullivan", "Thomas Wiegand" ], "title": "Rate-distortion optimization for video compression", "venue": "IEEE Signal Processing Magazine,", "year": 1998 }, { "authors": [ "Naftali Tishby", "Fernando C. Pereira", "William Bialek" ], "title": "The information bottleneck method", "venue": "In The 37th annual Allerton Conference on Communication,", "year": 1999 }, { "authors": [ "Zhou Wang", "Alan Conrad Bovik", "Hamid Rahim Sheikh", "Eero P. Simoncelli" ], "title": "Image quality assessment: from error visibility to structural similarity", "venue": "IEEE Trans. on Image Processing,", "year": 2001 }, { "authors": [ "Norbert Wiener" ], "title": "Extrapolation, Interpolation, and Smoothing of Stationary Time Series", "venue": null, "year": 1964 }, { "authors": [ "Tishby" ], "title": "Gx, the metric D(", "venue": "RELATION TO TISHBY ET AL", "year": 1999 }, { "authors": [ "Alemi" ], "title": "2018) discuss the rate-distortion trade-off by the theoretical entropy analysis", "venue": null, "year": 2018 }, { "authors": [ "Alemi" ], "title": "2018) can be roughly verified in the optimized VAE with clearer", "venue": null, "year": 2018 }, { "authors": [ "Tishby" ], "title": "They suggest that VAE with β = 1 is sensitive (unstable) becauseD andR can be arbitrary value on the line R = H − βD = H −D", "venue": null, "year": 1999 }, { "authors": [ "signal. Huang" ], "title": "2020) show this property experimentally in their figures", "venue": null, "year": 2020 }, { "authors": [ "Dai" ], "title": "2018) analyses VAE by assuming a linear model", "venue": "As a result,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Variational autoencoder (VAE) (Kingma & Welling, 2014) is one of the most successful generative models, estimating posterior parameters of latent variables for each input data. In VAE, the latent representation is obtained by maximizing an evidence lower bound (ELBO). A number of studies (Higgins et al., 2017; Kim & Mnih, 2018; Lopez et al., 2018; Chen et al., 2018; Locatello et al., 2019; Alemi et al., 2018; Rolı́nek et al., 2019) have tried to reveal the property of latent variables. However, quantitative behavior of VAE is still not well clarified. For example, there has not been a theoretical formulation of the reconstruction loss and KL divergence in ELBO after optimization. More specifically, although the conditional distribution pθ(x|z) in the reconstruction loss of ELBO is predetermined such as the Gaussian or Bernoulli distributions, it has not been discussed well whether the true conditional distribution after optimization matches the predetermined distribution.\nRate-distortion (RD) theory (Berger, 1971), which is an important part of Shannon information theory and successfully applied to image compression, quantitatively formulates the RD trade-off optimum in lossy compression. To realize a quantitative data analysis, Rate-distortion (RD) theory based autoencoder, RaDOGAGA (Kato et al., 2020), has been proposed with isometric embedding (Han & Hong, 2006) where the distance between arbitrary two points of input space in a given metrics is always the same as L2 distance in the isometric embedding space. In this paper, by mapping VAE latent space to an implicit isometric space like RaDOGAGA on variable-by-variable basis and analysing VAE quantitatively as a well-examined lossy compression, we thoroughly clarify the quantitative properties of VAE theoretically and experimentally as follows.\n1) Implicit isometric embedding is derived in the loss metric defined space such that the entropy of data representation becomes minimum. A scaling factor between the VAE latent space and implicit isometric space is formulated by the posterior for each input. In the case of β-VAE, the posterior variance of each dimensional component in the implicit isometric embedding space is a constant β/2, which is analogous to the rate-distortion optimal of transform coding in RD theory. As a result, the reconstruction loss and KL divergence in ELBO can be quantitatively formulated. 2) From these properties, VAE can provide a practical quantitative analysis of input data. First, the data probabilities in the input space can be estimated from the prior, loss metric, and posterior parameters. In addition, the quantitative importance of each latent variable, analogous to the eigenvalue of PCA, can be evaluated from the posterior variance of VAE.\nThis work will lead the information theoretic generative models in the right direction." }, { "heading": "2 RELATED WORKS", "text": "" }, { "heading": "2.1 VARIATIONAL AUTOENCODER AND THEORETICAL ANALYSIS", "text": "In VAE, ELBO is maximized instead of maximizing the log-likelihood directly. Let x ∈ Rm be a point in a dataset. The original VAE model consists of a latent variable with fixed prior z ∼ p(z) = N (z; 0, In) ∈ Rn, a parametric encoder Encφ : x ⇒ z, and a parametric decoder Decθ : z ⇒ x̂. In the encoder, qφ(z|x) = N (z;µ(x),σ(x)) is provided by estimating parameters µ(x) and σ(x). Let Lx be a local cost at data x. Then, ELBO is described by\nELBO = Ex∼p(x) [ Ez∼qφ(z|x)[log pθ(x|z)]−DKL(qφ(z|x)‖p(z)) ] . 
(1)\nIn Ex∼p(x)[ · ], the first term Ez∼qφ(z|x)[ · ] is called the reconstruction loss. The second term DKL(·) is a Kullback–Leibler (KL) divergence. Let µj(x), σj(x), and DKLj(x) be j-th dimensional values of µ(x), σ(x), and KL divergence. Then DKL(·) is derived as:\nDKL(·) = n∑ j=1 DKLj(x), where DKLj(x) = 1 2 ( µj(x) 2 + σj(x) 2 − log σj(x)2 − 1 ) . (2)\nD(x, x̂) denotes a metric such as sum square error (SSE) and binary cross-entropy (BCE) as loglikelihoods of Gaussian and Bernoulli distributions, respectively. In training VAE, the next objective is used instead of Eq. 1, where β is a parameter to control the trade-off (Higgins et al., 2017).\nLx = Ez∼qφ(z|x)[D(x, x̂)] + βDKL(·). (3) However, it has not been fully discussed whether the true conditional distribution matches the predetermined distribution, or how the value of KL divergence is derived after training.\nThere have been several studies to analyse VAE theoretically. Alemi et al. (2018) introduced the RD trade-off based on the information-theoretic framework to analyse β-VAE. However, they did not clarify the quantitative property after optimization. Dai et al. (2018) showed that VAE restricted as a linear transform can be considered as a robust PCA. However, their model has a limitation for the analysis on each latent variable basis because of the linearity assumption. Rolı́nek et al. (2019) showed that the Jacobian matrix of VAE at each latent variable is orthogonal, which makes latent variables disentangled implicitly. However, they do not uncover the orthonormality and quantitative properties because they simplify KL divergence as a constant. Dai & Wipf (2019) also showed that the expected rate of VAE for the r-dimensional manifold is close to −(r/2) log γ +O(1) at γ → 0 when pθ(x̂|x) = N (x̂;x, γIm) holds. The remaining challenge is to clearly figure out what latent space is obtained at a given dataset, a loss metric, and β in the model." }, { "heading": "2.2 RATE-DISTORTION THEORY, TRANSFORM CODING, AND ISOMETRIC EMBEDDING", "text": "RD theory (Berger, 1971) formulated the optimal transform coding (Goyal, 2001) for the Gaussian source with square error metric as follows. Let x ∈ Rm be a point in a dataset. First, the data are transformed deterministically with the orthonormal transform (orthogonal and unit norm) such as Karhunen-Loève transform (KLT) (Rao & Yip, 2000). Let z ∈ Rm be a point transformed from x. Then, z is entropy-coded by allowing equivalent stochastic distortion (or posterior with constant variance) in each dimension. A lower bound of a rate R at a distortion D is denoted by R(D). The derivation of R(D) is as follows. Let zj be the j-th dimensional component of z and σzj2 be the variance of zj in a dataset. It is noted that σzj2 is the equivalent to eigenvalues of PCA for the dataset. Let d be a distortion equally allowed in each dimensional channel. At the optimal condition, the distortion Dopt and rate Ropt on the curve R(D) is calculated as a function of d:\nRopt = 1\n2 m∑ j=1 max(log(σzj 2/d), 0), Dopt = m∑ j=1 min(d, σzj 2). (4)\nThe simplest way to allow equivalent distortion is to use a uniform quantization (Goyal, 2001). Let T be a quantization step, and round(·) be a round function. Quantized value ẑj is derived as kT , where k = round(zj/T ). Then, d is approximated by T 2/12 as explained in Appendix H.1.\nTo practically achieve the best RD trade-off in image compression, rate-distortion optimization (RDO) has also been widely used (Sullivan & Wiegand, 1998). 
In RDO, the best trade-off is\nachieved by finding a encoding parameter that minimizes a cost L = D + λR at given Lagrange parameter λ. Recently, deep image compression (Ballé et al., 2018) has been proposed. In these works, instead of an orthonormal transform with sum square error (SSE) metric in the conventional lossy compression, a deep autoencoder is trained with flexible metrics, such as structural similarity (SSIM) (Wang et al., 2001) for RDO. Recently, an isometric autoencoder, RaDOGAGA (Kato et al., 2020) was proposed based on Ballé et al. (2018). They proved that the latent space to be isometric to the input space if the model is trained by RDO using a parametric prior and posterior with constant variance. By contrast, VAE uses a fixed prior with a variable posterior. In section 3, we explain that VAE can be quantitatively understood as the rate-distortion optimum as in Eq. 4 by mapping VAE latent space to implicit isometric embedding on a variable-to-variable basis as in Fig. 1 ." }, { "heading": "3 UNDERSTANDING OF VAE AS A SCALED ISOMETRIC EMBEDDING", "text": "This section shows the quantitative understanding of VAE. First, we present the hypothesis of mapping VAE latent space to an implicit isometric embedding space. Second, we reformulate the objective of β-VAE for easy analysis. Third, we prove the hypothesis from the minimum condition of the objective. Then, we show that ELBO can be interpreted as an optimized RDO cost of transform coding where the quantitative properties are well clarified, as well as discuss and correct some prior theoretical studies. Lastly, we explain the quantitative properties of VAE to validate the theory including approximations and provide a practical data analysis." }, { "heading": "3.1 HYPOTHESIS OF MAPPING VAE TO THE IMPLICIT ORTHONORMAL TRANSFORM", "text": "Figure 1 shows the mapping of VAE to the implicit isometric embedding. Assume the data manifold is smooth and differentiable. Let Sinput(⊂ Rm) be an input space of the dataset. D(x, x́) denotes a metric for points x, x́ ∈ Sinput. Using the second order Taylor expansion, D(x,x + δx) can be approximated by tδx Gxδx, where Gx and δx are an x dependent positive definite Hermitian metric tensor and an arbitrary infinitesimal displacement in Sinput, respectively. The derivations of Gx for SSE, BCE, and SSIM are shown in Appendix H.2. Next, an implicit isometric embedding space SIso(⊂ Rm) is introduced like the isometric latent space in RaDOGAGA (Kato et al., 2020), such that the entropy of data representation is minimum in the inner product space ofGx. Let y and yj be a point in SIso and its j-th component, respectively. Because of the isometricity, p(x) ' p(y) will hold. We will also show the posterior variance of each dimensional component yj is a constant β/2. In addition, the variance of yj will show the importance like PCA when the data manifold has a disentangled feature by nature in the metric space ofGx and the prior covariance is diagonal.\nThen, SIso is nonlinearly scaled to the VAE’s anisometric orthogonal space SVAE(⊂ Rn) on a variable-by-variable basis. Let z be a point in SVAE, and zj denotes the j-th component of z. Let p(yj) and p(zj) be the probability distribution of the j-th variable in SIso and SVAE. Each variable yj is nonlinearly scaled to zj , such that dzj/dyj = p(yj)/p(zj) to fit the cumulative distribution. dzj/dyj is σj(x)/ √ β/2, the ratio of posterior’s standard deviations for zj and yj , such that KL divergences in both spaces are equivalent. 
In addition, dimensional components whose KL divergences are zero can be discarded because such dimensions have no information.\n3.2 REFORMULATION OF OBJECTIVE TO THE FORM USING ∂x/∂zj AND ∂x/∂z\nWe reformulate the objective Lx to the form using ∂x/∂zj and ∂x/∂z. Here, the dimensions of x and z, i.e., m and n, are set as the same. The condition to reduce n is shown in section 3.3.\nReformulation of D(x, x̂) loss: In accordance with Kato et al. (2020), the loss D(x, x̂) can be decomposed into D(x̆, x̂) + D(x, x̆), where x̆ denotes Decθ(µ(x)). The first term D(x̆, x̂) is a distortion between the decoded values of µ(x) with and without noise σ(x). We call this term as a coding loss. This term is expanded as follows. δx̆ denotes x̂ − x̆. Then, D(x̆, x̂) term can be approximated by tδx̆ Gxδx̆. Let xzj be ∂x/∂zj at zj = µj(x), and δzj ∼ N (0, σj(x)) be an added noise in zj . Then, δx̆ is approximated by δx̆ ' ∑m j=1 δzj xzj . Because δzj and δzk for j 6= k are uncorrelated, the average of D(x̆, x̂) over z ∼ qφ(z|x) can be finally reformulated by\nEz∼qφ(z|x) [D(x̆, x̂)] ' Ez∼qφ(z|x) [ tδx̆ Gxδx̆ ] ' n∑ j=1 σj(x) 2 txzjGxxzj . (5)\nThe second term D(x, x̆) is a loss between the input data and Decθ(µ(x)). We call this term a transform loss. We presume VAE is analogous to the Wiener filter (Wiener, 1964; Jin et al., 2003) where the coding loss is regarded as an added noise. From the Wiener filter theory, the ratio between the transform loss and coding loss is close to the ratio between the coding loss and the variance of the input data. The coding loss, approximately nβ/2 as in Eq. 14, should be smaller than the variance of the input data to capture meaningful information. Thus the transform loss, usually small, is not considered in the following discussion. Appendix B explains the detail in a simple 1-dimensional VAE. We show the exhaustive and quantitative evaluation of coding loss and transform loss in the toy dataset in appendix E.2 to validate this approximation.\nReformulation of KL divergence: When σj(x) 1, σj(x)2 − log σj(x)2 is observed. For example, when σj(x)2 < 0.1, we have −(σj(x)2/ log σj(x)2) < 0.05. In such dimensions, DKLj(x) can be approximated as Eq. 6 by ignoring the σj(x)2 term and setting p(µj(x)) to N (zj ; 0, 1):\nDKLj(x) ' 1\n2\n( µj(x) 2 − log σj(x)2 − 1 ) = − log ( σj(x) p(µj(x)) ) − log 2πe\n2 . (6)\nEq. 6 can be considered as a rate of entropy coding for a symbol with mean µj(x) allowing quantization noise σj(x)2, as shown in Appendix H.3. Thus, in the dimension with meaningful information, σj(x)\n2 is much smaller than the prior variance 1, and the approximation in Eq.6 is reasonable. Let p(µ(x)) be ∏n j=1 p(µj(x)). p(µ(x)) = p(x) |det(∂x/∂z)| holds where det(∂x/∂z) is a Jacobian determinant at z = µ(x). Let CDKL be a constant n 2 log 2πe. Then, DKL(·) is reformulated by\nDKL(·) ' − log ( p(µ(x)) n∏ j=1 σj(x) ) − CDKL ' − log ( p(x) ∣∣∣∣det(∂x∂z )∣∣∣∣ n∏ j=1 σj(x) ) − CDKL . (7)\nFinal objective form: From Eqs. 5 and 7, the objective L′x to minimise is derived as:\nL′x = n∑ j=1 σj(x) 2 txzjGxxzj − β log ( p(x) ∣∣∣∣det(∂x∂z )∣∣∣∣ n∏ j=1 σj(x) ) − CDKL . (8)" }, { "heading": "3.3 PROOF OF THE HYPOTHESIS", "text": "Mapping VAE to implicit isometric embedding: The minimum condition of L′x at x is examined. Let x̃zj be the j-th column vector of a cofactor matrix for Jacobian matrix ∂x/∂z. Note that d log |det(∂x/∂z)|/dxzj = x̃zj/det(∂x/∂z) holds as is also used in Kato et al. (2020). 
Using this equation, the derivative of L′x by xzj is described by\ndL′x dxzj = 2σj(x) 2Gxxzj −\nβ\ndet (∂x/∂z) x̃zj . (9)\nNote that txzk · x̃zj = det(∂x/∂z) δjk holds by the cofactor’s property. Here, · denotes the dot product, and δjk denotes the Kronecker delta. By setting Eq. 9 to zero and multiplying txzk from the left, the condition to minimize L′x is derived by the next orthogonal form of xzj :\n(2σj(x) 2/β) txzkGxxzj = δjk. (10)\nHere, the diagonal posterior covariance is the key for orthogonality. Next, implicit latent variable y and its j-th dimensional component yj are introduced. Set yj to zero at zj = 0. The derivative between yj and zj at µj(x) is defined by\ndyj dzj ∣∣∣ zj=µj(x) = √ β 2 σj(x) −1. (11)\nxyj denotes ∂x/∂yj . By applying xzj = dyj/dzj xyj to Eq. 10, xyj shows the isometric property (Han & Hong, 2006; Kato et al., 2020) in the inner product space with a metric tensorGx as follows:\ntxyjGxxyk = δjk. (12)\nMinimum entropy of implicit isometric representation: Let L′min x be a minimum of L′x at x. Dminx and Rminx denote a coding loss and KL divergence in L′minx, respectively. By applying Eqs. 10-11 and p(zj) = (dyj/dzj) p(yj) to Eqs. 5 and 7, the following equations are derived:\nL′min x = Dmin x + βRmin x, where Dmin x = nβ\n2 , Rmin x = − log p(y)−\nn log(βπe)\n2 . (13) Here, Dmin x is derived as ∑n j=1(β/2)\ntxyjGxxyj = nβ/2, implying each dimensional posterior variance of the implicit isometric variable is a constant β/2. In addition, exp(−L′min x/β) = p(y) exp(Const.) ∝ p(y) ' p(x) will hold in the inner product space ofGx from the isometricity. By averaging L′minx over x ∼ p(x) and approximating this average by the integration over y ∼ p(y), the global minimum L′G is derived as:\nL′G = DG + βRG, where DG = nβ\n2 , RG = min p(y)\n( − ∫ p(y) log p(y)dy ) − n log(βπe)\n2 . (14)\nThe term− ∫ p(y) log p(y)dy in RG is the entropy of y. Thus, the optimal implicit isometric space is derived such that the entropy of data representation is minimum in the inner product space ofGx.\nWhen the data manifold has a disentangled property in the given metric, each yj will capture a disentangled feature with minimum entropy, as shown in Kato et al. (2020). This is analogous to PCA for Gaussian data, which gives the disentangled representation with minimum entropy in SSE. Considering the similarity to the PCA eigenvalues, the variance of yj will indicate the importance of each dimension. In the dimensions where the variance of yj is less than β/2, σj(x) = 1, µj(x) = 0, and DKLj(x) = 0 will hold. In addition, σj(x)2 txzjGxxzj will be close to 0 because this needs not to be balanced with DKLj(x). This is similar to the case in the RD theory in Eq. 4 where σzj2 is less than d, meaning no information. As a result, Eqs. 10-14 will not hold here. Thus, latent variables with variances from the largest to the n-th withDKLj(x) > 0 are sufficient for the representation and the dimensions with DKLj(x) = 0 can be ignored, allowing the reduction of the dimension n for z.\nSome approximations may be slightly violated, however, our analysis still helps to understand VAE." }, { "heading": "3.4 DISCUSSION AND RELATIONSHIP WITH PRIOR THEORETICAL STUDIES", "text": "First, we show β-VAE optimum as in Eq. 14 can be interpreted as the rate-distortion optimum (Eq. 4) in RD theory when the uniform distortion d in Eq. 4 is set to β/2 in the metric defined space. H(X) = − ∫ p(x) log p(x) dx denotes a differential entropy for a set x ∈ X;x ∼ p(x). 
For the 1-dimensional Gaussian data x ∼ N (x, 0, σ2), H(X) = 12 log(2πeσ 2) holds. Thus, Ropt in Eq. 4\nis derived as a difference of the differential entropy between transformed data z ∼ ∏ j N (zj ; 0, σzj ) and uniform distortion D ∼ N (D; 0, dIm). RG is also derived as a difference of the differential entropy between transformed data y ∼ p(y) and uniform distortion D ∼ N (D; 0, (β/2)Im). Furthermore, DG in Eq. 14 can be interpreted as Dopt in Eq. 4 by setting d = β/2. As a result, the VAE optimal corresponds to the rate-distortion optimal of transform coding in RD theory, and β/2 is regarded as a variance of the constant distortion equally added to each dimensional component. Because of the isometricity, the power of distortion (i.e., posterior variance) in the implicit isometric space is the same as that in the metric defined input space. Thus the conditional distribution after optimization in the metric defined space is derived as pθ(x|z) = pθ(x|x̂) ' N (x; x̂, (β/2)I). This is consistent with the fact that the quality of the reconstructed data becomes worse in larger β.\nNext, we estimate the reconstruction loss Eqφ(z|x)[log pθ(x|z)] and KL divergence DKL(·) in βVAE and also correct the analysis in Alemi et al. (2018). LetH = −Ep(x)[log p(x)] be a differential entropy of input data. When β = 1, Alemi et al. (2018) suggest ”the ELBO objective alone (and the marginal likelihood) cannot distinguish between models that make no use of the latent variable (autodecoders) versus models that make large use of the latent variable and learn useful representations for reconstruction (autoencoders),” because the reconstruction loss and KL divergence can be arbitrary value on the line−Eqφ(z|x)[log pθ(x|z)]+DKL(·) = H . Correctly, the reconstruction loss and KL divergence after optimization are deterministically estimated at any β (including β = 1) as:\nEqφ(z|x)[log pθ(x|z)] ' −(n/2) log(βπe), DKL(·) ' − log p(y)− (n/2) log(βπe). (15) The proof is explained in Appendix A.1. Thus ELBO can be estimated as:\nELBO = Ep(x)[Ez∼qφ(z|x)[log pθ(x|z)]−DKL(·)] ' Ep(x)[log p(y)] ' Ep(x)[log p(x)]. (16) As a result, when the objective of β-VAE is optimised, ELBO (Eq. 1) in the original form (Kingma & Welling, 2014) is approximately equal to the log-likelihood of x, regardless β = 1 or not.\nFinally, the predetermined conditional distribution pRp(x|x̂) and the true conditional distribution after optimization pRθ(x|x̂) are examined using β in the input Euclidean space of x. Assume pRp(x|x̂) = N (x; x̂, σ2I). In this case, the metric D(x, x̂) is derived as − log pRp(x|x̂) = (1/2σ2)|x− x̂|22 + Const. From Eq. 13, the following equations are derived:\nEqφ(x̂|x)[D(x, x̂)] = Eqφ(x̂|x) [ 1 2σ2 |x− x̂|22 ] = Eqφ(x̂|x) [ 1 2σ2 ∑ i (xi − x̂i)2 ] ' nβ/2, (17)\nEqφ(x̂|x) [ (xi − x̂i)2 ] ' βσ2. (18)\nBecause the variance of each dimension is estimated as βσ2, the true conditional distribution after optimization is approximated as pRθ(x|x̂) ' N (x; x̂, βσ2I). If β = 1, i.e., the original VAE, pRp(x|x̂) and pRθ(x|x̂) are equivalent as expected. If β 6= 1, however, pRp(x|x̂) and pRθ(x|x̂) are different. Actually, what β-VAE does is only to scale the variance of the pre-determined conditional distribution in the original VAE by a factor of β, because β-VAE objective can be rewritten as:\nEqφ(·)[logN (x; x̂, σ 2I)]− βDKL(·) = β ( Eqφ(·)[logN (x; x̂, βσ 2I)]−DKL(·) ) + const. (19)\nMore detailed discussions about prior works (Higgins et al. (2017); Alemi et al. (2018); Dai et al. (2018); Dai & Wipf (2019); Tishby et al. 
(1999); Goyal (2001)) are explained in Appendix A." }, { "heading": "3.5 QUANTITATIVE PROPERTIES TO VALIDATE THE THEORY", "text": "This section shows three quantitative properties in VAE with a priorN (z; 0, In), to validate the theory in section 3.3. The second and third properties also provide practical data analysis approaches. The derivation of equations in the second and third properties are explained in appendix C.\nNorm of xyj equal to 1: Let e(j) be a vector (0, · · · , j-th\n1 , · · · , 0) where the j-th dimension is 1, and others are 0. Let D′j(z) be D(Decθ(z),Decθ(z + e\n(j)))/ 2, where denotes a minute value for the numerical differential. From Eq. 10, the squared norm of xyj can be numerically evaluated as the first term of Eq. 20. This value will be equal to 1 at any x and dimension j except DKLj(x) = 0.\n2 β σj(x) 2D′j(z) ' 2 β\n( σj(x) 2 txzjGxxzj ) ' txyjGxxyj = 1. (20)\nIf observed, the existence of an implicit isometric embedding can be shown because of unit norm and orthogonality (Rolı́nek et al., 2019). Eq. 20 also show σj(x)2 txzjGxxzj ' β 2 , implying that a noise σj(x) added to each dimension of latent variable causes an equal noise β/2 in the input space.\nPCA-like feature: When the data manifold has a disentangled property in the given metric, the variance of the j-th implicit latent component yj can be roughly estimated as∫\nyj 2p(yj)dyj '\nβ 2 E x∼p(x) [σj(x) −2]. (21)\nThe averageE[σj(x)−2] on the right allows evaluating the quantitative importance of each dimension in practice, like the eigenvalue of PCA. Note that a dimension whose average is close to 1 implies DKLj(x) = 0. Such a dimension has no information and is an exceptions of the property in Eq. 20.\nEstimation of the data probability distribution: First, assume the case m = n. Since the y space is isometric to the inner product space of Gx, the PDFs in both spaces are the same. The Jacobian determinant between the input space and inner product space, giving the the ratio of PDFs, is derived as |Gx| 1 2 . We set p(µ(x)) to the prior. Thus, the data probability in the input space can be estimated by |Gx| 1 2 and either the prior/posterior or Lx after training, as the following last two equations:\np(x) ' |Gx| 1 2 p(y) ∝ |Gx| 1 2 p(µ(x)) m∏ j=1 σj(x) ∝ |Gx| 1 2 exp ( − 1 β Lx ) . (22)\nIn the case m > n, the derivation of the PDF ratio between the input space and the inner product space is generally intractable, except for Gx = axIm, where ax is an x-dependent scalar factor. In this case, the PDF ratio is given by axn/2. Thus, p(x) can be estimated as follows:\np(x) ∝ ax n 2 p(µ(x)) n∏ j=1 σj(x) ∝ ax n 2 exp ( − 1 β Lx ) . (23)\nEquations 22 and 23 enable a probability-based quantitative data analysis/sampling in practice." }, { "heading": "4 EXPERIMENT", "text": "We show the experiments of the quantitative properties presented in Section 3.5. First, the results of the toy dataset are presented. Then, the results of CelebA are shown as a real data example." }, { "heading": "4.1 EVALUATION OF QUANTITATIVE PROPERTIES IN THE TOY DATASET", "text": "The toy dataset is generated as follows. First, three dimensional variables s1, s2, and s3 are sampled in accordance with the three different shapes of distributions p(s1), p(s2), and p(s3), as shown in Fig. 2. The variances of s1, s2, and s3 are 1/6, 2/3, and 8/3, respectively, such that the ratio of the variances is 1:4:16. Second, three 16-dimensional uncorrelated vectors v1, v2, and v3 with L2 norm 1, are provided. 
Finally, 50, 000 toy data with 16 dimensions are generated by x = ∑3 i=1 sivi. The data generation probability p(x) is also set to p(s1)p(s2)p(s3). If our hypothesis is correct, p(yj) will be close to p(sj). Then, σj(x) ∝ dzj/dyj = p(yj)/p(zj) will also vary a lot with these varieties of PDFs. Because the properties presented in Section 3.5 are calculated from σj(x), our theory can be easily validated by evaluating those properties.\nThen, the VAE model is trained using Eq. 1. We use two kinds of the reconstruction loss D(·, ·) to analyze the effect of the loss metrics. The first is the square error loss equivalent to sum square error (SSE). The second is the downward-convex loss which we design as Eq. 24, such that the shape becomes similar to the BCE loss as in Appendix H.2:\nD(x, x̂) = ax‖x− x̂‖22, where ax = (2/3 + 2 ‖x‖22/21) andGx = axIm. (24)\nHere, ax is chosen such that the mean of ax for the toy dataset is 1.0 since the variance of x is 1/6+2/3+8/3=7/2. The details of the networks and training conditions are written in Appendix D.1.\nThen the network is trained with two types of reconstruction losses. The ratio of transform loss to coding loss for the square error loss is 0.023, and that for the downward-convex loss is 0.024. As expected in section 3.2, the transform losses are negligibly small. Tables 1 and 2 show the measurements of 2βσj(x) 2D′j(z) (shown as 2 βσj 2D′j), D ′ j(z), and σj(x)\n−2 described in Section 3.5. In these tables, z1, z2, and z3 show acquired latent variables. ”Av.” and ”SD” are the average and standard deviation, respectively. To begin with, the norm of the implicit orthonormal basis\nTable 1: Property measurements of the toy dataset trained with the square error loss.\nTable 2: Property measurements of the toy dataset trained with the downward-convex loss.\na 3/2 x p(µ(x))\n∏\nj σj(x), and (d) a 3/2 x exp(−Lx/β).\nis discussed. In both tables, the values of 2βσ(x)j 2D′j(z) are close to 1.0 in each dimension as described in Eq. 23. By contrast, the average of D′j(z), which corresponds to txzjGxxzk , is different in each dimension. Therefore, the derivative of x with zj , the original latent variable of VAE, is not normalized.\nNext, the PCA-like feature is examined. The average of σj(x)−2 in Eq.21 and its ratio are shown in Tables 1 and 2. Although the average of σj(x)−2 is a rough estimation of variance, the ratio is close to 1:4:16, i.e., the variance ratio of generation parameters s1, s2, and s3. When comparing both losses, the ratio of s2 and s3 for the downward-convex loss is somewhat smaller than that for the square error. This is explained as follows. In the downward-convex loss, |xyj |2 tends to be 1/ax from Eq. 12, i.e. txyj (axIm)xyk = δjk. Therefore, the region in the inner product space with a larger norm is shrunk, and the estimated variances corresponding to s2 and s3 become smaller.\nFigure 3 shows the scattering plots of the data generation probability p(x) and estimated probabilities for the downward-convex loss. The plots for the square error loss are shown in Appendix E. Figure 3a shows the plots of p(x) and the prior probabilities p(µ(x)). This graph implies that it is difficult to estimate p(x) only from the prior. The correlation coefficient shown as ”R” (0.434) is also low. Figure 3b shows the plots of p(x) and exp(−Lx/β), i.e., the lower bound of likelihood. The correlation coefficient (0.771) becomes better, but is still not high. Next, Figures 3c and 3d show the plots of a3/2x p(µ(x)) ∏ j σj(x) and a 3/2 x exp(−Lx/β) in Eq. 23. 
These graphs, showing a high correlation coefficients around 0.91, support that the objective Lx in Eq. 3 is optimized in the inner product space ofGx. In the case of the square error loss, the plots with exp(−Lx/β) also shows a high correlation coefficient 0.904 because ax is 1, allowing the probability estimation from Lx in Eq. 3. The ablation study with different PDF, losses, and β is shown in Appendix E." }, { "heading": "4.2 EVALUATIONS IN CELEBA DATASET", "text": "This section evaluates the first and second quantitative properties of VAE trained with the CelebA dataset 1 (Liu et al., 2015) as an example of real data. This dataset is composed of 202,599 celebrity images. In use, the images are center-cropped to form 64× 64 sized images.\n1(http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)\nFigure 4: Graph of σj(x)−2 average and 2βσj(x) 2D′j(z) in VAE for CelebA dataset.\nFigure 5: Graph of σj(x)−2 average and 2βσj(x) 2D′j(z) in VAE for CelebA dataset with explicit decomposed loss.\nWe use SSIM, which is popular in image compression, as a reconstruction loss. The details of networks and training conditions are written in Appendix D.2.\nFigure 4 shows the averages of σj(x)−2 in Eq.21 as the estimated variances, as well as the average and the standard deviation of 2βσj(x)\n2D′j(z) in Eq.20 as the estimated square norm of implicit transform. The latent variables zi are numbered in descending order by the estimated variance. In the dimensions greater than the 27th, the averages of σj(x)−2 are close to 1 and that of 2βσj(x) 2D′j(z) is close to 0, implying DKL(·) = 0. Between the 1st and 26th dimensions, the mean and standard deviation of 2βσj(x)\n2D′j(z) averages are 1.83 and 0.13, respectively. These values seem almost constant with a small standard deviation; however, the mean is somewhat larger than the expected value 1. This result implies that the implicit transform can be considered as almost orthonormal by dividing √ 1.83 ' 1.35. Thus, the average of σj(x)−2 still can determine the quantitative importance of each latent variable. This also mean that the added noise to each yj is around 1.83(β/2). We also train VAE by the decomposed loss explicitly, where Lx is set to D(x, x̆) + D(x̆, x̂) + βDKL(·). Figure 5 shows the result. Here, the mean and standard deviation of 2βσj(x)\n2D′j(z) averages are 0.92 and 0.04, respectively, which suggests almost a unit norm. As a result, the explicit use of decomposed loss matches the theory better, allowing better analysis. The slight violation of the norm in the conventional form needs a more exact analysis as a future study.\nFigure 6 shows decoder outputs where the selected latent variables are traversed from −2 to 2 while setting the rest to 0. The average of σj(x)−2 is also shown there. The components are grouped by the average of σj(x)−2, such that z1, z2, z3 to the large, z16, z17 to the medium, and z32 to the small, accordingly. In the large group, significant changes of background brightness, the direction of the face, and hair color are observed. In the medium group, we can see minor changes such as facial expressions. However, in the small group, there are almost no changes. This result strongly supports that the average of σj(x)−2 shows the importance of each latent variable. The traversed outputs for all the component and results with another conditions are shown in Appendix F." }, { "heading": "5 CONCLUSION", "text": "This paper provides a quantitative understanding of VAE by non-linear mapping to an isometric embedding. 
According to the Rate-distortion theory, the optimal transform coding is achieved by using PCA/KLT orthonormal transform, where the transform space is isometric to the input. From this analogy, we show theoretically and experimentally that VAE can be mapped to an implicit isometric embedding with a scale factor derived from the posterior parameter. Based on this property, we also clarify that VAE can provide a practical quantitative analysis of input data such as the probability estimation in the input space and the PCA-like quantitative multivariate analysis. We believe the quantitative properties thoroughly uncovered in this paper will be a milestone to further advance the information theory-based generative models such as VAE in the right direction." }, { "heading": "A DETAILED RELATION TO PRIOR WORKS", "text": "Firstly, we clarify the the difference between ELBO in Eq. 1 and the objective Lx in Eq. 3 in the right direction. Then, we discuss the relation to the prior works. We also point out the incorrectness of some works." }, { "heading": "A.1 DERIVATION OF ELBO WITH CLEAR AND QUANTITATIVE FORM", "text": "We derive the reconstruction loss and KL divergence terms in ELBO (without β) at x in Eq. 1 when the objective of β-VAE L′x in Eq. 13 is optimised. The reconstruction loss can be rewritten as\nEz∼qφ(z|x)[log pθ(x|z)] = ∫ qφ(z|x) log pθ(x|z)dz = ∫ qφ(y|x) log pθ(x|y)dy. (25)\nLet µy(x) be a implicit isometric variable corresponding to µ(x). Because the posterior variance in each isometric latent variable is a constant β/2, qφ(y|x) ' N (y;µy(x), (β/2)In) will hold. If β/2 is small, p(x̂) ' p(x) will hold. Then, the next equation will hold also using isometricity;\npθ(x|z) = pθ(x|y) = pθ(x|x̂) = p(x̂|x)p(x)/p(x̂) ' p(x̂|x) ' qφ(y|x). (26)\nThus the reconstruction loss is estimated as: Ez∼qφ(z|x)[log pθ(x|z)] ∼ ∫ N (y;µy(x), (β/2)In) logN (y;µy(x), (β/2)In) dy\n= −(n/2) log(βπe). (27)\nFrom Eq. 13, KL divergence is derived as:\nDKL(·) = Rminx = − log p(y)− (n/2) log(βπe). (28)\nBy summing both terms, ELBO at x can be estimated as\nELBO = Ex∼p(x)[Ez∼qφ(z|x)[log pθ(x|z)]−DKL(·)] ' Ex∼p(x)[log p(y)] ' Ex∼p(x)[log p(x)]. (29)\nAs a result, ELBO (Eq. 1) in the original form (Kingma & Welling, 2014) is close to the loglikelihood of x, regardless β = 1 or not, when the objective of β-VAE (Higgins et al., 2017) is optimised.\nSome of the prior VAE works do not explicitly distinguish between the reconstruction loss Ex̂∼p(x̂|x)[p(x|x̂)] in ELBO and the distortion D(x, x̂) in the objective Lx by mistake, which leads to some incorrect discussion.\nIn addition, there have also been incorrect discussions in some prior works. In ELBO derivation, they useEx̂∼p(x̂|x)[p(x|x̂)] as a reconstruction loss, without discussing what kinds of properties the distortion probability p(x̂|x) should be. In training VAE with a real dataset, by contrast, they use a predetermined distortion metric D(x, x̂) like BCE and SSE as a reconstruction loss instead of a log-likelihood of the distortion probability, without discussing what distortion probability should be after optimization.\nCorrectly, the distortion probability p(x̂|x) after training is determined by β and the metric as pθ(x̂|x) ' N (x̂;x, (β/2)Im) in the metric defined space. Then, by applying pθ(x̂|x) ' pθ(x̂|x) to Eq. 
1, the value of ELBO will become log p(y) ' log p(x) regardless β = 1 or not.\nIfD(x,x+δx) = tδxGxδx+O(||δx||3) is not SSE, by introducing a variable x́ = Lx−1x where Lx satisfies tLx Lx = Gx, the metric D(·, ·) can be replaced by SSE in the Euclidean space of x́." }, { "heading": "A.2 RELATION TO TISHBY ET AL. (1999)", "text": "The theory described in Tishby et al. (1999) is consistent with our analysis. Tishby et al. (1999) clarified the behaviour of the compressed representation when the rate-distortion trade-off is optimized.\nx ∈ X denotes the signal space with a fixed probability p(x) and x̂ ∈ X̂ denotes its compressed representation. Let D(x, x̂) be a loss metric. Then the rate-distortion trade-off can be described as:\nL = I(X; X̂) + β′ E p(x,x̂) [D(x, x̂)]. (30)\nBy solving this condition, they derive the following equation:\np(x̂|x) ∝ exp(−β′D(x, x̂)). (31)\nAs shown in our discussion above, p(x̂|x) ' N (x̂;x, (β/2)Im) will hold in the metric defined space from our VAE analysis. This result is equivalent to Eq. 31 in their work if D(x, x̂) is SSE and β′ is set to β−1, as follows:\np(x̂|x) ∝ exp(−β′D(x, x̂)) = exp ( −||x− x̂|| 2 2\n2(β/2)\n) ∝ N (x̂;x, (β/2)Im). (32)\nIf D(x, x̂) is not SSE, the use of the space transformation explained in appendix A.1 will lead to the same result.\nA.3 RELATION TO β-VAE (HIGGINS ET AL., 2017)\nIn the β-VAE work by Higgins et al. (2017), it is presumed that the objective Lx was not mistakenly distinguished from ELBO. In their work, ELBO equation is modified as:\nEp(x)[ Ex̂∼pφ(x̂|x)[qθ(x|x̂)]− βDKL(·) ]. (33)\nHowever, they use the predetermined probabilities of pθ(x̂|x) such as the Bernoulli and Gaussian distributions in training (described in table 1 in Higgins et al. (2017)). As shown in our appendix H.2, the log-likelihoods of the Bernoulli and Gaussian distributions can be regarded as BCE and SSE metrics, respectively. As a result, the actual objective for training in Higgins et al. (2017) is not Eq. 33, but the objective Lx in Eq. 3 using BCE and SSE metrics with varying β. Thus ELBO as Eq. 1 form will become log p(x) in the BCE / SSE metric defined space regardless β = 1 or not, as shown in appendix A.1.\nActually, the equation 33 dose not show the log-likelihood of x. When DKL(·) ' − log p(x) − (n/2) log(βπe) and Ex̂∼p(x̂|x)[p(x|x̂)] ' −(n/2) log(βπe) are applied, the value of Eq. 33 is derived as β log p(x) + (β − 1)(n/2) log(βπe), which is different from the log-likelihood of x if β 6= 1. Correctly, what β-VAE really does is only to scale the variance of the pre-determined conditional distribution in the original VAE by a factor of β. In the case the pre-determined conditional distribution is Gaussian N (x; x̂, σ2I), the objective of β can be can be rewritten as a linearly scaled original VAE objective with a Gaussian N (x; x̂, βσ2I):\nEqφ(·)[logN (x; x̂, σ 2I)]− βDKL(·) = Eqφ(·)\n[ −1\n2 log 2πσ2 − |x− x̂| 2 2 2σ2\n] − βDKL(·)\n= β ( Eqφ(·) [ −1 2 log 2πβσ2 − |x− x̂| 2 2 2βσ2 ] −DKL(·) ) + β\n2 log 2πβσ2 − 1 2 log 2πσ2\n= β ( Eqφ(·)[logN (x; x̂, βσ 2I)]−DKL(·) ) + const.\n(34)" }, { "heading": "A.4 RELATION TO ALEMI ET AL. (2018)", "text": "Alemi et al. (2018) discuss the rate-distortion trade-off by the theoretical entropy analysis. Their work is also presumed that the objective Lx was not mistakenly distinguished from ELBO, which leads to the incorrect discussion. In their work, the differential entropy for the inputH , distortionD, and rate R are derived carefully. 
They suggest that VAE with β = 1 is sensitive (unstable) because D and R can be arbitrary value on the line R = H − βD = H −D. Furthermore, they also suggest\nthat R ≥ H, D = 0 at β → 0 and R = 0, D ≥ H at β → ∞ will hold as shown the figure 1 of their work.\nIn this appendix, we will show that β determines the value of R and D specifically. We also show that R ' H −D will hold regardless β = 1 or not. In their work, these values of H , D, and D are mathematically defined as:\nH ≡ − ∫ dx p∗(x) log p∗(x), (35)\nD ≡ − ∫ dx p∗(x) ∫ dz e(z|x) log d(x|z), (36)\nR ≡ ∫ dx p∗(x) ∫ dz e(z|x) log e(z|x)\nm(z) . (37)\nHere, p∗(x) is a true PDF of x, e(z|x) is a stochastic encoder, e(z|x) is a decoder, and m(z) is a marginal probability of z.\nOur work allows a rough estimation of Eqs. 35-37 with β by introducing the implicit isometric variable y as explained in our work.\nUsing isometric variable y and the relation dz e(z|x) = dy e(y|x), Eq. 36 can be rewritten as: D = − ∫ dx p∗(x) ∫ dy e(y|x) log d(x|y). (38)\nLet µy be the implicit isometric latent variable corresponding to the mean of encoder output µ(x). As discussed in section 3.3, e(y|x) = N (y;µy, (β/2)In) will hold. Because of isometricity, the value of d(x|y) will be also close to e(y|x) = N (y;µy, (β/2)In). Though d(x|z) must depend on e(z|x), this important point has not been discussed well in this work. By using the implicit isometric variable, we can connect both theoretically. Thus, D can be estimated as:\nD ' ∫ dx p∗(x) ∫ dy N (y;µy, (β/2)In) logN (y;µy, (β/2)In)\n' ∫ dx p∗(x) (n\n2 log(βπe) ) = n\n2 log(βπe) (39)\nSecond, R is examined. m(y) is a marginal probability of y. Using the relation dz e(z|x) = dy e(y|x) and e(z|x)/m(z) = (e(y|x)(dy/dz))/(m(y)(dy/dz)) = e(y|x)/m(y), Eq. 37 can be rewritten as:\nR ' ∫ dx p∗(x) ∫ dy e(y|x) log e(y|x)\nm(y) . (40)\nBecause of isometricity, e(y|x) ' p(x̂|x) ' N (x̂;x, (β/2)Im) will approximately hold where x̂ denotes a decoder output. Thus m(y) can be approximated by:\nm(y) ' ∫ dx p∗(x)e(y|x) ' ∫ dx p∗(x) N (x̂;x, (β/2)Im) (41)\nHere, if β/2, i.e., added noise, is small enough compared to the variance of x, a normal distribution function term in this equation will act like a delta function. Thus m(y) can be approximated as:\nm(y) ' ∫ dx́ p∗(x́) δ(x́− x) ' p∗(x). (42)\nIn the similar way, the following approximation will also hold.∫ dy e(y|x) logm(y) ' ∫ dy e(y|x) log p∗(x) ' ∫ dx́ δ(x́− x) log p∗(x́) ' log p∗(x) (43)\nBy using these approximation and applying Eqs. 38-39, R in Eq. 37 can be approximated as: R ' ∫ dx p∗(x) ∫ dy e(y|x) log e(y|x)\np∗(x) ' − ∫ dx p∗(x) log p∗(x)− ( − ∫ dx p∗(x) ∫ dy e(y|x) log e(y|x) ) ' H − n\n2 log(βπe)\n' H −D (44) As discussed above, R and D can be specifically derived from β. In addition, Shannon lower bound discussed in Alemi et al. (2018) can be roughly verified in the optimized VAE with clearer notations using β.\nFrom the discussion above, we presume Alemi et al. (2018) might wrongly treat D in their work. They suggest that VAE with β = 1 is sensitive (unstable) becauseD andR can be arbitrary value on the line R = H − βD = H −D; however, our work as well as Tishby et al. (1999) (appendix A.2) and Dai & Wipf (2019)(appendix A.5) show that the differential entropy of the distortion and rate, i.e., D and R, are specifically determined by β after optimization, and R = H − D will hold for any β regardless β = 1 or not. Alemi et al. 
(2018) also suggest D should satisfy D ≥ 0 because D is a distortion; however, we suggest D should be treated as a differential entropy and can be less than 0 because x is once handled as a continuous signal with a stochastic process in Eqs. 35-37. Here, D ' (n/2) log(βπe) can be −∞ if β → 0, as also shown in Dai & Wipf (2019). Thus, upper bound of R at β → 0 is not H , but R = H − (−∞) =∞, as shown in RD theory for a continuous signal. Huang et al. (2020) show this property experimentally in their figures 4-8 such that R seems to diverge if MSE is close to 0." }, { "heading": "A.5 RELATION TO DAI ET AL. (2018) AND DAI & WIPF (2019)", "text": "Our work is consistent with Dai et al. (2018) and Dai & Wipf (2019).\nDai et al. (2018) analyses VAE by assuming a linear model. As a result, the estimated posterior is constant. If the distribution of the manifold is the Gaussian, our work and Dai et al. (2018) give a similar result with constant posterior variances. For non-Gaussian data, however, the quantitative analysis such as probability estimation is intractable using their linear model. Our work reveals that the posterior variance gives a scaling factor between z in VAE and y in the isometric space when VAE is ideally trained with rich parameters. This is validated by Figures 3c and 3d, where the estimation of the posterior variance at each data point is a key.\nNext, the relation to Dai & Wipf (2019) is discussed. They analyse a behavior of VAE when ideally trained. For example, the theorem 5 in their work shows that D → (d/2) log γ + O(1) and R → −(γ̂/2) log γ+O(1) hold if γ → +0, where γ, d, and γ̂ denote a variance of d(x|z), data dimension, and latent dimension, respectively. By setting γ = β/2 and d = γ̂ = n, this theorem is consistent with R and D derived in Eq. 39 and Eq. 44." }, { "heading": "A.6 RELATION TO TRANSFORM CODING (GOYAL, 2001)", "text": "We show the optimum condition of VAE shown in Eq. 14 can be mapped to the optimum condition of transform coding (Goyal, 2001) as shown in Eq. 4. First, the derivation of Eq. 4 is explained by solving the optimal distortion assignment to each dimension. In the transform coding for m dimensional the Gaussian data, an input data x is transformed to z using an orthonormal transform such as KLT/DCT. Then each dimensional component zj is encoded with allowing distortion dj . Let D be a target distortion satisfying D = ∑m j=1 dj . σ 2 zj denotes a variance of each dimensional\ncomponent zj for the input dataset. Then, a rate R can be derived as ∑m j=1 1 2 log(σ 2 zj/dj). By introducing a Lagrange parameter λ and minimizing a rate-distortion optimization costL = D+λR, the optimum condition is derived as:\nλopt = 2D/m, dj = D/m = λopt/2. (45) This result is consistent with Eq. 14 by setting β = λopt = 2D/m. This implies that L′G in Eq. 14 is a rate-distortion optimization (RDO) cost of transform coding when x is deterministically transformed to y in the implicit isometric space and stochastically encoded with a distortion β/2.\nB ESTIMATION OF THE CODING LOSS AND TRANSFORM LOSS IN 1-DIMENSIONAL LINEAR VAE\nThis appendix estimates the coding loss and transform loss in 1-dimensional linear β-VAE for the Gaussian data, and also shows that the result is consistent with the Wiener filter. Let x be a one dimensional data with the normal distribution:\nx ∈ R, x ∼ N (x; 0, σx2) (46)\nLet z be a one dimensional latent variable. 
Following two linear encoder and decoder are provided with constant parameters a, b, and σz to optimize:\nz = ax+ σz where ∼ N ( ; 0, 1), x̂ = bz. (47)\nFirst, KL divergence at x, DKLx is derived. Due to the above relationship, we have\np(z) = N (z; 0, (aσx)2). (48)\nUsing Eq. 6, KL-divergence at x can be evaluated as:\nDKLx = − log(σzp(z))− 1\n2 log 2πe = − log σx +\na2x2\n2 − 1 2 . (49)\nSecond, the reconstruction loss at x Dx is evaluated as:\nDx = E ∼N ( ;0,1)\n[(x− (b(ax+ σz )))2] = ((ab− 1)x)2 + b2σz2. (50)\nThen, the loss objective Lx = Dx + βDKLx is averaged over x ∼ N (x; 0, σx2), and the objective L to minimize is derived as:\nL = E x∼N (x;0,σx2)\n[Lx] = (ab− 1)2σx2 + b2σz2 + β ( − log σz + a2σx 2\n2 − 1 2\n) . (51)\nHere, (ab − 1)2σx2 and b2σz2 in the last equation are corresponding to the transform loss DT and coding loss DC, respectively.\nBy solving dL/da = 0, dL/db = 0, and dL/dσz = 0, a , b, and σz are derived as follows:\na = 1/σx,\nb = σx\n( 1 + √ 1− 2β/σx2 ) 2 ,\nσz = 2 √ β/2\nσx\n( 1 + √ 1− 2β/σx2 ) . (52) From Eq. 52, DT and DC are derived as:\nDT =\n(√ 1− 2β/σx2 − 1\n2\n)2 σx 2,\nDC = β/2. (53)\nAs shown in section 3.3, the added noise, β/2, should be reasonably smaller than the data variance σx 2. If σx2 β, b and σz in Eq. 52 can be approximated as:\nDT ' (β/2)2 σx2 = β/2 σx2 DC. (54)\nAs shown in this equation, DT/DC is small in the VAE where the added noise is reasonably small, and DT can be ignored.\nNext, the relation to the Wiener filter is discussed. We consider an simple 1-dimensional Gaussian process. Let x ∼ N (x; 0, σ2x) be input data. Then, x is scaled by s, and a Gaussian noise n ∼\nN (n; 0, σ2n) is added. Thus, y = s x + n is observed. From the Wiener filter theory, the estimated value with minimum distortion, x̂ can be formulated as:\nx̂ = sσx\n2\ns2σx2 + σn2 y. (55)\nIn this case, the estimation error is derived as:\nE[(x̂− x)2] = σn 4\n(s2σx2 + σn2)2 σx\n2 + s2σx 4\n(s2σx2 + σn2)2 σn\n2 = σx\n2\nσx2 + (σn2/s2) (σn\n2/s2). (56)\nIn the second equation, the first term is corresponding to the transform loss, and the second term is corresponding to the coding loss. Here the ratio of the transform loss and coding loss is derived as σn 2/(s2σx 2). By appying s = 1/σx and σn = σz to σn2/(s2σx2) and assuming σ2x β/2, this ratio can be described as: σn 2\ns2σx2 = σz\n2 = β/2\nσ2x 4( 1 + √ 1− 2β/σx2 )2 = β/2σ2x +O (( β/2 σ2x )2) . (57)\nThis result is consistent with Eq. 54, implying that optimized VAE and the Wiener filter show similar behaviours." }, { "heading": "C DERIVATION OF QUANTITATIVE PROPERTIES IN SECTION 3.5", "text": "" }, { "heading": "C.1 DERIVATION OF THE ESTIMATED VARIANCE", "text": "This appendix explains the derivation of Eq. 21 in Section 3.5. Here, we assume that zj is mapped to yj such that yj is set to 0 at zj = 0. We also assume that the prior distribution is N (z; 0, In). The variance is derived by the subtraction of E[yj ]\n2, the square of the mean, from E[y2j ], the square mean. Thus, the approximations of both E[yj ] and E[y2j ] are needed.\nFirst, the approximation of the mean E[yj ] is explained. Because the cumulative distribution functions (CDFs) of yj are the same as CDF of zj , the following equations hold:∫ 0\n−∞ p(yj)dyj = ∫ 0 −∞ p(zj)dzj = 0.5, ∫ ∞ 0 p(yj)dyj = ∫ ∞ 0 p(zj)dzj = 0.5. (58)\nThis equation means that the median of the yj distribution is 0. Because the mean and median are close in most cases, the mean E[yj ] can be approximated as 0. 
As a result, the variance of yj can be approximated by the square mean E[y2j ].\nSecond, the approximation of the square mean E[y2j ] is explained. The standard deviation of the posterior σj(x) is assumed as a function of zj , regardless of x. This function is denoted as σj(zj). For zj ≥ 0, yj is approximated as follows, using Eq. 11 and replacing the average of 1/σj(źj) over źj = [0, zj ] by 1/σj(zj):\nyj = ∫ zj 0 dyj dźj dźj = √ β 2 ∫ zi 0 1 σj(źj) dźi ' √ β 2 1 σj(zj) ∫ zj 0 dźj = √ β 2 zj σj(zj) . (59)\nThe same approximation is applied to zi < 0. Then the square mean of yi is approximated as follows, assuming that the correlation between σ(zj)\n−2 and zj2 is low:∫ yj 2p(yj)dyj ' β\n2\n∫ ( zj\nσj(zj)\n)2 p(zj)dzj ' β\n2\n∫ σj(zj) −2 p(zj)dzj ∫ zj 2p(zj)dzj . (60)\nFinally, the square mean of yi is approximated as the following equation, using ∫ zj 2p(zj)dzj = 1 and replacing σj(zj) 2 by σj(x)2, i.e., the posterior variance derived from the input data:∫\nyj 2p(yj)dyj '\nβ\n2\n∫ σj(zj) −2 p(zj)dzj ' β\n2 E zj∼p(zj) [σj(zj)\n−2 ] ' β\n2 E x∼p(x) [σj(x)\n−2]. (61)\nAlthough some rough approximations are used in the expansion, the estimated variance in the last equation seems still reasonable, because σj(x) shows a scale factor between yj and zj while the variance of zj is always 1 for the priorN (zj ; 0, 1). Considering the variance of the prior ∫ zj\n2p(zj)dzj in the expansion, this estimation method can be applied to any prior distribution." }, { "heading": "C.2 DERIVATION OF THE DATA PROBABILITY ESTIMATION", "text": "This appendix shows the derivation of variables in Eqs. 22 and 23. First, the derivation of Lx for the input x is described. Then, the PDF ratio between the input space and inner product space is explained for the cases m = n and m > n." }, { "heading": "Derivation of Lx for the input x :", "text": "As shown in in Eq. 1,Lx is denoted as−Ez∼qφ(z|x)[ · ]+βDKL( · ). We approximateEz∼qφ(z|x)[ · ] as 12 (D(x,Decθ(µx +σx)) +D(x,Decθ(µx −σx))), i.e., the average of two samples, instead of the average over z ∼ qφ(z|x). DKL( · ) can be calculated from µx and σx using Eq. 2." }, { "heading": "The PDF ratio in the case m = n:", "text": "The PDF ratio form = n is a Jacobian determinant between two spaces. First, (∂x∂y ) TGx( ∂x ∂y ) = Im holds from Eq. 12. |∂x/∂y|2 |Gx| = 1 also holds by calculating the determinant. Finally, |∂x/∂y| is derived as |Gx|1/2 using |∂y/∂x| = |∂x/∂y|−1. The PDF ratio in the case m > n andGx = axIm: Although the strict derivation needs the treatment of the Riemannian manifold, we provide a simple explanation in this appendix. Here, it is assumed that DKL(j)(·) > 0 holds for all j = [1, ..n]. If DKL(j)(·) = 0 for some j, n is replaced by the number of latent variables with DKL(j)(·) > 0.\nFor the implicit isometric space Siso(⊂ Rm), there exists a matrix Lx such that both y = Lxx and Gx =\ntLxLx holds. w denotes a point in Siso, i.e., w ∈ Siso. Because Gx is assumed as axIm in Section 3.5, Lx = ax1/2Im holds. Then, the mapping function w = h(x) between Sinput and Siso is defined, such that:\n∂h(x) ∂x = ∂w ∂x = Lx, and h(x(0)) = w(0) for ∃ x(0) ∈ Sinput and ∃ w(0) ∈ Siso. (62)\nLet δx and δw are infinitesimal displacements around x and w = h(x), such that w + δw = h(x+ δx). Then the next equation holds from Eq. 62:\nδw = Lxδx (63)\nLet δx(1), δx(2), δw(1), and δw(2) be two arbitrary infinitesimal displacements around x and w = h(x), such that δw(1) = Lxδx(1) and δw(2) = Lxδx(2). 
Then the following equation holds, where · denotes the dot product.\ntδx(1)Gxδx (2) = t(Lxδx (1))(Lxδx (2)) = δw(1) · δw(2) (64)\nThis equation shows the isometric mapping from the inner product space for x ∈ Sinput with the metric tensorGx to the Euclidean space for w ∈ Siso. Note that all of the column vectors in the Jacobian matrix ∂x/∂y also have a unit norm and are orthogonal to each other in the metric space for x ∈ Sinput with the metric tensor Gx. Therefore, the m×n Jacobian matrix ∂w/∂y should have a property that all of the column vectors have a unit norm and are orthogonal to each other in the Euclidean space.\nThen n-dimensional space which is composed of the meaningful dimensions from the implicit isometric space is named as the implicit orthonormal space Sortho. Figure 7 shows the projection of the volume element from the implicit orthonormal space to the isometric space and input space. Let dVortho be an infinitesimal n-dimensional volume element in Sortho. This volume element is a n-dimensional rectangular solid having each edge length dyj . Let Vn(dVX) be the n-dimensional volume of a volume element dVX. Then, Vn(dVortho) = ∏n j dyj holds. Next, dVortho is projected to n dimensional infinitesimal element dViso in Siso by ∂w/∂y. Because of the orthonormality, dViso is equivalent to the rotation / reflection of dVortho, and Vn(dViso) is the same as Vn(dVortho), i.e., ∏n j dyj . Then, dViso is projected to n-dimensional element dVinput in Sinput by ∂x/∂w = L−1x = ax −1/2Im. Because each dimension is scaled equally by the scale factor ax −1/2, Vn(dVinput) = ∏n j ax −1/2dyj = ax −n/2 Vn(dVortho) holds. Here, the ratio of the volume element between Sinput and Sortho is Vn(dVinput)/Vn(dVortho) = ax−n/2. Note that the PDF ratio is derived by the reciprocal of Vn(dVinput)/Vn(dVortho). As a result, the PDF ratio is derived as axn/2." }, { "heading": "D DETAILS OF THE NETWORKS AND TRAINING CONDITIONS IN THE EXPERIMENTS", "text": "This appendix explains the networks and training conditions in Section 4." }, { "heading": "D.1 TOY DATA SET", "text": "This appendix explains the details of the networks and training conditions in the experiment of the toy data set in Section 4.1." }, { "heading": "Network configurations:", "text": "FC(i, o, f) denotes a FC layer with input dimension i, output dimension o, and activate function f.\nThe encoder network is composed of FC(16, 128, tanh)-FC(128, 64, tahh)-FC(64, 3, linear)×2 (for µ and σ). The decoder network is composed of FC(3, 64, tanh)-FC(64, 128, tahh)-FC(128, 16, linear)." }, { "heading": "Training conditions:", "text": "The reconstruction loss D(·, ·) is derived such that the loss per input dimension is calculated and all of the losses are averaged by the input dimension m = 16. The KL divergence is derived as a summation of DKL(j)(·) as explained in Eq. 2. In our code, we use essentially the same, but a constant factor scaled loss objective from the original β-VAE form Lx = D(·, ·) + βDKL(j)(·) in Eq. 1, such as:\nLx = λ D(·, ·) +DKL(j)(·). (65)\nEquation 65 is essentially equivalent to L = D(·, ·) + βDKL(j)(·), multiplying a constant λ = β−1 to the original form. The reason why we use this form is as follows. Let ELBOtrue be the true ELBO in the sense of log-likelihood, such as E[log p(x)]. As shown in Section 3.3, the minimum of the loss objective in the original β-VAE form is likely to be a −βELBOtrue + Constant. If we use Eq. 65, the minimum of the loss objective will be −ELBOtrue + Constant, which seems more natural form of ELBO. 
" }, { "heading": "D.2 CELEBA DATA SET", "text": "This appendix explains the details of the networks and training conditions in the experiment on the CelebA data set in Section 4.2." }, { "heading": "Network configurations:", "text": "CNN(w, h, s, c, f) denotes a CNN layer with kernel size (w, h), stride size s, dimension c, and activation function f. GDN and IGDN2 are activation functions designed for image compression (Ballé et al., 2016). This activation function is effective and popular in deep image compression studies.

The encoder network is composed of CNN(9, 9, 2, 64, GDN) - CNN(5, 5, 2, 64, GDN) - CNN(5, 5, 2, 64, GDN) - CNN(5, 5, 2, 64, GDN) - FC(1024, 1024, softplus) - FC(1024, 32, None)×2 (for µ and σ).

The decoder network is composed of FC(32, 1024, softplus) - FC(1024, 1024, softplus) - CNN(5, 5, 2, 64, IGDN) - CNN(5, 5, 2, 64, IGDN) - CNN(5, 5, 2, 64, IGDN) - CNN(9, 9, 2, 3, IGDN)." }, { "heading": "Training conditions:", "text": "In this experiment, the SSIM explained in Appendix H.2 is used as a reconstruction loss. The reconstruction loss D(·, ·) is derived as follows. Let SSIM be the SSIM value calculated from two input images. Then 1 − SSIM is set to D(·, ·). The KL divergence is derived as a summation of D_KL(j)(·) as explained in Eq. 2.

We also use the loss form of Equation 65 in our code. In the case of the decomposed loss, the loss function L_x is set to λ(D(x, x̆) + D(x̆, x̂)) + D_KL(·) in our code.
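A minimal NumPy sketch of this decomposed objective is shown below; it uses a single-window SSIM over the whole image for brevity (the experiments use windowed SSIM as in Appendix H.2), and the stabilizing eps terms are our addition, not part of Eq. 73:

    import numpy as np

    def ssim_whole(x, y, eps=1e-8):
        # Single-window SSIM following Eq. 73; eps is our addition for
        # numerical stability and is not in the paper's formula.
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return (2 * mx * my / (mx**2 + my**2 + eps)) * (2 * cov / (vx + vy + eps))

    def decomposed_loss(x, x_mid, x_hat, mu, sigma, lam=1000.0):
        # L_x = lam * (D(x, x_mid) + D(x_mid, x_hat)) + D_KL with D = 1 - SSIM;
        # D_KL(j) = 0.5 * (mu_j^2 + sigma_j^2 - log sigma_j^2 - 1) as in Eq. 2.
        d = (1.0 - ssim_whole(x, x_mid)) + (1.0 - ssim_whole(x_mid, x_hat))
        dkl = 0.5 * np.sum(mu**2 + sigma**2 - np.log(sigma**2) - 1.0)
        return lam * d + dkl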
Then, the network is trained with λ = β⁻¹ = 1,000 using a batch size of 64 for 300,000 iterations. Here, the Adam optimizer is used with a learning rate of 1e-3.

We use a PC with an Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz, 12GB memory, equipped with an NVIDIA GeForce GTX 1080. The simulation time for each trial is about 180 minutes, including the statistics evaluation codes.

In our experiments, λ = β⁻¹ = 1,000 seems large. This is caused by the use of SSIM. As explained in Appendix H.2, SSIM is measured for a whole image, and its range is between 0 and 1. The order of 1 − SSIM is almost equivalent to the mean square error per pixel, as shown in Eq. 74. As explained in Appendix D.1, the KL divergence is thought of as a rate for the whole image. Considering the number of pixels in an image, β′ = (λ/(64 × 64))⁻¹ = 4096/λ = 4.096 is comparable to β in the general form of VAE.

2Google provides a code in the official Tensorflow library (https://github.com/tensorflow/compression)" }, { "heading": "E ADDITIONAL RESULTS IN THE TOY DATASETS", "text": "" }, { "heading": "E.1 SCATTERING PLOTS FOR THE SQUARE ERROR LOSS IN SECTION 4.1", "text": "Figure 8a shows the plots of p(x) and the estimated probabilities for the square error coding loss in Section 4.1, where the scale factor a_x in Eq. 23 is 1. Thus, both exp(−L_x/β) and p(µ(x)) ∏_j σ_j(x) show a high correlation, allowing easy estimation of the data probability in the input space. In contrast, p(µ(x)) still shows a low correlation. These results are consistent with our theory.

E.2 ABLATION STUDY USING 3 TOY DATASETS, 3 CODING LOSSES, AND 10 β PARAMETERS.

In this appendix, we explain the ablation study for the toy datasets. We introduce three toy datasets and three coding losses, including those used in Section 4.1. We also change β⁻¹ = λ from 1 to 1,000 in training. The details of the experimental conditions are as follows.

Datasets: First, we call the toy dataset used in Section 4.1 the Mix dataset in order to distinguish the three datasets. The second dataset is generated such that three-dimensional variables s_1, s_2, and s_3 are sampled in accordance with the distributions p(s_1), p(s_2), and p(s_3) in Figure 9. The variances of the variables are the same as those of the Mix dataset, i.e., 1/6, 2/3, and 8/3, respectively. We call this the Ramp dataset. Because the PDF shape of this dataset is quite different from the prior N(z; 0, I_3), the fitting will be the most difficult among the three. The third dataset is generated such that three-dimensional variables s_1, s_2, and s_3 are sampled in accordance with the normal distributions N(s_1; 0, 1/6), N(s_2; 0, 2/3), and N(s_3; 0, 8/3), respectively. We call this the Norm dataset. The fitting will be the easiest, because both the prior and the input have normal distributions, and the posterior standard deviation, given by the PDF ratio at the same CDF, can be a constant.

Coding losses: Two of the three coding losses are the square error loss and the downward-convex loss described in Section 4.1. The third coding loss is an upward-convex loss, which we design as Eq. 66 such that the scale factor a_x becomes the reciprocal of the scale factor in Eq. 24:

D(x, x̂) = a_x ‖x − x̂‖₂², where a_x = (2/3 + 2‖x‖₂²/21)⁻¹ and G_x = a_x I_m. (66)

Figure 10 shows the scale factors a_x in Eqs. 24 and 66, where s_1 in x = (s_1, 0, 0) moves within ±5.

Parameters: As explained in Appendix D.1, λ = 1/β is used as a hyperparameter. Specifically, λ = 1, 2, 5, 10, 20, 50, 100, 200, 500, and 1,000 are used.
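For reference, the Norm dataset described above can be generated as follows (a sketch; the Mix and Ramp datasets follow the same recipe with their respective distributions):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    # Norm dataset: s_1, s_2, s_3 drawn from N(0, 1/6), N(0, 2/3), N(0, 8/3).
    s = np.stack([rng.normal(0.0, np.sqrt(v), n) for v in (1/6, 2/3, 8/3)], axis=1)
    var = s.var(axis=0)
    print(var / var[0])  # approximately [1, 4, 16]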
Figures 11 - 19 show the property measurements for all combinations of the datasets and coding losses with changing λ. In each figure, the estimated norms of the implicit transform are shown in figure (a), the ratios of the estimated variances are shown in figure (b), and the correlation coefficients between p(x) and the estimated data probabilities are shown in figure (c), respectively.

First, the estimated norm of the implicit transform in the figures (a) is discussed. In all conditions, the norms are close to 1, as described in Eq. 20, in the λ range 50 to 1,000. These results show consistency with our theoretical analysis, supporting the existence of the implicit orthonormal transform. The values in the Norm dataset are the closest to 1, and those in the Ramp dataset are the most different, which seems consistent with the difficulty of the fitting.

Second, the ratio of the estimated variances is discussed. In the figures (b), Var(z_j) denotes the estimated variance, given by the average of σ_j(x)⁻². Then, Var(z_2)/Var(z_1) and Var(z_3)/Var(z_1) are plotted. In all conditions, the ratios Var(z_2)/Var(z_1) and Var(z_3)/Var(z_1) are close to the variance ratios of the input variables, i.e., 4 and 16, in the λ range 5 to 500. Figure 20 shows a detailed comparison of the ratio for the three datasets and three coding losses at λ = 100. In most cases, the estimated variances in the downward-convex loss are the smallest, and those in the upward-convex loss are the largest, which is more distinct for Var(z_3)/Var(z_1). This can be explained as follows. When using the downward-convex loss, the space region with a large norm is thought of as shrinking in the inner product space, as described in Section 4.1. This will make the variance smaller. In contrast, when using the upward-convex loss, the space region with a large norm is thought of as expanding in the inner product space, making the variance larger. Here, the dependency of the ratio changes on the losses is smaller in the Norm dataset. The possible reason is that data in the normal distribution concentrate around the center, having less effect on the loss scale factor in the downward-convex and upward-convex losses.

Third, the correlation coefficients between p(x) and the estimated data probabilities in the figures (c) are discussed. In the Mix dataset and Ramp dataset, the correlation coefficients are around 0.9 in the λ range from 20 to 200 when the estimated probabilities a_x^{n/2} p(µ(x)) ∏_{j=1}^{n} σ_j(x) and a_x^{n/2} exp(−(1/β) L_x) in Eq. 23 are used. When using p(µ(x)) ∏_{j=1}^{n} σ_j(x) and exp(−(1/β) L_x) in the downward-convex and upward-convex losses, the correlation coefficients become worse. In addition, when using the prior probability p(µ(x)), the correlation coefficients are always the worst. In the Norm dataset, the correlation coefficients are close to 1.0 in a wider range of λ when using the estimated distribution in Eq. 23. When using p(µ(x)) ∏_{j=1}^{n} σ_j(x) and exp(−(1/β) L_x) in the downward-convex and upward-convex losses, the correlation coefficients also become worse. When using the prior probability p(µ(x)), however, the correlation coefficients are close to 1, in contrast to the other two datasets. This can be explained because both the input distribution and the prior distribution are the same normal distribution, allowing the posterior variances to be almost constant. These results also show consistency with our theoretical analysis.
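The correlation evaluation in the figures (c) amounts to the following computation (a sketch; p_true, L_x, and a_x would come from the ground-truth density and the trained model, and the arrays below are toy stand-ins):

    import numpy as np

    def probability_correlation(p_true, L_x, a_x, beta, n=3):
        # Correlation between p(x) and the estimate a_x^{n/2} exp(-L_x / beta)
        # from Eq. 23, as plotted in the figures (c).
        p_est = a_x ** (n / 2) * np.exp(-L_x / beta)
        return np.corrcoef(p_true, p_est)[0, 1]

    rng = np.random.default_rng(0)
    p_true = rng.random(1000)
    L_x = -0.01 * np.log(p_true) + 0.001 * rng.random(1000)  # toy stand-in
    print(probability_correlation(p_true, L_x, a_x=np.ones(1000), beta=0.01))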
Figure 21 shows the dependency of the coding loss on β for the Mix, Ramp, and Norm datasets using the square error loss. From D_G in Eq. 14 and n = 3, the theoretical value of the coding loss is 3β/2, as also shown in the figure. Unlike Figs. 11-19, the x-axis is β = λ⁻¹ to evaluate the linearity. As expected in Section 3.3, the coding losses are close to the theoretical value where β < 0.1, i.e., λ > 10.

Figure 22 shows the dependency of the ratio of the transform loss to the coding loss on β for the Mix, Ramp, and Norm datasets using the square error loss. From Eq. 54, the estimated transform loss is ∑_{i=1}^{3} (β/2)²/Var(s_i) = 63β²/32. Thus the theoretical value of the ratio is (63β²/32)/(3β/2) = 21β/16, as is also shown in the figure. The x-axis is also β = λ⁻¹, as in Figure 21.

Considering the correlation coefficients discussed above, the useful range of β seems to be between 0.005-0.05 (20-200 for λ). In this range, the ratio is less than 0.1, implying that the transform loss is almost negligible. As expected in Section 3.2 and Appendix B, the ratio is close to the theoretical value where β > 0.01, i.e., λ < 100. For β < 0.01, the transform loss is still negligibly small, but the ratio is somewhat off the theoretical value. The reason is presumably that the transform loss is too small to fit the network.

As shown above, this ablation study strongly supports our theoretical analysis in Section 3." }, { "heading": "F ADDITIONAL RESULTS IN CELEBA DATASET", "text": "" }, { "heading": "F.1 TRAVERSED OUTPUTS FOR ALL THE COMPONENTS IN THE EXPERIMENT OF SECTION 4.2", "text": "Figure 23 shows decoder outputs for all the components, where each latent variable is traversed from −2 to 2. The estimated variance of each y_j, i.e., σ_j⁻², is also shown in these figures. The latent variables z_j are numbered in descending order by the estimated variances. Figure 23a is a result using the conventional loss form, i.e., L_x = D(x, x̂) + β D_KL(·). The degrees of change seem to descend in accordance with the estimated variances. In the range where j is 1 to 10, the degrees of change are large. In the range j > 10, the degrees of change become gradually smaller. Furthermore, almost no change is observed in the range j > 27. As shown in Figure 4, D_KL(j)(·) is close to zero for j > 27, meaning no information. Thus, this result is clearly consistent with our theoretical analysis in Section 3.

Figure 23b is a result using the decomposed loss form, i.e., L_x = D(x, x̆) + D(x̆, x̂) + β D_KL(·). The degrees of change also seem to descend in accordance with the estimated variances. Looking at the details, there are still minor changes even at j = 32. As shown in Figure 5, the KL divergences D_KL(j)(·) for all the components are larger than zero. This implies that all of the dimensional components have meaningful information. Therefore, we can see a minor change even at j = 32. Thus, this result is also consistent with our theoretical analysis.

Another minor difference is sharpness. Although a quantitative comparison is difficult, the decoded images in Figure 23b seem somewhat sharper than those in Figure 23a. A possible reason for this minor difference is as follows. The transform loss D(x, x̆) serves to bring the decoded image of µ(x) closer to the input. In conventional image coding, the orthonormal transform and its inverse transform are used for encoding and decoding, respectively. Therefore, the input and the decoded output are equivalent when quantization is not used. If not, the quality of the decoded image will suffer from degradation. Considering this analogy, the use of the decomposed loss might improve the decoded images for µ(x), encouraging an improvement of the orthonormality of the encoder/decoder in VAE." }, { "heading": "F.2 ADDITIONAL EXPERIMENTAL RESULT WITH OTHER CONDITION", "text": "In this section, we provide experimental results with another condition. We use essentially the same conditions as described in Appendix D.2, except for the following. The bottleneck size and λ are set to 256 and 10,000, respectively. The encoder network is composed of CNN(9, 9, 2, 64, GDN) - CNN(5, 5, 2, 64, GDN) - CNN(5, 5, 2, 64, GDN) - CNN(5, 5, 2, 64, GDN) - FC(1024, 2048, softplus) - FC(2048, 256, None)×2 (for µ and σ).
The decoder network is composed of FC(256, 2048, softplus) - FC(2048, 1024, softplus) - CNN(5, 5, 2, 64, IGDN) - CNN(5, 5, 2, 64, IGDN) - CNN(5, 5, 2, 64, IGDN) - CNN(9, 9, 2, 3, IGDN).

Figures 24a and 24b show the averages of σ_j(x)⁻² as well as the average and the standard deviation of (2/β) σ_j(x)² D′_j(z) in the conventional loss form and the decomposed loss form, respectively. When using the conventional loss form, the mean of (2/β) σ_j(x)² D′_j(z) is 1.25, which is closer to 1 than the mean 1.83 in Section 4.2. This suggests that the implicit transform is closer to orthonormal. The possible reason is that a bigger reconstruction error is likely to cause interference with the rate-distortion trade-off and a slight violation of the theory, and this might be compensated with a larger λ. When using the decomposed loss form, the mean of (2/β) σ_j(x)² D′_j(z) is 0.95, meaning an almost unit norm. These results also support that VAE provides the implicit orthonormal transform even if λ or the bottleneck size is varied." }, { "heading": "G ADDITIONAL EXPERIMENTAL RESULT WITH MNIST DATASET", "text": "In this appendix, we provide the experimental result of Section 4.2 with the MNIST dataset3, which consists of binary hand-written digits with a dimension of 784 (= 28 × 28). We use the standard training split, which includes 50,000 data points. For the reconstruction loss, we use the binary cross entropy loss (BCE) for the Bernoulli distribution. We average the BCE by the number of pixels.

The encoder network is composed of FC(784, 1024, relu) - FC(1024, 1024, relu) - FC(1024, bottleneck size). The decoder network is composed of FC(bottleneck size, 1024, relu) - FC(1024, 1024, relu) - FC(1024, 784, sigmoid). The batch size is 256 and the number of training iterations is 50,000. In this section, results with two parameter settings, (bottleneck size = 32, λ = 2,000) and (bottleneck size = 64, λ = 10,000), are provided. Note that, since we average the BCE loss by the number of pixels, β in the conventional β-VAE is derived by 784/λ. Then, the model is optimized by the Adam optimizer with a learning rate of 1e-3, using the conventional (not decomposed) loss form.

We use a PC with an Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz, 12GB memory, equipped with an NVIDIA GeForce GTX 1080. The simulation time for each trial is about 10 minutes, including the statistics evaluation codes.

Figure 25 shows the averages of σ_j(x)⁻² as well as the average and the standard deviation of (2/β) σ_j(x)² D′_j(z). In both conditions, the means of the (2/β) σ_j(x)² D′_j(z) averages are also close to 1, except in the dimensions where σ_j(x)⁻² is less than 10. These results suggest that the theoretical property still holds when using the BCE loss. In the dimensions where σ_j(x)⁻² is less than 10, (2/β) σ_j(x)² D′_j(z) is somewhat lower than 1. The possible reason is that D_KL(j)(·) in such a dimension is 0 for some inputs and larger than 0 for other inputs. The understanding of this transition region needs further study.

3http://yann.lecun.com/exdb/mnist/" }, { "heading": "H DERIVATION/EXPLANATION IN RDO-RELATED EQUATION EXPANSIONS", "text": "" }, { "heading": "H.1 APPROXIMATION OF DISTORTION IN UNIFORM QUANTIZATION", "text": "Let T be a quantization step. The quantized value ẑ_j is derived as kT, where k = round(z_j/T). Then d, the distortion per channel, is approximated by

d = ∑_k ∫_{(k−1/2)T}^{(k+1/2)T} p(z_j)(z_j − kT)² dz_j ≃ ∑_k T p(kT) ∫_{(k−1/2)T}^{(k+1/2)T} (1/T)(z_j − kT)² dz_j = (T²/12) ∑_k T p(kT) ≃ T²/12. (67)

Here, ∑_k T p(kT) ≃ ∫_{−∞}^{∞} p(z_j) dz_j = 1 is used.
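A quick Monte Carlo check of the T²/12 approximation in Eq. 67 (a sketch; the step size T is an arbitrary choice):

    import numpy as np

    rng = np.random.default_rng(0)
    T = 0.1                                 # quantization step (our choice)
    z = rng.standard_normal(1_000_000)      # z_j ~ N(0, 1)
    err = z - T * np.round(z / T)           # uniform quantization error
    print(np.mean(err**2), T**2 / 12)       # both approximately 8.3e-4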
The distortion for a given quantized value is also estimated as T²/12, because this value is approximated by ∫_{(k−1/2)T}^{(k+1/2)T} (1/T)(z_j − kT)² dz_j." }, { "heading": "H.2 APPROXIMATION OF RECONSTRUCTION LOSS AS A QUADRATIC FORM.", "text": "In this appendix, the approximations of the reconstruction losses as a quadratic form ᵗδx G_x δx + C_x are explained for the sum of square error (SSE), binary cross entropy (BCE), and structural similarity (SSIM). Here, we have borrowed the derivations of BCE and SSIM from Kato et al. (2020) and add some explanation and clarification to them for convenience. We also describe the log-likelihood of the Gaussian distribution.

Let x̂ and x̂_i be the decoded sample Dec_θ(z) and its i-th dimensional component, respectively. δx and δx_i denote x − x̂ and x_i − x̂_i, respectively. It is also assumed that δx and δx_i are infinitesimal. The details of the approximations are described as follows." }, { "heading": "Sum square error:", "text": "In the case of the sum square error, G_x is equal to I_m. This can be derived as:

∑_{i=1}^{m} (x_i − x̂_i)² = ∑_{i=1}^{m} δx_i² = ᵗδx I_m δx. (68)" }, { "heading": "Binary cross entropy:", "text": "Binary cross entropy is the log-likelihood of the Bernoulli distribution. The Bernoulli distribution is described as:

p_θ(x|z) = ∏_{i=1}^{m} x̂_i^{x_i} (1 − x̂_i)^{1−x_i}. (69)

Then, the binary cross entropy (BCE) can be expanded as:

−log p_θ(x|z) = −log ∏_{i=1}^{m} x̂_i^{x_i} (1 − x̂_i)^{1−x_i} = ∑_{i=1}^{m} (−x_i log x̂_i − (1 − x_i) log(1 − x̂_i)) = ∑_i (−x_i log(x_i + δx_i) − (1 − x_i) log(1 − x_i − δx_i)) = ∑_i (−x_i log(1 + δx_i/x_i) − (1 − x_i) log(1 − δx_i/(1 − x_i))) + ∑_i (−x_i log x_i − (1 − x_i) log(1 − x_i)). (70)

Figure 26: Graph of (1/2)(1/x + 1/(1 − x)) in the BCE approximation.

Here, the second term of the last equation is a constant C_x depending on x. Using log(1 + x) = x − x²/2 + O(x³), the first term of the last equation is further expanded as follows:

∑_i (−x_i (δx_i/x_i − δx_i²/(2x_i²)) − (1 − x_i)(−δx_i/(1 − x_i) − δx_i²/(2(1 − x_i)²)) + O(δx_i³)) = ∑_i ((1/2)(1/x_i + 1/(1 − x_i)) δx_i² + O(δx_i³)). (71)

As a result, the metric tensor G_x can be approximated as the following positive definite Hermitian matrix, i.e., the diagonal matrix whose i-th diagonal entry is (1/2)(1/x_i + 1/(1 − x_i)):

G_x = diag((1/2)(1/x_1 + 1/(1 − x_1)), (1/2)(1/x_2 + 1/(1 − x_2)), ...). (72)

Here, the loss factor in each dimension, (1/2)(1/x_i + 1/(1 − x_i)), is a downward-convex function as shown in Figure 26." }, { "heading": "Structural similarity (SSIM):", "text": "Structural similarity (SSIM) (Wang et al., 2001) is widely used as a picture quality metric which is close to human subjective quality. Let SSIM be the SSIM value between two pictures. The range of the SSIM value is between 0 and 1; the higher the value, the better the quality. In this appendix, we also show that (1 − SSIM) can be approximated by a quadratic form ᵗδx G_x δx. SSIM_{N×N(h,v)}(x, y) denotes the SSIM value between N × N windows in pictures X and Y, where x ∈ R^{N²} and y ∈ R^{N²} denote the N × N pixels cropped from the top-left coordinate (h, v) in the images X and Y, respectively. Let µ_x, µ_y be the averages of all dimensional components of x, y, and σ_x², σ_y² be the variances of all dimensional components of x, y in the N × N windows, respectively. Then, SSIM_{N×N(h,v)}(x, y) is derived as

SSIM_{N×N(h,v)}(x, y) = (2µ_x µ_y/(µ_x² + µ_y²)) · (2σ_xy/(σ_x² + σ_y²)). (73)

In order to calculate the SSIM value for a whole picture, the window is shifted over the picture and all of the SSIM values are averaged.
Therefore, if (1 − SSIM_{N×N(h,v)}(x, y)) is expressed as a quadratic form ᵗδx G_{(h,v)x} δx, then (1 − SSIM) can also be expressed in the quadratic form ᵗδx G_x δx.

Let δx be a minute displacement of x. µ_δx and σ_δx² denote the average and the variance of all dimensional components of δx, respectively. Then, the SSIM between x and x + δx can be approximated as:

SSIM_{N×N(h,v)}(x, x + δx) ≃ 1 − µ_δx²/(2µ_x²) − σ_δx²/(2σ_x²) + O((|δx|/|x|)³). (74)

Then µ_δx² and σ_δx² can be expressed as

µ_δx² = ᵗδx M δx, where M = (1/N²) · (the all-one matrix), (75)

and

σ_δx² = ᵗδx V δx, where V = (1/N) I_N − M, (76)

respectively. As a result, (1 − SSIM_{N×N(h,v)}(x, x + δx)) can be expressed in the following quadratic form:

1 − SSIM_{N×N(h,v)}(x, x + δx) ≃ ᵗδx G_{(h,v)x} δx, where G_{(h,v)x} = (1/(2µ_x²)) M + (1/(2σ_x²)) V. (77)

It is noted that M is a positive definite Hermitian matrix and V is a positive semidefinite Hermitian matrix. Therefore, G_{(h,v)x} is a positive definite Hermitian matrix. As a result, (1 − SSIM) can also be expressed in the quadratic form ᵗδx G_x δx, where G_x is a positive definite Hermitian matrix." }, { "heading": "Log-likelihood of Gaussian distribution:", "text": "The Gaussian distribution is described as:

p_θ(x|z) = ∏_{i=1}^{m} (1/√(2πσ²)) e^{−(x_i − x̂_i)²/(2σ²)} = ∏_{i=1}^{m} (1/√(2πσ²)) e^{−δx_i²/(2σ²)}, (78)

where σ² is a variance given as a hyperparameter. Then, the negative log-likelihood of the Gaussian distribution is denoted as:

−log p_θ(x|z) = −log ∏_{i=1}^{m} (1/√(2πσ²)) e^{−δx_i²/(2σ²)} = (1/(2σ²)) ∑_{i=1}^{m} δx_i² + (m/2) log(2πσ²). (79)

The first term can be rewritten as (1/(2σ²)) ᵗδx I_m δx. Thus, G_x = (1/(2σ²)) I_m holds. C_x is derived as the second term of the last equation in Eq. 79." }, { "heading": "H.3 DETAILED EXPLANATION OF KL DIVERGENCE AS A RATE OF ENTROPY CODING.", "text": "This appendix explains in detail how the KL divergence can be interpreted as a rate in transform coding. In transform coding, the input data is transformed by an orthonormal transform. Then, the transformed data is quantized, and an entropy code is assigned to the quantized symbol, such that the length of the entropy code is equivalent to the negative logarithm of the estimated symbol probability.

It is generally intractable to derive the rate and distortion of individual symbols in ideal information coding. Thus, we first discuss the case of uniform quantization. Let P_zj and R_zj be the probability and rate in the uniform quantization coding of z_j ∼ N(z_j; 0, 1). Here, µ_j(x) and σ_j(x)² are regarded as a quantized value and a coding noise after the uniform quantization, respectively. Let T be a quantization step size. The coding noise after quantization is T²/12 for the quantization step size T, as explained in Appendix H.1. Thus, T is derived as T = 2√3 σ_j(x) from σ_j(x)² = T²/12.

We also assume σ_j(x)² ≪ 1. As shown in Fig. 27a, P_zj is denoted by ∫_{µ_j(x)−T/2}^{µ_j(x)+T/2} p(z_j) dz_j, where p(z_j) is N(z_j; 0, 1). Using Simpson's numerical integration method and the expansion e^x = 1 + x + O(x²), P_zj is approximated as:

P_zj ≃ (T/6)(p(µ_j(x) − T/2) + 4 p(µ_j(x)) + p(µ_j(x) + T/2)) = (T p(µ_j(x))/6)(4 + e^{(4µ_j(x)T − T²)/8} + e^{(−4µ_j(x)T − T²)/8}) ≃ T p(µ_j(x))(1 − T²/24) = √(6/π) σ_j(x) e^{−µ_j(x)²/2} (1 − σ_j(x)²/2). (80)

Using the expansion log(1 + x) = x + O(x²), R_zj is derived as:

R_zj = −log P_zj ≃ (1/2)(µ_j(x)² + σ_j(x)² − log σ_j(x)² − log(6/π)) = D_KL j(x)(·) + (1/2) log(πe/6). (81)

When R_zj and D_KL j(x)(·) in Eq. 2 are compared, both equations are equivalent except for a small constant difference (1/2) log(πe/6) ≃ 0.176 for each dimension.
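This constant offset can be checked numerically (a sketch; the values of µ_j(x) and σ_j(x) are arbitrary example choices):

    import math

    mu, sigma = 0.8, 0.05                      # example posterior statistics
    T = 2 * math.sqrt(3) * sigma               # step size with sigma^2 = T^2/12
    cdf = lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2)))  # standard normal CDF
    P = cdf(mu + T / 2) - cdf(mu - T / 2)      # P_zj by exact integration
    R = -math.log(P)                           # rate of the quantized symbol
    dkl = 0.5 * (mu**2 + sigma**2 - math.log(sigma**2) - 1.0)  # Eq. 2 per dim.
    print(R - dkl, 0.5 * math.log(math.pi * math.e / 6))       # both about 0.176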
As a result, the KL divergence for the j-th dimension is equivalent to the rate for uniform quantization coding, up to a small constant difference.

To make the theoretical analysis easier, we use the simpler approximation P_zj = T p(µ_j(x)) = 2√3 σ_j(x) p(µ_j(x)) instead of Eq. 80, as shown in Fig. 27b. Then, R_zj is derived as:

R_zj = −log(2√3 σ_j(x) p(µ_j(x))) = Eq. 6 + (1/2) log(πe/6). (82)

This equation also means that the approximation of the KL divergence in Eq. 6 is equivalent to the rate in uniform quantization coding with the approximation P_zj = 2√3 σ_j(x) p(µ_j(x)), allowing the same small constant difference as in Eq. 81. It is noted that the approximation P_zj = 2√3 σ_j(x) p(µ_j(x)) in Figure 27b can be applied to any kind of prior PDF, because there is no explicit assumption on the prior PDF. This implies that the theoretical discussion after Eq. 6 in the main text will hold for arbitrary prior PDFs.

Finally, the meaning of the small constant difference (1/2) log(πe/6) in Eqs. 81 and 82 is shown. Pearlman & Said (2011) explain that the difference of the rate between ideal information coding and uniform quantization is (1/2) log(πe/6). This is caused by the entropy difference of the noise distributions. In the ideal case, the noise distribution is known to be Gaussian. In the case where the noise variance is σ², the entropy of the Gaussian noise is (1/2) log(2πeσ²). For uniform quantization with a uniform noise distribution, the entropy is (1/2) log(12σ²). As a result, the difference is just (1/2) log(πe/6). Because the rate estimation in this appendix uses uniform quantization, the small offset (1/2) log(πe/6) can be regarded as the difference between ideal information coding and uniform quantization. As a result, the KL divergence in Eq. 2 and Eq. 6 can be regarded as a rate in ideal information coding for the symbol with mean µ_j(x) and variance σ_j(x)²." } ]
2020
null
SP:9f4c8080e3e3b45abdd1d906312bd1271670a805
[ "This paper considers a certain generalization of convolutional neural networks and equivariant linear networks to the infinite dimensional case, while covering also the discrete case, and offers a universality result. In more detail, the paper first characterizes equivariant maps as the unique extensions of \"generator\", namely regular maps that provide target functions (or vector) defined over a basic domain. In other words, any map that takes functions (or vectors) into functions (or vectors) defined over the representatives from the symmetry's equivalent classes can be extended uniquely to an equivariant map by enlarging its target domain according to the equivariance rule. Second, infinite dimensional fully connected networks (FNNs) and general (equivariant) convolution neural networks (CNNs) are described. The main result of the paper is the Conversion Theorem (Theorem 11), and its consequences. The theorem specify the conditions under which an FNN can be approximated by a CNN. Since FNNs are known to be universal this implies universality of CNNs. " ]
Group symmetry is inherent in a wide variety of data distributions. Data processing that preserves symmetry is described as an equivariant map and often effective in achieving high performance. Convolutional neural networks (CNNs) have been known as models with equivariance and shown to approximate equivariant maps for some specific groups. However, universal approximation theorems for CNNs have been separately derived with individual techniques according to each group and setting. This paper provides a unified method to obtain universal approximation theorems for equivariant maps by CNNs in various settings. As its significant advantage, we can handle non-linear equivariant maps between infinite-dimensional spaces for non-compact groups.
[]
[ { "authors": [ "Andrew R Barron" ], "title": "Approximation and estimation bounds for artificial neural networks", "venue": "Machine learning,", "year": 1994 }, { "authors": [ "Taco S Cohen", "Mario Geiger", "Maurice Weiler" ], "title": "A general theory of equivariant cnns on homogeneous spaces", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Pierre Courrieu" ], "title": "Function approximation on non-euclidean spaces", "venue": "Neural Networks,", "year": 2005 }, { "authors": [ "George Cybenko" ], "title": "Approximation by superpositions of a sigmoidal function", "venue": "Mathematics of control, signals and systems,", "year": 1989 }, { "authors": [ "Marc Finzi", "Samuel Stanton", "Pavel Izmailov", "Andrew Gordon Wilson" ], "title": "Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data", "venue": "arXiv preprint arXiv:2002.12880,", "year": 2020 }, { "authors": [ "Ken-Ichi Funahashi" ], "title": "On the approximate realization of continuous mappings by neural networks", "venue": "Neural networks,", "year": 1989 }, { "authors": [ "Robert Gens", "Pedro M Domingos" ], "title": "Deep symmetry networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Jonathan Gordon", "Wessel P Bruinsma", "Andrew YK Foong", "James Requeima", "Yann Dubois", "Richard E Turner" ], "title": "Convolutional conditional neural processes", "venue": null, "year": 1910 }, { "authors": [ "William H Guss", "Ruslan Salakhutdinov" ], "title": "On universal approximation by neural networks with uniform guarantees on approximation of infinite dimensional maps", "venue": null, "year": 1910 }, { "authors": [ "Kurt Hornik", "Maxwell Stinchcombe", "Halbert White" ], "title": "Multilayer feedforward networks are universal approximators", "venue": "Neural networks,", "year": 1989 }, { "authors": [ "Nicolas Keriven", "Gabriel Peyré" ], "title": "Universal invariant and equivariant graph neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Risi Kondor", "Shubhendu Trivedi" ], "title": "On the generalization of equivariance and convolution in neural networks to the action of compact groups", "venue": "arXiv preprint arXiv:1802.03690,", "year": 2018 }, { "authors": [ "Anastasis Kratsios" ], "title": "The universal approximation property: Characterizations, existence, and a canonical topology for deep-learning", "venue": "arXiv preprint arXiv:1910.03344,", "year": 2019 }, { "authors": [ "Mateusz Krukowski" ], "title": "Frechet-kolmogorov-riesz-weil’s theorem on locally compact groups via arzela-ascoli’s theorem", "venue": "arXiv preprint arXiv:1801.01898,", "year": 2018 }, { "authors": [ "Takanori Maehara", "Hoang NT" ], "title": "A simple proof of the universality of invariant/equivariant graph neural networks", "venue": "arXiv preprint arXiv:1910.03802,", "year": 2019 }, { "authors": [ "Haggai Maron", "Heli Ben-Hamu", "Nadav Shamir", "Yaron Lipman" ], "title": "Invariant and equivariant graph networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Haggai Maron", "Ethan Fetaya", "Nimrod Segol", "Yaron Lipman" ], "title": "On the universality of invariant networks", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Haggai Maron", "Or Litany", "Gal Chechik", "Ethan Fetaya" ], "title": "On learning sets of 
symmetric elements", "venue": "arXiv preprint arXiv:2002.08599,", "year": 2020 }, { "authors": [ "Maximillian Nickel", "Douwe Kiela" ], "title": "Poincaré embeddings for learning hierarchical representations", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Philipp Petersen", "Felix Voigtlaender" ], "title": "Equivalence of approximation by convolutional neural networks and fully-connected networks", "venue": "Proceedings of the American Mathematical Society,", "year": 2020 }, { "authors": [ "Siamak Ravanbakhsh" ], "title": "Universal equivariant multilayer perceptrons", "venue": "arXiv preprint arXiv:2002.02912,", "year": 2020 }, { "authors": [ "Akiyoshi Sannai", "Yuuki Takai", "Matthieu Cordonnier" ], "title": "Universal approximations of permutation invariant/equivariant functions by deep neural networks", "venue": "arXiv preprint arXiv:1903.01939,", "year": 2019 }, { "authors": [ "John Shawe-Taylor" ], "title": "Building symmetries into feedforward networks", "venue": "First IEE International Conference on Artificial Neural Networks,(Conf. Publ. No", "year": 1989 }, { "authors": [ "Sho Sonoda", "Noboru Murata" ], "title": "Neural network with unbounded activation functions is universal approximator", "venue": "Applied and Computational Harmonic Analysis,", "year": 2017 }, { "authors": [ "Dmitry Yarotsky" ], "title": "Universal approximations of invariant maps by neural networks", "venue": "arXiv preprint arXiv:1804.10306,", "year": 2018 } ]
[ { "heading": null, "text": "Group symmetry is inherent in a wide variety of data distributions. Data processing that preserves symmetry is described as an equivariant map and often effective in achieving high performance. Convolutional neural networks (CNNs) have been known as models with equivariance and shown to approximate equivariant maps for some specific groups. However, universal approximation theorems for CNNs have been separately derived with individual techniques according to each group and setting. This paper provides a unified method to obtain universal approximation theorems for equivariant maps by CNNs in various settings. As its significant advantage, we can handle non-linear equivariant maps between infinite-dimensional spaces for non-compact groups." }, { "heading": "1 INTRODUCTION", "text": "Deep neural networks have been widely used as models to approximate underlying functions in various machine learning tasks. The expressive power of fully-connected deep neural networks was first mathematically guaranteed by the universal approximation theorem in Cybenko (1989), which states that any continuous function on a compact domain can be approximated with any precision by an appropriate neural network with sufficient width and depth. Beyond the classical result stated above, several types of variants of the universal approximation theorem have also been investigated under different conditions.\nAmong a wide variety of deep neural networks, convolutional neural networks (CNNs) have achieved impressive performance for real applications. In particular, almost all of state-of-the-art models for image recognition are based on CNNs. These successes are closely related to the property that performing CNNs commute with translation on pixel coordinate. That is, CNNs can conserve symmetry about translation in image data. In general, this kind of property for symmetry is known as the equivariance, which is a generalization of the invariance. When a data distribution has some symmetry and the task to be solved relates to the symmetry, data processing is desired to be equivariant on the symmetry. In recent years, different types of symmetry have been focused per each task, and it has been proven that CNNs can approximate arbitrary equivariant data processing for specific symmetry. These results are mathematically captured as the universal approximation for equivariant maps and represent the theoretical validity of the use of CNNs.\nIn order to theoretically correctly handle symmetric structures, we have to carefully consider the structure of data space where data distributions are defined. For example, in image recognition tasks, image data are often supposed to have symmetry for translation. When each image data is acquired, there are finite pixels equipped with an image sensor, and an image data is represented by a finitedimensional vector in a Euclidean space Rd, where d is the number of pixels. However, we note that the finiteness of pixels stems from the limit of the image sensor and a raw scene behind the image data is thought to be modelled by an element in RS with continuous spatial coordinates S, where RS is a set of functions from S to R. Then, the element in RS is regarded as a functional representation of the image data in Rd. In this paper, in order to appropriately formulate data symmetry, we treat both typical data representation in finite-dimensional settings and functional representation in infinite-dimensional settings in a unified manner." 
}, { "heading": "1.1 RELATED WORKS", "text": "Symmetry and functional representation. Symmetry is mathematically described in terms of groups and has become an essential concept in machine learning. Gordon et al. (2019) point out that, when data symmetry is represented by a infinite group like the translation group, equivariant maps, which are symmetry-preserving processing, cannot be captured as maps between finitedimensional spaces but can be described by maps between infinite-dimensional function spaces. As a related study about symmetry-preserving processing, Finzi et al. (2020) propose group convolution of functional representations and investigate practical computational methods such as discretization and localization.\nUniversal approximation for continuous maps. The universal approximation theorem, which is the main objective of this paper, is one of the most classical mathematical theorems of neural networks. The universal approximation theorem states that a feedforward fully-connected network (FNN) with a single hidden layer containing finite neurons can approximate a continuous function on a compact subset of Rd. Cybenko (1989) proved this theorem for the sigmoid activation function. After his work, some researchers showed similar results to generalize the sigmoidal function to a larger class of activation functions as Barron (1994), Hornik et al. (1989), Funahashi (1989), Kůrková (1992) and Sonoda & Murata (2017). These results were approximations to functional representations between finite-dimensional vector spaces, but recently Guss & Salakhutdinov (2019) generalized them to continuous maps between infinite-dimensional function spaces in Guss & Salakhutdinov (2019).\nEquivariant neural networks. The concept of group-invariant neural networks was first introduced in Shawe-Taylor (1989) in the case of permutation groups. In addition to the invariant case, Zaheer et al. (2017a) designed group-equivariant neural networks for permutation groups and obtained excellent results in many applications. Maron et al. (2019a; 2020) consider and develop a theory of equivariant tensor networks for general finite groups. Petersen & Voigtlaender (2020) established a connection between group CNNs, which are equivariant networks, and FNNs for group finites. However, symmetry are not limited to finite groups. Convolutional neural networks (CNNs) was designed to be equivariant for translation groups and achieved impressive performance in a wide variety of tasks. Gens & Domingos (2014) proposed architectures that are based on CNNs and invariant to more general groups including affine groups. Motivated by CNN’s experimental success, many researchers have further generalized this by using group theory. Kondor & Trivedi (2018) proved that, when a group is compact and the group action is transitive, a neural network constrained by some homogeneous structure is equivariant if and only if it becomes a group CNN.\nUniversal approximation for equivariant maps. Compared to the vast studies about universal approximation for continuous maps, there are few existing studies about universal approximation for equivariant maps. Sannai et al. (2019); Ravanbakhsh (2020); Keriven & Peyré (2019) considered the equivariant model for finite groups and proved universal approximation property of them by attributing it to the results of Maron et al. (2019b). Cohen et al. (2019) considered group convolution on a homogeneous space and proved that a linear equivariant map is always convolution-like. 
Yarotsky (2018) proved universal approximation theorems for non-linear equivariant maps by CNN-like models when the groups are the d-dimensional translation group T(d) = Rd or the 2-dimensional Euclidean group SE(2). However, for more general groups, universal approximation theorems for non-linear equivariant maps have not been obtained." }, { "heading": "1.2 PAPER ORGANIZATION AND OUR CONTRIBUTIONS", "text": "The paper is organized as follows. In Section 2, we introduce the definition of group equivariant maps and provide the essential property that equivariant maps are in one-to-one correspondence with theoretically tractable maps called generators. In Section 3, we define fully-connected and group convolutional neural networks between function spaces. This formulation is suitable for representing data symmetry. Then, we provide a main theorem, called the conversion theorem, that can convert FNNs to CNNs. In Section 4, using the conversion theorem, we derive universal approximation theorems for non-linear equivariant maps by group CNNs. In particular, this is the first universal approximation theorem for equivariant maps in infinite-dimensional settings. We note that finite and infinite groups are handled in a unified manner. In Section 5, we provide concluding remarks and mention future works." }, { "heading": "2 GROUP EQUIVARIANCE", "text": "" }, { "heading": "2.1 PRELIMINARIES", "text": "We introduce definitions and terminology used in the later discussion.

Functional representation. In this paper, sets denoted by S, T, and G are assumed to be locally compact, σ-compact, Hausdorff spaces. When S is a set, we denote by RS the set of all maps from S to R and by ‖ · ‖∞ the supremum norm. We call S of RS the index set. We denote by C(S) the set of all continuous maps from S to R. We denote by C0(S) the set of continuous functions from S to R which vanish at infinity1. For a Borel space S with some measure µ, we denote the set of integrable functions from S to R with respect to µ as L1µ(S). For a subset B ⊂ S, the restriction map RB : RS → RB is defined by RB(x) = x|B, where x ∈ RS and x|B is the restriction of the domain of x onto B. When S is a finite set, RS is identified with the finite-dimensional Euclidean space R|S|, where |S| is the cardinality of S. In this sense, RS for general sets S is a generalization of Euclidean spaces. However, RS itself is often intractable for an infinite set S. In such cases, we instead consider C(S), C0(S), or Lp(S) as relatively tractable subspaces of RS.

Group action. We denote the identity element of a group G by 1. We assume that the action of a group G on a set S is continuous. We denote by g · s the left action of g ∈ G on s ∈ S. Then we call Gs := {g · s | g ∈ G} the orbit of s ∈ S. From the definition, we have S = ⋃_{s∈S} Gs. When a subset B ⊂ S is a set of representative elements from all orbits, it satisfies the disjoint condition S = ⊔_{s∈B} Gs. Then, we call B a base space2 and define the projection PB : S → B by mapping s ∈ S to the representative element in B ∩ Gs. When a group G acts on sets S and T, the action of G on the product space S × T is defined by g · (s, t) := (g · s, g · t). When a group G acts on an index set S, the G-translation operators Tg : RS → RS for g ∈ G are defined by Tg[x](s) := x(g−1 · s), where x ∈ RS and s ∈ S. We often denote Tg[x] simply by g · x for brevity. Then, group translation determines the action3 of G on RS.
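For a finite example of these translation operators (a sketch for the cyclic group Z_8 acting on S = {0, ..., 7}):

    import numpy as np

    x = np.arange(8.0)
    T = lambda g, x: np.roll(x, g)   # T_g for the cyclic group Z_8

    # T_g o T_g' = T_{g'g}; for this abelian group, g'g = g + g' mod 8.
    assert np.allclose(T(2, T(3, x)), T(5, x))
    print("translation operators compose as expected")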
}, { "heading": "2.2 GROUP EQUIVARIANT MAPS", "text": "In this section, we introduce group equivariant maps and show their basic properties. First, we define group equivariance.\nDefinition 1 (Group Equivariance). Suppose that a group G acts on sets S and T . Then, a map F : RS → RT is called G-equivariant when F [g · x] = g · F [x] holds for any g ∈ G and x ∈ RS .\nAn example of an equivariant map in image processing is provided in Figure 1.\nTo clarify the degree of freedom of equivariant maps, we define the generator of equivariant maps.\nDefinition 2 (Generator). Let B ⊂ T be a base space with respect to the action of G on T . For a G-equivariant map F : RS → RT , we call FB := RB ◦ F the generator of F .\nThe following theorem shows that equivariant maps can be represented by their generators.\nTheorem 3 (Degree of Freedom of Equivariant Maps). Let a group G act on sets S and T , and B ⊂ T a base space. Then, a G-equivariant map F : RS → RT has one-to-one correspondence to its generator FB.\nA detailed version of Theorem 3 is proved in Section A.1.\n1A function f on a locally compact space is said to vanish at infinity if, for any ϵ, there exists a compact subset K ⊂ S such that sups∈S\\K |f(s)| < ϵ.\n2The choice of the base space is not unique in general. However, the topological structure of a base space can be induced by the quotient space S/G.\n3We note that Tg ◦ Tg′ = Tg′g and the group translation operator is the action of G on RS from the right." }, { "heading": "3 FULLY-CONNECTED AND GROUP CONVOLUTIONAL NEURAL NETWORKS", "text": "" }, { "heading": "3.1 FULLY-CONNECTED NEURAL NETWORKS", "text": "To define neural networks, we introduce some notions. A map A : RS → RT is called a bounded affine map if there exist a bounded linear map W : RS → RT and an element b ∈ RT such that\nA[x] = W [x] + b. (1)\nGuss & Salakhutdinov (2019) provide the following lemma, which is useful to handle bounded affine maps. Lemma 4 (Integral Form, Guss & Salakhutdinov (2019)). Suppose that S and T are locally compact, σ-compact, Hausdorff, measurable spaces. For a bounded linear map W : C(S) → C(T ), there exist a Borel regular measure µ on S and a weak∗ continuous family of functions {w(t, ·)}t∈T ⊂ L1µ(S) such that the following holds for any x ∈ C(S):\nW [x](t) = ∫ S w(t, s)x(s)dµ(s).\nTo use the integral form, we assume in the following that the input and output spaces of A are the class of continuous maps C(S) and C(T ) instead of RS and RT , respectively. Using the integral form, a bounded affine map A is represented by\nAµ,w,b[x](t) = ∫ S w(t, s)x(s)dµ(s) + b(t). (2)\nIn particular, when S and T are finite sets with cardinality d and d′, the function spaces C(S) and C(T ) are identified with finite-dimensional Euclidean spaces Rd and Rd′ , and thus, an affine map A : Rd → Rd′ is parameterized by a weight matrix W = [w(t, s)]s∈[d],t∈[d′] : Rd → Rd ′ and a bias vector b = [b(t)]t∈[d′] ∈ Rd ′ , and (2) induces the following form, which is often used in the literature on neural networks:\nA[x](t) = d∑ s=1 w(t, s)x(s) + b(t). (3)\nA continuous function ρ : R → R induces the activation map αρ : C(S) → C(S) which is defined by αρ(x) := ρ ◦ x ∈ C(S) for x ∈ C(S). However, for brevity, we denote αρ by ρ. Then, we can define fully-connected neural networks in general settings. Definition 5 (Fully-connected Neural Networks). Let L ∈ N. A fully-connected neural network with L layers is a composition map of bounded affine maps (A1, . . . 
, AL) and an activation map ρ represented by\nϕ := AL ◦ ρ ◦AL−1 ◦ · · · ◦ ρ ◦A1, (4) where Aℓ : C(Sℓ−1) → C(Sℓ) are affine maps for some sequence of sets {Sℓ}Lℓ=0. Then, we denote by NFNN(ρ, L;S0,SL) the set of all fully-connected neural networks from C(S0) to C(SL) with L layers and an activation function ρ.\nWe denote the measure of the affine map A1 in the first layer of a fully-connected neural network ϕ by µϕ. This measure µϕ is used to describe a condition in the main theorem (Theorem 9)." }, { "heading": "3.2 GROUP CONVOLUTIONAL NEURAL NETWORKS", "text": "We introduce the general form of group convolution. Definition 6 (Group Convolution). Suppose that a group G acts on sets S and T . For a G-invariant measure ν on S, G-invariant functions v : S × T → R and b ∈ C(T ), the biased G-convolution Cν,v,b : C(S) → C(T ) is defined as\nCν,v,b[x](t) := ∫ S v(t, s)x(s)dν(s) + b(t). (5)\nIn the right hand side, we call the first term the G-convolution and the second term the bias term.\nIn the following, we denote Cν,v,b by C for brevity. When S and T are finite, we note that (5) also can be represented as (3).\nDefinition 6 includes existing definitions of group convolution as follows. When S = T = G, the group G acts on S and T by left translations. Then, (5) without the bias term (i.e., b = 0) is described as\nC[x](g) = ∫ G v(g, h)x(h)dν(h) = ∫ G ṽ(h−1g)x(h)dν(h),\nwhere4 ṽ(g) := v(g, 1). This is a popular definition of group convolution between two functions on G. Further, when S = G× B and T = G× B′, (5) without the bias term is described as\nC[x](g, t) = ∫ G×B v((g, τ), (h, ς))x(h, ς)dν(h, ς) = ∫ G×B ṽ(h−1g, τ, ς)x(h, ς)dν(h, ς),\nwhere ṽ(g, τ, ς) := v((g, τ), (1, ς)). This coincides with the definition of group convolution in Finzi et al. (2020). We note that Finzi et al. (2020) also proposes discretization and localization of the above group convolution for implementation.\nIn conventional convolution used for image recognition, G represents spatial information such as pixel coordinate, B and B′ correspond to channels in consecutive layers ℓ and ℓ + 1 respectively, and v corresponds to a filter. In applications, the filter v is expected to have compact support or be short-tailed on G as in a 3 × 3 convolution filter in discrete convolution. In particular, when v is allowed to be the Dirac delta or highly peaked around a single point in G, such convolution can be interpreted as the 1× 1 convolution. Then, we define group convolutional neural networks as follows. Definition 7 (Group Convolutional Neural Networks). Let L ∈ N. A G-convolutional neural network with L layers is a composition map of biased convolutions Cℓ : C(Sℓ−1) → C(Sℓ) (ℓ = 1, . . . , L) for some sequence of spaces {Bℓ}Lℓ=0 and an activation map with ρ as Φ := CL ◦ ρ ◦ CL−1 ◦ · · · ◦ ρ ◦ C1. (6) Then, we denote by NCNN(G, ρ, L;S0,SL) the set of all G-convolutional neural networks from C(S0) to C(SL) with respect to a group G with L layers and a fixed activation function ρ.\n4A bivariate G-invariant function v : G × G → R is determined by the univariate function ṽ : G → R because v(g, h) = v(h−1g, h−1h) = v(h−1g, 1) = ṽ(h−1g).\nWe easily verify the following proposition. Proposition 8. A G-convolutional neural network is G-equivariant.\nIn particular, each biased G-convolution Cν,v,b is G-equivariant. Conversely, Cohen et al. 
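A minimal finite instance of Definition 6 and Proposition 8 can be checked as follows (a sketch for G = S = T = Z_n with the counting measure; the filter and the constant bias are arbitrary choices):

    import numpy as np

    n = 8
    rng = np.random.default_rng(0)
    x, v = rng.random(n), rng.random(n)   # signal x and filter v~ on G = Z_n
    b = 0.3                               # constant (hence Z_n-invariant) bias

    def group_conv(x):
        # Eq. 5 on Z_n with the counting measure: C[x](t) = sum_s v~(t - s) x(s) + b.
        return np.array([sum(v[(t - s) % n] * x[s] for s in range(n)) + b
                         for t in range(n)])

    shift = lambda x, g: np.roll(x, g)    # group translation T_g
    # Proposition 8: the biased G-convolution is G-equivariant.
    assert np.allclose(group_conv(shift(x, 3)), shift(group_conv(x), 3))
    print("G-equivariance of the group convolution verified")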
Conversely, Cohen et al. (2019) showed that a G-equivariant linear map is represented by some G-convolution without the bias term when G is locally compact and unimodular, and the group action is transitive (i.e., B consists of only a single element)." }, { "heading": "3.3 CONVERSION THEOREM", "text": "In this section, we introduce the main theorem (Theorem 9), which is an essential part of obtaining universal approximation theorems for equivariant maps by group CNNs.

Theorem 9 (Conversion Theorem). Suppose that a group G acts on sets S and T. We assume the following condition:

(C1) there exist base spaces BS ⊂ S, BT ⊂ T, and two subgroups5 HT ⩽ HS ⩽ G such that S = G/HS × BS and T = G/HT × BT.

Further, suppose E ⊂ C0(S) is compact and an FNN ϕ : E → C0(BT) with a Lipschitz activation function ρ satisfies

(C2) there exists a G-left-invariant locally finite measure ν on S such that6 µϕ ≪ ν.

Then, for any ϵ > 0, there exists a CNN Φ : E → C0(T) with the activation function ρ such that the number of layers of Φ equals that of ϕ and

‖RBT ◦ Φ − ϕ‖∞ ≤ ϵ. (7)

Moreover, for any G-equivariant map F : C0(S) → C0(T), the following holds:

‖F|E − Φ‖∞ ≤ ‖FBT|E − ϕ‖∞ + ϵ. (8)

We provide the proof of Theorem 9 in Section B.

Conversion of Universal Approximation Theorems. The conversion theorem can convert a universal approximation theorem by FNNs into a universal approximation theorem for equivariant maps by CNNs as follows. Suppose the existence of an FNN ϕ which satisfies ‖FB|E − ϕ‖∞ ≤ ϵ by some universal approximation theorem for FNNs. Then, Theorem 9 guarantees the existence of a CNN Φ which satisfies ‖F|E − Φ‖∞ ≤ 2ϵ. In other words, if an FNN can approximate the generator of the target equivariant map on E, then there exists a CNN which approximates the whole of the equivariant map on E.

Applicable Cases. The conversion theorem can be applied to a wide range of group actions. We explain the generality of the conversion theorem. First, the sets S and T are not limited to finite sets or Euclidean spaces, and may be more general topological spaces. Second, the group G may be a discrete (especially finite) or continuous group. Moreover, G can be non-compact and non-commutative. Third, the action of the group G on the sets S and T need not be transitive, and thus, the sets can be non-homogeneous spaces. In the following, we provide some concrete examples of group actions when S = T and the actions of G on S and T are the same:

• Symmetric Group. The action of G = Sn on S = [n] as permutation has the decomposition [n] = Sn/Stab(1) × {∗}, where HS = Stab(1) is the set of all permutations on [n] that fix 1 ∈ [n] and BS = {∗} is a singleton7. Then, the counting measure can be taken as an invariant measure ν.

• Rotation Group. The action of G = O(d) on S = Rd \ {0} as rotation around 0 ∈ Rd has the decomposition Rd \ {0} = O(d)/O(d − 1) × R+. The cases where G = SO(d) or S = Sd−1 have similar decompositions. Then, the Lebesgue measure can be taken as an invariant measure ν.

• Translation Group. The action of G = Rd on S = Rd as translation has the trivial decomposition Rd = Rd/{0} × {∗}. Then, the Lebesgue measure can be taken as an invariant measure ν.

• Euclidean Group. The action of G = E(d) on S = Rd as isometry has the decomposition Rd = E(d)/O(d) × {∗}. The case where G = SE(d) has a similar decomposition. Then, the Lebesgue measure can be taken as an invariant measure ν.

• Scaling Group. The action of G = R>0 on S = Rd \ {0} as scalar multiplication has the decomposition Rd \ {0} = R>0/{1} × Sd−1. Then, the measure νr × νSd−1 can be taken as an invariant measure ν, where the measure νr on R>0 is determined by νr([a, b]) := log(b/a) and νSd−1 is a uniform measure on Sd−1.

• Lorentz Group. The action of G = SO+(d, 1), a subgroup of the Lorentz group O(d, 1), on the upper half plane8 S = Hd+1 as matrix multiplication has the decomposition Hd+1 = SO+(d, 1)/SO(d) × {∗}. Then, π#(ν+) can be taken as a left-invariant measure ν, where ν+ is a left-invariant measure on SO+(d, 1), π : SO+(d, 1) → SO+(d, 1)/SO(d) is the canonical projection, and π#(ν+) is the pushforward measure.

5HS and HT are not assumed to be normal subgroups. 6µϕ ≪ ν means that µϕ is absolutely continuous with respect to ν. 7A singleton is a set with exactly one element.

Inapplicable Cases. We explain some cases where the conversion theorem cannot be applied. First, similar to the above discussion, we consider the setting where S = T and the actions of G on S and T are the same. We note that, even if the actions of G1 and G2 on S satisfy the conditions in the conversion theorem, a common invariant measure for both G1 and G2 may not exist. Then, a group G including G1 and G2 as subgroups does not satisfy (C2). For example, there does not exist a common invariant measure for the actions of translation and scaling on a Euclidean space. In particular, the action of the general linear group GL(d) on the Euclidean space does not have a locally finite left-invariant measure on Rd. Thus, the conversion theorem cannot be applied to this case. Next, as we saw above, our model can handle convolutions on permutation groups, but not on general finite groups. This depends on whether [n] can be represented by a quotient of G, as we will see later. This is also the case for tensor expressions of permutations, which require a different formulation.

Lastly, we consider the case where the actions of G on S and T differ. Here, S and T may or may not be equal. As a representative case, we consider the invariant case. When the stabilizer in T satisfies HT = G, a G-equivariant map F : C0(S) → C0(T) is said to be G-invariant. However, because of the condition HT ⩽ HS in (C1), the conversion theorem cannot be applied to the invariant case as long as HS ≠ G. This kind of restriction is similar to existing studies, where the invariant case is handled separately from the equivariant case (Keriven & Peyré (2019); Maehara & NT (2019); Sannai et al. (2019)). In fact, we can show that the inequality (7) can never hold for non-trivial invariant cases (i.e., HS ≠ G and HT = G) as follows: From HT = G, we have BT = T and RBT = id, and thus, (7) reduces to ‖Φ − ϕ‖∞ ≤ ϵ. Here, we note that ϕ is an FNN, which is not invariant in general, and Φ is a CNN, which is invariant. Thus, Φ cannot approximate a non-invariant ϕ within a small error ϵ. This implies that (7) does not hold for small ϵ. However, whether (8) holds for the invariant case is an open problem.

Remarks on Conditions (C1) and (C2). We consider the conditions (C1) and (C2).

In (C1), the subgroup HS ⩽ G (resp. HT) represents the stabilizer group of the action of G on S (resp. T). Thus, (C1) requires that the stabilizer group at every point in S (resp. T) is isomorphic to the common subgroup HS (resp. HT). When the group action satisfies some moderate conditions, such a requirement is known to be satisfied for most points in the set. As a theoretical result, the principal orbit type theorem (cf. Theorem 1.32, Meinrenken (2003)) guarantees that, if the group action on a manifold S is proper and S/G is connected, there exist a dense subset S′ ⊂ S and a subgroup HS ⊂ G called a principal stabilizer such that the stabilizer group at every point in S′ is isomorphic to HS.

Further, (C1) assumes that the sets S and T have the direct product form of some coset G/H and a base space B. Then, the case where the base space B consists of a single point is equivalent to the condition that the set is homogeneous. In this sense, (C1) can be regarded as a relaxation of the homogeneity condition. In many practical cases, a set S on which G acts can be regarded as such a direct product. For example, when the action is transitive, the direct product decomposition trivially holds with a base space that consists of a single point. Even when the set S itself is not rigorously represented by the direct product form, after removing some "small" subset N ⊂ S, the complement S \ N can often be represented by the direct form. For example, when G = O(d) acts on the set S = Rd as rotation around the origin N = {0}, S \ N has a direct product form as mentioned above. In applications, removing only the small subset N is expected to be negligible.

8The upper half plane is defined by Hd+1 := {(x1, ..., xd+1) ∈ Rd+1 | xd+1 > 0}.

Next, we provide some remarks on the condition (C2). Let us consider two representative settings of the set S. The first case is the setting where S is finite. When a G-invariant measure ν has a positive value on every singleton in S, ν satisfies (C2) for an arbitrary measure µϕ on S. In particular, the counting measure on S is invariant and satisfies (C2). The second case is the setting where S is a Euclidean space Rd, and µϕ is the Lebesgue measure. Then, (C2) is satisfied with invariant measures on the Euclidean space for various group actions, including translation, rotation, scaling, and the Euclidean group.

Here, we give a general method to construct ν in (C2) for a compact-group action. When µϕ is locally finite and continuous9 with respect to the action of a compact group G, the measure ν := νG ∗ µϕ on S for a Haar measure νG on G satisfies (C2), where (νG ∗ µϕ)(A) := ∫_G µϕ(g−1 · A) dνG(g)." }, { "heading": "4 UNIVERSAL APPROXIMATION THEOREMS FOR EQUIVARIANT MAPS", "text": "" }, { "heading": "4.1 UNIVERSAL APPROXIMATION THEOREM IN FINITE DIMENSION", "text": "We review the universal approximation theorem in finite-dimensional settings. Cybenko (1989) derived the following seminal universal approximation theorem in finite-dimensional settings.

Theorem 10 (Universal Approximation for Continuous Maps by FNNs, Cybenko (1989)). Let an activation function ρ : R → R be non-constant, bounded, and continuous. Let F : Rd → Rd′ be a continuous map. Then, for any compact E ⊂ Rd and ϵ > 0, there exists a two-layer fully-connected neural network ϕE ∈ NFNN(ρ, 2; [d], [d′]) such that ‖F|E − ϕE‖∞ < ϵ.

Since C0(S) = R|S| for a finite set S, we obtain the following theorem by combining Theorem 9 with Theorem 10.

Theorem 11 (Universal Approximation for Equivariant Continuous Maps by CNNs). Let an activation function ρ : R → R be non-constant, bounded, and Lipschitz continuous. Suppose that a finite group G acts on finite sets S and T and that (C1) in Theorem 9 holds. Let F : R|S| → R|T| be a G-equivariant continuous map. For any compact set E ⊂ R|S| and ϵ > 0, there exists a two-layer convolutional neural network ΦE ∈ NCNN(ρ, 2; |S|, |T|) such that ‖F|E − ΦE‖∞ < ϵ.

We note that Petersen & Voigtlaender (2020) obtained a similar result to Theorem 11 in the case of finite groups.

Universality of DeepSets. DeepSets is known as an invariant/equivariant model which takes sets as input and is known to have universality for invariant/equivariant functions under set permutation (Zaheer et al. (2017b); Ravanbakhsh (2020)). The equivariant model is a stack of affine transformations with W = λE + γ1 (1 is the all-one matrix) and bias b = c · (1, ..., 1)⊤, each followed by an activation function. Here, we prove the universality of DeepSets as a corollary of Theorem 11. First, we regard the equivariant model of DeepSets as the one we are dealing with by setting S, T, G, H, and B as follows. We set S = T = [n], G = Sn, H = Stab(1) := {s ∈ Sn | s(1) = 1}, and B = {∗}, where {∗} is a singleton. Then we can see that Stab(1) is a subgroup of G and its left cosets satisfy G/H = [n]. As a set, Sn/Stab(1) is equal to [n], and the canonical Sn-action on Sn/Stab(1) is equivalent to the permutation action on [n]. Therefore, C(G/H × B) = C([n]) = Rn holds, and the equivariant model of our paper is equal to that of DeepSets.

Theorem 12. For any permutation-equivariant function F : Rn → Rn, a compact set E ⊂ Rn, and ϵ > 0, there is an equivariant model of DeepSets (or equivalently, our model) ΦE : E → Rn such that ‖ΦE(x) − F|E(x)‖∞ < ϵ.

The proof of Theorem 12 is provided in Section C.

9A measure µϕ is said to be continuous with respect to the action of a group G if µϕ(g · A) is continuous with respect to g ∈ G for all Borel sets A ⊂ S." }, { "heading": "4.2 UNIVERSAL APPROXIMATION THEOREM IN INFINITE DIMENSION", "text": "Guss & Salakhutdinov (2019) derived a universal approximation theorem for continuous maps by FNNs in infinite-dimensional settings. However, the universal approximation theorem in Guss & Salakhutdinov (2019) assumed that the index set S in the input layer and T in the output layer are compact. Combining the conversion theorem with it, we can derive a corresponding universal approximation theorem for equivariant maps with respect to compact groups. However, the compactness condition on S and T is a crucial shortcoming for handling the actions of non-compact groups such as translation or scaling. In order to overcome the above obstacle, we show a novel universal approximation theorem for Lipschitz maps by FNNs as follows.

Theorem 13 (Universal Approximation for Lipschitz Maps by FNNs). Let an activation function ρ : R → R be continuous and non-polynomial. Let S ⊂ Rd and T ⊂ Rd′ be domains. Let F : C0(S) → C0(T) be a Lipschitz map. Then, for any compact E ⊂ C0(S) and ϵ > 0, there exist N ∈ N and a two-layer fully-connected neural network ϕE = A2 ◦ ρ ◦ A1 ∈ NFNN(ρ, 2; S, T) such that A1[·] = W(1)[·] + b(1) : E → C0([N]) = RN, A2[·] = W(2)[·] + b(2) : RN → C0(T), µϕE is the Lebesgue measure, and ‖F|E − ϕE‖∞ < ϵ.

We provide the proof of Theorem 13 in the appendix. We note that S ⊂ Rd and T ⊂ Rd′ in Theorem 13 are allowed to be non-compact, unlike the result in Guss & Salakhutdinov (2019). Combining Theorem 9 with Theorem 13, we obtain the following theorem.

Theorem 14 (Universal Approximation for Equivariant Lipschitz Maps by CNNs). Let an activation function ρ : R → R be Lipschitz continuous and non-polynomial. Suppose that a group G acts on S ⊂ Rd and T ⊂ Rd′, and that (C1) and (C2) in Theorem 9 hold for the Lebesgue measure µϕ.
Let F : C0(S) → C0(T ) be a G-equivariant Lipschitz map. Then, for any compact set E ⊂ C0(S) and ϵ > 0, there exists a two-layer convolutional neural network ΦE ∈ NCNN(ρ, 2;S, T ) such that ‖F |E − ΦE‖∞ < ϵ.\nLastly, we mention some universal approximation theorems for some concrete groups. When a group G is an Euclidean group E(d) or a special Euclidean group SE(d), Theorem 14 shows that group CNNs are universal approximators of G-equivariant maps. Although Yarotsky (2018) showed that group CNNs can approximate SE(2)-equivariant maps, our result for d ≥ 3 was not shown in existing studies. Since Euclidean groups can be used to represent 3D motion and point cloud, Theorem 14 can provide the theoretical guarantee of 3D data processing with group CNNs. As another example, when a group G is SO+(d, 1), G acts on the upper half plane Hd+1, which is shown to be suitable for word representations in NLP (Nickel & Kiela (2017)). Since the action of G preserves the distance on Hd+1, group convolution with SO+(d, 1) may be useful for NLP." }, { "heading": "5 CONCLUSION", "text": "We have considered universal approximation theorems for equivariant maps by group CNNs. To prove the theorems, we showed that an equivariant map is uniquely determined by its generator. Thus, when we can take a fully-connected neural network to approximate the generator, the approximator of the equivariant map can be described as a group CNN from the conversion theorem. In this way, the universal approximation for equivariant maps by group CNNs can be obtained through the universal approximation for the generator by FNNs. We have described FNNs and group CNNs in an abstract way. In particular, we provided a novel universal approximation theorem by FNNs in the infinite dimension, where the support of the input functions is unbounded. Using this result, we obtained the universal approximation theorem for equivariant maps for non-compact groups.\nWe mention future work. In Theorem 14, we assumed sets S and T to be subspaces of Euclidean spaces. However, in the conversion theorem (Theorem 9), sets S and T do not need to be subspaces of Euclidean spaces and may have a more general topological structure. Thus, if there is a universal approximation theorem in non-Euclidean spaces (Courrieu (2005); Kratsios (2019)), we may be able to combine it with the conversion theorem and derive its equivariant version. Next, we note the problem of computational complexity. Although group convolution can be implemented by, e.g., discretization and localization as in Finzi et al. (2020), such implementation cannot be applied to high-dimensional groups due to high computational cost. To use group CNNs for actual machinelearning problems, it is required to construct effective architecture for practical implementation." } ]
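As a concrete check of the DeepSets equivariant layer discussed around Theorem 12, the NumPy sketch below implements one layer with weight matrix λI + γ11⊤ and constant bias, and numerically verifies its Sn-equivariance. The parameter values and the tanh activation are arbitrary illustrative choices, not taken from the paper.

import numpy as np

def deepsets_equivariant_layer(x, lam, gamma, c, rho=np.tanh):
    # One equivariant layer of DeepSets: W = lam*I + gamma*(all-ones matrix),
    # bias b = c*(1, ..., 1)^T, followed by an elementwise activation rho.
    n = x.shape[0]
    return rho(lam * x + gamma * x.sum() * np.ones(n) + c)

# Numerical check of S_n-equivariance: permuting the input permutes the output.
rng = np.random.default_rng(0)
x = rng.normal(size=5)
perm = rng.permutation(5)
out = deepsets_equivariant_layer(x, lam=0.7, gamma=0.3, c=0.1)
out_perm = deepsets_equivariant_layer(x[perm], lam=0.7, gamma=0.3, c=0.1)
assert np.allclose(out[perm], out_perm)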
2020
null
SP:d8da07759331a59ef4062e5893eef1a8a8d2c589
[ "This paper studies the statistical risk bounds for two-layer neural networks with $L_1$-regularization. The authors consider two types of $L_1$-regularization: the $L_1$-regularization on output layer and the $L_1$-regularization on the input layer. For the $L_1$-regularization on output layer, the authors develop nearly minimax statistical risk bounds. For the $L_1$-regularization on input layers, the authors develop bounds with no-dependency on the input dimension. The paper is clearly written and easy to follow." ]
A crucial problem in neural networks is to select the most appropriate number of hidden neurons and obtain tight statistical risk bounds. In this work, we present a new perspective towards the bias-variance tradeoff in neural networks. As an alternative to selecting the number of neurons, we theoretically show that L1 regularization can control the generalization error and sparsify the input dimension. In particular, with an appropriate L1 regularization on the output layer, the network can produce a statistical risk that is near minimax optimal. Moreover, an appropriate L1 regularization on the input layer leads to a risk bound that does not involve the input data dimension. Our analysis is based on a new amalgamation of dimension-based and norm-based complexity analysis to bound the generalization error. A consequent observation from our results is that an excessively large number of neurons does not necessarily inflate generalization errors under a suitable regularization.
[]
[ { "authors": [ "Animashree Anandkumar", "Rong Ge", "Daniel Hsu", "Sham M Kakade", "Matus Telgarsky" ], "title": "Tensor decompositions for learning latent variable models", "venue": "J. Mach. Learn. Res.,", "year": 2014 }, { "authors": [ "Andrew R Barron" ], "title": "Universal approximation bounds for superpositions of a sigmoidal function", "venue": "IEEE Trans. Inf. Theory,", "year": 1993 }, { "authors": [ "Andrew R Barron" ], "title": "Approximation and estimation bounds for artificial neural networks", "venue": "Machine learning,", "year": 1994 }, { "authors": [ "Andrew R Barron", "Jason M Klusowski" ], "title": "Complexity, statistical risk, and metric entropy of deep nets using total path variation", "venue": null, "year": 1902 }, { "authors": [ "Benedikt Bauer", "Michael Kohler" ], "title": "On deep learning as a remedy for the curse of dimensionality in nonparametric regression", "venue": "Ann. Stat.,", "year": 2019 }, { "authors": [ "Yu Cheng", "Duo Wang", "Pan Zhou", "Tao Zhang" ], "title": "A survey of model compression and acceleration for deep neural networks", "venue": "arXiv preprint arXiv:1710.09282,", "year": 2017 }, { "authors": [ "George Cybenko" ], "title": "Approximations by superpositions of a sigmoidal function", "venue": "Math. Control Signals Syst.,", "year": 1989 }, { "authors": [ "Enmao Diao", "Jie Ding", "Vahid Tarokh" ], "title": "Restricted recurrent neural networks", "venue": "IEEE Conf. on Big Data,", "year": 2019 }, { "authors": [ "Jie Ding", "Vahid Tarokh", "Yuhong Yang" ], "title": "Model selection techniques: An overview", "venue": "IEEE Signal Process. Mag.,", "year": 2018 }, { "authors": [ "Rong Ge", "Jason D Lee", "Tengyu Ma" ], "title": "Learning one-hidden-layer neural networks with landscape design", "venue": "arXiv preprint arXiv:1711.00501,", "year": 2017 }, { "authors": [ "Noah Golowich", "Alexander Rakhlin", "Ohad Shamir" ], "title": "Size-independent sample complexity of neural networks", "venue": "arXiv preprint arXiv:1712.06541,", "year": 2017 }, { "authors": [ "Antonio Gulli", "Sujit Pal" ], "title": "Deep Learning with Keras", "venue": "Packt Publishing Ltd,", "year": 2017 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network. Advance", "venue": "Neural Inf. Process. Sys.,", "year": 2015 }, { "authors": [ "Trevor Hastie", "Robert Tibshirani", "Jerome Friedman" ], "title": "The elements of statistical learning: data mining, inference, and prediction", "venue": "Springer Science & Business Media,", "year": 2009 }, { "authors": [ "Majid Janzamin", "Hanie Sedghi", "Anima Anandkumar" ], "title": "Beating the perils of non-convexity: Guaranteed training of neural networks using tensor methods", "venue": "arXiv preprint arXiv:1506.08473,", "year": 2015 }, { "authors": [ "Nikhil Ketkar" ], "title": "Introduction to pytorch", "venue": "Deep learning with python,", "year": 2017 }, { "authors": [ "Marco Mondelli", "Andrea Montanari" ], "title": "On the connection between learning two-layers neural networks and tensor decomposition", "venue": "arXiv preprint arXiv:1802.07301,", "year": 2018 }, { "authors": [ "Behnam Neyshabur", "Ryota Tomioka", "Nathan Srebro" ], "title": "Norm-based capacity control in neural networks", "venue": "Conf. Learning Theory, pp", "year": 2015 }, { "authors": [ "Fabian Pedregosa", "Francis Bach", "Alexandre Gramfort" ], "title": "On the consistency of ordinal regression methods", "venue": "J. Mach. Learn. 
Res.,", "year": 2017 }, { "authors": [ "Simone Scardapane", "Danilo Comminiello", "Amir Hussain", "Aurelio Uncini" ], "title": "Group sparse regularization for deep neural networks", "venue": null, "year": 2017 }, { "authors": [ "Johannes Schmidt-Hieber" ], "title": "Nonparametric regression using deep neural networks with relu activation function", "venue": "arXiv preprint arXiv:1708.06633,", "year": 2017 }, { "authors": [ "George AF Seber", "Alan J Lee" ], "title": "Linear regression analysis, volume 329", "venue": null, "year": 2012 }, { "authors": [ "Xingjian Shi", "Zhourong Chen", "Hao Wang", "Dit-Yan Yeung", "Wai-kin Wong", "Wang-chun Woo" ], "title": "Convolutional LSTM network: A machine learning approach for precipitation nowcasting", "venue": "In Advance. Neural Inf. Process. Sys., pp", "year": 2015 }, { "authors": [ "Wei Wen", "Chunpeng Wu", "Yandan Wang", "Yiran Chen", "Hai Li" ], "title": "Learning structured sparsity in deep neural networks. Advance", "venue": "Neural Inf. Process. Sys.,", "year": 2016 }, { "authors": [ "Yuhong Yang", "Andrew Barron" ], "title": "Information-theoretic determination of minimax rates of convergence", "venue": "Ann. Stat., pp", "year": 1999 }, { "authors": [ "Ming Yuan", "Yi Lin" ], "title": "Model selection and estimation in regression with grouped variables", "venue": "J. R. Stat. Soc. Ser. B Methodol.,", "year": 2006 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "arXiv preprint arXiv:1611.03530,", "year": 2016 }, { "authors": [ "Hang Zhao", "Orazio Gallo", "Iuri Frosio", "Jan Kautz" ], "title": "Loss functions for image restoration with neural networks", "venue": "IEEE Trans. Comput.,", "year": 2016 }, { "authors": [ "Lei Zhao", "Qinghua Hu", "Wenwu Wang" ], "title": "Heterogeneous feature selection with multi-modal deep neural networks and sparse group LASSO", "venue": "IEEE Trans. Multimed.,", "year": 1936 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural networks have been successfully applied in modeling nonlinear regression functions in various domains of applications. A critical evaluation metric for a predictive learning model is to measure its statistical risk bound. For example, the L1 or L2 risks of typical parametric models such as linear regressions are at the order of (d/n)1/2 for small d (Seber & Lee, 2012), where d and n denote respectively the input dimension and number of observations. Obtaining the risk bound for a nonparametric regression model such as neural networks is highly nontrivial. It involves an approximation error (or bias) term as well as a generalization error (or variance) term. The standard analysis of generalization error bounds may not be sufficient to describe the overall predictive performance of a model class unless the data is assumed to be generated from it. For the model class of two-layer feedforward networks and a rather general data-generating process, Barron (1993; 1994) proved an approximation error bound of O(r−1/2) where r denotes the number of neurons. The author further developed a statistical risk error bound of O((d/n)1/4), which is the tightest statistical risk bound for the class of two-layer neural networks up to the authors’ knowledge (for d < n). This risk bound is based on an optimal bias-variance tradeoff involving an deliberate choice of r. Note that the risk is at a convergence rate much slower than the classical parametric rate. We will tackle the same problem from a different perspective, and obtain a much tighter risk bound.\nA practical challenge closely related to statistical risks is to select the most appropriate neural network architecture for a particular data domain (Ding et al., 2018). For two-layer neural networks, this is equivalent to selecting the number of hidden neurons r. While a small r tends to underfit, researchers have observed that the network is not overfitting even for moderately large r. Nevertheless, recent research has also shown that an overly large r (e.g., when r > n) does cause overfitting with high probability (Zhang et al., 2016). It can be shown under some non-degeneracy conditions that a two-layer neural network with more than n hidden neurons can perfectly fit n arbitrary data, even in the presence of noise, which inevitably leads to overfitting. A theoretical choice of r suggested by the asymptotic analysis in (Barron, 1994) is at the order of (n/d)1/2, and a practical choice of r is often from cross-validation with an appropriate splitting ratio (Ding et al., 2018). An alternative perspective that we advocate is to learn from a single neural network with sufficiently many neurons and an appropriate L1 regularization on the neuron coefficients, instead of performing a selection from multiple candidate neural models. A potential benefit of this approach is easier hardware\nimplementation and computation since we do not need to implement multiple models separately. Perhaps more importantly, this perspective of training enables much tighter risk bounds, as we will demonstrate. In this work, we focus on the model class of two-layer feedforward neural networks.\nOur main contributions are summarized below. First, we prove that L1 regularization on the coefficients of the output layer can produce a risk bound O((d/n)1/2) (up to a logarithmic factor) under the L1 training loss, which approaches the minimax optimal rate. Such a rate has not been established under the L2 training loss so far. 
The result indicates a potential benefit of using L1 regularization for training a neural network, instead of selecting from a number of neurons. Additionally, a key ingredient of our result is a unique amalgamation of dimension-based and norm-based risk analysis, which may be interesting on its own right. The technique leads to an interesting observation that an excessively large r can reduce approximation error while not increasing generalization error under L1 regularizations. This implies that an explicit regularization can eliminate overfitting even when the specified number of neurons is enormous. Moreover, we prove that the L1 regularization on the input layer can induce sparsity by producing a risk bound that does not involve d, where d may be much larger compared with the true number of significant variables.\nRelated work on neural network analysis. Despite the practical success of neural networks, a systematic understanding of their theoretical limit remains an ongoing challenge and has motivated research from various perspectives. Cybenko (1989) showed that any continuous function could be approximated arbitrarily well by a two-layer perceptron with sigmoid activation functions. Barron (1993; 1994) established an approximation error bound of using two-layer neural networks to fit arbitrary smooth functions and their statistical risk bounds. A dimension-free Rademacher complexity for deep ReLU neural networks was recently developed (Golowich et al., 2017; Barron & Klusowski, 2019). Based on a contraction lemma, a series of norm-based complexities and their corresponding generalization errors are developed (Neyshabur et al., 2015, and the references therein). Another perspective is to assume that the data are generated by a neural network and convert its parameter estimation into a tensor decomposition problem through the score function of the known or estimated input distribution (Anandkumar et al., 2014; Janzamin et al., 2015; Ge et al., 2017; Mondelli & Montanari, 2018). Also, tight error bounds have been established recently by assuming that neural networks of parsimonious structures generate the data. In this direction, Schmidt-Hieber (2017) proved that specific deep neural networks with few non-zero network parameters can achieve minimax rates of convergence. Bauer & Kohler (2019) developed an error bound that is free from the input dimension, by assuming a generalized hierarchical interaction model.\nRelated work on L1 regularization. The use of L1 regularization has been widely studied in linear regression problems (Hastie et al., 2009, Chapter 3). The use of L1 regularization for training neural networks has been recently advocated in deep learning practice. A prominent use of L1 regularization was to empirically sparsify weight coefficients and thus compress a network that requires intensive memory usage (Cheng et al., 2017). The extension of L1 regularization to groupL1 regularization (Yuan & Lin, 2006) has also been extensively used in learning various neural networks (Han et al., 2015; Zhao et al., 2015; Wen et al., 2016; Scardapane et al., 2017). Despite the above practice, the efficacy of L1 regularization in neural networks deserves more theoretical study. In the context of two-layer neural networks, we will show that the L1 regularizations in the output and input layers play two different roles: the former for reducing generalization error caused by excessive neurons while the latter for sparsifying input signals in the presence of substantial redundancy. 
Unlike previous theoretical work, we consider the L1 loss, which ranks among the most popular loss functions in, e.g., learning from ordinal data (Pedregosa et al., 2017) or imaging data (Zhao et al., 2016), and for which the statistical risk has not been studied previously. In practice, the use of L1 loss for training has been implemented in prevalent computational frameworks such as Tensorflow (Google, 2016), Pytorch (Ketkar, 2017), and Keras (Gulli & Pal, 2017)." }, { "heading": "2 PROBLEM FORMULATION", "text": "" }, { "heading": "2.1 MODEL ASSUMPTION AND EVALUATION", "text": "Suppose we have n labeled observations {(xi, yi)}i=1,...,n, where yi’s are continuously-valued responses or labels. We assume that the underlying data generating model is yi = f∗(xi) + εi for some unknown function f∗(·), where xi’s ∈ X ⊂ Rd are independent and identically distributed,\nand εi’s are independent and identically distributed that is symmetric at zero and\nE (ε2i | xi) ≤ τ2. (1)\nHere, X is a bounded set that contains zero, for example {x : ‖x‖∞ ≤ M} for some constant M . Our goal is learn a regression model f̂n : x 7→ f̂n(x) for prediction. The f̂n is obtained from the following form of neural networks\nr∑ j=1 ajσ(w > j x+ bj) + a0, (2)\nwhere a0, aj , bj ∈ R, wj ∈ Rd, j = 1, . . . , r, are parameters to estimate. We let a = [a0, a1, . . . , ar]\nT denote the output layer coefficients. An illustration is given Figure 1. The estimation is typically accomplished by minimizing the empirical risk n−1 ∑n i=1 `(yi, f(xi)), for some loss function l(·) plus a regularization term. We first consider the L1 regularization at the output layer. In particular, we search for such f by the empirical risk minimization from the function class\nFV = { f : Rd → R ∣∣∣f(x) = r∑ j=1 ajσ(w > j x+ bj) + a0, ‖a‖1 ≤ V } (3)\nwhere V is a constant. The following statistical risk measures the predictive performance of a learned model f :\nR(f) ∆= E `(y, f(x))− E `(y, f∗(x)).\nThe loss function `(·) is pre-determined by data analysts, usually the L1 loss defined by `(y, ỹ) = |y − ỹ| or the L2 loss defined by `2(y, ỹ) = (y − ỹ)2. Under the L1 loss, the risk is R(f) = E |f∗(x) + ε − f(x)| − E |ε|, which is nonnegative for symmetric random variables ε. It is typical to use the same loss function for both training and evaluation." }, { "heading": "2.2 NOTATION", "text": "Throughout the paper, we use n, d, k, r to denote the number of observations, the number of input variables or input dimension, the number of significant input variables or sparsity level, the number of neurons (or hidden dimension), respectively. We write an & bn, bn . an, or bn = O(an), if |bn/an| < c for some constant c for all sufficiently large n. We write an bn if an & bn as well as an . bn. Let N (µ, V ) denote Gaussian distribution with mean µ and covariance V . Let ‖ · ‖1 and ‖ · ‖2 denote the common L1 and L2 vector norms, respectively. Let X denote the essential support of X . For any vector z ∈ Rd, we define ‖z‖X ∆ = supx∈X |x>z|, which may or may not be infinity. If X = {x : ‖x‖∞ ≤ M}, ‖z‖X is equivalent to M‖z‖1. Throughout the paper, f̂n denotes the estimated regression function with n being the number of observations." }, { "heading": "2.3 ASSUMPTIONS AND CLASSICAL RESULTS", "text": "We introduce some technical assumptions necessary for our analysis, and state-of-the-art statistical risk bounds built through dimension-based complexity analysis.\nAssumption 1. 
The activation function σ(·) is a bounded function on the real line satisfying σ(x)→ 1 as x→∞ and σ(x)→ 0 as x→ −∞, and it is L-Lipschitz for some constant L. Assumption 2. The regularization constant V is larger than 2C + f∗(0), where C is any constant such that the Fourier transform of f∗, denoted by F , satisfies∫\nRd ‖ω‖XF (dω) ≤ C. (4)\nAssumption 3. σ(x) approaches its limits at least polynomially fast, meaning that |σ(x) − 1{x > 0}| < ε for all |x| > xε where xε is a polynomial of 1/ε. Also, the value of η ∆ = supj ‖wj‖X scales with n polynomially meaning that log η = O(log n) as n→∞. Assumption 4. There exists a constant c > 0 and a bounded subset S ⊂ R such that P(X ∈ S) > c and infx∈S σ′(x) > c for X ∼ N (0, 1).\nWe explain each assumption below. The above notation of C, V follow those in (Barron, 1993; 1994). Assumption 1 specifies the class of the activation functions we consider. A specific case is the popular activation function σ(x) = 1/{1+exp(−x)}. Assumption 2, first introduced in (Barron, 1993), specifies the smoothness condition for f∗ to ensure the approximation property of neural networks (see Theorem 2.1). In Assumption 3, the condition for w is for technical convenience. It could also be replaced with the following alternative condition: There exists a constant c > 0 such that the distribution of x satisfies\nsup w:‖w‖2=1\nP ( log(|w>x|) < c log ε ) < ε\nfor any ε ∈ (0, 1). Simply speaking, the input data x is not too small with high probability. This condition is rather mild. For example, it holds when each component of x has a a bounded density function. This alternative condition ensures that for some small constant ε > 0 and any w ∈ Rd, there exists a surrogate of w, ŵ ∈ Rd with log ‖ŵ‖2 = O(− log ε), such that\nP(|σ(w>x)− σ(ŵ>x)| > ε) < ε. And this can be used to surrogate the assumption of w in Assumption 3 throughout the proofs in the appendix. Assumption 4 means that σ(·) is not a nearly-constant function. This condition is only used to bound the minimax lower bound in Theorem 3.2. Theorem 2.1 (Approximation error bound (Barron, 1993)). Suppose that Assumptions 1, 2, 3 hold. We have\ninf f∈FV {∫ X (f(x)− f∗(x))2µ(dx) }1/2 ≤ 2C ( 1√ r + δη ) ,\nwhere µ denotes a probability measure on X,\nδη = inf 0<ε<1/2\n{ 2ε+ sup\n|x|>ε ∣∣σ(ηx)− 1{x > 0}∣∣}, (5) η is defined in Assumption 3, and C is defined in (4). Theorem 2.2 (Statistical risk bound (Barron, 1994)). Suppose that Assumptions 1, 2, 3 hold. Then the L2 estimator f̂n in FV satisfies E {f̂n(x) − f∗(x)}2 . V 2/r + (rd log n)/n. In particular, if we choose r V √ n/(d log n), then E {f̂n(x)− f∗(x)}2 . V √ (d log n)/n.\nIt is known that a typical parametric rate under theL2 loss is at the order ofO(d/n), much faster than the above result. This gap is mainly due to excessive model complexity in bounding generalization errors. We will show in Section 3 that the gap in the rate of convergence can be filled when using L1 loss. Our technique will be based on the machinery of Rademacher complexity, and we bound this complexity through a joint analysis of the norm of coefficients (‘norm-based’) as well as dimension of parameters (‘dimension-based’)." }, { "heading": "2.4 MODEL COMPLEXITY AND GENERALIZATION ERROR", "text": "The statistical risk consists of two parts. The first part is an approximation error term non-increasing in the number of neurons r, and the second part describes generalization errors. 
The key issue for\nrisk analysis is to bound the second term using a suitable model complexity and then tradeoff with the first term. We will develop our theory based on the following measure of complexity.\nLet F denote a class of functions each mapping from X to R, and x1, x2, . . . , xn ∈ X. Following a similar terminology as in (Neyshabur et al., 2015), the Rademacher complexity, or simply ‘complexity’, of a function class F is defined by E supf∈F |n−1 ∑n i=1 ξif(xi)|, where ξi, i = 1, 2, . . . , n are independent symmetric Bernoulli random variables.\nLemma 2.3 (Rademacher complexity of FV ). Suppose that Assumptions 1, 3 hold. Then for the Rademacher complexity of FV , we have\nE sup f∈FV ∣∣∣∣ 1n n∑ i=1 ξif(xi) ∣∣∣∣ . V√d log n√n . (6) The proof is included in Appendix A.1. The bound in (6) is derived from an amalgamation of dimension-based and norm-based analysis elaborated in the appendix. It is somewhat surprising that the bound does not explicitly involve the approximation error part (that depends on r and η). This Rademacher complexity bound enables us to derive tight statistical risk bounds in the following section." }, { "heading": "3 MAIN RESULTS", "text": "" }, { "heading": "3.1 STATISTICAL RISK BOUND FOR THE L1 REGULARIZED NETWORKS IN (3)", "text": "Theorem 3.1 (Statistical risk bound). Suppose that Assumptions 1, 2, 3 hold. Then the constrained" }, { "heading": "L1 estimator f̂n over FV satisfies", "text": "R(f̂n) . (\n1√ r + δη\n) C + V √ d log n+ τ√\nn , (7)\nwhere δη is defined in (5), and τ was introduced in (1). Moreover, choosing the parameters r, η large enough, we have\nR(f̂n) . V √ d log n+ τ√\nn . (8)\nThe proof is in Appendix A.2. We briefly explain our main idea in deriving the risk bound (7). A standard statistical risk bound contains two parts which correspond to the approximation error and generalization error, respectively. The approximation error part in (7) is the first term, which involves the hidden dimension r and the norm of input coefficients through η. This observation motivates us to use the norm of output-layer coefficients through V and the input dimension d to derive a generalization error bound. In this way, the generalization error term does not involve r already used for bounding the approximation error, and thus a bias-variance tradeoff through r is avoided. This thought leads to the generalization error part in (7), which is the second term involving V and d. Its proof combines the machinery of both dimension-based and norm-based complexity analysis. From our analysis, the error bound in Theorem 3.1 is a consequence of the L1 loss function and the employed L1 regularization. In comparison with the previous result of Theorem 2.2, the bound obtained in Theorem 3.1 is tight and it approaches the parametric rate √ d/n for the d < n regime. Though we can only prove for L1 loss in this work, we conjecture that the same rate is achieved using L2 loss.\nIn the following, we further show that the above risk bound is minimax optimal. The minimax optimality indicates that deep neural networks with more than two layers will not perform much better than shallow neural networks when the underlying regression function belongs to FV .\nTheorem 3.2 (Minimax risk bound). Suppose that Assumptions 1 and 4 hold, and x1, x2, . . . , xn iid∼ N (0, Id), then inf f̂n supf∈FV R(f̂n(x)) & V √ d/n.\nHere the FV is the same one as defined in (3). All the smooth functions f∗(·) that satisfy V > 2C+f∗(0) and (4) belong to FV according to Theorem 2.1. 
The proof is included in Appendix A.3." }, { "heading": "3.2 ADAPTIVENESS TO THE INPUT SPARSITY", "text": "It is common to input a large dimensional signal to a neural network, while only few components are genuinely significant for prediction. For example, in environmental science, high dimensional weather signals are input for prediction while few are physically related (Shi et al., 2015). In image processing, the image label is relevant to few background pixels (Han et al., 2015). In natural language processing, a large number of redundant sentences sourced from Wikipedia articles are input for language prediction (Diao et al., 2019). The practice motivates our next results to provide a tight risk bound for neural networks whose input signals are highly sparse. Assumption 5. There exists a positive integer k ≤ d and an index set S ⊂ {1, . . . , d}with card(S) = k, such that f∗(x) = g∗(xS) for some function g∗(·) with probability one.\nThe subset S is generally unknown to data analysts. Nevertheless, if we know k, named the sparsity level, the risk bound could be further improved by a suitable regularization on the input coefficients. We have the following result where d is replaced with k in the risk bound of Theorem 3.1. Proposition 3.3. Suppose that that Assumptions 1, 2, 3, 5 hold. Suppose that f̂n is the L1 estimator over the following function class{\nf : Rd → R ∣∣∣f(x) = r∑\nj=1\najσ(w > j x+ bj) + a0, ‖a‖1 ≤ V, sup j ‖wj‖0 ≤ k\n} .\nThenR(f̂n) . √ {k log(dn)}/n.\nThe proof is included in Appendix A.4. The above statistical risk bound is also minimax optimal according to a similar argument in Theorem 3.2. From a practical point of view, the above L0 constraint is usually difficult to implement, especially for a large input dimension d. Alternatively, one may impose an L1 constraint instead of an L0 constraint on the input coefficients. Our next result is concerned with the risk bound when the model is learned from a joint regularization on the output and input layers. For technical convenience, we will assume that X is a bounded set. Theorem 3.4. Consider the following function class of two-layer neural networks\nFV,η = { f : Rd → R ∣∣∣f(x) = r∑ j=1 ajσ(w > j x+ bj) + a0, ‖a‖1 ≤ V, sup 1≤j≤r (‖wj‖1 + |bj |) ≤ η } .\nSuppose that V & C, where C is defined in (4). Then the constrained L1 estimator f̂n over FV,η satisfies\nR(f̂n) . C (\n1√ r + δη\n) + V η + τ√\nn ,\nwhere δη is defined in (5). In particular, choosing r large enough, we have\nR(f̂n) . Cδη + V η + τ√\nn\nwhich does not involve the input dimension d and the number of hidden neurons r. Moreover, suppose that σ(x) = 1/(1 + e−x), η ( n log2 n )1/3 , thenR(f̂n) . V { (log n)/n }1/3 .\nThe proof is included in Appendix A.5. In the above result, the risk bound is at the order of O(n−1/3), which is slower than the O(n−1/2) in the previous Theorem 3.1 and Proposition 3.3 if ignoring d and logarithmic factors of n. However, for a large input dimension d that is even much larger than n, the bound can be much tighter than the previous bounds since it is dimension-free." }, { "heading": "4 CONCLUSION AND FURTHER REMARKS", "text": "We studied the tradeoff between model complexity and statistical risk in two-layer neural networks from the explicit regularization perspective. We end our paper with two future problems. First, in Theorem 3.4, For a small d, the order of n−1/3 seems to be an artifact resulting from our technical arguments. 
We conjecture that in the small d regime, this risk bound could be improved toO(n−1/2) by certain adaptive regularizations. Second, it would be interesting to emulate the current approach to yield similarly tight risk bounds for deep forward neural networks." }, { "heading": "A APPENDIX", "text": "A.1 PROOF OF LEMMA 2.3\nWe first prove (6), which uses an amalgamation of dimension-based and norm-based analysis. For the output layer, we use the following norm-based analysis\nE sup f∈FV ∣∣∣∣ 1n n∑ i=1 ξif(zi) ∣∣∣∣ = E sup f∈FV |〈a, 1 n n∑ i=1 ξiσ(W >zi + b)〉| (9)\n≤ sup ‖a‖1E sup f∈FV ∥∥∥∥ 1n n∑ i=1 ξiσ(W >zi + b) ∥∥∥∥ ∞ ≤ V E sup f∈FV max j ∣∣∣∣ 1n n∑ i=1 ξiσ(w > j zi + bj) ∣∣∣∣ ≤ V E sup\nw∈Rd ∣∣∣∣ 1n n∑ i=1 ξiσ(w >zi + b) ∣∣∣∣. For notational convenience, we define w0 = 0, b0 = 0, and a0 = σ(0)−1a0σ(w>0 z + b0) so that a0 can be treated in a similar manner as other ai’s. Without loss of generality, we do not separately consider a0 in the following proofs.\nNext, we prove that\nE sup w∈Rd ∣∣∣∣ 1n n∑ i=1 ξiσ(w >zi + b) ∣∣∣∣ . √ d log n n , (10)\nand thus conclude the proof. The proof will be based on an ε-net argument together with the union bound. For any ε, let Wε ⊂ Rd denote the subset\nWε =\n{ w = ε\n2d (i1, i2, . . . , id) : ij ∈ Z, ‖w‖1 ≤ ηn\n} .\nThen, for any w, b, there exists some element ŵ ∈Wε such that\nsup z∈X |σ(w>z + b)− σ(ŵ>z + b̂)| ≤ sup z |(w>z + b)− (ŵ>z + b̂)| ≤ sup z |(w − ŵ)>z|+ |b− b̂|\n≤ ‖w − ŵ‖1 sup z ‖z‖∞ + |b− b̂| ≤ ε,\nwhere b̂ = (ε/2d) b(2db/ε)c and b·c is the floor function. By Bernstein’s Inequality, for any w, b, P ( | 1 n n∑ i=1 ξiσ(w >zi + b)| > t ) ≤ 2 exp { − nt 2 2(1 + t/3) } .\nBy taking the union bound over Wε, and use the fact that log card(Wε) . d log(nd/ε), we obtain\nsup w∈Rd ∣∣∣∣ 1n n∑ i=1 ξiσ(w >zi + b) ∣∣∣∣ . ε+ √ d n log nd ε log 1 δ ,\nwith probability at least 1− δ. Then the desired result is obtained by taking ε ∼ √ (d log n)/n.\nA.2 PROOF OF THEOREM 3.1\nThe proof is based on the following contraction lemma used in (Neyshabur et al., 2015). Lemma A.1 (Contraction Lemma). Suppose that g is L-Lipschitz and g(0) = 0. Then for any function class F mapping from X to R and any set {x1, x2, . . . , xn}, we have\nE sup f∈F ∣∣∣∣ 1n n∑ i=1 ξig(f(xi)) ∣∣∣∣ ≤ 2LE sup f∈F ∣∣∣∣ 1n n∑ i=1 ξif(xi) ∣∣∣∣. (11) With the above lemma, we have the following result.\nLemma A.2. The constrained L1 estimator f̂n over F satisfies\nR(f̂n) ≤ min f∈F E |f(x)− f∗(x)|+ 2E sup f∈F | 1 n n∑ i=1 ξif(zi)|+ 2 √ E y2 n . (12)\nProof. Define the empirical risk as: Rn(f) = E ( 1\nn n∑ i=1 |f∗(xi) + εi − f(xi)| ) − E |ε|. (13)\nSince f̂n minimizes n−1 ∑n i=1 |f∗(xi) + εi − f(xi)| in F , we have\nR(f̂n) ≤ R(f̂n)− {Rn(f̂n)−Rn(f̂)} = {R(f̂n)−Rn(f̂n)}+Rn(f0), (14) where f0 = argminf∈F R(f). We also have\nRn(f0) = R(f0) = min f∈F E (|f∗(x) + ε− f(xi)| − |ε|) ≤ min f∈F E |f(x)− f∗(x)|. (15)\nIn the following, we will analyze the term R(f̂n) − Rn(f̂n) in (14). Let zi’s denote independent and identically distributed copies of xi’s.\nR(f̂n)−Rn(f̂n) = E 1\nn n∑ i=1 { |f̂n(zi)− f∗(zi)− εi| − |f̂n(xi)− f∗(xi)− εi| }\n≤ E sup f∈F\n1\nn n∑ i=1 { |f(zi)− f∗(zi)− εi| − |f(xi)− f∗(xi)− εi| }\n≤ 2E sup f∈F\n1\nn n∑ i=1 ξi|f(zi)− f∗(zi)− εi|,\nwhere ξ1, . . . , ξn are independent and identically distributed symmetric Bernoulli random variables that are independent with zi’s. 
According to Lemma A.1, since g(x) = |x| is 1-Lipschitz and g(0) = 0, we have\nE sup f∈F\n1\nn n∑ i=1 ξi|f(zi)− f∗(zi)− εi| ≤ 2E sup f∈F | 1 n n∑ i=1 ξi(f(zi)− f∗(zi)− εi)|\n≤ 2E sup f∈F ∣∣∣∣ 1n n∑ i=1 ξif(zi) ∣∣∣∣+ 2 √ E y2 n .\nCombining this and (15), we conclude the proof of Lemma A.2.\nProof of Theorem 3.1. The proof of (7) is a direct consequence of Lemma 2.3, Lemma A.2, Theorem 2.1 and the fact that the first moment is no more than the second moment. The proof of (8) follows from the fact that δ(η)→ 0 as η →∞.\nA.3 PROOF OF THEOREM 3.2\nDefine a subclass of FV by F0 = { f : Rd → R ∣∣∣f(x) = V σ(w>x), ‖w‖2 = 1}. In the following, we will prove the minimax bound for FV by analyzing F0. Notice that\nE |σ(w>1 x)− σ(w>2 x)| ≥ E inf u σ′(u) · |w>1 x− w>2 x| · I(w>1 x,w>2 x ∈ S) & ‖w1 − w2‖2.\nLet M1(ε) denote the packing ε-entropy of F0 with L1 distance, then M1(ε) is greater than the packing ε-entropy of Bd1 with L2 distance, which means M1(ε) & d. Let Vk(ε) denote the covering ε-entropy ofF0 with the square root Kullback-Leibler divergence, then according to its relation with the L2 distance shown in (Yang & Barron, 1999), we have\nVk(ε) ≤M2( √ 2ε) . d log 1\nε ,\nwhere M2(ε) denote the packing ε-entropy of FV with L2 loss function. The second inequality is proved in a similar way to the proof of Lemma 2.3, which is omitted here for brevity. Hence, according to (Yang & Barron, 1999, Theorem 1),\ninf f̂n sup f∈FV R(f̂n(x)) ≥ inf f̂n sup f∈F0\nR(f̂n(x)) & V √ d\nn ,\nThis concludes the proof.\nA.4 PROOF OF PROPOSITION 3.3\nTo prove the proposition, it is sufficient to verify the following Rademacher complexity bound E sup ∣∣∣∣ 1n n∑ i=1 ξiσ(w >zi + b) ∣∣∣∣ .√k log d log n, which can be derived easily by adjusting the proof in Lemma 2.3. Then the result follows with a similar analysis as in Theorem 3.1.\nA.5 PROOF OF THEOREM 3.4\nIt can be verified from the identity (9) that\nE sup f∈FV ∣∣∣∣ 1n n∑ i=1 ξif(xi) ∣∣∣∣ ≤ r∑ j=0 E sup f∈FV |aj | ∣∣∣∣ 1n n∑ i=1 ξiσ(w > j xi + bj) ∣∣∣∣. (16) Then according to Lemma A.1, we have\nE sup f∈FV ∣∣∣∣ 1n n∑ i=1 ξiσ(w > j xi + bj) ∣∣∣∣ . √ log n n (‖wj‖X + |bj |). (17)\nCombining (16) and (17), we obtain the following lemma that may be interesting on its own right. Lemma A.3. We have\nE sup f∈FV ∣∣∣∣ 1n n∑ i=1 ξif(xi) ∣∣∣∣ . √ log n n r∑ j=0 |aj |(‖wj‖X + |bj |) . V √ log n n max j ‖wj‖X.\nSince ‖w‖X . ‖w‖1 and {w : ‖w‖X . η} ⊂ {w : ‖w‖1 . η}, the ‖ · ‖X can be replaced with ‖ · ‖1 in the bounds in Lemmas A.3 and A.2. Then, with a similar argument as in the proof of Theorem 3.1, we conclude the proof of Theorem 3.4." } ]
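The proofs above analyze estimators constrained to an L1 ball on the output-layer coefficients. One standard way to realize such a constrained estimator in practice is projected gradient descent; the NumPy sketch below implements the Euclidean projection onto {a : ‖a‖1 ≤ V}, following the well-known sorting-based algorithm of Duchi et al. (2008), a reference outside this paper's bibliography that is included here only as an implementation aid.

import numpy as np

def project_l1_ball(v, V):
    # Euclidean projection of v onto {a : ||a||_1 <= V}.
    if np.abs(v).sum() <= V:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u)
    # largest (0-indexed) rho with u[rho] * (rho + 1) > cssv[rho] - V
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > cssv - V)[0][-1]
    theta = (cssv[rho] - V) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# Example: enforce ||a||_1 <= V after a gradient step.
a_proj = project_l1_ball(np.array([0.9, -0.5, 0.3, 0.0]), V=1.0)
# a_proj is approximately [0.667, -0.267, 0.067, 0.0]; its L1 norm is exactly 1.0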
2020
null
SP:44e95502bce4ea4e27495a27aa1bf56e962ca6fd
[ "The authors work in the domain of applying neural networks to combinatorial problems with structured output space, such as sudoku and n-queens. They notice how models currently performing well at this task encounter difficulties when there are multiple possible solutions. They formalize the task of learning any of multiple given (and possibly quite different) labels and propose an RL based approach to solve that task. They show improvements over selected baselines. " ]
Recent research has proposed neural architectures for solving combinatorial problems in structured output spaces. In many such problems, there may exist multiple solutions for a given input, e.g., a partially filled Sudoku puzzle may have many completions satisfying all constraints. Further, we are often interested in finding any one of the possible solutions, without any preference between them. Existing approaches completely ignore this solution multiplicity. In this paper, we argue that being oblivious to the presence of multiple solutions can severely hamper their training ability. Our contribution is twofold. First, we formally define the task of learning one-of-many solutions for combinatorial problems in structured output spaces, which is applicable for solving several problems of interest such as N-Queens and Sudoku. Second, we present a generic learning framework that adapts an existing prediction network for a combinatorial problem to handle solution multiplicity. Our framework uses a selection module, whose goal is to dynamically determine, for every input, the solution that is most effective for training the network parameters in any given learning iteration. We propose an RL-based approach to jointly train the selection module with the prediction network. Experiments on three different domains, and using two different prediction networks, demonstrate that our framework significantly improves the accuracy in our setting, obtaining up to 21 pt gain over the baselines.
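As a schematic of how such joint training could be wired, the PyTorch sketch below lets the selection module induce a distribution over the known solutions of an input, samples one solution to supervise the prediction network, and updates the selector with REINFORCE. The reward (negative prediction loss), the function signatures, and the assumption that the two networks share no parameters are all illustrative choices, not the paper's actual design.

import torch

def joint_training_step(pred_net, sel_net, opt_pred, opt_sel,
                        loss_fn, x, candidates):
    # sel_net assigns a 1-D vector of scores, one per known solution of x
    probs = torch.softmax(sel_net(x, candidates), dim=0)
    idx = int(torch.multinomial(probs, 1))

    # supervised update of the prediction network on the sampled solution only
    loss = loss_fn(pred_net(x), candidates[idx])
    opt_pred.zero_grad(); loss.backward(); opt_pred.step()

    # REINFORCE update of the selector; the reward choice here (negative
    # prediction loss) is an assumption made for this sketch
    reward = -loss.detach()
    sel_loss = -reward * torch.log(probs[idx] + 1e-8)
    opt_sel.zero_grad(); sel_loss.backward(); opt_sel.step()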
[ { "affiliations": [], "name": "OUTPUT SPACES" }, { "affiliations": [], "name": "Yatin Nandwani" }, { "affiliations": [], "name": "Deepanshu Jindal" } ]
[ { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Rudy Bunel", "Matthew J. Hausknecht", "Jacob Devlin", "Rishabh Singh", "Pushmeet Kohli" ], "title": "Leveraging grammar and reinforcement learning for neural program synthesis", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Vivien Cabannes", "Alessandro Rudi", "Francis Bach" ], "title": "Structured prediction with partial labelling through the infimum loss", "venue": "CoRR, abs/2003.00920,", "year": 2020 }, { "authors": [ "Timothée Cour", "Benjamin Sapp", "Ben Taskar" ], "title": "Learning from partial labels", "venue": "J. Mach. Learn. Res.,", "year": 2011 }, { "authors": [ "Emily Denton", "Rob Fergus" ], "title": "Stochastic video generation with a learned prior", "venue": "Proceedings of the 35th International Conference on Machine Learning, ICML 2018,", "year": 2018 }, { "authors": [ "Jacob Devlin", "Jonathan Uesato", "Surya Bhupatiraju", "Rishabh Singh", "Abdel-rahman Mohamed", "Pushmeet Kohli" ], "title": "Robustfill: Neural program learning under noisy I/O", "venue": "In Proceedings of the 34th International Conference on Machine Learning, ICML 2017,", "year": 2017 }, { "authors": [ "Honghua Dong", "Jiayuan Mao", "Tian Lin", "Chong Wang", "Lihong Li", "Denny Zhou" ], "title": "Neural logic machines", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Richard Evans", "Edward Grefenstette" ], "title": "Learning explanatory rules from noisy data", "venue": "J. Artif. Intell. Res.,", "year": 2018 }, { "authors": [ "Jun Feng", "Minlie Huang", "Li Zhao", "Yang Yang", "Xiaoyan Zhu" ], "title": "Reinforcement learning for relation classification from noisy data. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence", "venue": "URL", "year": 2018 }, { "authors": [ "Lei Feng", "Bo An" ], "title": "Partial label learning with self-guided retraining", "venue": "In The Thirty-Third AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Lei Feng", "Jiaqi Lv", "Bo Han", "Miao Xu", "Gang Niu", "Xin Geng", "Bo An", "Masashi Sugiyama" ], "title": "Provably consistent partial-label learning", "venue": "In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems", "year": 2020 }, { "authors": [ "Mikael Henaff", "Junbo Jake Zhao", "Yann LeCun" ], "title": "Prediction under uncertainty with error-encoding", "venue": "networks. CoRR,", "year": 2017 }, { "authors": [ "Rong Jin", "Zoubin Ghahramani" ], "title": "Learning with multiple labels", "venue": "Advances in Neural Information Processing Systems 15 [Neural Information Processing Systems,", "year": 2002 }, { "authors": [ "Yogesh S. 
Mahajan", "Zhaohui Fu", "Sharad Malik" ], "title": "Zchaff2004: An efficient SAT solver", "venue": "Theory and Applications of Satisfiability Testing, 7th International Conference,", "year": 2004 }, { "authors": [ "Gary McGuire", "Bastian Tugemann", "Gilles Civario" ], "title": "There is no 16-clue sudoku: Solving the sudoku minimum number of clues problem via hitting set enumeration", "venue": "Experimental Mathematics,", "year": 2012 }, { "authors": [ "Pasquale Minervini", "Matko Bošnjak", "Tim Rocktäschel", "Sebastian Riedel", "Edward Grefenstette" ], "title": "Differentiable reasoning on large knowledge bases and natural language", "venue": "In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence", "year": 2020 }, { "authors": [ "Ramesh Nallapati", "Feifei Zhai", "Bowen Zhou" ], "title": "Summarunner: A recurrent neural network based sequence model for extractive summarization of documents", "venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Rasmus Berg Palm", "Ulrich Paquet", "Ole Winther" ], "title": "Recurrent relational networks", "venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Kyubyong Park" ], "title": "Can convolutional neural networks crack sudoku puzzles? https://github", "venue": "com/Kyubyong/sudoku,", "year": 2018 }, { "authors": [ "Romain Paulus", "Caiming Xiong", "Richard Socher" ], "title": "A deep reinforced model for abstractive summarization", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Pengda Qin", "Weiran Xu", "William Yang Wang" ], "title": "Robust distant supervision relation extraction via deep reinforcement learning", "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Tim Rocktäschel", "Sameer Singh", "Sebastian Riedel" ], "title": "Injecting logical background knowledge into embeddings for relation extraction", "venue": "In NAACL HLT", "year": 2015 }, { "authors": [ "Gordon Royle" ], "title": "Minimum sudoku", "venue": "sudokumin.php,", "year": 2014 }, { "authors": [ "Adam Santoro", "Ryan Faulkner", "David Raposo", "Jack W. Rae", "Mike Chrzanowski", "Theophane Weber", "Daan Wierstra", "Oriol Vinyals", "Razvan Pascanu", "Timothy P. Lillicrap" ], "title": "Relational recurrent neural networks", "venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Bart Selman", "Henry A. Kautz", "Bram Cohen" ], "title": "Local search strategies for satisfiability testing", "venue": "Proceedings of a DIMACS Workshop,", "year": 1993 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V. Le" ], "title": "Sequence to sequence learning with neural networks", "venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Grigorios Tsoumakas", "Ioannis Katakis" ], "title": "Multi-label classification: An overview", "venue": "IJDWM,", "year": 2007 }, { "authors": [ "Oriol Vinyals", "Alexander Toshev", "Samy Bengio", "Dumitru Erhan" ], "title": "Show and tell: Lessons learned from the 2015 MSCOCO image captioning challenge", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2017 }, { "authors": [ "Po-Wei Wang", "Priya L. 
Donti", "Bryan Wilder", "J. Zico Kolter" ], "title": "Satnet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Ning Xu", "Jiaqi Lv", "Xin Geng" ], "title": "Partial label learning via label enhancement", "venue": "In The Thirty-Third AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Quanzeng You", "Hailin Jin", "Zhaowen Wang", "Chen Fang", "Jiebo Luo" ], "title": "Image captioning with semantic attention", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural networks have become the de-facto standard for solving perceptual tasks over low level representations, such as pixels in an image or audio signals. Recent research has also explored their application for solving symbolic reasoning tasks, requiring higher level inferences, such as neural theorem proving (Rocktäschel et al., 2015; Evans & Grefenstette, 2018; Minervini et al., 2020), and playing blocks world (Dong et al., 2019). The advantage of neural models for these tasks is that it will create a unified, end-to-end trainable representation for integrated AI systems that combine perceptual and high level reasoning. Our paper focuses on one such high level reasoning task – solving combinatorial problems in structured output spaces, e.g., solving a Sudoku or N-Queens puzzle. These can be thought of as Constraint Satisfaction problems (CSPs) where the underlying constraints are not explicitly available, and need to be learned from training data. We focus on learning such constraints by a non-autoregressive neural model where variables in the structured output space are decoded simultaneously (and therefore independently). Notably, most of the current state-of-the-art neural models for solving combinatorial problems, e.g., SATNET (Wang et al., 2019), RRN (Palm et al., 2018), NLM (Dong et al., 2019), work with non autoregressive architectures because of their high efficiency of training and inference, since they do not have to decode the solution sequentially.\nOne of the key characteristics of such problems is solution multiplicity – there could be many correct solutions for any given input, even though we may be interested in finding any one of these solutions. For example, in a game of Sudoku with only 16 digits filled, there are always multiple correct solutions (McGuire et al., 2012), and obtaining any one of them suffices for solving Sudoku. Unfortunately, existing literature has completely ignored solution multiplicity, resulting in sub-optimally trained\n∗Equal contribution. Work done while at IIT Delhi. Current email: deepanshu.jindal@alumni.iitd.ac.in\nnetworks. Our preliminary analysis of a state-of-the-art neural Sudoku solver (Palm et al., 2018)1, which trains and tests on instances with single solutions, showed that it achieves a high accuracy of 96% on instances with single solution, but the accuracy drops to less than 25%, when tested on inputs that have multiple solutions. Intuitively, the challenge comes from the fact that (a) there could be a very large number of possible solutions for a given input, and (b) the solutions may be highly varied. For example, a 16-givens Sudoku puzzle could have as many as 10,000 solutions, with maximum hamming distance between any two solutions being 61. Hence, we argue that an explicit modeling effort is required to represent this solution multiplicity.\nAs the first contribution of our work, we formally define the novel problem of One-of-Many Learning (1oML). It is given training data of the form {(xi,Yxi)}, where Yxi denotes a subset of all correct outputs Yxi associated with input xi. The goal of 1oML is to learn a function f such that, for any input x, f(x) = y for some y ∈ Yx. We show that a naïve strategy that uses separate loss terms for each (xi,yij) pair where yij ∈ Yxi can result in a bad likelihood objective. Next, we introduce a multiplicity aware loss (CC-LOSS) and demonstrate its limitations for non-autoregressive models on structured output spaces. 
In response, we present our first-cut approach, MINLOSS, which picks the single yij closest to the prediction ŷi based on the current parameters of the prediction network (the base architecture for the function f), and uses it to compute and back-propagate the loss for that training sample xi. Though significantly better than naïve training, we demonstrate through a simple example that MINLOSS can be sub-optimal in certain scenarios, due to its inability to pick a yij based on global characteristics of the solution space.

To alleviate the issues with MINLOSS, we present two exploration based techniques, I-EXPLR and SELECTR, that select a yij in a non-greedy fashion, unlike MINLOSS. Both techniques are generic in the sense that they can work with any prediction network for the given problem. I-EXPLR relies on the prediction network itself for selecting yij, whereas SELECTR is an RL based learning framework which uses a selection module to decide which yij should be picked for a given input xi, for back-propagating the loss in the next iteration. SELECTR's selection module is trained jointly along with the prediction network using reinforcement learning, thus allowing us to trade off exploration and exploitation in selecting the optimum yij by learning a probability distribution over the space of possible yij's for any given input xi.

We experiment on three CSPs: N-Queens, Futoshiki, and Sudoku. Our prediction networks for the first two problems are constructed using Neural Logic Machines (Dong et al., 2019), and for Sudoku, we use a state-of-the-art neural solver based on Recurrent Relational Networks (Palm et al., 2018). In all three problems, our experiments demonstrate that SELECTR vastly outperforms naïve baselines by up to 21 pts, underscoring the value of explicitly modeling solution multiplicity. SELECTR also consistently improves on other multiplicity aware methods, viz. CC-LOSS, MINLOSS, and I-EXPLR." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "Related ML Models: There are a few learning scenarios within weak supervision which may appear similar to the setting of 1oML, but are actually different from it. We first discuss them briefly. ‘Partial Label Learning’ (PLL) (Jin & Ghahramani, 2002; Cour et al., 2011; Xu et al., 2019; Feng & An, 2019; Cabannes et al., 2020) involves learning from training data where, for each input, a noisy set of candidate labels is given, amongst which only one label is correct. This is different from 1oML, in which there is no training noise and all the solutions in the solution set Yx for a given x are correct. Though some of the recent approaches to tackle ambiguity in PLL (Cabannes et al., 2020) may be similar to our methods, i.e., MINLOSS, in the way they decide which solution in the target set should be picked next for training, the motivations are quite different. Similarly, in the older work by Jin & Ghahramani (2002), the EM model, where the loss for each candidate is weighted by the probability assigned to that candidate by the model itself, can be seen as a naïve exploration based approach, applied to a very different setting. In PLL, the objective is to select the correct label out of many incorrect ones to reduce training noise, whereas in 1oML, selecting only one label for training provably improves learnability and there is no question of reducing noise as all the labels are correct. 
Further, most of the previous work on PLL considers classification over a discrete output space with, say, L labels, whereas in 1oML, we work with structured output spaces, e.g., an r dimensional vector space where each dimension represents a discrete space of L labels. This exponentially increases the size of the output space, making it intractable to enumerate all possible solutions, as is typically done in existing approaches for PLL (Jin & Ghahramani, 2002).
1Available at https://data.dgl.ai/models/rrn-sudoku.pkl

Within weak supervision, work on the ‘Multi-Instance Learning’ (MIL) approach for Relation Extraction (RE) employs a selection module to pick a set of sentences to be used for training a relation classifier, given a set of noisy relation labels (Feng et al., 2018; Qin et al., 2018). This is different from our setting, where multiplicity is associated with any given input, not with a class (relation).

Other than weak supervision, 1oML should also not be confused with problems in the space of multi-label learning (Tsoumakas & Katakis, 2007). In multi-label learning, given a solution set Yx for each input x, the goal is to correctly predict each possible solution in the set Yx for x. Typically, a classifier is learned for each of the possible labels separately. On the other hand, in 1oML, the objective is to learn any one of the correct solutions for a given input, and a single classifier is learned. The characteristics of the two problems are quite different, and hence, so are the solution approaches. As we show later, the two settings lead to requirements for different kinds of generalization losses.

Solution Multiplicity in Other Settings: There is some prior work related to our problem of solution multiplicity, albeit in different settings. An example is the task of video prediction, where there can be multiple next frames (yij) for a given partial video xi (Henaff et al., 2017; Denton & Fergus, 2018). The multiplicity of solutions here arises from the underlying uncertainty rather than as an inherent characteristic of the domain itself. Current approaches model the final prediction as a combination of a deterministic part oblivious to uncertainty, and a non-deterministic part caused by uncertainty. There is no such separation in our case since each solution is inherently different from the others.

Another line of work that comes close to ours is the task of Neural Program Synthesis (Devlin et al., 2017; Bunel et al., 2018). Given a set of Input-Output (IO) pairs, the goal is to generate a valid program conforming to the IO specifications. For a given IO pair, there could be multiple valid programs, and often, training data may only have one (or a few) of them. Bunel et al. (2018) propose a solution where they define an alternate RL based loss using the correctness of the generated program on a subset of held-out IO pairs as reward. In our setting, in the absence of the constraints (or rules) of the CSP, there is no such additional signal available for training outside the subset of targets Yx for an input x.

It would also be worthwhile to mention other tasks such as Neural Machine Translation (Bahdanau et al., 2015; Sutskever et al., 2014), Summarization (Nallapati et al., 2017; Paulus et al., 2018), Image Captioning (Vinyals et al., 2017; You et al., 2016), etc., where one would expect multiple valid solutions for any given input. E.g., for a given sentence in language A, there could be multiple valid translations in language B. 
To the best of our knowledge, existing literature ignores solution multiplicity in such problems, and simply trains on all possible given labels for any given input.\nModels for Symbolic Reasoning: Our work follows the line of recent research, which proposes neural architectures for implicit symbolic and relational reasoning problems (Santoro et al., 2018; Palm et al., 2018; Wang et al., 2019; Dong et al., 2019). We experiment with two architectures as base prediction networks: Neural Logic Machines (NLMs) (Dong et al., 2019), and Recurrent Relational Networks (RRNs) (Palm et al., 2018). NLMs allow learning of first-order logic rules expressed as Horn Clauses over a set of predicates, making them amenable to transfer over different domain sizes. The rules are instantiated over a given set of objects, where the groundings are represented as tensors in the neural space over which logical rules operate. RRNs use a graph neural network to learn relationships between symbols represented as nodes in the graph, and have been shown to be good at problems that require multiple steps of symbolic reasoning." }, { "heading": "3 THEORY AND ALGORITHM", "text": "" }, { "heading": "3.1 PROBLEM DEFINITION", "text": "Notation: Each possible solution (target) for an input (query) x is denoted by an r-dimensional vector y ∈ Vr, where each element of y takes values from a discrete space denoted by V . Let Y = Vr, and let Yx denote the set of all solutions associated with input x. We will use the term solution multiplicity to refer to the fact that there could be multiple possible solutions y for a given input x. In our setting, the solutions in Yx span a structured combinatorial subspace of Vr, and can be thought of as representing solutions to an underlying Constraint Satisfaction Problem (CSP). For example in N-Queens, x would denote a partially filled board, and y denote a solution for the input board.\nGiven a set of inputs xi along with a subset of associated solutions Yxi ⊆ Yxi , i.e., given a set of (xi,Yxi) pairs, we are interested in learning a mapping from x to any one y among many possible solutions for x. Formally, we define the One-of-Many-Learning (1oML) problem as follows. Definition 1. Given training data D of the form, {(xi,Yxi)}mi=1, where Yxi denotes a subset of solutions associated with input xi, and m is the size of training dataset, One-of-Many-Learning (1oML) is defined as the problem of learning a function f such that, for any input x, f(x) = y for some y ∈ Yx, where Yx is the set of all solutions associated with x.\nWe use parameterized neural networks to represent our mapping function. We use MΘ to denote a non-autoregressive network M with associated set of parameters Θ. We use ŷi (ŷ) to denote the network output corresponding to input xi (x), i.e., ŷi (ŷ) is the arg max of the learnt conditional distribution over the output space Y given the input xi (x). We are interested in finding a Θ∗ that solves the 1oML problem as defined above. Next, we consider various formulations for the same." }, { "heading": "3.2 OBJECTIVE FUNCTION", "text": "Naïve Objective: In the absence of solution multiplicity, i.e. when target set Yxi = {yi}, ∀i, the standard method to train such models is to minimize the total loss, L(Θ) = ∑m i=1 lΘ(ŷi,yi), where lΘ(ŷi,yi) is the loss between the prediction ŷi and the unique target yi for the input xi. We find the optimal Θ∗ as argminΘ L(Θ). 
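To ground the notation, the sketch below (ours, not the paper's released code) builds the flattened N² Boolean representation of Section 3.1 for N-Queens and enumerates the solution set Yx of a partial placement x by brute force; whenever |Yx| > 1, the standard single-target loss above has no unique yi to train against.

```python
import itertools
import numpy as np

def is_valid(cols, n):
    """cols[r] is the column of the queen in row r; check that all pairs are non-attacking."""
    for r1, r2 in itertools.combinations(range(n), 2):
        if cols[r1] == cols[r2] or abs(cols[r1] - cols[r2]) == r2 - r1:
            return False
    return True

def solution_set(x, n):
    """Enumerate Y_x: all valid completions of the partial placement x (dict row -> column)."""
    solutions = []
    for perm in itertools.permutations(range(n)):   # one queen per row and per column
        if all(perm[r] == c for r, c in x.items()) and is_valid(perm, n):
            y = np.zeros(n * n, dtype=np.int64)     # flattened N^2 Boolean vector, as in Sec. 3.1
            y[[r * n + c for r, c in enumerate(perm)]] = 1
            solutions.append(y)
    return solutions

# An 8x8 query with a single placed queen typically admits many completions:
print(len(solution_set({0: 1}, n=8)))               # |Y_x| > 1 for this query
```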
A Naïve extension of this for 1oML would be to sum the loss over all targets in Yx, i.e., minimize the following loss function:\nL(Θ) = 1\nm m∑ i=1 ∑ yij∈Yxi lΘ(ŷi,yij) (1)\nWe observe that loss function in eq. (1) would unnecessarily penalize the model when dealing with solution multiplicity. Even when it is correctly predicting one of the targets for an input xi, the loss with respect to the other targets in Yxi could be rather high, hence misguiding the training process. Example 1 below demonstrates such a case. For illustration, we will use the cross-entropy loss, i.e., lΘ(ŷ,y) = − ∑ k ∑ l 1{y[k] = vl} log(P (ŷ[k] = vl)), where vl ∈ V varies over the elements of V , and k indices over r dimensions in the solution space. y[k] denotes the kth element of y. Example 1. Consider a learning problem over a discrete (Boolean) input space X = {0, 1} and Boolean target space in two dimensions, i.e., Y = Vr = {0, 1}2. Let this be a trivial learning problem where ∀x, the solution set is Yx = {(0, 1), (1, 0)}. Then, given a set of examples {xi,Yxi}, the Naïve objective (with lΘ as cross entropy) will be minimized, when P (ŷi[k] = 0) = P (ŷi[k] = 1) = 0.5, for k ∈ {1, 2}, ∀i, which can not recover either of the desired solutions: (0, 1) or (1, 0).\nThe problem arises from the fact that when dealing with 1oML, the training loss defined in eq. (1) is no longer a consistent predictor of the generalization error as formalized below. Lemma 1. The training loss L(Θ) as defined in eq. (1) is an inconsistent estimator of generalization error for 1oML, when lΘ is a zero-one loss, i.e., lΘ(ŷi,yij) = 1{ŷi 6= yij}. (Proof in Appendix). For the task of PLL, Jin & Ghahramani (2002) propose a modification of the cross entropy loss to tackle multiplicity of labels in the training data. Instead of adding the log probabilities, it maximizes the log of total probability over the given target set. Inspired by Feng et al. (2020), we call it CCLOSS: Lcc(Θ) = − 1m ∑m i=1 log (∑ yij∈Yxi Pr (yij|xi; Θ) )\n. However, in the case of structured prediction, optimizing Lcc requires careful implementation due to its numerical instability (see Appendix). Moreover, for non-autoregressive models, CC-LOSS also suffers from the same issues illustrated in example 1 for naïve objective.\nNew Objective: We now motivate a better objective function based on an unbiased estimator. In general, we would likeMΘ to learn a conditional probability distribution Pr(y|xi; Θ) over the output space Y such that the entire probability mass is concentrated on the desired solution set Yxi , i.e.,∑\nyij∈Yxi Pr(yij|xi; Θ) = 1, ∀i. If such a conditional distribution is learnt, then we can easily sample a yij ∈ Yxi from it. CC-LOSS is indeed trying to achieve this. However, ours being a structured output space, it is intractable to represent all possible joint distributions over the possible solutions in Yxi , especially for non-autoregressive models\n2. 2Autoregressive models may have the capacity to represent certain class of non-trivial joint distributions, e.g., Pr(y[1], y[2]|x) could be modeled as Pr(y[1]|x)Pr(y[2]|y[1];x), but requires sequential decoding during inference. Studying the impact of solution multiplicity on autoregressive models is beyond the current scope.\nHence, we instead design a loss function which forces the model to learn a distribution in which the probability mass is concentrated on any one of the targets yij ∈ Yxi . We call such distributions as one-hot. 
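As an aside on implementation, evaluating Lcc stably requires staying in log space throughout: compute each target's log-probability as a sum of per-dimension log-probabilities and reduce over the target set with log-sum-exp (the Appendix derives this rearrangement). A minimal PyTorch sketch of ours follows; shapes and names are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def cc_loss(logits, targets):
    """logits: (r, L) unnormalized per-dimension scores for one query x.
    targets: (S, r) integer matrix whose rows are the solutions y_ij in Y_x.
    Returns Lcc = -log sum_j Pr(y_ij | x), evaluated stably in log space."""
    log_p = F.log_softmax(logits, dim=-1)                    # (r, L) per-dimension log-probs
    r = logits.shape[0]
    # log Pr(y_ij | x) = sum_k log Pr(y_ij[k] | x): pick each dimension's log-prob and sum
    log_p_targets = log_p[torch.arange(r), targets].sum(-1)  # (S,)
    return -torch.logsumexp(log_p_targets, dim=0)

# Example with Sudoku-like shapes: r = 81 cells, L = 9 digits, S = 5 given solutions
loss = cc_loss(torch.randn(81, 9), torch.randint(0, 9, (5, 81)))
```

Here torch.logsumexp factors out the maximum internally, mirroring the max-probability factoring spelled out in the Appendix. Stability aside, CC-LOSS may still spread probability mass across several targets rather than concentrating it on one of them, which is what the one-hot construction is meant to enforce.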
To do this, we introduce |Yxi| new learnable Boolean parameters, wi, for each query xi in the training data, and correspondingly define the following loss function:

Lw(Θ,w) = (1/m) ∑_{i=1}^{m} ∑_{yij∈Yxi} wij lΘ(ŷi, yij)    (2)

Here, wij ∈ {0, 1} and ∑_j wij = 1, ∀i, where j indexes over solutions yij ∈ Yxi. The last constraint over the Boolean variables wij enforces that exactly one of the weights in wi is 1 and all others are zero.

Lemma 2. Under the assumption Yxi = Yxi, ∀i, the loss L′(Θ) = minw Lw(Θ,w), defined as the minimum value of Lw(Θ,w) (defined in eq. (2)) with respect to w, is a consistent estimator of generalization error for 1oML, when lΘ is a zero-one loss, i.e., lΘ(ŷi,yij) = 1{ŷi ≠ yij}.

We refer to the Appendix for details. Next, we define our new objective as:

min_{Θ,w} Lw(Θ,w)   s.t.   wij ∈ {0, 1}, ∀i, ∀j   and   ∑_{j=1}^{|Yxi|} wij = 1, ∀i = 1 . . .m    (3)" }, { "heading": "3.3 GREEDY FORMULATION: MINLOSS", "text": "In this section, we present one possible way to optimize our desired objective min_{Θ,w} Lw(Θ,w). It alternates between optimizing over the Θ parameters and optimizing over the w parameters. While the Θ parameters are optimized using SGD, the weights w are selected greedily for a given Θ = Θ(t) at each iteration, i.e., a non-zero weight is assigned to the solution corresponding to the minimum loss amongst all the possible yij ∈ Yxi for each i = 1 . . .m:

w(t)_ij = 1{ yij = argmin_{y∈Yxi} lΘ(t)(ŷ(t)_i, y) },  ∀i = 1 . . .m    (4)

This can be done by computing the loss with respect to each target, and picking the one which has the minimum loss. We refer to this approach as MINLOSS. Intuitively, for a given set of Θ(t) parameters, MINLOSS greedily picks the weight vectors w(t)_i, and uses them to get the next set of Θ(t+1) parameters using the SGD update:

Θ(t+1) ← Θ(t) − αΘ ∇Θ Lw(Θ,w) |_{Θ=Θ(t), w=w(t)}    (5)

One significant challenge with MINLOSS is the fact that it chooses the current set of w parameters independently for each example based on the current Θ values. While this way of picking the w parameters is optimal if Θ has reached the optimum, i.e., Θ = Θ∗, it can lead to sub-optimal choices when both Θ and w are being simultaneously trained. The following example illustrates this.

Example 2. Consider a simple task with a one-dimensional continuous input space X ⊂ R, and target space Y = {0, 1}. Consider learning with 10 examples, given as (x = 1, Yx = {1}) (5 examples), (x = −1, Yx = {0, 1}) (4 examples), (x = −2, Yx = {1}) (1 example). The optimal decision hypothesis is given as: y = 1{x > α}, for α ≤ −2, or y = 1{x < β}, for β ≥ 1. Assume learning this with logistic regression using MINLOSS as the training algorithm optimizing the objective in eq. (3). If we initialize the parameters of the logistic model such that the starting hypothesis is given by y = 1{x > 0} (logistic parameters: θ1 = 0.1, θ0 = 0), MINLOSS will greedily pick the target y = 0 for samples with x = −1, repeatedly. This will result in the learning algorithm converging to the decision hypothesis y = 1{x > −0.55}, which is sub-optimal since the input with x = −2 is incorrectly classified (fig. 1, see Appendix for a detailed discussion).

MINLOSS is not able to achieve the optimum since it greedily picks the target for each query xi based on the current set of parameters and gets stuck in local minima. This is addressed in the next section." }, { "heading": "3.4 REINFORCEMENT LEARNING FORMULATION: SELECTR", "text": "In this section, we design a training algorithm that fixes some of the issues observed with MINLOSS. 
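Before turning to the fix, it helps to see how little machinery the greedy step of Section 3.3 needs. The following sketch (ours, with hypothetical names; `loss_fn` is any differentiable per-target loss) implements eqs. (4)–(5) for one training sample:

```python
import torch

def minloss_step(model, optimizer, x, targets, loss_fn):
    """One MINLOSS update. targets: (S, r) tensor of solutions y_ij in Y_x for query x."""
    with torch.no_grad():                        # eq. (4): greedy selection, no gradient needed
        y_hat = model(x)
        losses = torch.stack([loss_fn(y_hat, y) for y in targets])
        j_star = losses.argmin()
    optimizer.zero_grad()
    loss = loss_fn(model(x), targets[j_star])    # eq. (5): back-propagate only through the winner
    loss.backward()
    optimizer.step()
    return loss.item()
```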
Considering Example 2 above, the main problem with MINLOSS is its inability to consider alternate targets which may not be greedily optimal at the current set of parameters. A better strategy would try to explore alternative solutions as a way of reaching better optima; e.g., in Example 2 we could pick, for the input x = −1, the target y = 1 with some non-zero probability, to come out of the local optimum. In the above case, this also happens to be the globally optimal strategy. This is the key motivation for our RL-based strategy proposed below.

A natural question arises: how should we assign the probability of picking a particular target? A naïve approach would use the probability assigned by the underlying MΘ network as a way of deciding the amount of exploration on each target y. We call it I-EXPLR. We argue below why this may not always be an optimal choice.

We note that the amount of exploration required may depend in complex ways on the global solution landscape, as well as the current set of parameters. Therefore, we propose a strategy which makes use of a separate selection module (a neural network) that takes the current example (xi, Yxi) as input, and outputs the probability of picking each target for training Θ in the next iteration. Our strategy is RL-based since we can think of choosing each target (for a given input) as an action that our selection module needs to take. Our selection module is trained using a reward that captures the quality of selecting the corresponding target for training the prediction network. We next describe its details.

Selection Module (Sφ): This is an RL agent or a policy network where the action is to select a target, yij ∈ Yxi, for each xi. Given a training sample, (xi, Yxi), it first internally predicts ŷi_ = MΘ_(xi), using a past copy of the parameters Θ_. This prediction is then fed as an input along with the target set, Yxi, to a latent model, Gφ, which outputs a probability distribution Prφ(yij), ∀yij ∈ Yxi, s.t. ∑_{yij} Prφ(yij) = 1. Sφ then picks a target ȳi ∈ Yxi based on the distribution Prφ(yij) and returns a w̄i such that ∀i, w̄ij = 1 if yij = ȳi, and w̄ij = 0 otherwise.

Update of φ Parameters: The job of the selection module is to pick one target, ȳi ∈ Yxi, for each input xi, for training the prediction network MΘ. If we were given an oracle to tell us which ȳi is most suited for training MΘ, we would have trained the selection module Sφ to match the oracle. In the absence of such an oracle, we train Sφ using a reward scheme. Intuitively, ȳi would be a good choice for training MΘ if it is “easier” for the model to learn to predict ȳi. In our reward design, we measure this degree of ease by the number of positions on which ȳi and MΘ's prediction ŷi agree (i.e., r minus their Hamming distance): R(ŷi, ȳi) = ∑_{k=1}^{r} 1{ŷi[k] = ȳi[k]}. We note that there are other choices as well for the reward, e.g., a binary reward, which gives a positive reward of 1 only if the prediction model MΘ has learnt to predict the selected target ȳi. Our reward scheme is a granular proxy of this binary reward and makes it easier to get a partial reward even when the binary reward would be 0.

The expected reward for RL can then be written as:

R(φ) = ∑_{i=1}^{m} ∑_{yij∈Yxi} Prφ(yij) R(ŷi, yij)    (6)

We make use of the policy gradient to compute the derivative of the expected reward with respect to the φ parameters. 
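Since the target sets Yxi are small and Prφ is explicit, this gradient is available in closed form; writing out the standard score-function identity (our derivation of the textbook result, not quoted from the paper):

```latex
% Gradient of the expected reward (eq. 6). The rewards are constants
% w.r.t. phi, since \hat{y}_i is produced by M_Theta.
\nabla_{\phi} R(\phi)
  = \sum_{i=1}^{m} \sum_{y_{ij} \in Y_{x_i}}
      R(\hat{y}_i, y_{ij}) \, \nabla_{\phi} \mathrm{Pr}_{\phi}(y_{ij})
  = \sum_{i=1}^{m} \sum_{y_{ij} \in Y_{x_i}}
      \mathrm{Pr}_{\phi}(y_{ij}) \, R(\hat{y}_i, y_{ij}) \,
      \nabla_{\phi} \log \mathrm{Pr}_{\phi}(y_{ij}).
```

With tractable Yxi the first (exact) form can be summed directly, while sampling ȳi ∼ Prφ in the second form recovers the familiar REINFORCE estimator.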
Accordingly, update equation for φ can be written as:\nφ(t+1) ← φ(t) + αφ∇φR (φ) |φ=φ(t) (7) Update of Θ Parameters: Next step is to use the output of the selection module, w̄i corresponding to the sampled target ȳi, ∀i, to train the MΘ network. The update equation for updating the Θ parameters during next learning iteration can be written as:\nΘ(t+1) ← Θ(t) − αΘ∇ΘLw (Θ,w) |Θ=Θ(t),w=w̄(t) (8) Instead of backpropagating the loss gradient at a sampled target ȳi, one could also backpropagate the gradient of the expected loss given the distribution Prφ(yij). In our experiments, we backpropagate\nthrough the expected loss since our action space for the selection module Sφ is tractable. Figure 2 represents the overall framework. In the diagram, gradients for updating Θ flow back through the red line and gradients for updating φ flow back through the green line." }, { "heading": "3.5 TRAINING ALGORITHM", "text": "We put all the update equations together and describe the key components of our training algorithm below. Algorithm 1 presents a detailed pseudocode.\nAlgorithm 1 Joint Training of Prediction Network MΘ & Selection Module Sφ\n1 Θ0 ← Pre-train Θ using eq. (4) and eq. (5) 2 In Selection Module (SM): Θ_← Θ0 3 φ0 ← Pre-train φ using rewards from MΘ in eq. (7) 4 Initialize: t← 0 5 while not converged do 6 B ← Randomly fetch a mini-batch 7 for i ∈ B do 8 Get weights: wi ← Sφ((xi,Yxi),Θ_) 9 Get model predictions: ŷi ←MΘt(xi)\n10 Get rewards: ri ← [R(ŷi,yij), ∀yij ∈ Yxi ] end\n11 Update φ: Use eq. (7) to get φ(t+1)\n12 Update Θ: Use eq. (8) to get Θ(t+1)\n13 Update Θ_← Θ(t+1) if t%copyitr = 0 (in SM) 14 Increment t← t+ 1\nend\nPre-training: It is a common strategy in many RL based approaches to first pre-train the network weights using a simple strategy. Accordingly, we pre-train both the MΘ and Sφ networks before going into joint training. First, we pre-train MΘ. In our experiments, we observe that in some cases, pre-training MΘ using only those samples from training data D for which there is only a unique solution, i.e., {(xi,Yxi) ∈ D s.t. |Yxi | = 1} gives better performance than pre-training with MINLOSS. Therefore, we pre-train using both the approaches and select the better one based on their performance on a held out dev set. Once the prediction network is pre-trained, a copy of it is given to the selection module to initialize MΘ_. Keeping Θ and Θ_ fixed and identical to each other, the latent model, Gφ, in the selection module is pre-trained using the rewards given by the pre-trained MΘ and the internal predictions given by MΘ_.\nJoint Training: After pre-training, both prediction network MΘ and selection module Sφ are trained jointly. In each iteration t, selection module first computes the weights, w̄ti , for each sample in the mini-batch. The prediction network computes the prediction ŷti and rewards R(ŷ t i ,yij),∀yij ∈ Yxi . The parameters φt and Θt are updated simultaneously using eq. (7) and eq. (8), respectively. The copy of the prediction network within selection module, i.e., MΘ_ in Sφ, is updated with the latest parameters Θt after every copyitr updates where copyitr is a hyper-parameter." }, { "heading": "4 EXPERIMENTS", "text": "The main goal of our experiments is to evaluate the four multiplicity aware methods: CC-LOSS, MINLOSS, informed exploration (I-EXPLR) and RL based exploration (SELECTR), when compared to baseline approaches that completely disregard the problem of solution multiplicity. 
We also wish to assess the performance gap, if any, between queries with a unique solution and those with many possible solutions. To answer these questions, we conduct experiments on three different tasks (N-Queens, Futoshiki & Sudoku), trained over two different prediction networks, as described below.3" }, { "heading": "4.1 DATASETS AND PREDICTION NETWORKS", "text": "N-Queens: Given a query, i.e., a chess-board of sizeN×N and a placement of k < N non-attacking queens on it, the task of N Queens is to place the remaining N − k queens, such that no two queens are attacking each other. We train a Neural Logic Machine (NLM) model (Dong et al., 2019) as the prediction network MΘ for solving queries for this task. To model N-Queens within NLM, we represent a query x and the target y as N2 dimensional Boolean vectors with 1 at locations where a Queen is placed. We use another smaller NLM architecture as the latent model Gφ.\nWe train our model on 10–Queens puzzles and test on 11–Queens puzzles, both with 5 placed queens. This size-invariance in training and test is a key strength of NLM architecture, which we exploit in our experiments. To generate the train data, we start with all possible valid 10–Queens board configurations and randomly mask any 5 queens, and then check for all possible valid completions to\n3Further details of software environments, hyperparameters and dataset generation are in the appendix.\ngenerate potentially multiple solutions for an input. Test data is also generated similarly. Training and testing on different board sizes ensures that no direct information leaks from test to train. Queries with multiple solutions have 2-6 solutions, so we choose Yxi = Yxi ,∀xi. Futoshiki: This is a logic puzzle in which we are given a grid of size N ×N , and the goal is to fill the grid with digits from {1 . . . N} such that no digit is repeated in a row or a column. k out of N2 positions are already filled in the input query x and the remaining N2 − k positions need to be filled. Further, inequality constraints are specified between some pairs of adjacent grid positions, which need to be honored in the solution. Our prediction network, and latent model use NLM, and the details (described in Appendix) are very similar to that of N–Queens.\nSimilar to N–Queens, we do size-invariant training – we train our models on 5× 5 puzzles with 14 missing digits and test on 6× 6 puzzles with 20 missing digits. Similar to N–Queens, we generate all possible valid grids and randomly mask out the requisite number of digits to generate train and test data. For both train and test queries we keep up to five inequality constraints of each type: > and <.\nSudoku: We also experiment on Sudoku, which has been used as the task of choice for many recent neural reasoning works (Palm et al., 2018; Wang et al., 2019). We use Relational Recurrent Networks (RRN) (Palm et al., 2018) as the prediction network since it has recently shown state-of-the-art performance on the task. We use a 5 layer CNN as our latent model Gφ. Existing Sudoku datasets (Royle, 2014; Park, 2018), do not expose the issues with solution multiplicity. In response, we generate our own dataset by starting with a collection of Sudoku puzzles with unique solutions that have 17 digits filled. We remove one of the digits, thus generating a puzzle, which is guaranteed to have solution multiplicity. 
We then randomly add 1 to 18 of the digits back from the solution of the original puzzle, while ensuring that the query continues to have more than 1 solution. This generates our set of multi-solution queries with a uniform distribution of filled digits from 17 to 34. We mix in an equal number of unique-solution queries (with the same distribution of givens). Because some xi's may have hundreds of solutions, we randomly sample 5 of them from Yxi, i.e., |Yxi| ≤ 5 in the train set. For each dataset, we generate a devset in a manner similar to the test set." }, { "heading": "4.2 BASELINES AND EVALUATION METRIC", "text": "Our comparison baselines include: (1) Naïve: backpropagating L(Θ) through each solution independently using Equation (1), (2) Unique: computing L(Θ) only over the subset of training examples that have a unique solution, and (3) Random: backpropagating L(Θ) through one arbitrarily picked solution yi ∈ Yxi for every xi in the train data, and keeping this choice fixed throughout the training. We separately report performance on two mutually exclusive subsets of test data: OS: queries with a unique solution, and MS: those with multiple solutions. For all methods, we tune various hyperparameters (and do early stopping) based on the devset performance. Additional parameters for the four multiplicity aware methods include the ratio of OS and MS examples in training.4 I-EXPLR and SELECTR also select the pre-training strategy as described in Section 3.5. For all tasks, we consider the output of a prediction network as correct only if it is a valid solution for the underlying CSP. No partial credit is given for guessing parts of the output correctly." }, { "heading": "4.3 RESULTS AND DISCUSSION", "text": "We report the accuracies across all tasks and models in Table 2. For each setting, we report the mean over three random runs (with different seeds), and also the accuracy on the best of these runs selected via the devset (in the parentheses). We first observe that Naïve and Random perform significantly worse than Unique in all the tasks, not only on MS, but on OS as well. This suggests that 1oML models that explicitly handle solution multiplicity, even if by simply discarding multiple solutions, are much better than those that do not recognize it at all.

4Futoshiki and N–Queens training datasets have significant OS-MS imbalance (see Table 1), necessitating managing this ratio by undersampling OS. This is similar to the standard approach in class imbalance problems.

Predictably, all multiplicity aware methods vastly improve upon the performance of the naïve baselines, with dramatic 13–52 pt gains between Unique and SELECTR on queries with multiple solutions.

Comparing MINLOSS and SELECTR, we find that our RL-based approach outperforms MINLOSS consistently, with p-values (computed using McNemar's test for the best models selected based on the validation set) of 1.00e−16, 0.03, and 1.69e−18 for N-Queens, Futoshiki and Sudoku respectively (see Appendix for seedwise comparisons of gains across tasks). On the other hand, the informed exploration technique, I-EXPLR, though it improves over MINLOSS on two out of three tasks, performs worse than SELECTR in all the domains. This highlights the value of RL based exploration on top of the greedy target selection of MINLOSS, as well as over the simple exploration of I-EXPLR. We note that this is due to the greater exploratory power of SELECTR over I-EXPLR. 
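To make the operational difference concrete, the two exploration schemes differ only in which distribution the next training target is drawn from. A schematic sketch of ours follows; `model.log_prob` and `selector` are hypothetical interfaces, and renormalizing MΘ's probabilities over Yx is one natural reading of I-EXPLR, not a quote from the paper.

```python
import torch

def pick_target(x, targets, model, selector=None):
    """Draw a training target from Y_x: I-EXPLR when selector is None, SELECTR otherwise.
    targets: (S, r) tensor whose rows are the solutions y_ij in Y_x."""
    if selector is None:
        # I-EXPLR: parameter-free, reuses the prediction network's own probabilities
        with torch.no_grad():
            log_p = model.log_prob(x, targets)   # (S,) log Pr(y_ij | x; Theta), assumed API
        probs = torch.softmax(log_p, dim=0)      # renormalize over the target set
    else:
        # SELECTR: a learned distribution Pr_phi produced by the selection module S_phi
        probs = selector(x, targets)             # (S,), sums to 1
    j = torch.multinomial(probs, 1).item()
    return targets[j]
```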
See Appendix for more discussion and experiments comparing the two exploration techniques.\nRecall that Sudoku training set has no more than 5 solutions for a query, irrespective of the actual number of solutions – i.e, for many xi, Yxi ( Yxi . Despite incomplete solution set, significant improvement over baselines is obtained, indicating that our formulation handles solution multiplicity even with incomplete information. Furthermore, the large variation in the size of solution set (|Yx|) in Sudoku allows us to assess its effect on the overall performance. We find that all models get worse as |Yx| increases (fig. 3), even though SELECTR remains the most robust (see Appendix for details)." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "In this paper, we have defined 1oML: the task of learning one of many solutions for combinatorial problems in structured output spaces. We have identified solution multiplicity as an important aspect of the problem, which if not handled properly, may result in sub-optimal models. As a first cut solution, we proposed a greedy approach: MINLOSS formulation. We identified certain shortcomings with the greedy approach and proposed two exploration based formulations: I-EXPLR and an RL formulation, SELECTR, which overcomes some of the issues in MINLOSS by exploring the locally sub-optimal choices for better global optimization.\nExperiments on three different tasks using two different prediction networks demonstrate the effectiveness of our approach in training robust models under solution multiplicity 5.\nIt is interesting to note that for traditional CSP solvers, e.g.(Selman et al., 1993; Mahajan et al., 2004), a problem with many solutions will be considered an easy problem, whereas for neural models, such problems appear much harder (Figure 3). As a future work, it will be interesting to combine symbolic CSP solvers with SELECTR to design a much stronger neuro-symbolic reasoning model.\n5All the code and datasets are available at: https://sites.google.com/view/yatinnandwani/1oml" }, { "heading": "ACKNOWLEDGEMENT", "text": "We thank IIT Delhi HPC facility6 for computational resources. We thank anonymous reviewers for their insightful comments and suggestions, in particular AnonReviewer4 for suggesting a simple yet effective informed exploration strategy (I-EXPLR). Mausam is supported by grants from Google, Bloomberg, 1MG and Jai Gupta chair fellowship by IIT Delhi. Parag Singla is supported by the DARPA Explainable Artificial Intelligence (XAI) Program with number N66001-17-2-4032. Both Mausam and Parag Singla are supported by the Visvesvaraya Young Faculty Fellowships by Govt. of India and IBM SUR awards. Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of the funding agencies." }, { "heading": "3 THEORY AND ALGORITHM", "text": "" }, { "heading": "3.2 OBJECTIVE FUNCTION", "text": "Lemma 1. The training loss L(Θ) as defined in eq. (1) is an inconsistent estimator of generalization error for 1oML, when lΘ is a zero-one loss, i.e., lΘ(ŷi,yij) = 1{ŷi 6= yij}. (Proof in Appendix).\nProof. LetD represent the distribution using which samples (x,Yx) are generated. In our setting, generalization error ε(MΘ) for a prediction networkMΘ can be written as: ε(MΘ) = E(x,Yx)∼D(1{ŷ /∈ Yx}), where ŷ = MΘ(x), i.e. the prediction of the network on unseen example sampled from the underlying data distribution. 
Assume a scenario when Yxi = Yxi , ∀i, i.e., for each input xi all the corresponding solutions are present in the training data. Then, an unbiased estimator ε̂D(MΘ) of the generalization error, computed using the training data is written as: ε̂D(MΘ) = 1m ∑m i=1 1{ŷi /∈ Yxi}. Clearly, the estimator obtained using L(Θ) (Naïve Objective), when the loss function lΘ(ŷi,yij) is replaced by a zero-one loss 1{ŷi 6= yij}, is not a consistent estimator for the generalization error. This can be easily seen by considering a case when ŷi ∈ Yxi and |Yxi | > 1.\nOPTIMIZATION ISSUES WITH CC-LOSS\nFor the task of PLL, Jin & Ghahramani (2002) propose a modification of the cross entropy loss to tackle multiplicity of labels in the training data. Instead of adding the log probabilities, it maximizes the log of total probability over the given target set. Inspired by Feng et al. (2020), we call it CC-LOSS:\nLcc(Θ) = − 1\nm m∑ i=1 log ∑ yij∈Yxi Pr (yij|xi; Θ) (9) However, in the case of structured prediction, optimizing Lcc suffers from numerical instability.\nWe illustrate this with an example. Consider solving 9 x 9 sudoku puzzle, xi. The probabilty of a particular target board, yij, is a product of r = 92 = 81 individual probabilities over the discrete space V = {1 · · · 9} of size 9, i.e., Pr(yij|xi; Θ) = ∏r k=1 Pr(yij[k]|xi; Θ). In the beginning of the training process, the network outputs nearly uniform probability over V for each of the r dimensions, making Pr(yij|xi; Θ) very small (= 9−81 ∼ 5.09e−78). The derivative of log of such a small quantity becomes numerically unstable.\nThis issue is circumvented in the case of naïve loss by directly working with log probabilities and log-sum-exp trick 7. However, in the case of CC-LOSS, we need to sum the probabilities over the target set Yxi before taking log, and computing Pr(yij|xi; Θ) makes it numerically unstable. Motivated by log-sum-exp trick, we use the following modifications which involves computing only log probabilities. For simplicity of notation, we will use Pr(yij) to denote Pr(yij|xi; Θ) and Licc to denote the CC Loss for the ith training sample.\nLicc = − log ∑ yij∈Yxi Pr(yij) Multiply and divide by maxpi = maxyij∈Yxi Pr(yij):\nLicc = − log maxpi ∑ yij∈Yxi Pr(yij) maxpi 7https://blog.feedly.com/tricks-of-the-trade-logsumexp/\nUse the identity: α = exp(log(α)):\nLicc = − log(maxpi)− log ∑ yij∈Yxi exp ( log ( Pr(yij) maxpi )) = − log(maxpi)− log\n ∑ yij∈Yxi exp (log (Pr(yij))− log (maxpi)) In the above equations, we first separate out the max probability target (similar to log-sum-exp trick), and then exploit the observation that the ratio of (small) probabilities is more numerically stable than the individual (small) probabilities. Further, we compute this ratio using the difference of individual log probabilities.\nLemma 2. Under the assumption Yxi = Yxi ,∀i, the loss L′(Θ) = minw Lw(Θ,w), defined as the minimum value of Lw(Θ,w) (defined in eq. (2)) with respect to w, is a consistent estimator of generalization error for 1oML, when lΘ is a zero-one loss, i.e., lΘ(ŷi,yij) = 1{ŷi 6= yij}.\nProof. Let D represent the distribution using which samples (x,Yx) are generated. In our setting, generalization error ε(MΘ) for a prediction network MΘ is:\nε(MΘ) = E(x,Yx)∼D(1{ŷ /∈ Yx})\nwhere ŷ = MΘ(x), i.e. the prediction of the network on unseen example sampled from the underlying data distribution. Assume a scenario when Yxi = Yxi , ∀i, i.e., for each input xi all the corresponding solutions are present in the training data. 
Then, an unbiased estimator ε̂D(MΘ) of the generalization error, computed using the training data is written as:\nε̂D(MΘ) = 1\nm m∑ i=1 1{ŷi /∈ Yxi}\nNow, consider the objective function\nL′(Θ) = min w Lw(Θ,w) = min w\n1\nm m∑ i=1 ∑ yij∈Yxi wij1{ŷi 6= yij}\n= 1\nm m∑ i=1 min wi ∑ yij∈Yxi wij1{ŷi 6= yij}\ns.t. wij ∈ {0, 1} ∀i,∀j and |Yxi |∑ j=1 wij = 1,∀i = 1 . . .m\nFor any xi, if the prediction ŷi is correct, i.e., ∃yij∗ ∈ Yxi s.t. ŷi = yij∗, then 1{ŷi 6= yij∗} = 0 and 1{ŷi 6= yij} = 1,∀yij ∈ Yxi ,yij 6= yij∗. Now minimizing over wi ensures wij∗ = 1 and wij = 0 ∀yij ∈ Yxi ,yij 6= yij∗. Thus, the contribution to the overall loss from this example xi is zero. On the other hand if the prediction is incorrect then 1{ŷi 6= yij} = 1, ∀yij ∈ Yxi , thus making the loss from this example to be 1 irrespective of the choice of wi. As a result, L′(Θ) is exactly equal to ε̂D(MΘ) and hence it is a consistent estimator for generalization error." }, { "heading": "3.3 GREEDY FORMULATION: MINLOSS", "text": "Example 2. Consider a simple task with a one-dimensional continuous input space X ⊂ R, and target space Y = {0, 1}. Consider learning with 10 examples, given as (x = 1,Yx = {1}) (5 examples), (x = −1,Yx = {0, 1}) (4 examples), (x = −2,Yx = {1}) (1 example). The optimal decision hypothesis is given as: y = 1{x > α}, for α ≤ −2, or y = 1{x < β}, for β ≥ 1. Assume learning this with logistic regression using MINLOSS as the training algorithm optimizing\nthe objective in eq. (3). If we initialize the parameters of logistic such that the starting hypothesis is given by y = 1{x > 0} (logistic parameters: θ1 = 0.1, θ0 = 0), MINLOSS will greedily pick the target y = 0 for samples with x = −1, repeatedly. This will result in the learning algorithm converging to the decision hypothesis y = 1{x > −0.55}, which is sub-optimal since the input with x = −2 is incorrectly classified (fig. 1, see Appendix for a detailed discussion).\nFor logistic regression, when input x is one dimensional, probability of the prediction being 1 for any given point x = [x] is given as:\nP (y = 1) = σ(θ1x+ θ0) where σ(z) = 1\n1 + e−z , z ∈ R\nThe decision boundary is the hyperplane on which the probability of the two classes, 0 and 1, is same, i.e. the hyperplane corresponding to P (y = 0) = P (y = 1) = 0.5 or θ1x+ θ0 = 0.\nInitially, θ1 = 0.1 and θ0 = 0 implies that decision boundary lies at x = 0 (shown in green). All the points on the left of decision boundary are predicted to have 0 label while all the points on the right have 1 label. For all the dual label points (x = 1), P (y = 1) < 0.5, thus MINLOSS greedily picks the label 0 for all these points. This choice by MINLOSS doesn’t change unless the decision boundary goes beyond -1.\nHowever, we observe that with gradient descent using a sufficiently small learning rate, logistic regression converges at x = −0.55 with MINLOSS never flipping its choice. Clearly, this decision boundary is sub-optimal since we can define a linear decision boundary (y = 1{x > α}, for α ≤ −2, or y = 1{x < β}, for β ≥ 1) that classifies all the points with label 1 and achieves 100% accuracy." }, { "heading": "4 EXPERIMENTS", "text": "All the experiments are repeated thrice using different seeds. Hyperparameters are selected based on the held out validation set performance.\nHardware Architecture: Each experiment is run on a 12GB NVIDIA K40 GPU with 2880 CUDA cores and 4 cores of Intel E5-2680 V3 2.5GHz CPUs.\nOptimizer: We use Adam as our optimizer in all our experiments. 
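As a concrete sketch (ours, not the authors' released code; the patience value and mode="max" are assumptions), the optimizer setup, with the learning rates and decay rule detailed next, might look as follows:

```python
import torch

def make_optimizer(model, base_lr, rl_phase=False):
    """Adam with plateau-based decay, mirroring the settings quoted below.
    base_lr: 0.005 for NLM runs, 0.001 for RRN runs (per the text)."""
    lr = 0.1 * base_lr if rl_phase else base_lr       # RL fine-tuning uses 0.1x the initial rate
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="max", factor=0.2, patience=5)  # patience value is our assumption
    return optimizer, scheduler
```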
Initial learning rate is set to 0.005 for NLM (Dong et al., 2019) experiments while it is kept at 0.001 for RRN (Palm et al., 2018) experiments. Learning rate for RL phase is kept at 0.1 times the initial learning rate. We reduce learning rate by a factor of 0.2 whenever the performance on the dev set plateaus." }, { "heading": "4.1 DETAILS FOR N-QUEENS EXPERIMENT", "text": "Data Generation: To generate the train data, we start with all possible valid 10–Queens board configurations. We then generate queries by randomly masking any 5 queens. We check for all\npossible valid completions to generate potentially multiple solutions for any given query. Test data is also generated similarly. Training and testing on different board sizes ensures that no direct information leaks from the test dataset to the train dataset. Queries with multiple solutions have a small number of total solutions (2-6), hence we choose Yxi = Yxi ,∀xi.\nArchitecture Details for Prediction Network MΘ: We use Neural Logic Machines (NLM)9 (Dong et al., 2019) as the base prediction network for this task. NLM consists of a series of basic blocks, called ‘Logic Modules’, stacked on top of each other with residual connections. Number of blocks in an NLM architecture is referred to as its depth. Each block takes grounded predicates as input and learns to represent M intermediate predicates as its output. See (Dong et al., 2019) for further details. We chose an architecture with M = 8 and depth = 30. We keep the maximum arity of intermediate predicates learnt by the network to be 2.\nInput Output for Prediction Network: Input to NLM is provided in terms of grounded unary and binary predicates and the architecture learns to represent an unknown predicate in terms of the input predicates. Each cell on the board acts as an atomic variable over which predicates are defined.\nUnary Predicates: To indicate the presence of a Queen on a cell in the input, we use a unary predicate, ‘HasQueenPrior’. It is represented as a Boolean tensor x of size N2 with 1 on k out of N2 cells indicating the presence of a Queen. The output y of the network is also a unary predicate ‘HasQueen’ which indicates the final position of the queens on board.\nBinary Predicates: We use 4 binary predicates to indicate if two cells are in same row, same column, same diagonal or same off-diagonal. The binary predicates are a constant for all board configurations for a given size N and hence can also be thought of as part of network architecture instead of input.\nArchitecture Details for Selection Module Sφ: We use another NLM as our latent modelGφ within the selection module Sφ. We fix depth = 4 and M = 10 for the latent model.\nInput Output for Gφ: Input to Gφ is provided in terms of grounded unary and binary predictates represented as tensors just like the prediction network. Gφ takes 1 unary predicate as input, represented as an N2 sized vector, yij − ŷi_, where ŷi_ is the prediction from its internal copy of the prediction network (MΘ_) given the query xi. For each yij ∈ Yxi , Gφ returns a score which is converted into a probability distribution Prφ(yij) over Yxi using a softmax layer.\nHyperparameters:\nThe list below enumerates the various hyper-parameters with a brief description (whenever required) and the set of its values that we experiment with. Best value of a hyper-parameter is selected based on performance on a held out validation set.\n1. 
Data Sampling: Since number of queries with multiple solutions is underrepresented in the training data, we up-sample them and experiment with different ratios of multi-solution\n8Image Source: Game play on http://www.brainmetrix.com/8-queens/ 9Code taken from: https://github.com/google/neural-logic-machines\nqueries in the training data. Specifically, we experiment with the ratios of 0.5 and 0.25 in addition to the two extremes of selecting queries with only unique or only multiple solutions. Different data sampling may be used during pre-training and RL fine tuning phases.\n2. Batch Size: We use a batch size of 4. We selected the maximum batch size that can be accommodated in 12GB GPU memory.\n3. copyitr: We experiment with two extremes of copying the prediction network after every update and copying after every 2500 updates.\n4. Weight Decay in Optimizer: We experiment with different weight decay factors of 1E-4, 1E-5 and 0.\n5. Pretraining φ: We pretrain Gφ for 250 updates.\nTraining Time: Pre-training takes 10 − 12 hours while RL fine-tuning take roughly 6 − 8 hours using the hardware mentioned in the beginning of the section." }, { "heading": "4.1 DETAILS FOR FUTOSHIKI EXPERIMENT", "text": "Data Generation: We start with generating all the possible ways in which we can fill a N ×N grid such that no number appears twice in a row or column. For generating a query we sample any solution and randomly mask out k positions on it. Also we enumerate all the GreaterThan and LessThan relations between adjacent pair of cells in the chosen solution and randomly add q of these relations to the query. We check for all possible valid completions to generate potentially multiple solutions for any given query. Test data is also generated similarly. Training and testing on different board sizes ensures that no direct information leaks from the test dataset to the training data. Queries with multiple solutions have a small number of total solutions (2-6), so we choose Yxi = Yxi ,∀xi . Architecture Details for Prediction Network MΘ: Same as N-Queens experiment.\nInput Output for Prediction Network: Just like N-Queens experiment, the input to the network is a set of grounded unary and binary predicates. We define a grid cell along with the digit to be filled in it as an atomic variable. There are N2 cells in the grid and each cell can take N values, thus we have N3 atomic variables over which the predicates are defined.\nUnary Predicates: To indicate the presence of a value in a cell in the input, we use a unary predicate, ‘IsPresentPrior’. It is represented as a Boolean tensor x of size N3 with 1 on k positions indicating the presence of a digit in a cell. The output y of the network is also a unary predicate ‘IsPresent’ which indicates the final prediction of grid. Additionally, there are two more unary predicates which represent the inequality relations that need to be honoured. Since inequality relations are defined only between pair of adjacent cells we can represent them using unary predicates.\nBinary Predicates: We use 3 binary predicates to indicate if two vairables are in same row, same column, or same grid cell. The binary predicates are a constant for all board configurations for a given size N .\nArchitecture Details for Selection Module Sφ: Same as N-Queens experiment.\nInput Output for Gφ: Same as N-Queens experiment except for the addition of two more unary predicates corresponding to the inequality relations. 
First unary predicate is yij − ŷi_ which is augmented with the inequality predicates.\nHyperparameters: Same as N-Queens experiment.\nTraining Time: Pre-training takes roughly 12− 14 hours while RL fine-tuning takes 7− 8 hours." }, { "heading": "4.1 DETAILS FOR SUDOKU EXPERIMENT", "text": "DATA GENERATION FOR SUDOKU\nWe start with the dataset proposed by Palm et al. (2018). It has 180k queries with only unique solution and the number of givens are uniformly distributed in the range from 17 to 34. 10. For the\n10Available at https://data.dgl.ai/dataset/sudoku-hard.zip\nqueries with unique solution, we randomly sample 10000 queries from their dataset, keeping their train, val and test splits. Using the queries with 17-givens from the entire dataset of size 180k, we use the following procedure to create queries with multiple solutions:\nWe know that for a Sudoku puzzle to have a unique solution it must have 17 or more givens (McGuire et al., 2012). So we begin with the set of 17-givens puzzles having a unique solution and randomly remove 1 of the givens, giving us a 16-givens puzzle which necessarily has more than 1 correct solution. We then randomly add 1 to 18 of the digits back from the solution of the original puzzle, while ensuring that the query continues to have more than 1 solution. 11 This procedure gives us multi-solution queries with givens in the range of 17 to 34, just as the original dataset of puzzles with only unique solution. We also observed that often there are queries which have a very large number of solutions (> 100). We found that such Sudoku queries are often too poorly defined to be of any interest. So we filter out all queries having more than 50 solutions. To have the same uniform distribution of number of givens as in the original dataset of puzzles with unique solution, we sample queries from this set of puzzles with multiple solutions such that we have a uniform distribution of number of givens in our dataset.\nWe repeat this procedure to generate our validation and test data by starting from validation and test datasets from Palm et al. (2018).\nArchitecture Details for Prediction Network MΘ: We use Recurrent Relational Network (RRN) (Palm et al., 2018) 12 as the prediction network for this task. RRN uses a message passing based inference algorithm on graph objects. We use the same architecture as used by Palm et al. (2018) for their Sudoku experiments. Each cell in grid is represented as a node in the graph. All the cells in the same row, column and box are connected in the graph. Each inference involves 32 steps of message passing between the nodes in the graph and the model outputs a prediction at each step.\nInput Output for Prediction Network: Input to the prediction network is represented as a 81× 10 matrix with each of the 81 cell represented as a one-hot vector representing the digits (0-9, 0 if not given). Output of the prediction network is a 81 × 10 × 32 tensor formed by concatenating the prediction of network at each of the 32 steps of message passing. The prediction at the last step is used for computing accuracy.\nArchitecture Details for Selection Module Sφ: We use a CNN as the latent modelGφ. The network consists of four convolutional layers followed by a fully connected layer. The four layers have 100, 64, 32 and 32 filters respectively. Each filter has a size of 3× 3 with stride of length 1. Input Output for Gφ: Similar to the other two experiments, the input to Gφ is the output ŷi_ from the selection module’s internal copy MΘ_ along with yij. 
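For reference, a minimal sketch of the CNN latent model just described (ours, not the released code; the input channel count of 20, i.e., channel-wise concatenation of the one-hot encodings of ŷi_ and yij, and the use of padding are our assumptions):

```python
import torch
import torch.nn as nn

class LatentCNN(nn.Module):
    """Four conv layers (100, 64, 32, 32 filters; 3x3 kernels, stride 1) plus an FC head,
    scoring each candidate target y_ij; a softmax over Y_x is applied outside."""
    def __init__(self, in_ch=20):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch, 100, 3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(100, 64, 3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=1, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(32 * 9 * 9, 1)   # one scalar score per candidate target

    def forward(self, grids):                # grids: (S, in_ch, 9, 9), one grid per y_ij
        h = self.convs(grids).flatten(1)
        return self.fc(h).squeeze(-1)        # (S,) scores over Y_x
```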
Since the prediction network gives an output at each step of message passing, we modify the Gφ and the rewards for Sφ accordingly to be computed from prediction at each step instead of relying only on the final prediction.\nHyperparameters:\n1. Data Sampling: Since number of queries with multiple solutions and queries with unique solution are in equal proportion, we no longer need to upsample multi-solution queries.\n2. Batch Size: We use a batch size of 32 for training the baselines, while for RL based training we use a batch size of 16.\n3. copyitr: We experiment with copyitr = 1 i.e. copying MΘ to MΘ_ after every update. 4. Weight Decay in Optimizer: We experiment with weight decay factor of 1E-4 (same as\nPalm et al. (2018)).\n5. Pretraining φ: We pretrain Gφ for 1250 updates, equivalent to one pass over the train data.\nComparison with pretrained SOTA Model: We also evaluate the performance of a pretrained state-of-the-art neural Sudoku solver (Palm et al., 2018)13 on our dataset. This model trains and tests on instances with single solution. The training set used by this model is a super-set of the\n11We identify all solutions to a puzzle using http://www.enjoysudoku.com/JSolve12.zip 12Code taken from: https://github.com/dmlc/dgl/tree/master/examples/pytorch/\nrrn 13Available at: https://data.dgl.ai/models/rrn-sudoku.pkl\nunique solution queries in our training data and contains 180,000 queries. This model achieves a high accuracy of 94.32% on queries having unique solution (OS) in our test data which is a random sample from their test data only, but the accuracy drop to 24.48% when tested on subset of our test data having only queries that have multiple solutions (MS). We notice that the performance on MS is worse than Unique baseline, even though both are trained using queries with only unique solution. This is because the pretrained model overfits on the the queries with unique solution whereas the Unique baseline early stops based on performance on a dev set having queries with multiple solutions as well, hence avoiding overfitting on unique solution queries.\nTraining Time: Pre-training the RRN takes around 20− 22 hours whereas RL fine-tuning starting with the pretrained model takes around 10− 12 hours." }, { "heading": "4.3 RESULTS AND DISCUSSIONS", "text": "Table 3 reports the mean test accuracy along with the standard error over three runs for different baselines and our three approaches. Note that the standard errors reported here are over variations in the choice of different random seeds and it is difficult to do a large number of such experiments (with varying seeds) due to high computational complexity. Below, we compare the performance gains for each of the seed separately.\nSeed-wise Comparison for Gains of SELECTR over MINLOSS\nIn Table 4 we see that SELECTR performs better than MINLOSS for each of the three random seeds independently in all the experiments. We note that starting with the same seed in our implementation leads to identical initialization of the prediction network parameters.\nDetails of the Analysis Depicted in Figure 3\nThe large variation in the size of solution set (|Yx|) in Sudoku allows us to assess its effect on the overall performance. 
To do so, we divide the test data into different bins based on the number of possible solutions for each test input (xi) and compare the performance of the best model obtained in the three settings: Unique, MINLOSS and SELECTR.\nBy construction, the number of test points with a unique solution is equal to the total number of test points with more than one solution. Further, while creating the puzzles with more than one solution, we ensured uniform distribution of number of filled cells from 17 to 34, as is done in (Palm et al., 2018) for creating puzzles with unique solutions in their paper. Hence, the number of points across different bins (representing solution count) may not be the same. Figure 6 shows the average size of each bin and the average number of filled cells for multiple solution queries in a bin. As we move to the right in graph (i.e., increase the number of solutions for a given problem), the number of filled cells in the corresponding Sudoku puzzles decreases, re-\nsulting in harder problems. This is also demonstrated by the corresponding decrease in performance of all the models in Figure 3. SELECTR is most robust to this decrease in performance.\nDiscussion on Why SELECTR is better than I-EXPLR?\nIn this section, we argue why SELECTR is more powerful than I-EXPLR, even though the reward structure for training the RL agent is such that eventually the Gφ in the RL agent will learn to pick the target closest to the current prediction (to maximize reward), and hence Sφ will be reduced to I-EXPLR.\nWe see two reasons why SELECTR is better than I-EXPLR.\nFirst, recall that the I-EXPLR strategy gives the model an exploration probability based on its current prediction. But note that this is “only one” of the possible exploration strategies. For example, another strategy could be to explore based on a fixed epsilon probability. There could be several other such possible exploration strategies that could be equally justified. Instead of hard coding them, as done for I-EXPLR, our Gφ network gives the ability to learn the best exploration strategy, which may depend in complex ways on the global reward landscape (i.e., simultaneously optimizing reward over all the training examples). Hence we use a neural module for this.\nSecond, note that I-EXPLR is parameter-free and fully dependent on MΘ, thus, has limited representational power of its own to explore targets. This is not the case with Gφ. Its output ȳ and and the target (yc) closest to MΘ prediction ŷ may differ i.e. ȳ 6= yc (see next paragraph for an experiment on this). When this happens, the gradients will encourage change in Θ so that ŷ moves towards ȳ, and simultaneously encourage change in φ so that ȳ moves towards yc. That is, a stable alignment between the two models could be either of the two, yc or ȳ. This, we believe, increases the overall exploration of the model. Which of yc or ȳ get chosen depends on how strongly the global landscape (other data points) encourage one versus the other. Such flexibility is not available to I-EXPLR where only Θ parameters are updated. We believe that this flexibility to explore more could enable SELECTR to jump off early local optima, thus achieving better performance compared to I-EXPLR.\nWe provide preliminary experimental evidence that supports that SELECTR explores more. For every training data point q, we check if the arg max of Gφ probability distribution (i.e., highest probability ȳ) and yc differ from each other. We name such data points “exploratory”. 
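A sketch of this check (ours; taking "closest" to mean closest under the training loss, which is one natural choice, and reusing the hypothetical `selector` interface from before):

```python
import torch

def is_exploratory(model, selector, x, targets, loss_fn):
    """A data point is 'exploratory' if the argmax of G_phi's distribution differs
    from the target closest to M_Theta's current prediction."""
    with torch.no_grad():
        y_hat = model(x)
        y_c = torch.stack([loss_fn(y_hat, y) for y in targets]).argmin()  # closest target index
        y_bar = selector(x, targets).argmax()                             # G_phi's top pick
    return (y_bar != y_c).item()
```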
We analyze the fraction of exploratory data points as a function of training batches. See fig. 7. We observe that in the initial several batches, SELECTR has 3− 10% of training data exploratory. This number is, by definition, 0% for I-EXPLR since it chooses ȳ based on model probabilities. This experiment suggests that SELECTR may indeed explore more early on." } ]
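To make the "exploratory" diagnostic above concrete, here is a small sketch of the computation; the function and variable names are ours, not from the paper's code. A training point counts as exploratory when the target favoured by Gφ differs from the target yc closest to the prediction network's current output.

```python
import numpy as np

def exploratory_fraction(g_probs, target_dists):
    """Fraction of training points where the argmax of G_phi's distribution
    over candidate solutions differs from the solution y_c closest to the
    prediction network's output y_hat.

    g_probs:      (B, K) G_phi probabilities over the K candidate solutions.
    target_dists: (B, K) distance of each candidate to y_hat (e.g. number of
                  differing cells), so that y_c = argmin along axis 1.
    """
    y_bar = g_probs.argmax(axis=1)      # target favoured by the RL selector
    y_c = target_dists.argmin(axis=1)   # target closest to the prediction
    return float(np.mean(y_bar != y_c))

# Toy example with 4 queries and 3 candidate solutions each:
g = np.array([[.7, .2, .1], [.1, .8, .1], [.3, .3, .4], [.5, .4, .1]])
d = np.array([[1, 5, 9], [4, 2, 7], [3, 8, 1], [2, 1, 6]])
print(exploratory_fraction(g, d))       # 0.25: only the last query differs
```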
2021
null
SP:3665fd208fe6506f389defd4267ebc6ed5fefe98
[ "The authors consider the issue of overconfidence in ReLU NN and BNNs, particularly for data that are far (in Euclidean distance) from the training data. They address this by modeling the residual (to the NN) in the latent space with a GP. The kernel for this GP is derived as the limit of infinitely many ReLU-based random features. Specifically, this kernel has the property that it scales cubically with the norm of the input, and so causes large uncertainty away from the origin. Crucially, the GP term changes little from its prior distribution after conditioning on the data, so no expensive inference is required under the approximation made." ]
Approximate Bayesian methods can mitigate overconfidence in ReLU networks. However, far away from the training data, even Bayesian neural networks (BNNs) can still underestimate uncertainty and thus be overconfident. We suggest to fix this by considering an infinite number of ReLU features over the input domain that are never part of the training process and thus remain at prior values. Perhaps surprisingly, we show that this model leads to a tractable Gaussian process (GP) term that can be added to a pre-trained BNN’s posterior at test time with negligible cost overhead. The BNN then yields structured uncertainty in the proximity of training data, while the GP prior calibrates uncertainty far away from them. As a key contribution, we prove that the added uncertainty yields cubic predictive variance growth, and thus the ideal uniform (maximum entropy) confidence in multi-class classification far from the training data.
[]
[ { "authors": [ "Dario Amodei", "Chris Olah", "Jacob Steinhardt", "Paul Christiano", "John Schulman", "Dan Mané" ], "title": "Concrete Problems in AI safety", "venue": "arXiv preprint arXiv:1606.06565,", "year": 2016 }, { "authors": [ "Raman Arora", "Amitabh Basu", "Poorya Mianjy", "Anirbit Mukherjee" ], "title": "Understanding Deep Neural Networks with Rectified Linear Units", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Christopher M. Bishop" ], "title": "Pattern Recognition and Machine Learning", "venue": null, "year": 2006 }, { "authors": [ "BJN Blight", "L Ott" ], "title": "A Bayesian Approach to Model Inadequacy for Polynomial Regression", "venue": null, "year": 1975 }, { "authors": [ "Mark N Gibbs" ], "title": "Bayesian Gaussian Processes for Regression and Classification", "venue": "Ph. D. Thesis, Department of Physics, University of Cambridge,", "year": 1997 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q. Weinberger" ], "title": "On Calibration of Modern Neural Networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Matthias Hein", "Maksym Andriushchenko", "Julian Bitterwolf" ], "title": "Why ReLU Networks Yield Highconfidence Predictions Far Away from the Training Data and How to Mitigate the Problem", "venue": null, "year": 2019 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Thomas Dietterich" ], "title": "Deep Anomaly Detection with Outlier Exposure", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "James Hensman", "Alexander Matthews", "Zoubin Ghahramani" ], "title": "Scalable Variational Gaussian Process Classification", "venue": "In AISTATS,", "year": 2015 }, { "authors": [ "Geoffrey E Hinton", "Drew Van Camp" ], "title": "Keeping the Neural Networks Simple by Minimizing the Description Length of the Weights", "venue": "In COLT,", "year": 1993 }, { "authors": [ "Agustinus Kristiadi", "Matthias Hein", "Philipp Hennig" ], "title": "Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "David JC MacKay" ], "title": "The Evidence Framework Applied to Classification Networks", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "David JC MacKay" ], "title": "A Practical Bayesian Framework For Backpropagation Networks", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "Wesley J Maddox", "Pavel Izmailov", "Timur Garipov", "Dmitry P Vetrov", "Andrew Gordon Wilson" ], "title": "A Simple Baseline for Bayesian Uncertainty in Deep Learning", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Alexander Meinke", "Matthias Hein" ], "title": "Towards Neural Networks that Provably Know when They don’t Know", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified Linear Units Improve Restricted Boltzmann Machines", "venue": "In ICML,", "year": 2010 }, { "authors": [ "Anthony O’Hagan" ], "title": "Curve Fitting and Optimal Design for Prediction", "venue": "Journal of the Royal Statistical Society: Series B (Methodological),", "year": 1978 }, { "authors": [ "Yaniv Ovadia", "Emily Fertig", "Jie Ren", 
"Zachary Nado", "David Sculley", "Sebastian Nowozin", "Joshua Dillon", "Balaji Lakshminarayanan", "Jasper Snoek" ], "title": "Can You Trust Your Model’s Uncertainty? Evaluating Predictive Uncertainty under Dataset Shift", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Xin Qiu", "Elliot Meyerson", "Risto Miikkulainen" ], "title": "Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Hippolyt Ritter", "Aleksandar Botev", "David Barber" ], "title": "A Scalable Laplace Approximation for Neural Networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "David J Spiegelhalter", "Steffen L Lauritzen" ], "title": "Sequential Updating of Conditional Probabilities on Directed Graphical Structures", "venue": null, "year": 1990 }, { "authors": [ "Shengyang Sun", "Guodong Zhang", "Jiaxin Shi", "Roger Grosse" ], "title": "Functional Variational Bayesian Neural Networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Grace Wahba" ], "title": "Improper Priors, Spline Smoothing and the Problem of Guarding Against Model Errors in Regression", "venue": "Journal of the Royal Statistical Society: Series B (Methodological),", "year": 1978 }, { "authors": [ "Grace Wahba" ], "title": "Spline Models for Observational Data", "venue": null, "year": 1990 }, { "authors": [ "Andrew G Wilson", "Zhiting Hu", "Russ R Salakhutdinov", "Eric P Xing" ], "title": "Stochastic Variational Deep Kernel Learning", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "James T Wilson", "Viacheslav Borovitskiy", "Alexander Terenin", "Peter Mostowsky", "Marc Peter Deisenroth" ], "title": "Efficiently Sampling Functions from Gaussian Process Posteriors", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Qiu" ], "title": "f̃(x) := w>φ(x) + f̂(x)", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Calibrated uncertainty is crucial for safety-critical decision making by neural networks (NNs) (Amodei et al., 2016). Standard training methods of NNs yield point estimates that, even if they are highly accurate, can still be severely overconfident (Guo et al., 2017). Approximate Bayesian methods, which turn NNs into Bayesian neural networks (BNNs), can be used to address this issue. Kristiadi et al. (2020) recently showed that for binary ReLU classification networks, far away from the training data (more precisely: when scaling any input x with a scalar α > 0 and taking the limit α → ∞), the uncertainty of BNNs can be bounded away from zero. This is an encouraging result when put in contrast to the standard point-estimated networks, for which Hein et al. (2019) showed earlier that the same asymptotic limit always yields arbitrarily high (over-)confidence. Nevertheless, BNNs can still be asymptotically overconfident (albeit less so than the standard NNs) since the aforementioned uncertainty bound can be loose. This issue is our principal interest in this paper. An intuitive interpretation is that ReLU NNs “miss out on some uncertainty” even in their Bayesian formulation, because they fit a finite number of ReLU features to the training data, by “moving around” these features within the coverage of the data. This process has no means to encode a desideratum that the model should be increasingly uncertain away from the data.\nIn this work, we “add in” additional uncertainty by considering an infinite number of additional ReLU features spaced at regular intervals away from the data in the input and hidden spaces of the network. Since these features have negligible values in the data region, they do not contribute to the training process. Hence, we can consider a prior for their weights, chosen to be an independent Gaussian, and arrive at a specific Gaussian process (GP) which covariance function is a generalization of the classic cubic-spline kernel (Wahba, 1990). This GP prior can be added to any pre-trained ReLU BNN as a simple augmentation to its output. Considering the additive combination of a parametric BNN and GP prior together, we arrive at another view of the method: It approximates the “full GP posterior” that models the residual of a point-estimated NN (Blight & Ott, 1975; Qiu et al., 2020). In our factorization, the BNN models uncertainty around the training data, while the GP prior models uncertainty far away from them. By factorizing these two parts from each other, our formulation requires no (costly) GP posterior inference, and thus offers lightweight, modular uncertainty calibration. See Fig. 1 for illustration.\nTheoretical analysis is a core contribution of this work. We show that the proposed method (i) preserves the predictive performance of the base ReLU BNN. Furthermore, it (ii) ensures that the\nsurrounding output variance asymptotically grows cubically in the distance to the training data, and thus (iii) yields uniform asymptotic confidence in the multi-class classification setting. These results extend those of Kristiadi et al. (2020) in so far as their analysis is limited to the binary classification case and their bound can be loose. Furthermore, our approach is complementary to the method of Meinke & Hein (2020) which attains maximum uncertainty far from the data for non-Bayesian point-estimate NNs. 
Finally, our empirical evaluation confirms our analysis and shows that the proposed method also improves uncertainty estimates in the non-asymptotic regime." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 BAYESIAN NEURAL NETWORKS", "text": "Let f : RN ×RD → RC defined by (x,θ) 7→ f(x;θ) =: fθ(x) be a neural network. Here, θ is the collection of all parameters of f . Given an i.i.d. dataset D := (xm, ym)Mm=1, the standard training procedure amounts to finding a point estimate θ∗ of the parameters θ, which can be identified in the Bayesian framework with maximum a posteriori (MAP) estimation1\nθ∗ = arg max θ log p(θ | D) = arg max θ M∑ m=1 log p(ym | fθ(xm)) + log p(θ).\nWhile this point estimate may yield highly accurate predictions, it does not encode uncertainty over θ, causing an overconfidence problem (Hein et al., 2019). Bayesian methods can mitigate this issue, specifically, by treating the parameter of f as a random variable and applying Bayes rule. The resulting network is called a Bayesian neural network (BNN). The common way to approximate the posterior p(θ | D) of a BNN is by a Gaussian q(θ | D) = N (θ | µ,Σ), which can be constructed for example by a Laplace approximation (MacKay, 1992b) or variational Bayes (Hinton & Van Camp, 1993). Given such an approximate posterior q(θ | D) and a test point x∗ ∈ RN , one then needs to marginalize the parameters to make predictions, i.e. we compute the integral y∗ = ∫ h(f(x∗;θ)) q(θ | D) dθ, where h is an inverse link function, such as the identity function for regression or the logistic-sigmoid and softmax functions for binary and multi-class classifications, respectively. Since the network f is a non-linear function of θ, this integral does not have an analytic solution. However, one can obtain a useful approximation via the following network linearization: Let x∗ ∈ RN be a test point and q(θ | D) = N (θ | µ,Σ) be a Gaussian approximate posterior. Linearizing f around µ yields the following marginal distribution over the function output f(x∗):2\np(f(x∗) | x∗,D) ≈ N (f(x∗) | f(x∗;µ)︸ ︷︷ ︸ =:m∗ ,J>∗ ΣJ∗︸ ︷︷ ︸ =:V∗ ), (1)\nwhere J∗ is the Jacobian of f(x∗;θ) w.r.t. θ at µ. (In the case of a real-valued network f , we use the gradient g∗ := ∇θf(x∗;θ)|µ instead of J∗.) This distribution can then be used as the predictive distribution p(y∗ | x∗,D) in the regression case. For classifications, we need another approximation since h is not the identity function. One such approximation is the generalized probit approximation\n1In the statistical learning view, log p(ym | fθ(xm)) is identified with the empirical risk, log p(θ) with the regularizer. The two views are equivalent in this regard.\n2See Bishop (2006, Sec. 5.7.1) for more details.\n(Gibbs, 1997; Spiegelhalter & Lauritzen, 1990; MacKay, 1992a):\np(y∗ = c | x∗,D) ≈ exp(m∗c κ∗c)∑C i=1 exp(m∗i κ∗i) , for all c = 1, . . . , C, (2)\nwhere for each i = 1, . . . , C, the real numbers m∗i is the i-th component of the vector m∗, and κ∗i := (1 + π/8 v∗ii)\n−1/2 where v∗ii is the i-th diagonal term of the matrix V∗. These approximations are analytically useful, but can be expensive due to the computation of the Jacobian matrix J∗. Thus, Monte Carlo (MC) integration is commonly used as an alternative, i.e. we approximate y∗ ≈ 1S ∑S s=1 h(f(x∗;θs)) with θs ∼ q(θ | D). Finally, given a classification predictive distribution p(y∗ | x∗,D), we define the predictive confidence of x∗ as the maximum probability conf(x∗) := maxc∈{1,...,C} p(y∗ = c | x∗,D) over class labels." 
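Since the linearized predictive distribution (1) and the generalized probit approximation (2) are used throughout what follows, a minimal NumPy sketch may be helpful; the function name and the toy numbers are ours.

```python
import numpy as np

def probit_predictive(m, V):
    """Generalized probit approximation (2) to the softmax-Gaussian integral.

    m: (C,) mean of the linearized output distribution N(f | m, V) from (1).
    V: (C, C) covariance of that Gaussian.
    Returns the (C,) approximate class probabilities p(y = c | x, D).
    """
    v = np.diag(V)                                 # marginal variances v_ii
    kappa = 1.0 / np.sqrt(1.0 + np.pi / 8.0 * v)   # kappa_i in (2)
    z = m * kappa                                  # scaled logits m_i kappa_i
    z = z - z.max()                                # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Larger output variance pushes the prediction toward the uniform distribution:
m = np.array([3.0, 0.0, 0.0])
print(probit_predictive(m, 0.1 * np.eye(3)))   # confident in class 0 (~0.90)
print(probit_predictive(m, 1e4 * np.eye(3)))   # close to uniform (~1/3 each)
```

This is exactly the mechanism exploited later: if the output variance grows without bound in some direction, the probit-approximated confidence decays toward 1/C.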
}, { "heading": "2.2 RELU AND GAUSSIAN PROCESSES", "text": "The ReLU activation function ReLU(z) := max(0, z) (Nair & Hinton, 2010) has become the defacto choice of non-linearity in deep learning. Given arbitrary real numbers c, it can be generalized as ReLU(z; c) := max(0, z − c), with the “kink” at location c. An alternative formulation, useful below, is in terms of the Heaviside function H as ReLU(z; c) = H(z − c)(z − c). We may define a collection of d such ReLU functions evaluated at some point in R as the function φ : R → RK with z 7→ (ReLU(z; c1), . . . ,ReLU(z; cK))>. We call this function the ReLU feature map; it can be interpreted as “placing” ReLU functions at different locations in R.\nConsider a linear model g : R × RK → R defined by g(x;w) := w>φ(x). Suppose φ regularly places the K generalized ReLU functions centered at (ci)Ki=1 over [cmin, cmax] ⊂ R, where cmin < cmax. If we consider a Gaussian prior p(w) := N ( w ∣∣0, σ2K−1(cmax − cmin)I) over the weights w then, as K goes to infinity, the distribution over g(x) is a Gaussian process with mean 0 and covariance (using the shorthand gx := g(x) and x̄ := min(x, x′); full derivation in Appendix A):\nlim K→∞\ncov(gx, gx′) = σ 2H(x̄− cmin)\n( 1\n3 (x̄3 − c3min)−\n1 2 (x̄2 − c2min)(x+ x′) + (x̄− cmin)xx′ ) =: k1(x, x′; cmin, σ 2),\nfor x̄ ≤ cmax. Since this expression does not depend on cmax, we consider the limit cmax → ∞. The resulting covariance function is the cubic spline kernel (Wahba, 1990)." }, { "heading": "3 METHOD", "text": "Hein et al. (2019) showed that the confidence of point-estimated ReLU networks (i.e. feed-forward nets which use piecewise-affine activation functions and are linear in the output layer) approaches 1 with increasing distance from the training data. For binary classification, Kristiadi et al. (2020) showed that Gaussian-approximated ReLU BNNs f instead approach a constant confidence bounded away from 1, but not necessarily close to the maximum uncertainty value of 1/2. Thus, just being Bayesian as such does not fix overconfidence entirely. A close look at their proof suggests that the issue is a structural limitation of the deep model itself: for any input x∗ and a sufficiently large scalar α, both the mean and standard deviation of the output f(αx∗) are linear functions of x∗. Intuitively, this issue arises because the net only has finitely many ReLU features available to “explain” the data, and thus it “lacks” ReLU features for modeling uncertainty away from the data.\nIn this section, we will utilize the cubic spline kernel to construct a new kernel and method that, intuitively speaking, adds an infinite number ReLU features away from the data to pre-trained BNNs. This construction adds the “missing” ReLU features and endows BNNs with super-quadratic output variance growth, without affecting predictions. All proofs are in Appendix B." }, { "heading": "3.1 THE DOUBLE-SIDED CUBIC SPLINE KERNEL", "text": "The cubic spline kernel constructed above is non-zero only on (cmin,∞) ⊂ R. To make it suitable for modeling uncertainty in an unbounded domain, we set cmin = 0 and obtain a kernel k1→(x, x ′;σ2) := k1(x, x′; 0, σ2) which is non-zero only on (0,∞). Doing an entirely analogous\nconstruction with infinitely many ReLU functions pointing to the left, i.e. ReLU(−z; c), we obtain the kernel k1←(x, x\n′;σ2) := k1→(−x,−x′;σ2), which is non-zero only on (−∞, 0). 
We combine both into the kernel\nk1↔(x, x ′;σ2) := k1←(x, x ′;σ2) + k1→(x, x ′;σ2),\nwhich covers the whole real line (the value at the origin k1↔(0, 0) is zero)—see Figure 2. For multivariate input domains, we define\nk↔(x,x ′;σ2) :=\n1\nN N∑ i=1 k1↔(xi, x ′ i;σ 2) (3)\nfor any x,x′ ∈ RN with N > 1. We here deliberately use a summation, instead of the alternative of a product, since we want the associated GP to add uncertainty anywhere where at least one input dimension has non-vanishing value.3 We call this kernel the double-sided cubic spline (DSCS) kernel. Two crucial properties of this kernel are that it has negligible values around the origin and for any x∗ ∈ RN and α ∈ R, the value k↔(αx∗, αx∗) is cubic in α." }, { "heading": "3.2 RELU-GP RESIDUAL", "text": "Let f : RN × RD → R be an L-layer, real-valued ReLU BNN. Suppose we place infinitely many ReLU features by following the previous construction. Then, we arrive at a zero-mean GP prior GP(f̂ (0) | 0, k↔) of some real-valued function f̂ (0) : RN → R over the input space RN . We can use this GP to model the “missing” uncertainty which, due to the lack of its presence, makes f overconfident far-away from the data. We do so in a standard manner by assuming that the “true” latent function f̃ is the sum of f and f̂ (0):\nf̃ := f + f̂ (0), where f̂ (0) ∼ GP(f̂ (0) | 0, k↔). (4)\nUnder this assumption, given an input x∗, it is clear that f̂ (0) does not affect the expected output of the BNN since the GP over f̂ (0) has zero mean. However, f̂ (0) do additively affect the uncertainty of the BNN’s output f∗ := f(x∗) since if we assume that f∗ ∼ N (E f∗, var f∗), then it follows that f̃∗ ∼ N (E f∗, var f∗ + k↔(x∗,x∗)). Hence, the random function f̂ (0), resulting from placing an infinite number of ReLU features in the input space, indeed models the uncertainty residual of the BNN f . We thus call our method ReLU-GP residual (RGPR).\nUnlike previous methods for modeling residuals with GPs, RGPR does not require a posterior inference since intuitively, the additional infinitely many ReLU features are never part of the training process—their “kinks” are pointing away from the data. So even if we were to actively include them in the training process somehow, they would have (near) zero training gradient and just stay where and as they are. The following statements illustrate this intuition more formally in GP regression under the linearization (1) by assuming w.l.o.g. that the kernel values over the dataset are negligible (by shifting and scaling until the data is sufficiently close to 0 ∈ RN ).\n3By contrast, a product k↔(x,x′;σ2) is zero if one of the k1↔(xi, x′i;σ 2) is zero.\nProposition 1. Suppose f : RN×RD → R defined by (x,θ) 7→ f(x;θ) is a ReLU regression BNN with a prior p(θ) = N (θ | 0,B) and D := {xm, ym}Mm=1 is a dataset. Let f̂ (0) and f̃ be defined as in (4), and let x∗ ∈ RN be arbitrary. Under the linearization of f w.r.t. θ around 0, given that all x1, . . . ,xM are sufficiently close to the origin, the GP posterior of f̃∗ := f̃(x∗) is given by p(f̃∗ | x∗,D) ≈ N (f̃∗ | f(x;µ), g>∗ Σg∗ + k↔(x∗,x∗)), (5) where µ and Σ are the mean and covariance of the posterior of the linearized network, respectively, and g∗ := ∇θf(x∗;θ)|0.\nThe previous proposition shows that the GP prior of f̂ (0) does not affect the BNN’s approximate posterior—f̃ is written as a posteriori f plus a priori f̂ (0). 
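To make the DSCS kernel (3) concrete, the following NumPy sketch implements k1↔ and k↔ and checks the two properties noted above, namely negligible values near the origin and cubic growth under input scaling; the code is ours, written directly from the formulas in the text.

```python
import numpy as np

def k1_right(x, xp, sigma2=1.0):
    """One-sided cubic-spline kernel k1->(x, x'; sigma2) = k1(x, x'; 0, sigma2),
    i.e. infinitely many ReLU features with kinks on (0, inf)."""
    x, xp = np.asarray(x, float), np.asarray(xp, float)
    xbar = np.minimum(x, xp)
    H = (xbar > 0).astype(float)   # Heaviside: kernel vanishes for xbar <= 0
    return sigma2 * H * (xbar**3 / 3.0 - xbar**2 * (x + xp) / 2.0 + xbar * x * xp)

def k1_double(x, xp, sigma2=1.0):
    """Double-sided 1D kernel k1<-> = k1<- + k1->, covering the whole real line."""
    return k1_right(x, xp, sigma2) + k1_right(-x, -xp, sigma2)

def k_dscs(x, xp, sigma2=1.0):
    """Multivariate DSCS kernel (3): mean of the 1D kernels over input dims."""
    return float(np.mean(k1_double(x, xp, sigma2)))

x = np.array([0.3, -0.2, 0.1])
print(k_dscs(x, x))   # small near the data region around the origin
for alpha in (1.0, 2.0, 4.0):
    # k(alpha x, alpha x) / alpha^3 is constant, confirming cubic growth
    print(alpha, k_dscs(alpha * x, alpha * x) / alpha**3)
```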
Therefore, given a pre-trained BNN f with its associated posterior p(θ | D) ≈ N (θ | µ,Σ), we can simply add to its output f(x∗;θ) (with θ ∼ p(θ | D)) a random number f̂ (0)(x∗) ∼ GP(f̂ (0) | 0, k↔(x∗,x∗)). We henceforth assume that f is a pre-trained BNN.\nWhile the previous construction is sufficient for modeling uncertainty far away from the data, it does not model the uncertainty near the data region well. Figure 3(a) shows this behavior: placing infinitely many ReLU features over just the input space yields uncertainty that is not adapted to the data and hence, far away from them, we can still have low variance. To alleviate this issue, we additionally place infinite ReLU features on the representation space of the point-estimated fµ(·) = f( · ;µ), which indeed encodes information about the data since f is a trained BNN, as follows. For each l = 1, . . . , L − 1 and any input x∗, let Nl be the size of the l-th hidden layer of fµ and h(l)(x∗) =: h (l) ∗ be the l-th hidden units. By convention, we assume that N0 := N and h (0) ∗ := x∗. We place for each l = 0, . . . , L− 1 an infinite number of ReLU features on the representation space RNl , and thus we obtain a random function f̂ (l) : RNl → R distributed by the Gaussian process GP(f̂ (l) | 0, k↔). Now, given that N̂ := ∑L−1 l=0 Nl, we define the function f̂ : RN̂ → R by f̂ := f̂ (0) + · · · + f̂ (L−1). This function is therefore a function over all representation (including the input) spaces of fµ, distributed by the additive Gaussian process GP(f̂ | 0, ∑L−1 l=0 k↔). In other words, given the representations h∗ := (h (l) ∗ ) L−1 l=0 of x∗, the marginal over the function output f̂(h∗) =: f̂∗ is thus given by\np(f̂∗) = N ( f̂∗ ∣∣∣∣∣ 0, L−1∑ l=0 k↔ ( h (l) ∗ ,h (l) ∗ ;σ 2 l )) . (6)\nFigure 3(c) visualizes the effect of this definition. The low-variance region modeled by the random function f̂ becomes more compact around the data and can be controlled by varying the kernel hyperparameter σ2l for each layer l = 0, . . . , L − 1. Finally, we can then model the residual in (4) using f̂ instead, i.e. we assume f̃ = f + f̂ .\nThe generalization of RGPR to BNNs with multiple outputs is straightforward. Let f : RN ×RD → RC be a vector-valued, pre-trained, L-layer ReLU BNN. We assume that the sequence of random functions (f̂c : RN̂ → R)Cc=1 is independent and identically distributed by the previous Gaussian process GP(f̂ | 0,∑L−1l=0 k↔). Thus, defining f̂∗ := f̂(h∗) := (f̂1(h∗), . . . , f̂C(h∗))>, we have\np(f̂∗) = N ( f̂∗ ∣∣∣∣∣0, L−1∑ l=0 k↔ ( h (l) ∗ ,h (l) ∗ ;σ 2 l ) I ) . (7)\nFurthermore, as in the real-valued case, for any x∗, the GP posterior p(f̃∗ | x∗,D) is approximately (under the linearization of f ) given by the Gaussians derived from (1) and (7):\np(f̃∗ | x∗,D) ≈ N ( f̃∗ ∣∣∣∣∣ fµ(x∗),J>∗ ΣJ∗ + L−1∑ l=0 k↔ ( h (l) ∗ ,h (l) ∗ ;σ 2 l ) I ) . (8)\nAlthough the derivations above may appear involved, it is worth emphasizing that in practice, the only overheads compared to the usual MC-integrated BNN prediction step are (i) a single additional forward-pass over fµ, (ii) L evaluations of the kernel k↔ and (ii) sampling the C-dimensional Gaussian (7). Note that their costs are negligible compared to the cost of obtaining the standard MC-prediction of f . We refer the reader to Algorithm 1 for a step-by-step pseudocode.\nAlgorithm 1 MC-prediction using RGPR. Differences from the standard procedure are in red. Input:\nPre-trained multi-class BNN classifier f : RN ×RD → RC with posterior p(θ | D). Test point x∗ ∈ RN . 
Prior variance hyperparameters (σ2l )L−1l=0 of f̂ . Inverse link function h. Number of MC samples S. 1: {h(l)∗ }L−1l=1 = forward(fµ,x∗) . Compute representations of x∗ via a forward pass on fµ 2: vs(x∗) = ∑L−1 l=0 k↔(h (l) ∗ ,h (l) ∗ ;σ 2 l ) . Compute the prior variance of f̂ 3: for s = 1, . . . , S do 4: θs ∼ N (θ | µ,Σ) . Sample from the (approximate) posterior of f 5: fs(x∗) = f(x∗;θs) . Forward pass on f using the sampled parameter 6: f̂s(x∗) ∼ N (f̂(h∗) | 0, vs(x∗)I) . Sample from the marginal (7) 7: f̃s(x∗) = fs(x∗) + f̂s(x∗) . Compute f̃(x∗;θs) 8: end for 9: return S−1 ∑S s=1 h(f̃s(x∗)) . Make prediction by averaging" }, { "heading": "4 ANALYSIS", "text": "Here, we will study the theoretical properties of RGPR. Our assumptions are mild: we (i) assume that RGPR is applied only to the input space and (ii) use the network linearization technique. Assumption (i) is the minimal condition for the results presented in this section to hold—similar results can also easily be obtained when hidden layers are also utilized in RGPR. Meanwhile, assumption (ii) is necessary for tractability—in Section 6 we will validate our analysis in general settings.\nThe following two propositions (i) summarize the property that RGPR preserves the original BNN’s prediction and (ii) show that asymptotically, the marginal variance of the output of f̃ grows cubically.\nProposition 2 (Invariance in Predictions). Let f : RN×RD → RC be any network with posterior N (θ | µ,Σ) and f̃ be obtained from f via RGPR (4). Then under the linearization of f , for any x∗ ∈ RN , we have Ep(f̃∗|x∗,D) f̃∗ = Ep(f∗|x∗,D) f∗.\nProposition 3 (Asymptotic Variance Growth). Let f : RN × RD → RC be a C-class ReLU network with posteriorN (θ | µ,Σ) and f̃ be obtained from f via RGPR over the input space. Suppose that the linearization of f w.r.t. θ around µ is employed. For any x∗ ∈ RN with x∗ 6= 0 there exists β > 0 such that for any α ≥ β, the variance of each output component f̃1(αx∗), . . . , f̃C(αx∗) under p(f̃∗ | x∗,D) (8) is in Θ(α3).\nAs a consequence of Proposition 3, in the binary classification case, the confidence of αx∗ decays like 1/ √ α far away from the training data. This can be seen using the (binary) probit approximation. Thus, in this case we obtain the maximum entropy in the limit of α→∞. In the following theorem we formalize this statement in the more general multi-class classification setting.\nTheorem 4 (Uniform Asymptotic Confidence). Let f : RN × RD → RC be a C-class ReLU network equipped with the posterior N (θ | µ,Σ) and let f̃ be obtained from f via RGPR over the\ninput space. Suppose that the linearization of f and the generalized probit approximation (2) is used for approximating the predictive distribution p(y∗ = c | αx∗, f̃ ,D) under f̃ . Then for any input x∗ ∈ RN with x∗ 6= 0 and for every class c = 1, . . . , C,\nlim α→∞\np(y∗ = c | αx∗, f̃ ,D) = 1\nC ." }, { "heading": "5 RELATED WORK", "text": "The mitigation of the asymptotic overconfidence problem has been studied recently. Although Hein et al. (2019) theoretically demonstrated this issue, their proposed method does not fix this issue for α large enough. Kristiadi et al. (2020) showed that any Gaussian-approximated BNN could mitigate this issue even for α = ∞. However, the asymptotic confidence estimates of BNNs converge to a constant in (0, 1), not to the ideal uniform confidence. 
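Algorithm 1 above translates almost line-by-line into code. The sketch below reuses `k_dscs` from the kernel sketch earlier; the `bnn` interface (a representation pass at the posterior mean and a sampled forward pass) is hypothetical and stands in for whatever a given BNN implementation exposes.

```python
import numpy as np

def rgpr_predict(bnn, x, sigma2s, k_dscs, n_samples=10, rng=np.random):
    """MC prediction with RGPR (Algorithm 1) for a multi-class BNN.

    Assumed (hypothetical) interface:
      bnn.representations(x) -> [h^(0) = x, h^(1), ..., h^(L-1)], one forward
                                pass through the posterior-mean network;
      bnn.sample_forward(x)  -> logits f(x; theta_s) with theta_s ~ q(theta | D).
    """
    hs = bnn.representations(x)
    # Prior variance of f_hat: layer-wise sum of DSCS kernel values, as in (6).
    v = sum(k_dscs(h, h, s2) for h, s2 in zip(hs, sigma2s))
    probs = []
    for _ in range(n_samples):
        f = bnn.sample_forward(x)                          # step 5
        f_hat = rng.normal(0.0, np.sqrt(v), size=f.shape)  # step 6, Eq. (7)
        logits = f + f_hat                                 # step 7
        e = np.exp(logits - logits.max())
        probs.append(e / e.sum())                          # softmax link h
    return np.mean(probs, axis=0)                          # step 9
```

Note that, as claimed in the text, the only overhead relative to a standard MC prediction is the single extra forward pass plus the cheap kernel evaluations and Gaussian draws.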
In a non-Bayesian framework, using Gaussian mixture models, Meinke & Hein (2020) integrate density estimates of inliers and outliers data into the confidence estimates of an NN to achieve the uniform confidence far away from the data. Nevertheless, this property has not been previously achieved in the context of BNNs.\nModeling the residual of a predictive model with GP has been proposed by Blight & Ott (1975); Wahba (1978); O’Hagan (1978); Qiu et al. (2020). The key distinguishing factors between RGPR and those methods are (i) RGPR models the residual of BNNs, in contrast to that of point-estimated networks, (ii) RGPR uses a novel kernel which guarantees cubic uncertainty growth, and (iii) RGPR requires no posterior inference. Nevertheless, whenever those methods uses our DSCS kernel, RGPR can be seen as an economical approximation of their posterior: RGPR estimates uncertainty near the data with a BNN, while the GP-DSCS prior estimates uncertainty far away from them.\nA combination of weight- and function-space models has been proposed in the context of nonparametric GP posterior sampling. Wilson et al. (2020) proposed to approximate a function as the sum of a weight-space prior and function-space posterior. In contrast, RGPR models a function as the sum of weight-space posterior and function-space prior in the context of parametric BNNs." }, { "heading": "6 EMPIRICAL EVALUATIONS", "text": "Our goal in this section is (i) to validate our analysis in the preceding section: we aim to show that RGPR’s low confidence far-away from the training data is observable in practice, and (ii) to explore the effect of the hyperparameters of RGPR to the non-asymptotic confidence estimates. We focus on classification—experiments on regression are in Appendix D." }, { "heading": "6.1 ASYMPTOTIC REGIME", "text": "We use standard benchmark datasets: MNIST, CIFAR10, SVHN, and CIFAR100. We use LeNet and ResNet-18 for MNIST and the rest of the datasets, respectively. Our main reference is the method based on Blight & Ott (1975) (with our kernel): We follow Qiu et al. (2020) for combining the network and GP, and for carrying out the posterior inference. We refer to this baseline as the Blight and Ott method (BNO)—cf. Appendix C for an exposition about this method. The base methods, which RGPR is implemented on, are the following recently-proposed BNNs: (i) last-layer Laplace (LLL, Kristiadi et al., 2020), (ii) Kronecker-factored Laplace (KFL, Ritter et al., 2018), (iii) stochastic weight averaging-Gaussian (SWAG, Maddox et al., 2019), and (iv) stochastic variational deep kernel learning (SVDKL, Wilson et al., 2016). All the kernel hyperparameters for RGPR are set to 1. In all cases, MC-integral with 10 posterior samples is used for making prediction.\nTo validate Theorem 4, we construct a test dataset artificially by sampling 2000 uniform noises in [0, 1]N and scale them with a scalar α = 2000. The goal is to distinguish test points from these outliers based on the confidence estimates. Since a visual inspection of these confidence estimates as in Figure 1 is not possible in high dimension, we measure the results using the mean maximum confidence (MMC) and area under ROC (AUR) metrics (Hendrycks & Gimpel, 2017). MMC is useful for summarizing confidence estimates, while AUR tells us the usefulness of the confidences for distinguishing between inliers and outliers.\nThe results are presented in Table 1. We observe that the RGPR-augmented methods are significantly better than their respective base methods. 
In particular, the confidences drop, as shown by the MMC values. We also observe in Table 3 (Appendix D) that the confidence estimates close to the training data do not significantly change. These two facts together yield high AUR values, close to the ideal value of 100. Moreover, most RGPR-imbued methods achieve similar or better performance to BNO baseline, likely be due to uncertainty already presents in the base BNNs. However, these confidences on far-away points are not quite the uniform confidence due to the number of MC samples used—recall that far away from the data, RGPR yields high variance; since the error of MC-integral depends on both the variance and number of samples, a large amount of samples are needed to get accurate MC-estimates. See Figure 5 (Appendix D) for results with 1000 samples: in this more accurate setting, the convergence to the uniform confidence happens at finite (and small) α. Nevertheless, this issue not a detrimental to the detection of far-away outliers, as shown by the AUR values in Table 1." }, { "heading": "6.2 NON-ASYMPTOTIC REGIME", "text": "The main goal of this section is to show that RGPR can also improve uncertainty estimates near the data by varying its kernel hyperparameters. For this purpose, we use a simple hyperparameter optimization using a noise out-of-distribution (OOD) data, similar to Kristiadi et al. (2020), to tune (σ2l )—the details are in Section C.2. We use LLL as the base BNN.\nFirst, we use the rotated-MNIST experiment proposed by Ovadia et al. (2019), where we measure methods’ calibration at different rotation angle, see Figure 4. LLL gives significantly better performance than BNO and RGPR improves the performance further. Moreover, we use standard OOD data tasks where one distinguishes in-distribution from out-distribution samples. We do this with CIFAR10 as the in-distribution dataset against various OOD datasets (more results in Appendix D). As shown in Table 2, LLL outperforms for CIFAR10 BNO and RGPR further improves LLL." }, { "heading": "7 CONCLUSION", "text": "We have shown that adding “missing uncertainty” to ReLU BNNs with a carefully-crafted GP prior that represents infinite ReLU features fixes the asymptotic overconfidence problem of such networks. The core of our method is a generalization of the classic cubic-spline kernel, which, when used as the covariance function of the GP prior, yields a marginal variance which scales cubically in the distance between a test point and the training data. Our main strength lies in the simplicity of the proposed method: RGPR is relative straightforward to implement, and can be applied inexpensively to any pre-trained BNN. Furthermore, extensive theoretical analyses show that RGPR provides significant improvements to previous results with vanilla BNNs. In particular, we were able to show uniform confidence far-away from the training data in multi-class classifications. On a less formal\nnote, our construction, while derived as a post-hoc addition to the network, follows a pleasingly simple intuition that bridges the worlds of deep learning and non-parametric/kernel models: The RGPR model amounts to considering a non-parametric model of infinitely many ReLU features, only finitely many of which are trained as a deep ReLU network." }, { "heading": "APPENDIX A DERIVATIONS", "text": "A.1 THE CUBIC SPLINE KERNEL\nRecall that we have a linear model f : [cmin, cmax] × RK → R with the ReLU feature map φ defined by f(x;w) := w>φ(x) over the input space [cmin, cmax] ⊂ R, where cmin < cmax. 
Furthermore, φ regularly places the K generalized ReLU functions centered at (ci)Ki=1 where ci = cmin + i−1 K−1 (cmax − cmin) in the input space and we consider a Gaussian prior p(w) :=\nN ( w ∣∣0, σ2K−1(cmax − cmin)I) over the weight w. Then, as K goes to infinity, the distribution over the function output f(x) is a Gaussian process with mean 0 and covariance\ncov(f(x), f(x′)) = σ2 cmax − cmin\nK φ(x)>φ(x′) = σ2 cmax − cmin K K∑ i=1 ReLU(x; ci)ReLU(x ′; ci)\n= σ2 cmax − cmin\nK\nK∑ i=1 H(x− ci)H(x′ − ci)(x− ci)(x′ − ci)\n= σ2 cmax − cmin\nK\nK∑ i=1 H(min(x, x′)− ci) ( c2i − ci(x+ x′) + xx′ ) , (9)\nwhere the last equality follows from (i) the fact that both x and x′ must be greater than or equal to ci, and (ii) by expanding the quadratic form in the second line.\nLet x̄ := min(x, x′). Since (9) is a Riemann sum, in the limit of K → ∞, it is expressed by the following integral\nlim K→∞ cov(f(x), f(x′)) = σ2 ∫ cmax cmin H(x̄− c) ( c2 − c(x+ x′) + xx′ ) dc\n= σ2H(x̄− cmin) ∫ min{x̄,cmax} cmin c2 − c(x+ x′) + xx′ dc\n= σ2H(x̄− cmin) [ 1\n3 (z3 − c3min)−\n1 2 (z2 − c2min)(x+ x′) + (z − cmin)xx′\n]\nwhere we have defined z := min{x̄, cmax}. The term H(x̄ − cmin) has been added in the second equality as the previous expression is zero if x̄ ≤ cmin (since in this region, all the ReLU functions evaluate to zero). Note that\nH(x̄− cmin) = H(x− cmin)H(x′ − cmin) is itself a positive definite kernel. We also note that cmax can be chosen sufficiently large so that [−cmax, cmax]d contains for sure the data, e.g. this is anyway true for data from bounded domains like images in [0, 1]d, and thus we can set z = x̄ = min(x, x′)." }, { "heading": "APPENDIX B PROOFS", "text": "Proposition 1. Suppose f : RN×RD → R defined by (x,θ) 7→ f(x;θ) is a ReLU regression BNN with a prior p(θ) = N (θ | 0,B) and D := {xm, ym}Mm=1 is a dataset. Let f̂ (0) and f̃ be defined as in (4), and let x∗ ∈ RN be arbitrary. Under the linearization of f w.r.t. θ around 0, given that all x1, . . . ,xM are sufficiently close to the origin, the GP posterior of f̃∗ := f̃(x∗) is given by\np(f̃∗ | x∗,D) ≈ N (f̃∗ | f(x;µ), g>∗ Σg∗ + k↔(x∗,x∗)), (5) where µ and Σ are the mean and covariance of the posterior of the linearized network, respectively, and g∗ := ∇θf(x∗;θ)|0.\nProof. Under the linearization of f w.r.t. θ around 0, we have\nf(x;θ) ≈ f(x; 0)︸ ︷︷ ︸ =0 +∇θf(x;θ)|0︸ ︷︷ ︸ =:g(x) >θ = g(x)>θ.\nNow, the definition of RGPR implies that we have\nf̃(x) ≈ g(x)>θ + f̂ (0)(x); f̂ (0)(x) ∼ N (0, k↔(x,x)).\nFollowing O’Hagan (1978), we thus obtain the following GP prior over f̃ , which marginal is\nf̃(x) ∼ N (f̃(x) | 0, g(x)>Bg(x) + k↔(x,x)). Suppose we write the dataset as D = (X,y) whereX is the data matrix and y is the target vectors, and x∗ ∈ RN is an arbitrary test point. Let k↔ := (k↔(x∗,x1), . . . k↔(x∗,xM ))>, let K↔ := (K+σ2I) withKij := k↔(xi,xj) and σ2 > 0 sufficiently large be the (regularized) kernel matrix, and let G := (g(x1), . . . , g(xM )) be the matrix of training “features”. As Rasmussen & Williams (2005, Sec. 2.7) suggests, we have then the following GP posterior mean and variance\nE(f̃(x∗) | D) = g(x∗)>µ+ k↔K−1↔ (y − g(x∗)>µ) (10) var (f̃(x∗) | D) = k↔(x∗,x∗) + k>↔K−1↔ k↔ + r>(B−1 +GK−1↔ G>)−1r, (11)\nwhereµ := (B−1+GK−1↔ G >)−1GK−1↔ y and r := g(x∗)−GK−1↔ k↔. Since all training points x1, . . . ,xM are sufficiently close to the origin, by definition of the DSCS kernel, we have k↔ ≈ 0 andK−1↔ ≈ 1/σ2I . These imply that\nµ ≈ (B−1 + 1/σ2GG>)−1(1/σ2Gy) and r ≈ g(x∗). 
In particular, notice that µ is approximately the posterior mean of the Bayesian linear regression on f (Bishop, 2006, Sec. 3.3). Furthermore (10) and (11) become\nE(f̃(x∗) | D) ≈ g(x∗)>µ = f(x∗;µ) var (f̃(x∗) | D) ≈ k↔(x∗,x∗) + g(x∗)> (B−1 + 1/σ2GG>)−1︸ ︷︷ ︸\n=:Σ\ng(x∗),\nrespectively. Notice in particular that Σ is the posterior covariance of the Bayesian linear regression on f . Thus, the claim follows.\nProposition 2 (Invariance in Predictions). Let f : RN×RD → RC be any network with posterior N (θ | µ,Σ) and f̃ be obtained from f via RGPR (4). Then under the linearization of f , for any x∗ ∈ RN , we have Ep(f̃∗|x∗,D) f̃∗ = Ep(f∗|x∗,D) f∗.\nProof. Simply compare the means of the Gaussians p(f̃∗ | x∗,D) in (8) and p(f∗ | x∗,D) in (1).\nTo prove Proposition 3 and Theorem 4, we need the following definition. Let f : RN × RD → RC defined by (x,θ) 7→ f(x;θ) be a feed-forward neural network which use piecewise affine activation functions (such as ReLU and leaky-ReLU) and are linear in the output layer. Such a network is called a ReLU network and can be written as a continuous piecewise-affine function (Arora et al., 2018). That is, there exists a finite set of polytopes {Qi}Pi=1—referred to as linear regions f—such that ∪Pi=1Qi = RN and f |Qi is an affine function for each i = 1, . . . , P (Hein et al., 2019). The following lemma is central in our proofs below (the proof is in Lemma 3.1 of Hein et al. (2019)).\nLemma 5 (Hein et al., 2019). Let {Qi}Pi=1 be the set of linear regions associated to the ReLU network f : RN × RD → RC , For any x ∈ RN with x 6= 0 there exists a positive real number β and j ∈ {1, . . . , P} such that αx ∈ Qj for all α ≥ β.\nProposition 3 (Asymptotic Variance Growth). Let f : RN × RD → RC be a C-class ReLU network with posteriorN (θ | µ,Σ) and f̃ be obtained from f via RGPR over the input space. Suppose that the linearization of f w.r.t. θ around µ is employed. For any x∗ ∈ RN with x∗ 6= 0 there exists β > 0 such that for any α ≥ β, the variance of each output component f̃1(αx∗), . . . , f̃C(αx∗) under p(f̃∗ | x∗,D) (8) is in Θ(α3).\nProof. Let x∗ ∈ RN with x∗ 6= 0 be arbitrary. By Lemma 5 and definition of ReLU network, there exists a linear region R and real number β > 0 such that for any α ≥ β, the restriction of f to R can be written as f |R(αx;θ) = W (αx) + b, for some matrixW ∈ RC×N and vector b ∈ RC , which are functions of the parameter θ, evaluated at µ. In particular, for each c = 1, . . . , C, the c-th output component of f |R can be written by\nfc|R = w>c (αx) + bc, where wc and bc are the c-th row ofW and b, respectively.\nLet c ∈ {1, . . . , C} and let jc(αx∗) be the c-th column of the Jacobian J(αx∗) as defined in (1). Then by definition of p(f̃∗ | x∗,D), the variance of f̃c|R(αx∗)—the c-th diagonal entry of the covariance of p(f̃∗ | x∗,D)—is given by\nvar(f̃c|R(αx∗)) = jc(αx∗)>Σjc(αx∗) + k↔(αx∗, αx∗). Now, from the definition of the DSCS kernel in (3), we have\nk↔(αx∗, αx∗) = 1\nN N∑ i=1 k1↔(αx∗i, αx∗i)\n= 1\nN N∑ i=1 α3 σ2 3 x3∗i\n= α3\nN N∑ i=1 k1↔(x∗i, x∗i)\n∈ Θ(α3). Furthermore, we have\njc(αx∗) >Σjc(αx∗) = ( α(∇θwc|µ)>x+∇θbc|µ )> Σ ( α(∇θwc|µ)>x+∇θbc|µ ) .\nThus, jc(αx∗)>Σjc(αx∗) is a quadratic function of α. Therefore, var(f̃c|R(αx∗)) is in Θ(α3).\nTheorem 4 (Uniform Asymptotic Confidence). Let f : RN × RD → RC be a C-class ReLU network equipped with the posterior N (θ | µ,Σ) and let f̃ be obtained from f via RGPR over the input space. 
Suppose that the linearization of f and the generalized probit approximation (2) is used for approximating the predictive distribution p(y∗ = c | αx∗, f̃ ,D) under f̃ . Then for any input x∗ ∈ RN with x∗ 6= 0 and for every class c = 1, . . . , C,\nlim α→∞\np(y∗ = c | αx∗, f̃ ,D) = 1\nC .\nProof. Let x∗ 6= 0 ∈ RN be arbitrary. By Lemma 5 and definition of ReLU network, there exists a linear region R and real number β > 0 such that for any α ≥ β, the restriction of f to R can be written as f |R(αx) = W (αx) + b, where the matrix W ∈ RC×N and vector b ∈ RC are functions of the parameter θ, evaluated at µ. Furthermore, for i = 1, . . . , C we denote the i-th row and the i-th component ofW and b aswi and bi, respectively. Under the linearization of f , the marginal distribution (8) over the output f̃(αx) holds. Hence, under the generalized probit approximation, the predictive distribution restricted to R is given by\np̃(y∗ = c | αx∗,D) ≈ exp(mc(αx∗)κc(αx∗))∑C i=1 exp(mi(αx∗)κi(αx∗))\n= 1 1 + ∑C i 6=c exp(mi(αx∗)κi(αx∗)−mc(αx∗)κc(αx∗)︸ ︷︷ ︸\n=:zic(αx∗)\n) ,\nwhere for all i = 1, . . . , C,\nmi(αx∗) = fi|R(αx;µ) = w>i (αx) + bi ∈ R, and\nκi(αx) = (1 + π/8 (vii(αx∗) + k↔(αx∗, αx∗))) − 12 ∈ R>0.\nIn particular, for all i = 1, . . . , C, note that m(αx∗)i ∈ Θ(α) and κ(αx)i ∈ Θ(1/α 3 2 ) since vii(αx∗) +k↔(αx∗, αx∗) is in Θ(α3) by Proposition 3. Now, notice that for any c = 1, . . . , C and any i ∈ {1, . . . , C} \\ {c}, we have\nzic(αx∗) = (mi(αx∗)κi(αx∗))− (mc(αx∗)κc(αx∗)) = (κi(αx∗)wi︸ ︷︷ ︸\nΘ ( 1/α 3 2 ) −κc(αx∗)wc︸ ︷︷ ︸ Θ ( 1/α 3 2 ) ) >(αx∗) + κi(αx∗) bi︸ ︷︷ ︸ Θ ( 1/α 3 2 ) −κc(αx∗) bc︸ ︷︷ ︸ Θ ( 1/α 3 2 ) .\nThus, it is easy to see that limα→∞ zic(αx∗) = 0. Hence we have\nlim α→∞ p̃(y∗ = c | αx∗,D) = lim α→∞\n1 1 + ∑C i 6=c exp(zic(αx∗))\n= 1 1 + ∑C i6=c exp(0)\n= 1\nC ,\nas required." }, { "heading": "APPENDIX C FURTHER DETAILS", "text": "C.1 THE BLIGHT AND OTT’S METHOD\nThe Blight and Ott’s method (BNO) models the residual of polynomial regressions. That is, suppose φ : R → RD is a polynomial basis function defined by φ(x) := (1, x, x2, . . . , xD−1), k is an arbitrary kernel, and w ∈ RD is a weight vector, BNO assumes\nf̃(x) := w>φ(x) + f̂(x), where f̂(x) ∼ GP(0, k(x, x)).\nRecently, this method has been extended to neural network. Qiu et al. (2020) apply the same idea— modeling residuals with GPs—to pre-trained networks, resulting in a method called RIO. Suppose that fµ : RN → R is a neural-network with a pre-trained, point-estimated parameters µ. Their method is defined by\nf̃(x) := fµ(x) + f̂(x), where f̂(x) ∼ GP(0, kIO(x,x)). The kernel kIO is a sum of RBF kernels applied on the dataset D (inputs) and the network’s predictions overD (outputs), hence the name IO—input-output. As in the original Blight and Ott’s method, RIO also focuses in doing posterior inference on the GP. Suppose that m(x) and v(x) is the a posteriori marginal mean and variance of the GP, respectively. Then, via standard computations, one can see that even though f is a point-estimated network, f̃ is a random function, distributed a posteriori by\nf̃(x) ∼ N ( f̃(x) ∣∣∣ f̃µ(x) +m(x), v(x)) . Thus, BNO and RIO effectively add uncertainty to point-estimated networks.\nThe posterior inference of BNO and RIO can be computationally intensive, depending on the number of training examples M : The cost of exact posterior inference is in Θ(M3). 
While it can be alleviated by approximate inference, such as via inducing point methods and stochastic optimizations, the posterior inference requirement can still be a hindrance for a practical adoption of BNO and RIO, especially on large problems.\nC.2 HYPERPARAMETER TUNING\nWe have shown in the main text (both theoretically and empirically) that the asymptotic performance of RGPR does not depend on the choice of its hyperparameters (σ2l ) L−1 l=0 . Indeed we simply set each σ2l to its default value 1 for all experiments and showed that RGPR could already fix the asymptotic overconfidence problem effectively.\nNevertheless, Figure 3 gives us a hint that learning these hyperparameters might be beneficial for uncertainty estimation. Intuitively, by increasing (σ2l ), one might be able to make the high confidence (low uncertainty) region more compact. However, if the values of (σ2l ) were too large, the uncertainty will be high even in the data region, resulting in underconfidenct predictions.\nBorrowing the contemporary method in robust learning literature (Hendrycks et al., 2019; Hein et al., 2019; Meinke & Hein, 2020, etc.) one way to train (σ2l ) is by using the following min-max objective which intuitively balances high-confidence predictions on inliers and low-confidence predictions on outliers. LetH be the entropy functional,D the training dataset,Dout an outlier dataset, σ2 := (σ2l ), and λ ∈ R be a trade-off parameter. We define:\nL(σ2) := E x (in) ∗ ∈D\nH ( p̃(y∗ | x(in)∗ ,D;σ2) ) − λ E\nx (out) ∗ ∈Dout\nH ( p̃(y∗ | x(out)∗ ,D;σ2) ) , (12)\nwhere the predictive distribution p̃(y∗ | x∗,D;σ2) is as defined in Section 4 with its dependency to σ2 explicitly shown. In this paper, for the outlier dataset Dout, we use noise dataset constructed by Gaussian blur and contrast scaling as proposed by Hein et al. (2019). We found that this simple dataset is already sufficient for showing good improvements over the default values (σ2l = 1). Nevertheless, using more sophisticated outlier datasets, e.g. those used in robust learning literature, could potentially improve the results further. Lastly, we use the trade-off value of λ = 1 and λ = 0.75 for our experiments with LeNet/ResNet-18 and DenseNet-BC-121, respectively since we found that λ = 1 in the latter architecture generally make the network severely underconfident." }, { "heading": "APPENDIX D ADDITIONAL EXPERIMENTS", "text": "D.1 CLASSIFICATION\nWe show the behavior of a RGPR-imbued image classifier (LLL) in terms of α in Figure 5. While Table 1 has already shown that RGPR makes confidence estimates close to uniform, here we show that the convergence to low confidence occurred for some small α. Furthermore, notice that when α = 1, i.e. at the test data, RGPR maintains the high confidence of the base method.\nD.2 REGRESSION\nTo empirically validate our method and analysis (esp. Proposition 3), we present a toy regression results in Figure 6. RGPR improves the BNN further: Far-away from the data, the error bar becomes wider.\nFor more challenging problems, we employ a subset of the standard UCI regression datasets. Our goal here, similar to the classification case, is to compare the uncertainty behavior of RGPRaugmented BNN baselines near the training data (inliers) and far-away from them (outliers). The outlier dataset is constructed by sampling 1000 points from the standard Gaussian and scale them with α = 2000. Naturally, the metric we choose is the predictive error bar (standard deviation), i.e. the same metric used in Figure 1. 
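As an aside on the tuning objective (12) from Appendix C.2 above: the sketch below shows one gradient step on a shared log-scale parameter using probit-approximated entropies. For brevity it ignores the BNN's own output covariance inside κ and tunes a single σ² instead of one per layer; all names are ours.

```python
import math
import torch

def rgpr_tuning_loss(logits_in, v_in, logits_out, v_out, log_sigma2, lam=1.0):
    """Objective (12): inlier entropy minus lam * outlier entropy.

    logits_*: (B, C) BNN output means on inlier / outlier batches.
    v_*:      (B,) layer-summed DSCS kernel values computed with sigma2 = 1,
              so that the tuned variance is exp(log_sigma2) * v.
    """
    def mean_entropy(logits, v):
        kappa = torch.rsqrt(1.0 + math.pi / 8.0 * torch.exp(log_sigma2) * v)
        p = torch.softmax(logits * kappa.unsqueeze(-1), dim=-1)  # probit (2)
        return -(p * torch.log(p + 1e-12)).sum(-1).mean()

    return mean_entropy(logits_in, v_in) - lam * mean_entropy(logits_out, v_out)

# One Adam step on log_sigma2 (learning rate 0.1, as in the text):
log_sigma2 = torch.zeros((), requires_grad=True)
opt = torch.optim.Adam([log_sigma2], lr=0.1)
loss = rgpr_tuning_loss(torch.randn(8, 10), torch.rand(8),
                        torch.randn(8, 10), 10.0 + torch.rand(8), log_sigma2)
loss.backward()
opt.step()
```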
Following the standard practice (see e.g. Sun et al. (2019)), we use a two-layer ReLU network with 50 hidden units. The Bayesian methods used are LLL, KFL, SWAG, and stochastic variational GP (SVGP, Hensman et al., 2015) using 50 inducing points. Finally, we standardize the data and the hyperparameter for RGPR is set to 0.001 so that RGPR does not incur significant uncertainty on the inliers.\nThe results are presented in Table 4. We can observe that all RGPRs retain high confidence estimates over inlier data and yield much larger error bar compared to the base methods. Furthermore, as we show in Table 5, the RGPR-augmented methods retain the base methods’ predictive performances in terms of test RMSE. All in all, these findings confirm the effectiveness of RGPR in far-away outlier detection.\nD.3 NON-ASYMPTOTIC REGIME\nUsing (12), we show the results of a tuned-RGPR on standard out-of-distribution (OOD) data detection benchmark problems on LeNet/ResNet architecture in Tables 2 and 6. Furthermore, we show results for deeper network (121-layer DenseNet-BC) in Table 7. We optimize (σ2l ) using Adam with learning rate 0.1 over each validation set and the noise dataset (both contain 2000 points) for 10 epochs. Note that this process is quick since no backpropagation over the networks is required. In general tuning the kernel hyperparameters of RGPR lead to significantly lower average confidence (MMC) over outliers compared to the vanilla method (LLL) which leads to higher detection\nperformance (AUR). Finally, we show the calibration performance of RGPR on the DenseNet in Table 8. We observe that the base BNN we use, LLL, does not necessarily give good calibration performance. Applying RGPR improves this, making LLL better calibrated than the “gold standard” baseline BNO.\nWe also compare LLL-RGPR to Deep Ensemble (DE) (Lakshminarayanan et al., 2017) which has been shown to perform better compared to Bayesian methods (Ovadia et al., 2019). As we can see in Table 10, LLL-RGPR is competitive to DE. These results further reinforce our finding that RGPR is also useful in non-asymptotic regime.\nInspecting the optimal hyperparameters (σ2l ), we found that high kernel variances on higher layers tend to be detrimental to the uncertainty estimate, as measured by (12), leading to low variance values on those layers, cf. Table 9. Specifically, for the LeNet architecture, we found that having high kernel variance on the input (the bottom-most layer) is desirable. Meanwhile, the first residual block and the second dense block are the most impactful in terms of uncertainty estimation for the ResNet and DenseNet architectures, respectively." } ]
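For completeness, the two detection metrics reported throughout these tables, MMC and AUR, admit very short implementations; this is a sketch of the standard definitions, not the authors' evaluation code.

```python
import numpy as np

def mmc(confidences):
    """Mean maximum confidence: average of max-class probabilities."""
    return float(np.mean(confidences))

def aur(conf_in, conf_out):
    """Area under the ROC curve for in- vs. out-distribution detection, using
    confidence as the score (rank / Mann-Whitney formulation)."""
    diffs = conf_in[:, None] - conf_out[None, :]
    return float(np.mean((diffs > 0) + 0.5 * (diffs == 0)))

conf_in = np.array([0.99, 0.95, 0.90, 0.97])   # e.g. CIFAR-10 test inputs
conf_out = np.array([0.40, 0.55, 0.92, 0.35])  # e.g. scaled noise outliers
print(mmc(conf_in), mmc(conf_out), aur(conf_in, conf_out))  # AUR = 0.9375
```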
2020
null
SP:bad1f2bea2a00f6edc474cd1e78c4011525348e5
[ "The paper explores a new approach to credit assignment that complements existing work. It focuses on model-free approaches to credit assignment using hindsight information. In contrast to some prior work on this topic, e.g., (Harutyunyan et al. 2019), the paper does not rely explicitly on hand-crafted information, but instead learns to extract useful hindsight information. The contributions of the paper are two-fold. First, the paper introduces two new policy gradient estimators, FC-PG and CCA-PG, and it proves that the novel gradient estimators are unbiased. Second, it provides experimental evidence that the novel estimators are beneficial compared to some prior work (in particular (Harutyunyan et al. 2019)). " ]
Credit assignment in reinforcement learning is the problem of measuring an action’s influence on future rewards. In particular, this requires separating skill from luck, ie. disentangling the effect of an action on rewards from that of external factors and subsequent actions. To achieve this, we adapt the notion of counterfactuals from causality theory to a model-free RL setup. The key idea is to condition value functions on future events, by learning to extract relevant information from a trajectory. We then propose to use these as future-conditional baselines and critics in policy gradient algorithms and we develop a valid, practical variant with provably lower variance, while achieving unbiasedness by constraining the hindsight information not to contain information about the agent’s actions. We demonstrate the efficacy and validity of our algorithm on a number of illustrative problems.
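As an illustration of the estimator structure the abstract describes, the sketch below shows a REINFORCE-style surrogate with a future-conditional baseline. It is our own schematic, and it omits the paper's key ingredient, the learned constraint that the hindsight features carry no information about the action, without which the estimator would be biased.

```python
import torch

def fc_policy_gradient_loss(log_probs, returns, hindsight_baselines):
    """Policy-gradient surrogate with a future-conditional baseline b(s_t, phi_t).

    log_probs:           (T,) log pi(a_t | s_t) along one trajectory.
    returns:             (T,) empirical returns G_t.
    hindsight_baselines: (T,) b(s_t, phi_t), where phi_t is extracted from the
                         trajectory's future; unbiasedness requires phi_t to be
                         (conditionally) independent of the action a_t.
    """
    advantages = returns - hindsight_baselines.detach()
    return -(log_probs * advantages).mean()
```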
[]
[ { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "OpenAI Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "OpenAI: Marcin Andrychowicz", "Bowen Baker", "Maciek Chociej", "Rafal Jozefowicz", "Bob McGrew", "Jakub Pachocki", "Arthur Petron", "Matthias Plappert", "Glenn Powell", "Alex Ray" ], "title": "Learning dexterous in-hand manipulation", "venue": "The International Journal of Robotics Research,", "year": 2020 }, { "authors": [ "Jose A Arjona-Medina", "Michael Gillhofer", "Michael Widrich", "Thomas Unterthiner", "Johannes Brandstetter", "Sepp Hochreiter" ], "title": "Rudder: Return decomposition for delayed rewards", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Marc G Bellemare", "Will Dabney", "Rémi Munos" ], "title": "A distributional perspective on reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Lars Buesing", "Theophane Weber", "Shakir Mohamed" ], "title": "Stochastic gradient estimation with finite differences", "venue": "In NIPS2016 Workshop on Advances in Approximate Inference,", "year": 2016 }, { "authors": [ "Lars Buesing", "Theophane Weber", "Yori Zwols", "Sebastien Racaniere", "Arthur Guez", "Jean-Baptiste Lespiau", "Nicolas Heess" ], "title": "Woulda, coulda, shoulda: Counterfactually-guided policy search. 2019 International Conference for Learning Representations (ICLR), 2019", "venue": null, "year": 2019 }, { "authors": [ "Junyoung Chung", "Caglar Gulcehre", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Gated feedback recurrent neural networks", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Pieter-Tjerk De Boer", "Dirk P Kroese", "Shie Mannor", "Reuven Y Rubinstein" ], "title": "A tutorial on the cross-entropy method", "venue": "Annals of operations research,", "year": 2005 }, { "authors": [ "Johan Ferret", "Raphaël Marinier", "Matthieu Geist", "Olivier Pietquin" ], "title": "Credit assignment as a proxy for transfer in reinforcement learning", "venue": null, "year": 1907 }, { "authors": [ "Jakob N Foerster", "Gregory Farquhar", "Triantafyllos Afouras", "Nantas Nardelli", "Shimon Whiteson" ], "title": "Counterfactual multi-agent policy gradients", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Paul Glasserman", "David D Yao" ], "title": "Some guidelines and guarantees for common random numbers", "venue": "Management Science,", "year": 1992 }, { "authors": [ "Anirudh Goyal", "Alex Lamb", "Jordan Hoffmann", "Shagun Sodhani", "Sergey Levine", "Yoshua Bengio", "Bernhard Schölkopf" ], "title": "Recurrent independent mechanisms", "venue": null, "year": 1909 }, { "authors": [ "Evan Greensmith", "Peter L Bartlett", "Jonathan Baxter" ], "title": "Variance reduction techniques for gradient estimates in reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2004 }, { "authors": [ "Arthur 
Guez", "Fabio Viola", "Theophane Weber", "Lars Buesing", "Steven Kapturowski", "Doina Precup", "David Silver", "Nicolas Heess" ], "title": "Value-driven hindsight modelling", "venue": null, "year": 2019 }, { "authors": [ "Jessica B Hamrick" ], "title": "Analogues of mental simulation and imagination in deep learning", "venue": "Current Opinion in Behavioral Sciences,", "year": 2019 }, { "authors": [ "Anna Harutyunyan", "Will Dabney", "Thomas Mesnard", "Mohammad Gheshlaghi Azar", "Bilal Piot", "Nicolas Heess", "Hado P van Hasselt", "Gregory Wayne", "Satinder Singh", "Doina Precup" ], "title": "Hindsight credit assignment", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nicolas Heess", "Gregory Wayne", "David Silver", "Timothy Lillicrap", "Tom Erez", "Yuval Tassa" ], "title": "Learning continuous control policies by stochastic value gradients", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Geoffrey Hinton", "Nitish Srivastava", "Kevin Swersky" ], "title": "Neural networks for machine learning lecture 6a overview of mini-batch gradient descent", "venue": "Cited on,", "year": 2012 }, { "authors": [ "Chia-Chun Hung", "Timothy Lillicrap", "Josh Abramson", "Yan Wu", "Mehdi Mirza", "Federico Carnevale", "Arun Ahuja", "Greg Wayne" ], "title": "Optimizing agent behavior over long time scales by transporting value", "venue": "Nature communications,", "year": 2019 }, { "authors": [ "Lukasz Kaiser", "Mohammad Babaeizadeh", "Piotr Milos", "Blazej Osinski", "Roy H Campbell", "Konrad Czechowski", "Dumitru Erhan", "Chelsea Finn", "Piotr Kozakowski", "Sergey Levine" ], "title": "Model-based reinforcement learning for atari", "venue": null, "year": 1903 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Eric Langlois", "Shunshi Zhang", "Guodong Zhang", "Pieter Abbeel", "Jimmy Ba" ], "title": "Benchmarking model-based reinforcement learning", "venue": "arXiv preprint arXiv:1907.02057,", "year": 2019 }, { "authors": [ "Marvin Minsky" ], "title": "Steps toward artificial intelligence", "venue": "Proceedings of the IRE,", "year": 1961 }, { "authors": [ "Andrew Y Ng", "Michael I Jordan" ], "title": "Pegasus: A policy search method for large mdps and pomdps", "venue": "arXiv preprint arXiv:1301.3878,", "year": 2013 }, { "authors": [ "Michael Oberst", "David Sontag" ], "title": "Counterfactual off-policy evaluation with gumbel-max structural causal models", "venue": "arXiv preprint arXiv:1905.05824,", "year": 2019 }, { "authors": [ "Emilio Parisotto", "H Francis Song", "Jack W Rae", "Razvan Pascanu", "Caglar Gulcehre", "Siddhant M Jayakumar", "Max Jaderberg", "Raphael Lopez Kaufman", "Aidan Clark", "Seb Noury" ], "title": "Stabilizing transformers for reinforcement learning", "venue": "arXiv preprint arXiv:1910.06764,", "year": 2019 }, { "authors": [ "Judea Pearl" ], "title": "Causality: Models, reasoning, and inference", "venue": null, "year": 2009 }, { "authors": [ "Paulo Rauber", "Avinash Ummadisingu", "Filipe Mutz", "Juergen Schmidhuber" ], "title": "Hindsight policy gradients", "venue": "arXiv preprint arXiv:1711.06006,", "year": 2017 }, { "authors": [ "Julian Schrittwieser", "Ioannis Antonoglou", "Thomas Hubert", "Karen Simonyan", "Laurent Sifre", "Simon Schmitt", "Arthur Guez", "Edward Lockhart", "Demis Hassabis", "Thore Graepel" ], "title": "Mastering atari, go, chess and 
shogi by planning with a learned model", "venue": "arXiv preprint arXiv:1911.08265,", "year": 2019 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "David Silver", "Guy Lever", "Nicolas Heess", "Thomas Degris", "Daan Wierstra", "Martin Riedmiller" ], "title": "Deterministic policy gradient algorithms", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Richard S Sutton", "David A McAllester", "Satinder P Singh", "Yishay Mansour" ], "title": "Policy gradient methods for reinforcement learning with function approximation", "venue": "In Advances in neural information processing systems,", "year": 2000 }, { "authors": [ "Eric Tzeng", "Judy Hoffman", "Kate Saenko", "Trevor Darrell" ], "title": "Adversarial discriminative domain adaptation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Joel Veness", "Marc G Bellemare", "Marcus Hutter", "Alvin Chua", "Guillaume Desjardins" ], "title": "Compress and control", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev" ], "title": "Grandmaster level in starcraft II using multi-agent reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Lex Weaver", "Nigel Tao" ], "title": "The optimal reward baseline for gradient-based reinforcement learning", "venue": "In Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence,", "year": 2001 }, { "authors": [ "Théophane Weber", "Nicolas Heess", "Lars Buesing", "David Silver" ], "title": "Credit assignment techniques in stochastic computation graphs", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Cathy Wu", "Aravind Rajeswaran", "Yan Duan", "Vikash Kumar", "Alexandre M Bayen", "Sham Kakade", "Igor Mordatch", "Pieter Abbeel" ], "title": "Variance reduction for policy gradient with action-dependent factorized baselines", "venue": null, "year": 2018 }, { "authors": [ "Kenny Young" ], "title": "Variance reduced advantage estimation with δ-hindsight credit assignment", "venue": "arXiv preprint arXiv:1911.08362,", "year": 2019 }, { "authors": [ "Weber" ], "title": "Et+)) and Jensen’s inequality", "venue": null, "year": 2004 }, { "authors": [ "policy. F" ], "title": "BETTING AGAINST A FAIR COIN We begin from a simple example, borrowed from Pearl (2009b), to show that two SCMs that induce the same interventional and observational distributions can imply different counterfactual distributions. The example consists of a game to guess the outcome of a fair coin toss. The action A and", "venue": null, "year": 2009 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) agents act in their environments and learn to achieve desirable outcomes by maximizing a reward signal. A key difficulty is the problem of credit assignment (Minsky, 1961), i.e. to understand the relation between actions and outcomes and to determine to what extent an outcome was caused by external, uncontrollable factors, i.e. to determine the share of ‘skill’ and ‘luck’. One possible solution to this problem is for the agent to build a model of the environment, and use it to obtain a more fine-grained understanding of the effects of an action. While this topic has recently generated a lot of interest (Ha & Schmidhuber, 2018; Hamrick, 2019; Kaiser et al., 2019; Schrittwieser et al., 2019), it remains difficult to model complex, partially observed environments.\nIn contrast, model-free reinforcement learning algorithms such as policy gradient methods (Williams, 1992; Sutton et al., 2000) perform simple time-based credit assignment, where events and rewards happening after an action are credited to that action, post hoc ergo propter hoc. While unbiased in expectation, this coarse-grained credit assignment typically has high variance, and the agent will require a large amount of experience to learn the correct relation between actions and rewards. Another issue of model-free methods is that counterfactual reasoning, i.e. reasoning about what would have happened had different actions been taken with everything else remaining the same, is not possible. Given a trajectory, model-free methods can in fact only learn about the actions that were actually taken to produce the data, and this limits the ability of the agent to learn quickly. As environments grow in complexity due to partial observability, scale, long time horizons, and large number of agents, actions taken by the agent will only affect a vanishing part of the outcome, making it increasingly difficult to learn from classical reinforcement learning algorithms. We need better credit assignment techniques.\nIn this paper, we investigate a new method of credit assignment for model-free reinforcement learning which we call Counterfactual Credit Assignment (CCA), that leverages hindsight information to implicitly perform counterfactual evaluation - an estimate of the return for actions other than the ones which were chosen. These counterfactual returns can be used to form unbiased and lower variance estimates of the policy gradient by building future-conditional baselines. Unlike classical Q functions, which also provide an estimate of the return for all actions but do so by averaging over all possible futures, our methods provide trajectory-specific counterfactual estimates, i.e. an estimate of the return for different actions, but keeping as many of the external factors constant between the return and its counterfactual estimate. Our method is inspired by ideas from causality theory, but does not require learning a model of the environment. Our main contributions are: a) proposing a set of environments which further our understanding of when difficult credit assignment leads to poor\npolicy learning; b) introducing new model-free policy gradient algorithms, with sufficient conditions for unbiasedness and guarantees for lower variance. 
In the appendix, we further c) present a collection of model-based policy gradient algorithms extending previous work on counterfactual policy search; d) connect the literature about causality theory, in particular notions of treatment effects, to concepts from the reinforcement learning literature." }, { "heading": "2 COUNTERFACTUAL CREDIT ASSIGNMENT", "text": "" }, { "heading": "2.1 NOTATION", "text": "We use capital letters for random variables and lowercase for the values they take. Consider a generic MDP $(\mathcal{X}, \mathcal{A}, p, r, \gamma)$. Given a current state $x \in \mathcal{X}$ and assuming an agent takes action $a \in \mathcal{A}$, the agent receives reward $r(x, a)$ and transitions to a state $y \sim p(\cdot|x, a)$. The state (resp. action, reward) of the agent at step t is denoted $X_t$ (resp. $A_t$, $R_t$). The initial state of the agent $X_0$ is a fixed $x_0$. The agent acts according to a policy $\pi$, i.e. action $A_t$ is sampled from the policy $\pi_\theta(\cdot|X_t)$ where $\theta$ are the parameters of the policy, and aims to optimize the expected discounted return $\mathbb{E}[G] = \mathbb{E}[\sum_t \gamma^t R_t]$. The return $G_t$ from step t is $G_t = \sum_{t' \ge t} \gamma^{t'-t} R_{t'}$. Finally, we define the score function $s_\theta(\pi_\theta, a, x) = \nabla_\theta \log \pi_\theta(a|x)$; the score function at time t is denoted $S_t = \nabla_\theta \log \pi_\theta(A_t|X_t)$. In the case of a partially observed environment, we assume the agent receives an observation $E_t$ at every time step, and simply define $X_t$ to be the set of all previous observations, actions and rewards, $X_t = (O_{\le t})$, with $O_t = (E_t, A_{t-1}, R_{t-1})$. (Footnote 1: Previous actions and rewards are provided as part of the observation as it is generally beneficial to do so in partially observable Markov decision processes.) $\mathbb{P}(X)$ will denote the probability distribution of a random variable X." }, { "heading": "2.2 POLICY GRADIENT ALGORITHMS", "text": "We begin by recalling two forms of policy gradient algorithms and the credit assignment assumptions they make. The first is the REINFORCE algorithm introduced by Williams (1992), which we will also call the single-action policy gradient estimator:
Proposition 1 (single-action estimator). The gradient of $\mathbb{E}[G]$ is given by
$$\nabla_\theta \mathbb{E}[G] = \mathbb{E}\Big[\sum_{t \ge 0} \gamma^t S_t \big(G_t - V(X_t)\big)\Big],$$
where $V(X_t) = \mathbb{E}[G_t|X_t]$.
The appeal of this estimator lies in its simplicity and generality: to evaluate it, the only requirement is the ability to simulate trajectories, and to compute both the score function and the return. Let us note two credit assignment features of the estimator. First, the score function $S_t$ is multiplied not by the whole return G, but by the return from time t. Intuitively, action $A_t$ can only affect states and rewards coming after time t, and it is therefore pointless to credit action $A_t$ with past rewards. Second, removing the value function $V(X_t)$ from the return $G_t$ does not bias the estimator and typically reduces variance. This estimator updates the policy through the score term; note, however, that the learning signal only updates the policy $\pi_\theta(a|X_t)$ at the value taken by action $A_t = a$ (other values are only updated through normalization). The policy gradient theorem from Sutton et al. (2000), which we will also call the all-action policy gradient, shows it is possible to provide a learning signal to all actions, given that we have access to a Q-function $Q^\pi(x, a) = \mathbb{E}[G_t|X_t = x, A_t = a]$, which we will call a critic in the following.
Proposition 2 (All-action policy gradient estimator). The gradient of $\mathbb{E}[G]$ is given by
$$\nabla_\theta \mathbb{E}[G] = \mathbb{E}\Big[\sum_t \gamma^t \sum_a \nabla_\theta \pi_\theta(a|X_t)\, Q^{\pi_\theta}(X_t, a)\Big].$$
A particularity of the all-action policy gradient estimator is that the term at time t for updating the policy, $\nabla_\theta \pi_\theta(a|X_t) Q^{\pi_\theta}(X_t, a)$, depends only on past information; this is in contrast with the score function estimates above, which depend on the return, a function of the entire trajectory. Proofs can be found in appendix D.1.
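To make Proposition 1 concrete, here is a minimal runnable sketch (our illustration, not code from the paper) that estimates the single-action policy gradient on a one-step, two-action problem, with and without the baseline $V(X_t)$; the toy reward values and noise scale are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([0.3, -0.2])  # logits of a two-action softmax policy

def policy(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def sample_grad(theta, use_baseline):
    p = policy(theta)
    a = rng.choice(2, p=p)
    g = (1.0 if a == 0 else 0.5) + rng.normal(0.0, 2.0)  # return with exogenous noise
    score = -p.copy()
    score[a] += 1.0                                      # grad of log pi(a) w.r.t. logits
    baseline = p @ np.array([1.0, 0.5]) if use_baseline else 0.0
    return score * (g - baseline)

for use_baseline in (False, True):
    grads = np.stack([sample_grad(theta, use_baseline) for _ in range(50_000)])
    print(f"baseline={use_baseline}: mean={grads.mean(0)}, per-coord var={grads.var(0)}")
```

Both estimates agree in expectation; the baseline removes variance coming from the mean reward, but note that it cannot remove the exogenous noise term, which is the gap the future-conditional baselines below are designed to close.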
" }, { "heading": "2.3 INTUITIVE EXAMPLE ON HINDSIGHT REASONING AND SKILL VERSUS LUCK", "text": "Imagine a scenario in which Alice just moved to a new city, is learning to play soccer, and goes to the local soccer field to play a friendly game with a group of other kids she has never met. As the game goes on, Alice does not seem to play at her best and makes some mistakes. It turns out, however, that her partner Megan is a strong player, and she eventually scores the goal that makes the game a victory. What should Alice learn from this game?
When using the single-action policy gradient estimate, the outcome of the game being a victory, and assuming a ±1 reward scheme, all her actions are made more likely; this is in spite of the fact that during this particular game she may not have played well and that the victory is actually due to her strong teammate. From an RL point of view, her actions are wrongly credited for the victory and positively reinforced as a result; effectively, Alice was lucky rather than skillful. Regular baselines do not mitigate this issue: as Alice did not a priori know the skill of Megan, her best guess was that she had a 50% chance of winning the game, with a corresponding baseline of 0. This could be fixed by understanding that Megan's strong play was not a consequence of Alice's play, that her skill was a priori unknown but known in hindsight, and that it is therefore valid to retroactively include her skill level in the baseline. A hindsight baseline, conditioned on Megan's estimated skill level, would therefore be closer to 1, driving the advantage (and corresponding learning signal) close to 0.
As pointed out by Buesing et al. (2019), situations in which hindsight information is helpful in understanding a trajectory are frequent. In that work, the authors adopt a model-based framework, where hindsight information is used to ground counterfactual trajectories (i.e. trajectories under different actions, but the same randomness). Our proposed approach follows a similar intuition, but is model-free: we attempt to measure, instead of model, information known in hindsight to compute a future-conditional baseline, with the constraint that the captured information must not have been caused by the agent." }, { "heading": "2.4 FUTURE-CONDITIONAL POLICY GRADIENT ESTIMATOR (FC-PG)", "text": "Intuitively, our approach for assigning proper credit to action At is as follows: via learned statistics Φt we capture relevant information from the rest of the trajectory, e.g. including observations Ot′ at times t′ greater than t. We then learn value functions which are conditioned on the additional hindsight information contained in Φt. In general, these future-conditional values and critics would be biased for use in a policy gradient algorithm; we therefore need to correct their impact on the policy gradient through an importance correction term.
Theorem 1 (Future single-action policy gradient estimator). Let Φt be an arbitrary random variable. The following is an unbiased estimator of the gradient of $\mathbb{E}[G]$:
$$\nabla_\theta \mathbb{E}[G] = \mathbb{E}\Big[\sum_t \gamma^t S_t \Big( G_t - \frac{\pi_\theta(A_t|X_t)}{P^{\pi_\theta}(A_t|X_t, \Phi_t)}\, V(X_t, \Phi_t) \Big)\Big] \qquad (1)$$
where $V(X_t, \Phi_t) = \mathbb{E}[G_t|X_t, \Phi_t]$ is the future Φ-conditional value function, and $P^{\pi_\theta}(A_t|X_t, \Phi_t)$ is the posterior probability of action At given (Xt, Φt), for trajectories generated by policy πθ. (Footnote 2: Note more generally that any function of Xt and Φt can in fact be used as a valid baseline.)
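The following Monte Carlo check (an illustrative toy setup of our own, not from the paper) shows that when the hindsight statistic Φ does depend on the action, the naive baseline V(Φ) biases the estimator, while the importance-corrected baseline of Theorem 1 stays unbiased; the binary action/noise structure and all constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.6, 0.4])                 # fixed policy pi(a), single state
r = np.array([1.0, 0.0])                 # deterministic reward per action
q = 0.3                                  # P(b = 1), exogenous noise bit
ret = lambda a, b: r[a] + 2.0 * b        # return G(a, b)
score = lambda a: np.eye(2)[a] - p       # grad of log pi(a) w.r.t. softmax logits
true_grad = sum(p[a] * score(a) * r[a] for a in (0, 1))

# Phi = a + b in {0,1,2}: enumerate the posterior P(a|Phi) and V(Phi)=E[G|Phi].
post, V = {}, {}
for phi in (0, 1, 2):
    mass = np.array([p[a] * ((q if phi - a == 1 else 1 - q)
                             if phi - a in (0, 1) else 0.0) for a in (0, 1)])
    post[phi] = mass / mass.sum()
    V[phi] = sum(mass[a] * ret(a, phi - a) for a in (0, 1)
                 if phi - a in (0, 1)) / mass.sum()

def mc_grad(corrected, n=200_000):
    g = np.zeros(2)
    for _ in range(n):
        a = rng.choice(2, p=p)
        b = int(rng.random() < q)
        phi = a + b
        base = (p[a] / post[phi][a]) * V[phi] if corrected else V[phi]
        g += score(a) * (ret(a, b) - base)
    return g / n

print("true gradient     :", true_grad)
print("naive V(Phi)      :", mc_grad(False))   # biased: Phi leaks the action
print("corrected (Thm 1) :", mc_grad(True))    # matches the true gradient
```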
Theorem 2 (Future all-action policy gradient estimator). The following is an unbiased estimator of the gradient of $\mathbb{E}[G]$:
$$\nabla_\theta \mathbb{E}[G] = \mathbb{E}\Big[\sum_t \gamma^t \sum_a \nabla_\theta \log \pi_\theta(a|X_t)\, P^{\pi_\theta}(a|X_t, \Phi_t)\, Q^{\pi_\theta}(X_t, \Phi_t, a)\Big] \qquad (2)$$
where $Q^\pi(X_t, \Phi_t, a) = \mathbb{E}[G_t|X_t, \Phi_t, A_t = a]$ is the future-conditional Q function (critic). Furthermore, we have $Q^{\pi_\theta}(X_t, a) = \mathbb{E}\Big[ Q^{\pi_\theta}(X_t, \Phi_t, a)\, \frac{P^{\pi_\theta}(a|X_t, \Phi_t)}{\pi_\theta(a|X_t)} \,\Big|\, X_t \Big]$.
Proofs can be found in appendix D.2. These estimators bear similarity to (and indeed, generalize) the Hindsight Credit Assignment estimator (Harutyunyan et al., 2019); see the literature review and appendix C for a discussion of the connections." }, { "heading": "2.5 COUNTERFACTUAL CREDIT ASSIGNMENT POLICY GRADIENT (CCA-PG)", "text": "The previous section provides a family of estimators, but does not specify which Φ should be used, and what type of Φ would make the estimator useful. Instead of hand-crafting Φ, we will learn to extract Φ from the trajectory (the sequence of observations) (Ot′)t′≥0. A useful representation Φ of the future will simultaneously satisfy two objectives:
• Φt is predictive of the outcome (the return): this is done by learning a Φ-conditional value function, through minimization of $(G_t - V(X_t, \Phi_t))^2$ or $(G_t - Q(X_t, a, \Phi_t))^2$.
• The statistic Φt is 'not a consequence' of action At: this is done by minimizing (with respect to Φt) a surrogate independence maximization (IM) loss LIM which is non-negative, and zero if and only if At and Φt are conditionally independent given Xt.
Intuitively, the statistic Φ captures factors exogenous to the agent (hence the conditional independence constraint) that still significantly affect the outcome (hence the return prediction loss). The IM constraint enables us to derive the CCA-PG estimator:
Theorem 3 (single-action CCA-PG estimator). If At is independent from Φt given Xt, the following is an unbiased estimator of the gradient of $\mathbb{E}[G]$:
$$\nabla_\theta \mathbb{E}[G] = \mathbb{E}\Big[\sum_t \gamma^t S_t \big(G_t - V(X_t, \Phi_t)\big)\Big] \qquad (3)$$
Furthermore, the hindsight advantage has no higher variance than the forward one: $\mathbb{E}\big[(G_t - V(X_t, \Phi_t))^2\big] \le \mathbb{E}\big[(G_t - V(X_t))^2\big]$.
Theorem 4 (all-action CCA-PG estimator). Under the same condition, the following is an unbiased estimator of the gradient of $\mathbb{E}[G]$:
$$\nabla_\theta \mathbb{E}[G] = \mathbb{E}\Big[\sum_t \gamma^t \sum_a \nabla_\theta \pi_\theta(a|X_t)\, Q^{\pi_\theta}(X_t, \Phi_t, a)\Big] \qquad (4)$$
Also, we have for all a, $Q^{\pi_\theta}(X_t, a) = \mathbb{E}[Q^{\pi_\theta}(X_t, \Phi_t, a)|X_t, A_t = a]$.
Proofs can be found in appendix D.3. The benefit of the first estimator (equation 3) is clear: under the specified condition, and compared to the regular policy gradient estimator, the CCA estimator is also unbiased, but the variance of its advantage $G_t - V(X_t, \Phi_t)$ (the critical component behind the variance of the overall estimator) is no higher.
For the all-action estimator, the benefits of CCA (equation 4) are less self-evident, since this estimator has higher variance than the regular all-action estimator (which has variance 0). The interest here lies in the bias due to learning imperfect Q functions. Both estimators require learning a Q function from data; any error in Q leads to a bias in π. Learning Q(Xt, a) requires averaging over all possible trajectories initialized with state Xt and action a: in high-variance situations, this will require a lot of data. In contrast, if the agent could measure a quantity Φt which has a high impact on the return but is not correlated with the agent's action At, it could be far easier to learn Q(Xt, Φt, a). This is because Q(Xt, Φt, a) computes the average of the return Gt conditional on (Xt, Φt, a); if Φt has a high impact on Gt, the variance of that conditional return will be lower, and learning its average will in turn be simpler. Interestingly, note also that Q(Xt, Φt, a) (in contrast to Q(Xt, a)) is a trajectory-specific estimate of the return for a counterfactual action.
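A minimal numeric sketch (our own toy construction) of the variance claim in Theorem 3: when Φ captures exogenous noise that is independent of the action, the hindsight advantage keeps the gradient estimate unbiased while its variance collapses:

```python
import numpy as np

rng = np.random.default_rng(2)
p = np.array([0.6, 0.4])                       # assumed policy over two actions
r = np.array([1.0, 0.0])                       # assumed per-action reward
a = rng.choice(2, p=p, size=200_000)
eps = rng.normal(0.0, 5.0, size=a.shape)       # exogenous noise, independent of a
G = r[a] + eps
score = np.eye(2)[a] - p                       # per-sample score w.r.t. logits

V_x = p @ r                                    # forward baseline V(X)
V_xphi = p @ r + eps                           # hindsight baseline V(X, Phi) with Phi = eps

for name, base in [("V(X)", V_x), ("V(X,Phi)", V_xphi)]:
    adv = G - base
    grad = (score * adv[:, None]).mean(0)
    print(f"{name:9s} grad={grad}, advantage var={adv.var():.3f}")
```

Both baselines yield the same gradient in expectation, but the hindsight advantage removes the exogenous term entirely, which is exactly the mechanism exploited in the experiments below.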
" }, { "heading": "2.6 ALGORITHMIC AND IMPLEMENTATION DETAILS", "text": "In this section, we provide one potential implementation of the CCA-PG estimator. Note, however, that in order to be valid, the estimator only needs to satisfy the conditional independence assumption, and alternative strategies could be investigated. The agent is composed of four components:
• Agent network: We assume the agent constructs an internal state Xt from (Ot′)t′≤t using an arbitrary network, for instance an RNN, i.e. $X_t = \mathrm{RNN}_\theta(O_t, X_{t-1})$. From Xt the agent computes a policy πθ(a|Xt).
• Hindsight network: Additionally, we assume the agent uses a hindsight network $\phi_\theta$ which computes a hindsight statistic $\Phi_t = \phi_\theta((O, X, A))$ (where (O, X, A) is the sequence of all observations, agent states and actions in the trajectory), which may depend arbitrarily on the vectors of observations, agent states and actions (in particular, it may depend on observations from timesteps t′ ≥ t). We investigated two architectures. The first is a backward RNN, where $(\Phi_t, B_t) = \mathrm{RNN}_\theta(X_t, B_{t+1})$, with Bt the state of the backward RNN. Backward RNNs are justified in that they can extract information from arbitrary-length sequences, and allow making the statistic Φt a function of the entire trajectory. They also have the inductive bias of focusing more on near-future observations. The second is a transformer (Vaswani et al., 2017; Parisotto et al., 2019). Alternative networks could be used, such as attention-based networks (Hung et al., 2019) or RIMs (Goyal et al., 2019).
• Value network: The third component is a future-conditional value network $V_\theta(X_t, \Phi_t)$.
• Hindsight classifier: The last component is a probabilistic classifier $h_\omega$ with parameters ω that takes $(X_t, \Phi_t)$ as input and outputs a distribution over At.
Learning is ensured through the minimization of four losses: the hindsight baseline loss $L_{hs} = \sum_t (G_t - V_\theta(X_t, \Phi_t))^2$ (optimized with respect to θ); the hindsight classifier loss $L_{sup} = -\sum_t \mathbb{E}[\log h_\omega(A_t|X_t, \Phi_t)]$ (optimized with respect to ω only; all other parameters are treated as constants); the policy gradient surrogate loss $L_{PG} = \sum_t \log \pi_\theta(A_t|X_t)\,\overline{(G_t - V(X_t, \Phi_t))}$, where the bar notation indicates that the quantity is treated as a constant from the point of view of gradient computation; and finally the aforementioned independence loss LIM, which ensures the conditional independence between At and Φt. We investigated two IM losses. The first is the Kullback-Leibler divergence between the distributions $P^{\pi_\theta}(A_t|X_t)$ and $P^{\pi_\theta}(A_t|X_t, \Phi_t)$. In this case, the KL can be estimated by $\sum_a P^{\pi_\theta}(a|X_t)\,\big(\log P^{\pi_\theta}(a|X_t) - \log P^{\pi_\theta}(a|X_t, \Phi_t)\big)$; $P^{\pi_\theta}(a|X_t)$ is simply the policy $\pi_\theta(a|X_t)$, and the posterior $P^{\pi_\theta}(a|X_t, \Phi_t)$ can be approximated by the probabilistic classifier $h_\omega(a|X_t, \Phi_t)$. This results in $L_{IM}(t) = \sum_a \pi_\theta(a|X_t)\,\big(\log \pi_\theta(a|X_t) - \log h_\omega(a|X_t, \Phi_t)\big)$. We also investigated the conditional mutual information between At and Φt, again approximated using h. We did not see significant differences between the two, with the KL slightly outperforming the mutual information. Finally, note that, in contrast to the classifier loss, when optimizing the IM loss ω is treated as a constant.
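A minimal sketch of the KL-based IM loss above (the probability vectors here are assumed placeholders; in the agent they would come from the policy head and the hindsight classifier):

```python
import numpy as np

def im_loss(pi_probs: np.ndarray, h_probs: np.ndarray) -> float:
    """KL(pi(.|x) || h(.|x, phi)): non-negative, and zero iff the classifier
    cannot extract any extra information about the action from phi."""
    eps = 1e-8
    return float(np.sum(pi_probs * (np.log(pi_probs + eps) - np.log(h_probs + eps))))

pi_probs = np.array([0.7, 0.3])      # policy pi(a | x)
h_probs = np.array([0.9, 0.1])       # classifier estimate of P(a | x, phi)
print(im_loss(pi_probs, h_probs))    # > 0: phi still predicts the action
print(im_loss(pi_probs, pi_probs))   # = 0: conditional independence holds
```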
Parameter updates and a figure depicting the architecture can be found in Appendix A." }, { "heading": "3 NUMERICAL EXPERIMENTS", "text": "Given its guarantees on lower variance and unbiasedness, we run all our experiments with the single-action version of CCA-PG." }, { "heading": "3.1 BANDIT WITH FEEDBACK", "text": "We first demonstrate the benefits of hindsight value functions in a toy problem designed to highlight them. We consider a contextual bandit problem with feedback. Given $N, K \in \mathbb{N}$, we sample for each episode an integer context $-N \le C \le N$ as well as an exogenous noise $r \sim \mathcal{N}(0, \sigma_r)$. Upon taking action $A \in \{-N, \dots, N\}$, the agent receives a reward $R = -(C - A)^2 + r$. Additionally, the agent is provided with a K-dimensional feedback vector $F = U_C + V_A + W r$, where $U_n, V_n \in \mathbb{R}^K$ for $-N \le n \le N$ and $W \in \mathbb{R}^K$ are fixed vectors; in our case, for each seed, they are sampled from a standard Gaussian distribution and kept constant through all episodes. More details about this problem, as well as variants, are presented in Appendix B.1.
For this problem, the optimal policy is to choose A = C, resulting in an average reward of 0. However, the reward R is the sum of the informative reward $-(C - A)^2$ and the noisy reward r, uncorrelated with the action. The higher the standard deviation σr, the more difficult it is to perform proper credit assignment, as high rewards are more likely due to a high value of r than to an appropriate choice of action. On the other hand, the feedback F contains information about C, A and r. If the agent can extract information Φ from F in order to capture information about r, and use it to compute a hindsight value function, the effect of the perturbation r may be removed from the advantage, resulting in a significantly lower-variance estimator. However, if the agent blindly uses F to compute the hindsight value, information about the context and action will 'leak' into the hindsight value, leading to an advantage of 0 and no learning: intuitively, the agent will assume the outcome is entirely controlled by chance, and that all actions are equivalent, resulting in a form of learned helplessness.
We investigate the proposed algorithm with N = 10, K = 64. As can be seen in Fig. 1, increasing the variance of the exogenous noise leads to a dramatic decrease of performance for the vanilla PG estimator without the hindsight baseline; in contrast, the CCA-PG estimator is generally unaffected by the exogenous noise. For very low levels of exogenous noise, however, CCA-PG suffers from a decrease in performance. This is due to the agent computing a hindsight statistic Φ which is not perfectly independent from A, leading to bias in the policy gradient update. The agent attributes part of the reward to chance, despite the fact that in the low-noise regime the outcome is entirely due to the agent's action. To demonstrate this, and to evaluate the impact of the independence constraint on performance, we run CCA-PG with different values of the weight λIM of the independence maximization loss, as seen in Fig. 1. For lower values of this parameter, i.e. when Φ and A have a larger mutual information, the performance is dramatically degraded.
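For concreteness, a sketch of this environment (shapes and sampling are assumptions consistent with the description above, with N = 10, K = 64):

```python
import numpy as np

class BanditWithFeedback:
    def __init__(self, N=10, K=64, sigma_r=3.0, seed=0):
        rng = np.random.default_rng(seed)
        self.N, self.sigma_r, self.rng = N, sigma_r, rng
        self.U = rng.standard_normal((2 * N + 1, K))   # context embeddings U_n
        self.V = rng.standard_normal((2 * N + 1, K))   # action embeddings V_n
        self.W = rng.standard_normal(K)                # noise direction W

    def reset(self):
        self.C = self.rng.integers(-self.N, self.N + 1)
        self.r_noise = self.rng.normal(0.0, self.sigma_r)
        return self.C

    def step(self, A):
        R = -(self.C - A) ** 2 + self.r_noise
        F = self.U[self.C + self.N] + self.V[A + self.N] + self.W * self.r_noise
        return R, F                                    # reward and feedback vector

env = BanditWithFeedback()
c = env.reset()
R, F = env.step(c)        # the optimal action A = C yields R = r_noise
print(c, R, F.shape)
```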
" }, { "heading": "3.2 KEY-TO-DOOR ENVIRONMENTS", "text": "Task Description. We introduce the Key-To-Door family of environments as a testbed of tasks where credit assignment is hard and is necessary for success. In this environment (cf. Fig. 2), the agent has to pick up a key in the first room, for which it has no immediate reward. In the second room, the agent can pick up 10 apples, which each give immediate rewards. In the final room, the agent can open a door (only if it has picked up the key in the first room), and receive a small reward. In this task, a single action (i.e. picking up the key) has a very small impact on the reward the agent receives in the final room, while its episode return is largely driven by its performance in the second room (i.e. picking up apples).
We now consider two instances of the Key-To-Door family that illustrate the difficulty of credit assignment in the presence of extrinsic variance. In the Low-Variance-Key-To-Door environment, each apple is worth a reward of 1 and opening the final door also gives a reward of 1. Thus, an agent that solves the apple phase perfectly sees very little variance in its episode return, and the learning signal for picking up the key and opening the door is relatively strong.
High-Variance-Key-To-Door keeps the overall structure of the Key-To-Door task, but now the reward for each apple is randomly sampled to be either 1 or 10, and fixed within the episode. In this setting, even an agent that has a perfect apple-phase policy sees a large variance in episode returns, and thus the learning signal for picking up the key and opening the door is comparatively weaker. Appendix B.2.1 has some additional discussion illustrating the difficulty of learning in such a setting.
Results. We test CCA-PG on our environments, and compare it against actor-critic (Williams, 1992), as well as State-conditional HCA and Return-conditional HCA (Harutyunyan et al., 2019), as baselines. We test using either a backward LSTM (referred to as CCA-PG RNN) or an attention model (referred to as CCA-PG Attn) for the hindsight function. Details of the experimental setup are provided in Appendix B.2.2. All results are reported as median performances over 10 seeds.
We evaluate agents both on their ability to maximize total reward, and on their ability to solve the specific credit assignment problem of picking up the key and opening the door. Figure 3 compares CCA-PG with the baselines on the High-Variance-Key-To-Door task. Both CCA-PG architectures outperform the baselines in terms of total reward, as well as probability of picking up the key and opening the door.
This example highlights the capacity of CCA-PG to learn and incorporate trajectory-specific external factors into its baseline, resulting in lower-variance estimators. Despite this being a difficult task for credit assignment, CCA-PG is capable of solving it quickly and consistently. On the other hand, vanilla actor-critic is greatly impacted by this external variance, and needs around 3 × 10^9 environment steps to reach an 80% probability of opening the door. CCA-PG also outperforms State- and Return-conditional HCA, which do use hindsight information, but in a more limited way than CCA-PG.
On the Low-Variance-Key-To-Door task, due to the lack of extrinsic variance, standard actor-critic is able to perfectly solve the environment. However, it is interesting to note that CCA-PG still matches this perfect performance. On the other hand, the other hindsight methods struggle with both door-opening and apple-gathering. This might be explained by the fact that both these techniques do not guarantee lower variance, and rely strongly on their learned hindsight classifiers for their policy gradient estimators, which can be harmful when these quantities are not perfectly learned.
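The arithmetic behind the difficulty, assuming a perfect apple policy (see also Appendix B.2.1), can be sketched in a few lines; Φ is taken to be the per-episode apple value:

```python
# High-Variance-Key-To-Door: 10 apples worth 1 or 10 each, door reward 1.
apple_values = [1, 10]                                    # sampled once per episode
door_reward = 1.0
forward_baseline = sum(10 * v for v in apple_values) / 2  # E[G] = 55

for v in apple_values:
    G_key = 10 * v + door_reward
    hindsight_baseline = 10 * v        # V(X, Phi) = E[G | apple value]
    print(f"apple={v:2d}: forward adv (key) = {G_key - forward_baseline:+.1f}, "
          f"hindsight adv (key) = {G_key - hindsight_baseline:+.1f}")
```

The forward advantage swings between roughly -44 and +46 depending purely on luck, while the hindsight advantage for picking up the key is a stable +1, i.e. exactly the skill component.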
See Appendix B.2.3 for additional experiments and ablations on these environments.
These experiments demonstrate that CCA-PG is capable of efficiently leveraging hindsight information to mitigate the challenge of external variance and learn strong policies that outperform baselines. At the same time, it suffers no drop in performance when used in cases where external variance is minimal." }, { "heading": "3.3 TASK INTERLEAVING", "text": "Motivation. In the real world, human activity can be seen as solving a large number of loosely related problems. These problems are not solved sequentially, as one may temporarily engage with a problem and only continue engaging with it, or receive feedback from earlier actions, significantly later. At an abstract level, one could see this lifelong learning process as solving problems not in a sequential but in an interleaved fashion. The structure of this interleaving will also typically vary over time. Despite this very complex structure, and despite receiving high-variance rewards from the future, humans are able to quickly make sense of these varying episodes and correctly credit their actions. This learning paradigm is quite different from what is usually considered in reinforcement learning. Indeed, focus is mostly put on agents trained on a single task, with an outcome dominated by the agent's actions, where long-term credit assignment is not required and where every episode will be structurally the same. To understand the effects of this interleaving on lifelong learning, we introduce a new class of environments capturing the structural properties mentioned above. In contrast to most work on multi-task learning, we do not assume a clear delineation between subtasks: each agent will encounter multiple tasks in a single episode, and it is the agent's responsibility to implicitly detect boundaries between them.
Task Description. As described in Fig. 4, this task consists of interleaved pairs of query-answer rooms with different visual contexts that represent different tasks. Each task has an associated mapping of 'good' (resp. 'bad') colors yielding high (resp. zero) reward. Each episode is composed of randomly sampled tasks and color pairs within those tasks. The ordering and the composition of each episode are random across tasks and color pairs. A visual example of what an episode looks like can be seen in Fig. 4. Additional details are provided in B.3.1. A sketch of how such interleaved episodes could be composed is given below.
The 6 tasks we consider next (numbered #1 to #6) are respectively associated with rewards of 80, 4, 100, 6, 2 and 10. Tasks #2, #4, #5 and #6 are referred to as 'hard', while tasks #1 and #3 are referred to as 'easy' because of their large associated rewards. The 2-, 4- and 6-task settings consider tasks 1-2, 1-4 and 1-6, respectively. In addition to the total reward, we record the probability of picking up the correct square for the easy and hard tasks separately. Performance on the hard tasks indicates the ability to do fine-grained credit assignment.
Results. While CCA-PG is able to perfectly solve both the 'easy' and 'hard' tasks in the three setups in less than 5 × 10^8 environment steps (Fig. 5), actor-critic is only capable of solving the 'easy' tasks, for which the associated rewards are large. Even after 2 × 10^9 environment steps, actor-critic is still greatly impacted by the variance and remains incapable of solving the 'hard' tasks in any of the three settings. CCA also outperforms actor-critic in terms of the total reward obtained in each setting.
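As promised above, here is a hypothetical sketch of how such interleaved episodes could be composed; the task rewards follow the text, but the room count and sampling scheme are assumptions for illustration only:

```python
import random

TASK_REWARDS = {1: 80, 2: 4, 3: 100, 4: 6, 5: 2, 6: 10}

def sample_episode(task_ids, n_rooms=8, seed=None):
    rng = random.Random(seed)
    episode = []
    for _ in range(n_rooms):
        task = rng.choice(task_ids)             # tasks interleave at random
        good_color = rng.randrange(10)          # index of the rewarding square
        episode.append({"task": task, "good_color": good_color,
                        "reward_if_correct": TASK_REWARDS[task]})
    return episode

for room in sample_episode([1, 2], seed=0):     # the '2-task' setting
    print(room)
```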
State-conditional and Return-conditional HCA were also evaluated on this task, but their results are not reported, as almost no learning was taking place on the 'hard' tasks. Details of the experimental setup are provided in B.3.2. All results are reported as median performances over 10 seeds. More results, along with an ablation study, can be found in B.3.3.
Through efficient use of hindsight, CCA-PG is able to take into account trajectory-specific factors such as the kinds of rooms encountered in the episode and their associated rewards.
In the case of the Multi-Task Interleaving environment, an informative hindsight function would capture the reward for different contexts and expose as Φt all rewards obtained in the episode except those associated with the current context. This experiment again highlights the capacity of CCA-PG to solve hard credit assignment problems in a context where the return is affected by multiple distractors, while PG remains highly sensitive to them." }, { "heading": "4 RELATED WORK", "text": "This paper builds on work from Buesing et al. (2019), which shows how causal models and real data can be combined to generate counterfactual trajectories and perform off-policy evaluation for RL. Their results, however, require an explicit model of the environment. In contrast, our work proposes a model-free approach, and focuses on policy improvement. Oberst & Sontag (2019) also investigate counterfactuals in reinforcement learning, point out the issue of non-identifiability of the correct SCM, and suggest a sufficient condition for identifiability; we discuss this issue in appendix F. Closely related to our work is Hindsight Credit Assignment, a concurrent approach from Harutyunyan et al. (2019); in this paper, the authors also investigate value functions and critics that depend on future information. However, the information their estimators depend on is hand-crafted (future state or return) instead of arbitrary functions of the trajectory, and their estimators are not guaranteed to have lower variance. Our FC estimator generalizes their estimator, and CCA further characterizes which statistics of the future provide a useful estimator. Relations between HCA, CCA and FC are discussed in appendix C. The HCA approach is further extended by Young (2019), and by Zhang et al. (2019), who minimize a surrogate for the variance of the estimator, but that surrogate cannot be guaranteed to actually lower the variance. Similarly to state-HCA, it treats each reward separately instead of taking a trajectory-centric view as CCA does. Guez et al. (2019) also investigate future-conditional value functions; similar to us, they learn statistics of the future Φ from which returns can be accurately predicted, and show that doing so leads to learning better representations (but use regular policy gradient estimators otherwise). Instead of enforcing an information-theoretic constraint, they bottleneck information through the size of the encoding Φ. In domain adaptation (Ganin et al., 2016; Tzeng et al., 2017), robustness to the training domain can be achieved by constraining the agent representation not to be able to discriminate between source and target domains, a mechanism similar to the one preventing the hindsight features from being able to discriminate the agent's actions.
Both Andrychowicz et al. (2017) and Rauber et al. (2017) leverage the idea of using hindsight information to learn goal-conditioned policies. Hung et al.
(2019) leverage attention-based systems and episodic memory to perform long-term credit assignment; however, their estimator will in general be biased. Ferret et al. (2019) look at the question of transfer learning in RL and leverage transformers to derive a heuristic to perform reward shaping. Arjona-Medina et al. (2019) also address the problem of long-term credit assignment by redistributing delayed rewards earlier in the episode; their approach still fundamentally uses time as a proxy for credit.
Previous research also leverages the fact that baselines can include information unknown to the agent at time t (but potentially revealed in hindsight), as long as it is not affected by action At; see e.g. (Wu et al., 2018; Foerster et al., 2018; Andrychowicz et al., 2020; Vinyals et al., 2019). Note, however, that all of these require privileged information, both in the form of feeding information to the baseline that is inaccessible to the agent, and in knowing that this information is independent from the agent's action At and therefore won't bias the baseline. Our approach seeks to replicate a similar effect, but in a more general fashion and from an agent-centric point of view, where the agent itself learns which information from the future can be used to augment its baseline at time t." }, { "heading": "5 CONCLUSION", "text": "In this paper we have considered the problem of credit assignment in RL. Building on insights from causality theory and structural causal models, we have developed the concept of future-conditional value functions. Contrary to common practice, these allow baselines and critics to condition on future events, separating the influence of an agent's actions on future rewards from the effects of other random events, and thus reducing the variance of policy gradient estimates. A key difficulty lies in the fact that unbiasedness relies on accurate estimation and minimization of mutual information. Learning inaccurate hindsight classifiers will result in a miscalibrated estimation of luck, leading to bias in learning. Future research will investigate how to scale these algorithms to more complex environments, and the benefits of the more general FC-PG and all-action estimators." }, { "heading": "A ARCHITECTURE", "text": "The parameter updates are as follows:
Parameter updates. For each trajectory $(X_t, A_t, R_t)_{t \ge 0}$, compute the parameter updates:
• $\Delta\theta = -\lambda_{PG} \sum_t \nabla_\theta \log \pi_\theta(A_t|X_t)\,\overline{(G_t - V(X_t, \Phi_t))} - \lambda_{hs} \nabla_\theta L_{hs}(t) - \lambda_{IM} \nabla_\theta \sum_t L_{IM}(t)$
• $\Delta\omega = -\nabla_\omega L_{sup}(t)$
where the different λ are the weights of each loss.
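A PyTorch-style sketch of the combined update above (tensor names, shapes and the stop-gradient via .detach() are our assumptions, not the authors' code; the classifier loss Lsup would be minimized separately with respect to ω):

```python
import torch

def cca_pg_loss(logp_a, G, V_hind, im_loss, lam_pg=1.0, lam_hs=1.0, lam_im=1.0):
    adv = G - V_hind.detach()            # bar notation: advantage treated as constant
    loss_pg = -(logp_a * adv).sum()      # surrogate whose gradient is the PG update
    loss_hs = ((G - V_hind) ** 2).sum()  # hindsight baseline regression
    return lam_pg * loss_pg + lam_hs * loss_hs + lam_im * im_loss

# toy usage with made-up tensors for a length-5 trajectory
T = 5
logp_a = torch.randn(T, requires_grad=True)
V_hind = torch.randn(T, requires_grad=True)
G = torch.randn(T)
loss = cca_pg_loss(logp_a, G, V_hind, im_loss=torch.tensor(0.0))
loss.backward()
```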
" }, { "heading": "B ADDITIONAL EXPERIMENTAL DETAILS", "text": "" }, { "heading": "B.1 BANDITS", "text": "" }, { "heading": "B.1.1 ARCHITECTURE", "text": "For the bandit problems, the agent architecture is as follows:
• The hindsight feature Φ is computed by a backward RNN. We tried multiple cores for the RNN: a GRU (Chung et al., 2015) with 32 hidden units, a recurrent adder ($b_t = b_{t-1} + \mathrm{MLP}(x_t)$, where the MLP has two layers of 32 units), or an exponential averager ($b_t = \lambda b_{t-1} + (1 - \lambda)\mathrm{MLP}(x_t)$).
• The hindsight classifier hω is a simple MLP with two hidden layers of 32 units each.
• The policy and value functions are computed as the output of a simple linear layer with the concatenated observation and feedback as input.
• All weights are jointly trained with Adam (Kingma & Ba, 2014).
• Hyperparameters are chosen as follows (unless specified otherwise): learning rate 4e-4, entropy loss 4e-3, independence maximization tolerance βIM = 0.1; λfwd = λhw = 1; λIM is set through Lagrangian optimization (GECO, Rezende & Viola (2018))." }, { "heading": "B.1.2 ADDITIONAL RESULTS", "text": "Multiagent Bandit Problem: In the multi-agent version, which we will call MULTI-BANDIT, the environment is composed of M replicas of the bandit-with-feedback task. Each agent i = 1, ..., M interacts with its own version of the environment, but feedback and rewards are coupled across agents; MULTI-BANDIT is obtained by modifying the single-agent version as follows:
• The contexts Ci are sampled i.i.d. from {−N, ..., N}. C and A now denote the concatenation of all agents' contexts and actions.
• The feedback tensor is (M, K)-dimensional, and is computed as $W_c 1(C) + W_a 1(A) + f$, where the W are now three-dimensional tensors. Effectively, the feedback for agent i depends on the contexts and actions of all other agents.
• The observation for agent i at step $t \ge 1$ is $(0, F[t])$, where $F[t] = F_{i,(t-1)B+1:tB}$.
• The terminal joint reward is $\sum_i -(C_i - A_{i0})^2$ for all agents.
The multi-agent version does not require the exogenous noise e, as other agents play the role of exogenous noise; it is a minimal implementation of the example found in section 2.3.
Finally, we report results from the MULTI-BANDIT version of the environment, which can be found in Fig. 7. As the number of interacting agents increases, the effective variance of the vanilla PG estimator increases as well, and the performance of each agent decreases. In contrast, CCA-PG agents learn faster and reach higher performance (though they never learn the optimal policy)." }, { "heading": "B.2 KEY TO DOOR TASKS", "text": "" }, { "heading": "B.2.1 ENVIRONMENT DETAILS", "text": "Table 1 shows the advantages of either picking up the key or not, for an agent that has a perfect apple-phase policy but never picks up the key or opens the door, on High-Variance-Key-To-Door. Since there are 10 apples which can be worth 1 or 10, the return will be either 10 or 100. Thus the forward baseline in the key phase, i.e. before the agent has seen how much an apple is worth in the current episode, will be 55. As seen here, the difference in advantages due to luck is far larger than the difference in advantages due to skill when not using hindsight, making learning difficult and leading to a policy that never learns to pick up the key or open the door. However, when we use a hindsight-conditioned baseline, we are able to learn a Φ (i.e. the value of a single apple in the current episode) that is completely independent from the actions taken by the agent, but which can provide a perfect hindsight-conditioned baseline of either 10 or 100." }, { "heading": "B.2.2 ARCHITECTURE", "text": "The agent architecture is as follows:
• The observations are first fed to a 2-layer CNN with (16, 32) output channels, kernel shapes of (3, 3) and strides of (1, 1). The output of the CNN is then flattened and fed to a linear layer of size 128.
• The agent state is computed by a forward LSTM with a state size of 128. The input to the LSTM is the output of the previous linear layer, concatenated with the reward at the previous timestep.
• The hindsight feature Φ is computed either by a backward LSTM (i.e. CCA-PG RNN) with a state size of 128, or by an attention mechanism Vaswani et al.
(2017) (i.e. CCA-PG Att) with value and key sizes of 64, one transformer block with 2 attention heads and a one-hidden-layer MLP of size 1024, an output size of 128, and a dropout rate of 0.1. The input provided is the concatenation of the output of the forward LSTM and the reward at the previous timestep.
• The policy is computed as the output of a simple MLP with one layer of 64 units, where the output of the forward LSTM is provided as input.
• The forward baseline is computed as the output of a 3-layer MLP of 128 units each, where the output of the forward LSTM is provided as input.
• For CCA, the hindsight classifier hω is computed as the concatenation of the output of an MLP with four hidden layers of 256 units each, where the concatenation of the output of the forward LSTM and the hindsight feature Φ is provided as input, and the log of the policy outputs.
• For State HCA, the hindsight classifier hω is computed as the output of an MLP with four hidden layers of 256 units each, where the concatenation of the outputs of the forward LSTM at two given time steps is provided as input.
• For Return HCA, the hindsight classifier hω is computed as the output of an MLP with four hidden layers of 256 units each, where the concatenation of the output of the forward LSTM and the return is provided as input.
• The hindsight baseline is computed as the output of a 3-layer MLP of 128 units each, where the concatenation of the output of the forward LSTM and the hindsight feature Φ is provided as input. The hindsight baseline is trained to learn the residual between the return and the forward baseline.
• All weights are jointly trained with RMSprop (Hinton et al., 2012) with epsilon 1e-4, momentum 0 and decay 0.99.
For High-Variance-Key-To-Door, the optimal hyperparameters found and used for each algorithm can be found in Table 2.
For Key-To-Door, the optimal hyperparameters found and used for each algorithm can be found in Table 3.
The agents are trained on full-episode trajectories, using a discount factor of 0.99." }, { "heading": "B.2.3 ADDITIONAL RESULTS", "text": "[Figure 8: Baseline loss for policy gradient versus conditioned baseline loss for CCA in High-Variance-Key-To-Door.]
As shown in Fig. 8, in the case of actor-critic, the baseline loss increases at first. As the reward associated with apples varies from one episode to another, getting more apples also means increasing the forward baseline loss. On the other hand, as CCA is able to take into account trajectory-specific exogenous factors, the hindsight baseline loss can decrease nicely as learning takes place.
Fig. 9 shows the impact of the variance level induced by the apple reward discrepancy between episodes on the probability of picking up the key and opening the door. Thanks to the use of hindsight in its value function, CCA-PG is almost unaffected by this, whereas actor-critic sees its performance drop dramatically as variance increases.
Figure 10 shows a qualitative analysis of the attention weights learned by CCA-PG Att on the High-Variance-Key-To-Door task. For this experiment, we use only a single attention head for easier interpretation of the hindsight function, and show both a heatmap of the attention weights over the entire episode, and a histogram of attention weights at the step where the agent picks up the key.
[Figure 9: Impact of variance on credit assignment performance. Probability of opening the door and total reward obtained as a function of the variance level induced by the apple reward discrepancy.]
As expected, the most attention is paid to timesteps just after the agent picks up an apple, since these are the points at which the apple reward is provided to the Φ computation. In particular, very little attention is paid to the timestep where the agent opens the door. These insights further show that the hindsight function learned is highly predictive of the episode return, while not having mutual information with the action taken by the agent, thus ensuring an unbiased policy gradient estimator." }, { "heading": "B.3 MULTI TASKS INTERLEAVING", "text": "" }, { "heading": "B.3.1 ENVIRONMENT DETAILS", "text": "For each task, a random set of 5 out of 10 colored squares, fixed throughout training, leads to a positive reward. Furthermore, a small reward of 0.5 is provided to the agent when it picks up any colored square. Each episode is 140 steps long, and it takes 9 steps for the agent to reach a colored square from its initial position." }, { "heading": "B.3.2 ARCHITECTURE", "text": "We use the same architecture setup as reported in Appendix B.2.2. The agents are also trained on full-episode trajectories, using a discount factor of 0.99.
For Multi-Task Interleaving, the optimal hyperparameters found and used for each algorithm can be found in Table 4." }, { "heading": "B.3.3 ADDITIONAL RESULTS", "text": "As explained in 3.3, CCA is able to solve all 6 tasks quickly despite the variance induced by the exogenous factors. Actor-critic, on the other hand, despite solving the easy tasks 1 and 3 for which the agent receives a large reward, is incapable of reliably solving the 4 remaining tasks for which the associated reward is smaller. This helps unpack Fig. 5.
[Figure 12: Impact of the number of back-propagation-through-time steps performed into the hindsight function for CCA RNN. Probability of solving the hard tasks in the 6-task setup of Multi-Task Interleaving.]" }, { "heading": "B.3.4 ABLATION STUDY", "text": "Fig. 12 shows the impact of the number of back-propagation-through-time steps performed into the backward RNN of the hindsight function while performing full rollouts. This shows that learning in hard tasks, where hindsight is crucial for performance, is not much impacted by the number of back-propagation steps performed into the backward RNN. This is great news, as it indicates that learning in challenging credit assignment tasks can happen when the hindsight function sees the whole future but can only backprop through a limited window; a sketch of this truncation is given below.
Fig. 13 shows how the performance of CCA with an RNN for the hindsight function is impacted by the unroll length. As expected, the less the agent is able to look into the future, the harder it becomes to solve this hard credit assignment task, as it becomes limited in its capacity to take exogenous effects into account. The two previous results are encouraging because, to work at its fullest, CCA seems only to require access to as many steps into the future as possible, while not needing to do back-propagation through the full sequence. This observation is particularly handy as the environments considered become more complex, with longer episodes.
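A hypothetical sketch of the truncation used in this ablation (the cell type, sizes and the detach-based cut are our assumptions): the backward RNN still consumes the entire future, but its gradient path is cut every `bptt` steps:

```python
import torch

def backward_rnn(xs, cell, bptt):
    """Run a backward GRU over the trajectory, cutting gradients every
    `bptt` steps so the statistic sees the whole future while
    back-propagation is truncated to a window."""
    b = torch.zeros(1, cell.hidden_size)
    phis = []
    for i, x in enumerate(reversed(xs)):
        if i > 0 and i % bptt == 0:
            b = b.detach()               # truncate the gradient path here
        b = cell(x.unsqueeze(0), b)
        phis.append(b.squeeze(0))
    return list(reversed(phis))          # phi_t aligned with time steps

cell = torch.nn.GRUCell(4, 8)
xs = [torch.randn(4) for _ in range(12)]
phis = backward_rnn(xs, cell, bptt=5)
print(len(phis), phis[0].shape)          # 12 statistics, each of size 8
```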
" }, { "heading": "C RELATION BETWEEN HCA, CCA, AND FC ESTIMATORS", "text": "The FC estimators generalize both the HCA and CCA estimators. From FC, we can derive CCA by assuming that Φt and At are conditionally independent (see next section). We can also derive state and return HCA from FC.
For return HCA, we obtain both an all-action and a baseline version of return HCA by choosing Φt = Gt. For state HCA, we first need to decompose the return into sums of rewards, and apply the policy gradient estimator to each reward separately. For a pair $(X_t, R_{t+k})$, and assuming that $R_{t+k}$ is a function of $X_{t+k}$ for simplicity, we choose $\Phi_t = X_{t+k}$. We then sum the different FC estimators for different values of k and obtain both an all-action and a single-action version of state HCA.
Note, however, that HCA and CCA cannot be derived from one another. The two estimators leverage different approaches to unbiasedness, one (HCA) leveraging importance sampling, and the other (CCA) eschewing importance sampling in favor of constraint satisfaction (in the context of inference, this is similar to the difference between obtaining samples of the posterior by importance sampling versus directly parametrizing the posterior distribution)." }, { "heading": "D PROOFS", "text": "" }, { "heading": "D.1 POLICY GRADIENTS", "text": "Proof of Proposition 1. By linearity of expectation, the expected return can be written as $\mathbb{E}[G] = \sum_t \gamma^t \mathbb{E}[R_t]$. Writing the expectation as an integral over trajectories, we have:
$$\mathbb{E}[R_t] = \sum_{\substack{x_0, \dots, x_t \\ a_0, \dots, a_t}} \prod_{s \le t} \big(\pi_\theta(a_s|x_s)\, P(x_{s+1}|x_s, a_s)\big)\, R(x_t, a_t)$$
Taking the gradient with respect to θ:
$$\nabla_\theta \mathbb{E}[R_t] = \sum_{\substack{x_0, \dots, x_t \\ a_0, \dots, a_t}} \sum_{s' \le t} \nabla_\theta \pi_\theta(a_{s'}|x_{s'})\, P(x_{s'+1}|x_{s'}, a_{s'}) \prod_{s \le t,\, s \ne s'} \big(\pi_\theta(a_s|x_s)\, P(x_{s+1}|x_s, a_s)\big)\, R(x_t, a_t)$$
We then rewrite $\nabla_\theta \pi_\theta(a_{s'}|x_{s'}) = \nabla_\theta \log \pi_\theta(a_{s'}|x_{s'})\, \pi_\theta(a_{s'}|x_{s'})$, and obtain
$$\nabla_\theta \mathbb{E}[R_t] = \sum_{\substack{x_0, \dots, x_t \\ a_0, \dots, a_t}} \sum_{s' \le t} \nabla_\theta \log \pi_\theta(a_{s'}|x_{s'}) \prod_{s \le t} \big(\pi_\theta(a_s|x_s)\, P(x_{s+1}|x_s, a_s)\big)\, R(x_t, a_t) = \mathbb{E}\Big[\sum_{s' \le t} \nabla_\theta \log \pi_\theta(A_{s'}|X_{s'})\, R_t\Big]$$
Summing over t, we obtain
$$\nabla_\theta \mathbb{E}[G] = \mathbb{E}\Big[\sum_{t \ge 0} \gamma^t \sum_{s' \le t} \nabla_\theta \log \pi_\theta(A_{s'}|X_{s'})\, R_t\Big]$$
which can be rewritten (with a change of variables):
$$\nabla_\theta \mathbb{E}[G] = \mathbb{E}\Big[\sum_{t \ge 0} \nabla_\theta \log \pi_\theta(A_t|X_t) \sum_{t' \ge t} \gamma^{t'} R_{t'}\Big] = \mathbb{E}\Big[\sum_{t \ge 0} \gamma^t \nabla_\theta \log \pi_\theta(A_t|X_t) \sum_{t' \ge t} \gamma^{t'-t} R_{t'}\Big] = \mathbb{E}\Big[\sum_{t \ge 0} \gamma^t S_t G_t\Big]$$
To complete the proof, we need to show that $\mathbb{E}[S_t V(X_t)] = 0$. By iterated expectation, $\mathbb{E}[S_t V(X_t)] = \mathbb{E}[\mathbb{E}[S_t V(X_t)|X_t]] = \mathbb{E}[V(X_t)\, \mathbb{E}[S_t|X_t]]$, and we have $\mathbb{E}[S_t|X_t] = \sum_a \nabla_\theta \pi_\theta(a|X_t) = \nabla_\theta \big(\sum_a \pi_\theta(a|X_t)\big) = \nabla_\theta 1 = 0$.
Proof of Proposition 2. We start from the single-action policy gradient $\nabla_\theta \mathbb{E}[G] = \mathbb{E}\big[\sum_{t \ge 0} \gamma^t S_t G_t\big]$ and analyse the term for time t, $\mathbb{E}[S_t G_t]$:
$$\mathbb{E}[S_t G_t] = \mathbb{E}[\mathbb{E}[S_t G_t|X_t, A_t]] = \mathbb{E}[S_t\, \mathbb{E}[G_t|X_t, A_t]] = \mathbb{E}[S_t\, Q(X_t, A_t)] = \mathbb{E}[\mathbb{E}[S_t\, Q(X_t, A_t)|X_t]] = \mathbb{E}\Big[\sum_a \nabla_\theta \pi_\theta(a|X_t)\, Q(X_t, a)\Big]$$
The first and fourth equalities come from different applications of iterated expectations, the second from the fact that St is a constant conditional on Xt and At, and the third from the definition of $Q(X_t, A_t)$." }, { "heading": "D.2 PROOF OF FC-PG THEOREMS", "text": "Proof of Theorem 1. We need to show that $\mathbb{E}\Big[S_t\, \frac{\pi_\theta(A_t|X_t)}{P^\pi(A_t|X_t, \Phi_t)}\, V(X_t, \Phi_t)\Big] = 0$, so that $\frac{\pi_\theta(A_t|X_t)}{P^\pi(A_t|X_t, \Phi_t)}\, V(X_t, \Phi_t)$ is a valid baseline.
As previously, we proceed with the law of iterated expectations, conditioning successively on Xt and then Φt:
$$\mathbb{E}\Big[S_t\, \frac{\pi_\theta(A_t|X_t)}{P^\pi(A_t|X_t, \Phi_t)}\, V(X_t, \Phi_t)\Big] = \mathbb{E}\Big[\mathbb{E}\Big[S_t\, \frac{\pi_\theta(A_t|X_t)}{P^\pi(A_t|X_t, \Phi_t)}\, V(X_t, \Phi_t)\, \Big|\, X_t, \Phi_t\Big]\Big] = \mathbb{E}\Big[V(X_t, \Phi_t)\, \mathbb{E}\Big[S_t\, \frac{\pi_\theta(A_t|X_t)}{P^\pi(A_t|X_t, \Phi_t)}\, \Big|\, X_t, \Phi_t\Big]\Big]$$
Then we note that
$$\mathbb{E}\Big[S_t\, \frac{\pi_\theta(A_t|X_t)}{P^\pi(A_t|X_t, \Phi_t)}\, \Big|\, X_t, \Phi_t\Big] = \sum_a P^\pi(a|X_t, \Phi_t)\, \nabla_\theta \log \pi_\theta(a|X_t)\, \frac{\pi_\theta(a|X_t)}{P^\pi(a|X_t, \Phi_t)} = \sum_a \nabla_\theta \pi_\theta(a|X_t) = 0.$$
Proof of Theorem 2. We start from the definition of the Q function:
$$Q(X_t, a) = \mathbb{E}[G_t|X_t, A_t = a] = \mathbb{E}_{\Phi_t}\big[\mathbb{E}[G_t|X_t, \Phi_t, A_t = a]\, \big|\, X_t, A_t = a\big] = \int_\varphi P^\pi(\Phi_t = \varphi|X_t, A_t = a)\, Q(X_t, \Phi_t = \varphi, a)$$
We also have
$$P^\pi(\Phi_t = \varphi|X_t, A_t = a) = \frac{P^\pi(\Phi_t = \varphi|X_t)\, P^\pi(A_t = a|X_t, \Phi_t = \varphi)}{P^\pi(A_t = a|X_t)},$$
which, combined with the above, results in:
$$Q(X_t, a) = \int_\varphi P^\pi(\Phi_t = \varphi|X_t)\, \frac{P^\pi(A_t = a|X_t, \Phi_t = \varphi)}{\pi_\theta(a|X_t)}\, Q(X_t, \Phi_t = \varphi, a) = \mathbb{E}\Big[\frac{P^\pi(A_t = a|X_t, \Phi_t)}{\pi_\theta(a|X_t)}\, Q(X_t, \Phi_t, a)\, \Big|\, X_t\Big]$$
For the compatibility with the policy gradient, we start from:
$$\mathbb{E}[S_t G_t] = \mathbb{E}\Big[\sum_a \nabla_\theta \pi_\theta(a|X_t)\, Q(X_t, a)\Big]$$
We replace $Q(X_t, a)$ by the expression above and obtain
$$\mathbb{E}[S_t G_t] = \mathbb{E}\Big[\sum_a \nabla_\theta \pi_\theta(a|X_t)\, \mathbb{E}\Big[\frac{P^\pi(a|X_t, \Phi_t)}{\pi_\theta(a|X_t)}\, Q(X_t, \Phi_t, a)\, \Big|\, X_t\Big]\Big] = \mathbb{E}\Big[\sum_a \nabla_\theta \log \pi_\theta(a|X_t)\, P^\pi(a|X_t, \Phi_t)\, Q(X_t, \Phi_t, a)\Big]$$
Note that in the case of a large number of actions, the above can be estimated by
$$\nabla_\theta \log \pi_\theta(A'_t|X_t)\, \frac{P^\pi(A'_t|X_t, \Phi_t)}{\pi_\theta(A'_t|X_t)}\, Q(X_t, \Phi_t, A'_t),$$
where $A'_t$ is an independent sample from $\pi_\theta(\cdot|X_t)$; note in particular that $A'_t$ shall NOT be the action $A_t$ that gave rise to $\Phi_t$, which would result in a biased estimator." }, { "heading": "D.3 PROOF OF CCA-PG THEOREMS", "text": "Assume that Φt and At are conditionally independent given Xt. Then $\frac{P^\pi(A_t = a|X_t, \Phi_t = \varphi)}{P^\pi(A_t = a|X_t)} = 1$; in particular, this holds when evaluating at the random value At. From this simple observation, both CCA-PG theorems follow from the FC-PG theorems.
To prove the lower variance of the hindsight advantage, note that
$$\mathbb{V}[G_t - V(X_t, \Phi_t)] = \mathbb{E}[(G_t - V(X_t, \Phi_t))^2] = \mathbb{E}[G_t^2] - \mathbb{E}[V(X_t, \Phi_t)^2]$$
$$\mathbb{V}[G_t - V(X_t)] = \mathbb{E}[(G_t - V(X_t))^2] = \mathbb{E}[G_t^2] - \mathbb{E}[V(X_t)^2]$$
where the second equality in each line comes from the fact that $\mathbb{E}[G_t V(X_t, \Phi_t)|X_t, \Phi_t] = V(X_t, \Phi_t)^2$. To prove the first statement, we have $(G_t - V(X_t, \Phi_t))^2 = G_t^2 + V(X_t, \Phi_t)^2 - 2 G_t V(X_t, \Phi_t)$, and apply the law of iterated expectations to the last term:
$$\mathbb{E}[G_t V(X_t, \Phi_t)] = \mathbb{E}[\mathbb{E}[G_t V(X_t, \Phi_t)|X_t, \Phi_t]] = \mathbb{E}[V(X_t, \Phi_t)\, \mathbb{E}[G_t|X_t, \Phi_t]] = \mathbb{E}[V(X_t, \Phi_t)^2]$$
The proof for the second statement is identical. Finally, we note that by Jensen's inequality we have $\mathbb{E}[V(X_t)^2] \le \mathbb{E}[V(X_t, \Phi_t)^2]$, from which we conclude that $\mathbb{V}[G_t - V(X_t, \Phi_t)] \le \mathbb{V}[G_t - V(X_t)]$." }, { "heading": "D.4 PROOFS OF MODEL-BASED GRADIENT THEOREMS IN APPENDIX E", "text": "Proof of Lemma 1. The proof follows from two simple facts. The first is that the return is a deterministic function $G(X_t, a, E_{t^+})$. The second is that, from the law of iterated expectations, we have $\mathbb{E}_{E_{t^+}}[G(X_t, a', \varepsilon_{t^+})] = \mathbb{E}_{X_{t^+}}\big[\mathbb{E}_{E_{t^+}|X_{t^+}}[G(X_t, a', \varepsilon_{t^+})]\big]$ for any distribution of $X_{t^+}$; the left-hand side corresponds to $X_{t^+} \sim p(\cdot|X_t, a')$. Taking the distribution of $X_{t^+}$ to be $p(\cdot|X_t, a)$, we obtain the desired result.
Proof of Lemma 2. The policy gradient can be written:
$$\int_{A'_t, E_{t^+}} P(E_{t^+})\, \pi_\theta(A'_t|X_t)\, \nabla_\theta \log \pi_\theta(A'_t|X_t)\, G(X_t, A'_t, E_{t^+}) \qquad (5)$$
But we also have: $P(E_{t^+}) = P(E_{t^+}|X_t) = \int_{X_{t^+}, A_t} \pi_\theta(A_t|X_t)\, P(X_{t^+}|A_t, X_t)\, P(E_{t^+}|X_{t^+}, A_t)$.
For simplicity, denote $\kappa = \pi_\theta(A_t|X_t)\, P(X_{t^+}|A_t, X_t)\, P(E_{t^+}|X_{t^+}, A_t)$.
Combined with equation (5), we find:
$$\int_{X_{t^+},A_t,E_{t^+},A'_t} \kappa\,\pi_\theta(A'_t|X_t)\,\nabla_\theta\log\pi_\theta(A'_t|X_t)\,G(X_t,A'_t,E_{t^+}) \quad (6)$$
Next, we analyze the same quantity but replacing $G(X_t,A'_t,E_{t^+})$ by $G(X_t,A_t,E_{t^+})$, and find:
$$\int_{X_{t^+},A_t,E_{t^+},A'_t} \kappa\,\pi_\theta(A'_t|X_t)\,\nabla_\theta\log\pi_\theta(A'_t|X_t)\,G(X_t,A_t,E_{t^+}) = \int_{X_{t^+},A_t,E_{t^+}} \kappa\,G(X_t,A_t,E_{t^+}) \Big(\int_{A'_t} \pi_\theta(A'_t|X_t)\,\nabla_\theta\log\pi_\theta(A'_t|X_t)\Big) = 0 \quad (7)$$
since $\int_{A'_t} \pi_\theta(A'_t|X_t)\,\nabla_\theta\log\pi_\theta(A'_t|X_t) = 0$.
Subtracting equation (7) from (6), we obtain the desired result." }, { "heading": "E RL ALGORITHMS, COMMON RANDOMNESS, STRUCTURAL CAUSAL MODELS", "text": "In this section, we provide an alternative view of, and intuition behind, the CCA-PG algorithm by investigating credit assignment through the lens of causality theory, in particular structural causal models (SCMs) (Pearl, 2009a). We relate these ideas to the use of common random numbers (CRN), a standard technique in optimization with simulators (Glasserman & Yao, 1992). We start by presenting algorithms with full knowledge of the environment, in the form of both a perfect model and access to the random number generator (RNG), and see how an SCM of the environment can improve credit assignment. We progressively relax assumptions until no knowledge of the environment or its random number generator is required and CCA-PG is recovered." }, { "heading": "E.1 STRUCTURAL CAUSAL MODEL OF THE MDP", "text": "Structural causal models (SCMs) (Pearl, 2009a) are, informally, models where all randomness is exogenous, and where all variables of interest are modeled as deterministic functions of other variables and of the exogenous randomness. They are of particular interest in causal inference as they enable reasoning about interventions, i.e. how the distribution of a variable would change under external influence (such as forcing a variable to take a given value, or changing the process that defines a variable), and about counterfactual interventions, i.e. how a particular observed outcome (sample) of a variable would have changed under external influence. Formally, an SCM is a collection of model variables $\{V \in \mathcal{V}\}$, exogenous random variables $\{E \in \mathcal{E}\}$, and distributions $\{p_E(\varepsilon), E \in \mathcal{E}\}$, one per exogenous variable, where the exogenous random variables are all assumed to be independent. Each variable $V$ is defined by a function $V = f_V(\mathrm{pa}(V), E)$, where $\mathrm{pa}(V)$ is a subset of $\mathcal{V}$ called the parents of $V$. The model can be represented by a directed graph in which every node has an incoming edge from each of its parents. For the SCM to be valid, the induced graph has to be a directed acyclic graph (DAG), i.e. there exists a topological ordering of the variables such that for any variable $V_i$, $\mathrm{pa}(V_i) \subset \{V_1,\ldots,V_{i-1}\}$; in the following we assume such an ordering. This provides a simple sampling mechanism for the model, where the exogenous random variables are first sampled according to their distributions, and each node is then computed in indexing order. Note that any probabilistic model can be represented as an SCM by virtue of reparametrization (Kingma & Ba, 2014; Buesing et al., 2019). However, such a representation is not unique, i.e. different SCMs can induce the same distribution.
We now parameterize the MDP given in section 2.1 as an SCM. The transition from $X_t$ to $X_{t+1}$ under $A_t$ is given by the transition function $f_X$: $X_{t+1} = f_X(X_t, A_t, E^X_t)$ with exogenous variable / random number $E^X_t$. The policy function $f_\pi$ maps a random number $E^\pi_t$, policy parameters $\theta$, and the current state $X_t$ to the action $A_t = f_\pi(X_t, E^\pi_t, \theta)$.
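As a toy illustration of this parametrization (our own sketch, not part of the paper; all function names, the Gumbel-max reparametrization, and the dynamics below are hypothetical choices), an SCM view of MDP steps can be written as follows:

```python
# Minimal sketch (assumed, not from the paper) of an SCM-parametrized MDP step:
# all randomness is exogenous, and transitions/actions are deterministic
# functions of states, parameters, and noise. Names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def f_pi(x, eps_pi, theta):
    # Deterministic policy function: Gumbel-max reparametrization, which
    # samples exactly from the softmax policy induced by the logits theta[x].
    return int(np.argmax(theta[x] + eps_pi))

def f_x(x, a, eps_x):
    # Deterministic transition function of state, action and exogenous noise.
    return (x + a + int(eps_x > 0)) % 3

theta = rng.normal(size=(3, 2))      # policy parameters: 3 states, 2 actions
x = 0
for t in range(5):
    eps_pi = rng.gumbel(size=2)      # exogenous policy noise E^pi_t
    eps_x = rng.normal()             # exogenous transition noise E^X_t
    a = f_pi(x, eps_pi, theta)
    x = f_x(x, a, eps_x)
    print(t, a, x)
```

In this sketch, changing the policy only changes the deterministic function `f_pi` (through `theta`), while the noise distributions stay fixed, mirroring the construction below.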
Together, $f_\pi$ and $E^\pi_t$ induce the policy, a distribution $\pi_\theta(A_t|X_t)$ over actions. Without loss of generality we assume that the reward is a deterministic function of the state and action: $R_t = f_R(X_t, A_t)$. $E^X$ and $E^\pi$ are random variables with a fixed distribution; all changes to the policy are absorbed by changes to the deterministic function $f_\pi$. Denoting $E_t = (E^X_t, E^\pi_t)$, note that the next state and reward $(X_{t+1}, R_t)$ are deterministic functions of $X_t$ and $E_t$, since we have $X_{t+1} = f_X(X_t, f_\pi(X_t, E^\pi_t, \theta), E^X_t)$ and similarly $R_t = f_R(X_t, f_\pi(X_t, E^\pi_t, \theta))$. Let $X_{t^+} = (X_{t'})_{t'>t}$ and similarly $E_{t^+} = (E^X_t, (E_{t'})_{t'>t})$. Through the composition of the functions $f_X$, $f_\pi$ and $f_R$, the return $G_t$ (under policy $\pi_\theta$) is a deterministic function (denoted $G$ for simplicity) of $X_t$, $A_t$ and $E_{t^+}$." }, { "heading": "E.2 MODEL-KNOWN POLICY GRADIENT", "text": "In this section, we assume perfect knowledge of the transition functions, reward functions, and SCM distributions. We use the term 'model-known' rather than 'model-based' to describe this situation.
Consider a time $t$, state $X_t$, and a possible action $a$ for $A_t$. The return $G_t$ is given by the deterministic function $G(X_t, a, E_{t^+})$, and the Q function $Q(X_t,a) = \mathbb{E}_{E_{t^+}}[G_t|X_t, A_t=a]$ is its expectation over the exogenous variables $E_{t^+}$. We are generally interested in evaluating the Q function difference $Q(X_t,a) - Q(X_t,a')$ for two actions $a$ and $a'$. Note in particular that the advantage can be written $A(X_t,a) = \mathbb{E}_{a'\sim\pi_\theta}[Q(X_t,a) - Q(X_t,a')]$. The Q function difference can be estimated by a difference $G(X_t,a,E_{t^+}) - G(X_t,a',E'_{t^+})$, where $E_{t^+}$ and $E'_{t^+}$ are two independent samples. If we have direct access to $E_{t^+}$, for instance because we have access to a simulator and to its random number generator, we can use common random numbers to potentially reduce variance: $G(X_t,a,E_{t^+}) - G(X_t,a',E_{t^+})$. Note that if actions were continuous, $G$ differentiable and $a' = a + \delta$ with $\delta$ small, the quantity becomes $\frac{\partial G}{\partial a}(X_t,a,E_{t^+}) \times \delta$, i.e. the gradient of the return $G$ with respect to the action $a$ (see Silver et al. 2014; Heess et al. 2015; Buesing et al. 2016). In general, we will be interested in cases where $R$ may not be differentiable. However, the example highlights that the use of gradient methods implicitly assumes the use of common random numbers, and that return differences computed with common random numbers can be seen as a numerical approximation to the gradient of the return.
Having access to the model, suppose we make a two-sample estimate of the policy gradient using common random numbers, and use the return of one action as a baseline for the other. The policy gradient estimate is
$$\nabla_\theta V(x_0) = \mathbb{E}_{A_t,A'_t\sim\pi_\theta,\,E_{t^+}}\big[S_t\big(G(X_t,A_t,E_{t^+}) - G(X_t,A'_t,E_{t^+})\big)\big] \quad (8)$$
where we recall that we defined $S_t$ as $\nabla_\theta\log\pi_\theta(A_t|X_t;\theta)$. In many situations this estimate will have lower variance than one obtained with a state-conditional baseline (cf. eq. (1)), since the use of common noise for $G$ will strongly correlate return and baseline (which differ only in a single argument of the function $G$).
Since $A_t$ and $A'_t$ are samples from the same distribution, the update above remains valid if we swap $A_t$ and $A'_t$; averaging the two updates, we obtain a two-point policy gradient:
$$\nabla_\theta V(x_0) = \tfrac{1}{2}\,\mathbb{E}_{A_t,A'_t,E_{t^+}}\big[Y_t\big(G(X_t,A_t,E_{t^+}) - G(X_t,A'_t,E_{t^+})\big)\big], \quad (9)$$
where $Y_t$ denotes the score function differential $\nabla_\theta\log\pi_\theta(A_t|X_t;\theta) - \nabla_\theta\log\pi_\theta(A'_t|X_t;\theta)$.
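For intuition, here is a minimal NumPy sketch (our own illustration, not from the paper) of the two-point common-random-numbers estimator of equations (8)-(9) on a one-step problem; the return function `G`, the softmax parametrization, and all names are hypothetical:

```python
# Minimal sketch (assumed, not from the paper) of the two-point CRN
# policy-gradient estimate, eq. (9), for one step with discrete actions.
import numpy as np

rng = np.random.default_rng(0)
NUM_ACTIONS = 3
theta = np.zeros(NUM_ACTIONS)        # logits of a softmax policy pi_theta(a)

def policy(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def G(a, eps):
    # Hypothetical return function G(x0, a, E_{t+}): action-dependent mean
    # plus exogenous noise shared by both evaluations (common random numbers).
    return float(a == 1) + eps

def two_point_grad(theta, eps):
    p = policy(theta)
    a, a_prime = rng.choice(NUM_ACTIONS, size=2, p=p)   # two i.i.d. actions
    def score(action):
        s = -p.copy(); s[action] += 1.0   # grad of log-softmax w.r.t. logits
        return s
    y = score(a) - score(a_prime)         # score-function differential Y_t
    return 0.5 * y * (G(a, eps) - G(a_prime, eps))      # same eps for both

grads = [two_point_grad(theta, rng.normal()) for _ in range(10000)]
print(np.mean(grads, axis=0))   # Monte Carlo estimate of the policy gradient
```

Because both returns share the same `eps`, their difference isolates the effect of the action choice, which is what drives the variance reduction discussed above.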
The use of a model is required since we need returns from the same state under two different actions (note that in the case of a POMDP, the same initial state would require the same history, which is often computationally excessive).
More generally, we could use $K$ i.i.d. samples $A^{(1)}_t,\ldots,A^{(K)}_t$ and use the leave-one-out average empirical return as a baseline for each sample, which yields
$$\nabla_\theta V(x_0) = \frac{1}{K}\,\mathbb{E}_{A^{(1)}_t,\ldots,A^{(K)}_t,\,E_{t^+}}\Big[\sum_i \nabla_\theta\log\pi_\theta(A^{(i)}_t|X_t;\theta)\,\Delta_i\Big], \quad (10)$$
where $\Delta_i \triangleq G(X_t,A^{(i)}_t,E_{t^+}) - \frac{1}{K-1}\sum_{j\ne i} G(X_t,A^{(j)}_t,E_{t^+})$.
The idea of using multiple rollouts from the same initial state to perform more accurate credit assignment for policy gradient methods has been used under the name vine by Schulman et al. (2015). The authors also note the need for common random numbers to reduce the variance of the multiple-rollout estimate (see also Ng & Jordan 2013). Interestingly, if we replace $\Delta_i$ by the argmax or softmax of the $\Delta_i$, we obtain a gradient estimate similar to that of the cross-entropy method, a classical and very effective planning algorithm (De Boer et al., 2005; Langlois et al., 2019)." }, { "heading": "E.3 MODEL-BASED COUNTERFACTUAL POLICY GRADIENT", "text": "In the previous section, we derived low-variance policy updates under the assumption that we have access to both a perfect model and its noise generation process. We will now see how model-based counterfactual reasoning allows us to address both of these restrictions, recalling results from (Buesing et al., 2019). First we briefly recall what counterfactuals are, in particular in the context of reinforcement learning. Counterfactual queries intuitively correspond to questions of the form 'how would this precise outcome have changed, had I changed a past action to another one?'. In a structural model that consists of outcome variables $X$ and action variables $A$ set to $a$, estimating the counterfactual outcome $X'$ under an alternative action $a'$ consists in the following three steps:
• Abduction: infer the exogenous noise variables $E$ under the observation: $E \sim P(E|X)$.
• Intervention: fix the value of $A$ to $a'$.
• Prediction: evaluate the outcome $X'$ conditional on the fixed values $E$ and $A = a'$.
We begin with a lemma (following results from Buesing et al. (2019)), which states that, assuming model correctness, expectations of counterfactual estimates are equal to regular interventional expectations. Denote $p(\tau|X_t)$ the distribution of trajectories starting from $X_t$ and following $\pi_\theta$; $p(\tau|X_t,a)$ the distribution of trajectories starting from $X_t$, $A_t=a$, and following $\pi_\theta$ after $A_t$; and $p(\tau|X_t,a,E_{t^+})$ the distribution of trajectories starting at $X_t$, $A_t=a$, following the policy $\pi_\theta$ but forcing the value of all the SCM exogenous random variables to $E_{t^+}$ (note that this last distribution is in fact deterministic, since all randomness has been fixed).
Lemma 1. Under the assumptions above,
$$Q(X_t,a') = \mathbb{E}_{X_{t^+}\sim p(\cdot|X_t,a')}[G] = \mathbb{E}_{X_{t^+}\sim p(\cdot|X_t,a)}\Big[\mathbb{E}_{E_{t^+}|X_{t^+}}\big[\mathbb{E}_{X'_{t^+}\sim p(\cdot|X_t,a',E_{t^+})}[G]\big]\Big]. \quad (11)$$
In other words, we can use SCMs to perform off-policy or counterfactual evaluation without importance sampling, as long as we can infer the exogenous variables of interest.
This lemma is particularly useful when using an imperfect model, which we now assume is the only model available. We denote the 'real-world' or data distribution by $p_D$ and model distributions by $p_M$. Also, let $G_D$ denote the true return function and $G_M$ the imperfect model of it.
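The abduction-intervention-prediction recipe above can be made concrete with a small sketch (our own toy example, not from the paper; the structural function, the finite noise support, and all names are hypothetical):

```python
# Minimal sketch (assumed, not from the paper) of abduction / intervention /
# prediction on a toy one-step SCM: exogenous state S in {0,1,2} with a
# uniform prior, binary action A, and a deterministic outcome O(S, A).
from fractions import Fraction

PRIOR = {s: Fraction(1, 3) for s in (0, 1, 2)}

def outcome(s, a):
    # Hypothetical structural function of noise and action.
    return 1 if (s == 0 or (s == 1 and a == 1)) else 0

def posterior(observed_o, a):
    # Abduction: P(S=s | O=o, A=a) by Bayes rule over the finite prior.
    weights = {s: PRIOR[s] for s in PRIOR if outcome(s, a) == observed_o}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

def counterfactual_outcome(observed_o, a, a_prime):
    # Intervention + prediction: fix A to a', reuse the abducted noise.
    post = posterior(observed_o, a)
    return sum(p * outcome(s, a_prime) for s, p in post.items())

# Observed: action a=1 with outcome o=1. What if we had taken a'=0 instead?
print(counterfactual_outcome(1, 1, 0))   # expected counterfactual outcome: 1/2
```

Note how the counterfactual reuses the posterior over the exogenous noise instead of resampling it from the prior, which is exactly what distinguishes it from an ordinary interventional evaluation.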
Using the model, model-based variants of equation (8) are obtained by simply replacing $p$ by $p_M$:
$$\nabla_\theta V(x_0) = \sum_t \gamma^t\,\mathbb{E}_{A_t,A'_t\sim\pi_\theta,\,E_{t^+}\sim p_M}\big[S_t\big(G_M(X_t,A_t,E_{t^+}) - G_M(X_t,A'_t,E_{t^+})\big)\big], \quad (12)$$
Using an imperfect model, this update could have high bias. Instead of fully trusting the synthetic data generated by the model, we can combine model data and real data in equation (11), by sampling the outer expectation with respect to $p_D$ and the inner ones with respect to $p_M$. We obtain the following counterfactual policy gradient estimate:
Lemma 2. Assuming no model bias, the policy gradient update is equal to
$$\nabla_\theta V(x_0) = \sum_t \gamma^t\,\mathbb{E}_{X_{t^+}\sim p_D(X_t,\pi_\theta)}\Big[\mathbb{E}_{E_{t^+}\sim p_M(E_{t^+}|X_{t^+}),\,A'_t\sim\pi_\theta}\big[S'_t\,(G'_t - G_t)\big]\Big] \quad (13)$$
where $S'_t = \nabla_\theta\log\pi_\theta(A'_t|X_t)$ is the score function for the counterfactual action, and where $G'_t = G_M(X_t,A'_t,E_{t^+})$ is the model-based counterfactual return estimate. If we explicitly marginalize out $A'_t$, we obtain:
$$\mathbb{E}_{X_{t^+}\sim p_D(X_t)}\Big[\mathbb{E}_{E_{t^+}\sim p_M(E_{t^+}|X_{t^+})}\Big[\sum_a \nabla_\theta\pi_\theta(a|X_t)\,\big(G_M(X_t,a,E_{t^+}) - G_t\big)\Big]\Big] \quad (14)$$
Note that in contrast to eq. (13), and even when assuming a perfect model and posterior, the following update will generally be biased (we will later explain why):
$$\nabla_\theta V(x_0) \ne \sum_t \gamma^t\,\mathbb{E}_{X_{t^+}\sim p_D(X_t)}\Big[\mathbb{E}_{E_{t^+}\sim p_M(E_{t^+}|X_{t^+}),\,A'_t\sim\pi_\theta}\big[S_t\,(G_t - G'_t)\big]\Big] \quad (15)$$
In equations (13) and (14) above, $X_{t^+}$ is sampled from the real environment, and $E_{t^+}$ from the posterior over the noise given the observations (which would generally be given by Bayes rule, following $P(E_{t^+}|X_{t^+}) \propto P(E_{t^+})\,P(X_{t^+}|E_{t^+})$). In particular, this estimate does not require access to the random number generator; instead, it 'measures' (estimates) what the noise in the model must have been to give rise to the observations produced by the real environment.
The term $G_t = G_D(X_t,A_t,E_{t^+})$ is the empirical real-world return, while $G'_t = G_M(X_t,A'_t,E_{t^+})$ is the counterfactual return that would have happened, mutatis mutandis, for action $A'_t$ under the same noise realization. A very slight modification to equation (13) is to use the environment only to sample the trajectory $X_{t^+}$, but to use the model for both evaluations of the return:
$$\mathbb{E}_{X_{t^+}\sim p_D(X_t,\pi_\theta)}\Big[\mathbb{E}_{E_{t^+}\sim p_M(E_{t^+}|X_{t^+})}\big[S'_t\,\big(G_M(X_t,A'_t,E_{t^+}) - G_M(X_t,A_t,E_{t^+})\big)\big]\Big] \quad (16)$$
This may lead to less variance, and potentially even less bias: even though $G_M$ is a biased estimate of $G_D$, some of the bias will show up in both terms and cancel out, while it would remain in $G_D - G_M$. Note also that in the presence of model bias, equations (13) and (16) would likely suffer from significantly fewer issues (bias and variance) than their purely model-based alternative (12), as the counterfactual updates are grounded in real data ($X_{t^+} \sim p_D(X)$) and correspond to a 'reconstruction' instead of a prior sample." }, { "heading": "E.4 FUTURE CONDITIONAL VALUE FUNCTIONS", "text": "In the previous sub-section, we assumed knowledge of a (potentially imperfect) model but no access to the random number generation; in this sub-section, we make the inverse assumption: we assume access to the random number generation, and develop a model-free method that can leverage this access to the entropy engine without explicitly assuming a model.
Let us consider again the vanilla, single-action policy gradient estimate:
$$\nabla_\theta V(x_0) = \mathbb{E}[S_t\,(G_t - V(X_t))]$$
Classically, the baseline function is assumed to be a function of $X_t$ (recall that in the POMDP case, $X_t$ includes the history of observations). If $V$ is a function of any quantity which is statistically dependent on $A_t$ conditionally on $X_t$, the baseline could result in a biased estimator for the policy gradient.
A sufficient, standard assumption to guarantee this condition is to not use any data from the future relative to time step $t$ as input to $V$, although such knowledge is in principle available in off-line policy updates. While the optimal baseline may not necessarily be a state value function, a good surrogate for determining a baseline is to minimize the variance of the advantage: for a state-dependent baseline, this corresponds to setting the baseline to the value function.
Note that in a structural causal model the random variables $E$ are explicitly assumed to have a distribution affected by no other random variables; in particular $E_{t^+} \perp A_t \mid X_t$. It is therefore valid to include them in the baseline; by the same argument as above, a strong candidate baseline is therefore $V(X_t, E_{t^+}) = \mathbb{E}[G_t|X_t, E_{t^+}]$. What does this baseline correspond to? Note that in this expectation the only randomness left is in the action $A_t$; the corresponding generalized value function is in fact $V(X_t, E_{t^+}) = \sum_a \pi_\theta(a|X_t)\,G(X_t, a, E_{t^+})$. Learning this value function is therefore very closely related to learning the return function $G$, which itself is closely related to learning the composition of the transition and reward functions. The corresponding policy gradient becomes:
$$\mathbb{E}_{E_{t^+},A_t}\big[S_t\,(G_t - V(X_t,E_{t^+}))\big] \quad (17)$$
where $V(X_t,E_{t^+})$ can be learned by minimizing the square loss between a function of $X_t, E_{t^+}$ and the empirical returns $G_t$. Note that this estimate of the advantage also has lower variance than $G_t - V(X_t)$, following $V(X_t) = \mathbb{E}[V(X_t,E_{t^+})]$ and Jensen's inequality (see Weber et al. (2019) for a proof, generalized definitions of value functions, and conditions for valid baselines; and see Weaver & Tao (2001); Greensmith et al. (2004) for results on optimal baselines for policy gradients)." }, { "heading": "E.5 RECOVERING MODEL-FREE COUNTERFACTUAL POLICY GRADIENTS (CCA-PG)", "text": "In the last two sections, we relaxed the assumption of having access to either a model of the environment or to the random number generator. In this section, we combine both ideas to recover our proposed algorithm, CCA-PG.
To do so, we follow the idea from section E.4 that a future-conditional value function can be model-like and result in improved credit assignment; however, as in section E.3, instead of assuming knowledge of $E_{t^+}$, we estimate it from trajectory information. Let $F_t$ represent any subset or function of the trajectory, such as the sequence of states $X_{t^+}$, the return, the sequence of observations, actions, etc. In MDPs, $F_t$ only needs to be a function of present and future states; in POMDPs, $F_t$ will need to be a function of the entire trajectory, for instance of present and future agent states. A first approach, related to distributional reinforcement learning (Veness et al., 2015; Bellemare et al., 2017), is to forego modeling the environment (as in section E.3) and directly model distributions over returns or value functions. We can induce such a probabilistic model over returns by assuming a given parametrized base distribution $p_\theta(E)$, an approximate posterior $q(E|F)$, and a value function $V(X_t, E_{t^+})$. These components can be learned by the KL-regularized regression
$$\sum_t \mathbb{E}\Big[\int_{\mathcal{E}} q(E_t|F_t)\,\log\frac{q(E_t|F_t)}{p(E_t)} + (V(X_t,E_t) - G_t)^2 + (Q(X_t,A_t,E_t) - G_t)^2\Big].$$
This equation intuitively captures the idea of inferring an $E_t$ from $F_t$ such that $E_t$ is approximately independent from the trajectory (represented by the KL loss) yet good at predicting the return (represented by the value loss).
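To fix ideas, here is a minimal sketch (our own, with hypothetical names such as `q_logits`, `v_table` and `q_table`) that evaluates this KL-regularized objective for a discrete noise variable, under one plausible reading (an assumption on our part) in which the squared-error terms are averaged under $q$:

```python
# Minimal sketch (assumed, not from the paper) of the KL-regularized regression
# above, for a noise variable E with small discrete support. It only evaluates
# the loss for one transition; all names are hypothetical placeholders.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def hindsight_loss(q_logits, prior, v_table, q_table, x, a, g):
    """KL(q(E|F) || p(E)) + E_q[(V(x,E)-G)^2 + (Q(x,a,E)-G)^2]."""
    q = softmax(q_logits)                      # posterior over E given features F
    kl = np.sum(q * np.log(q / prior))         # KL keeps E close to the prior
    v_err = np.sum(q * (v_table[x] - g) ** 2)  # value regression under q
    q_err = np.sum(q * (q_table[x, a] - g) ** 2)
    return kl + v_err + q_err

# Toy usage: 2 noise values, 1 state, 2 actions, observed return g = 1.
prior = np.array([0.5, 0.5])
v_table = np.array([[0.0, 1.0]])                 # V(x, e)
q_table = np.array([[[0.0, 1.0], [1.0, 0.0]]])   # Q(x, a, e)
print(hindsight_loss(np.array([0.3, -0.3]), prior, v_table, q_table, 0, 0, 1.0))
```

In a real implementation the tables would be networks and the loss would be summed over time steps and minimized by gradient descent; the sketch only makes the trade-off between the KL term and the regression terms explicit.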
We can then train a policy with a counterfactual policy gradient in the following way: for each time $t$, sample $E_t$ from $q(E_t|F_t)$, compute $V$ and $Q$, and apply either update (3) or (4).
This approach is flawed, however: even if $E \perp A_t \mid X_t$ is assumed to hold under the prior, this will not hold in general under the posterior $q$, i.e. the extent to which the agent knows the true value of $E_t$ will in general depend on $A_t$. For instance, consider a POMDP corresponding to a maze navigation task, where the only uncertainty is the maze layout. Including the maze layout in the value function will not bias the policy gradient update, and will typically lower its variance. However, if we train (in a supervised fashion) a network to produce an estimate of the map given the agent's observations, and provide the value function with a hindsight estimate of the map, the resulting policy update would in general be biased. This is the same reason why equation (15) is in general biased, even assuming a perfect model and posterior.
For this reason, we forgo explicit probabilistic modeling and choose an implicit approach, modeling a function $\Phi_t$ of the trajectory that captures information for predicting the return, and therefore only implicitly performs inference over $E_t$. Following the intuition developed in this section, we require that $\Phi_t$ be independent of $A_t$ while predicting returns accurately, which finally connects back to the algorithms detailed in section 2." }, { "heading": "F LINKS TO CAUSALITY AND SIMPLE EXAMPLES", "text": "In this section, we further link the ideas developed in this report to causality theory. In particular, we connect them to two notions from causality theory known as the individual treatment effect (ITE) and the average treatment effect (ATE). In the previous section, we extensively leveraged the framework of structural causal models. It is however known that distinct SCMs may correspond to the same distribution; learning a model from data, we may learn a model with the correct distribution but with an incorrect structural parametrization and incorrect counterfactuals. We may therefore wonder whether counterfactual-based approaches may be flawed when using such a model. We investigate this question, and analyze our algorithm in very simple settings for which closed-form computations can be worked out.
F.1 INDIVIDUAL AND AVERAGE TREATMENT EFFECTS
Consider a simple medical example which we model with an SCM as illustrated in figure 14. We assume a population of patients, each with a full medical state denoted $S$, which summarizes all factors, known or unknown, that affect a patient's future health, such as genotype, phenotype, etc. While $S$ is never known perfectly, some of the patient's medical history $H$ may be known, including current symptoms. On the basis of $H$, a treatment decision $T$ is taken; as is often done, for simplicity we consider $T$ to be a binary variable taking values in {1='treatment', 0='no treatment'}. Finally, health state $S$ and treatment $T$ result in an observed medical outcome $O$, a binary variable taking values in {1='cured', 0='not cured'}. For given values $S = s$ and $T = t$, the outcome is a function (also denoted $O$ for simplicity) $O(s,t)$. Additional medical information $F$ may be observed, e.g. further symptoms or information obtained after the treatment, from tests such as X-rays, blood tests, or autopsy.
In this simple setting, we can characterize the effectiveness of the treatment for an individual patient with profile $S$ by the Individual Treatment Effect (ITE), which is defined as the difference between the outcome under treatment and under no treatment.
Definition 1 (Individual Treatment Effect).
$$\mathrm{ITE}(s) = \mathbb{E}[O|S=s,\mathrm{do}(T=1)] - \mathbb{E}[O|S=s,\mathrm{do}(T=0)] = O(s,T=1) - O(s,T=0) \quad (18)$$
The conditional average treatment effect is the difference in outcome between the choice of $T=1$ and $T=0$ when averaging over all patients with the same set of symptoms $H=h$.
Definition 2 (Conditional Average Treatment Effect).
$$\mathrm{ATE}(h) = \mathbb{E}[O|H=h,\mathrm{do}(T=1)] - \mathbb{E}[O|H=h,\mathrm{do}(T=0)] = \int_s p(S=s|H=h)\,\big(O(s,T=1) - O(s,T=0)\big) \quad (19)$$
Since the exogenous noise (here, $S$) is generally not known, the ITE is typically an unknowable quantity. For a particular patient (with hidden state $S$), we will only observe the outcome under $T=0$ or $T=1$, depending on which treatment option was chosen; the counterfactual outcome will typically be unknown. Nevertheless, for a given SCM, it can be counterfactually estimated from the outcome and feedback, using the procedure detailed in section E.3 (we suppose $O$ is included in $F$ to simplify notation).
Definition 3 (Counterfactually Estimated Individual Treatment Effect).
$$\text{CF-ITE}[H=h, F=f, T=1] = \delta(o=1) - \int_{s'} P(S=s'|H=h, F=f, T=1)\,O(s', T=0) \quad (20)$$
$$\text{CF-ITE}[H=h, F=f, T=0] = \int_{s'} P(S=s'|H=h, F=f, T=0)\,O(s', T=1) - \delta(o=1) \quad (21)$$
In general, the counterfactually estimated ITE will not be exactly the ITE, since there may be remaining uncertainty about $s$. However, the following statements relate CF-ITE, ITE and ATE:
• If $S$ is identifiable from $O$ and $F$ with probability one, then the counterfactually estimated ITE is equal to the ITE.
• The average (over $S$, conditional on $H$) of the ITE is equal to the ATE.
• The average (over $S$ and $F$, conditional on $H$) of the CF-ITE is equal to the ATE.
Assimilating $O$ to a reward, the above illustrates that the ATE (equation 19) essentially corresponds to a difference of Q functions, the ITE (equation 18) to the return differences found in equations (8) and (17), and the counterfactual ITE to the quantities found in equations (13) and (14). In contrast, the advantage $G_t - V(H_t)$ is a difference between a return (a sample-level quantity) and a value function (a population-level quantity, which averages over all individuals with the same medical history $H$); this discrepancy explains why the return-based advantage estimate can have very high variance.
As mentioned previously, for a given joint distribution over observations, rewards and actions, there may exist distinct SCMs that capture that distribution. Those SCMs will all have the same ATE, which measures the effectiveness of a policy on average. But they will generally have different ITEs and counterfactual ITEs, which, when using model-based counterfactual policy gradient estimators, will lead to different estimators. Choosing the 'wrong' SCM will lead to the wrong counterfactual, and so we may wonder whether this is a cause for concern for our methods.
We argue that, in terms of learning optimal behaviors (in expectation), estimating inaccurate counterfactuals is not a cause for concern.
Since all estimators have the same expectation, they would all lead to correct estimates of the effect of switching one policy for another, and therefore will all lead to the optimal policy given the information available to the agent. In fact, one could go further and argue that, for the purpose of finding good policies in expectation, we should only care about the counterfactual for a precise patient inasmuch as it enables us to quickly and correctly take better actions for future patients for whom the information available to make the decision ($H$) is very similar. This would encourage us to choose the SCM for which the CF-ITE has minimal variance, regardless of the value of the true counterfactual. In the next section, we elaborate on an example to highlight the difference in variance between different SCMs with the same distribution and optimal policy." }, { "heading": "F.2 BETTING AGAINST A FAIR COIN", "text": "We begin with a simple example, borrowed from Pearl (2009b), to show that two SCMs that induce the same interventional and observational distributions can imply different counterfactual distributions. The example consists of a game in which one guesses the outcome of a fair coin toss. The action $A$ and state $S$ both take their values in $\{h, t\}$. Under model I, the outcome $O$ is 1 if $A = S$ and 0 otherwise. Under model II, the guess is ignored, and the outcome is simply $O = 1$ if $S = h$. For both models, the average treatment effect $\mathbb{E}[O|A=h] - \mathbb{E}[O|A=t]$ is 0, implying that in both models one cannot do better than random guessing. Under model I, the counterfactual for having observed outcome $O = 1$ and changing the action is always $O = 0$, and vice-versa (intuitively, changing the guess changes the outcome); therefore, the ITE is $\pm 1$. Under model II, all counterfactual outcomes are equal to the observed outcomes, since the action has in fact no effect on the outcome; the ITE is always 0.
In the next section, we adapt the medical example into a problem in which the choice of action does affect the outcome. Using the CF-ITE as an estimator of the ATE, we will see how the choice of the SCM affects the variance of that estimator (and therefore how the choice of the SCM should affect the speed at which we can learn the optimal treatment decision)." }, { "heading": "F.3 MEDICAL EXAMPLE", "text": "Take the simplified medical example from figure 14, where a population of patients with the same symptoms come to the doctor, and the doctor has a potential treatment $T$ to administer. The state $S$ represents the genetic profile of the patient, which can be one of three values $\{\mathrm{GENE}_A, \mathrm{GENE}_B, \mathrm{GENE}_C\}$ (each with probability 1/3). We assume that genetic testing is not available and that we do not know the value of $S$ for each patient. The doctor has to decide whether to administer the drug to this population or not, based on repeated experiments; in other words, they have to find out whether the average treatment effect is positive or not.
We consider the two following models:
• In model I, patients of type $\mathrm{GENE}_A$ always recover, patients of type $\mathrm{GENE}_C$ never do, and patients of type $\mathrm{GENE}_B$ recover if they get the treatment and not otherwise; in particular, in this model, administering the drug never hurts.
• In model II, patients of type $\mathrm{GENE}_A$ and $\mathrm{GENE}_B$ recover when given the drug, but patients of type $\mathrm{GENE}_C$ do not; the situation is reversed ($\mathrm{GENE}_A$ and $\mathrm{GENE}_B$ patients do not recover, $\mathrm{GENE}_C$ patients do) when not taking the drug.
In both models, the expected outcome of giving the drug is 2/3, and of not giving the drug 1/3, which leads to an ATE of 1/3. For each model, we evaluate the variance of the CF-ITE under each of the four possible treatment-outcome pairs. The results are summarized in table 5. Under model I, the variance of the CF-ITE estimate (which is the variance of the advantage used in the CCA-PG gradient) is 1/6, while it is 1 under model II, which would imply that model I is a better model for leveraging counterfactuals in policy decisions." } ]
2020
null
SP:2093f2f9d4bf15531dd76e02f8d36cddf6961352
[ "This work introduces a method for learning to prove theorems which can leverage prior proving experience in order to discover very long proofs. At its core it works by inputting a corpus of training problems (which can also be annotated with solutions i.e. proofs), training a policy to solve these training problems by curriculum learning. The curriculum works by first supervising on the trace of an entire solution, and then once the system can solve a particular problem, decreasing the amount of the trace that it supervises on. The authors claim that this is a kind of analogical reasoning, because the system's policy is implicitly learning to represent the state/action space on the basis of prior experience.", "In the paper, the authors present a new algorithm for training neural networks used in an automated theorem prover using theorems with or without proofs as training data. The algorithm casts this training task as a reinforcement learning problem, and employs curriculum learning and the Proximal Policy Optimization algorithm to find appropriate neural network parameters, in particular, those that make the prover good at finding long proofs. The authors also propose a new dataset for theorems and proofs for a simple equational theory of arithmetic, which is again suitable for improving (via learning) and testing the ability of the prover for finding long proofs. The proposed prover is tested against existing theorem provers, and for the authors' dataset, it outperforms those provers." ]
We present a reinforcement learning (RL) based guidance system for automated theorem proving geared towards Finding Longer Proofs (FLoP). FLoP focuses on generalizing from short proofs to longer ones of similar structure. To achieve that, FLoP uses state-of-the-art RL approaches that were previously not applied in theorem proving. In particular, we show that curriculum learning significantly outperforms previous learning-based proof guidance on a synthetic dataset of increasingly difficult arithmetic problems.
[ { "affiliations": [], "name": "LONGER PROOFS" } ]
[ { "authors": [ "Jesse Alama", "Tom Heskes", "Daniel Kühlwein", "Evgeni Tsivtsivadze", "Josef Urban" ], "title": "Premise selection for mathematics by corpus analysis and kernel methods", "venue": "J. Autom. Reasoning,", "year": 2014 }, { "authors": [ "Alexander A. Alemi", "François Chollet", "Niklas Een", "Geoffrey Irving", "Christian Szegedy", "Josef Urban" ], "title": "Deepmath - Deep Sequence Models for Premise Selection", "venue": "In Proceedings of the 30th International Conference on Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Thomas Anthony", "Zheng Tian", "David Barber" ], "title": "Thinking fast and slow with deep learning and tree search", "venue": "CoRR, abs/1705.08439,", "year": 2017 }, { "authors": [ "Franz Baader", "Tobias Nipkow" ], "title": "Term rewriting and all that", "venue": null, "year": 1998 }, { "authors": [ "Kshitij Bansal", "Sarah M. Loos", "Markus N. Rabe", "Christian Szegedy" ], "title": "Learning to reason in large theories without imitation", "venue": "CoRR, abs/1905.10501,", "year": 2019 }, { "authors": [ "Kshitij Bansal", "Sarah M. Loos", "Markus N. Rabe", "Christian Szegedy", "Stewart Wilcox" ], "title": "HOList: An environment for machine learning of higher-order theorem proving (extended version)", "venue": "CoRR, abs/1904.03241,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "Proceedings of the 26th Annual International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "Jasmin Christian Blanchette", "David Greenaway", "Cezary Kaliszyk", "Daniel Kühlwein", "Josef Urban" ], "title": "A learning-based fact selector for Isabelle/HOL", "venue": "J. Autom. Reasoning,", "year": 2016 }, { "authors": [ "Greg Brockman", "Vicki Cheung", "Ludwig Pettersson", "Jonas Schneider", "John Schulman", "Jie Tang", "Wojciech Zaremba" ], "title": "URL http://arxiv.org/ abs/1606.01540", "venue": "OpenAI gym. CoRR,", "year": 2016 }, { "authors": [ "Tianqi Chen", "Carlos Guestrin" ], "title": "XGBoost: A scalable tree boosting system", "venue": "In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "Karel Chvalovsky" ], "title": "Top-down neural model for formulae", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Karel Chvalovský", "Jan Jakubuv", "Martin Suda", "Josef Urban" ], "title": "ENIGMA-NG: efficient neural and gradient-boosted inference guidance for E", "venue": "CoRR, abs/1903.03182,", "year": 2019 }, { "authors": [ "Karl Cobbe", "Oleg Klimov", "Christopher Hesse", "Taehoon Kim", "John Schulman" ], "title": "Quantifying generalization in reinforcement learning", "venue": "CoRR, abs/1812.02341,", "year": 2018 }, { "authors": [ "Marc-Alexandre Côté", "Ákos Kádár", "Xingdi Yuan", "Ben Kybartas", "Tavian Barnes", "Emery Fine", "James Moore", "Matthew J. Hausknecht", "Layla El Asri", "Mahmoud Adada", "Wendy Tay", "Adam Trischler" ], "title": "TextWorld: A learning environment for text-based games", "venue": "CoRR, abs/1806.11532,", "year": 2018 }, { "authors": [ "Jeffrey L. 
Elman" ], "title": "Learning and development in neural networks: the importance of starting small", "venue": "Cognition, 48:71–99,", "year": 1993 }, { "authors": [ "Richard Evans", "David Saxton", "David Amos", "Pushmeet Kohli", "Edward Grefenstette" ], "title": "Can neural networks understand logical entailment", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Carlos Florensa", "David Held", "Markus Wulfmeier", "Pieter Abbeel" ], "title": "Reverse curriculum generation for reinforcement learning", "venue": "CoRR, abs/1707.05300,", "year": 2017 }, { "authors": [ "Vincent François-Lavet", "Peter Henderson", "Riashat Islam", "Marc G. Bellemare", "Joelle Pineau" ], "title": "An introduction to deep reinforcement learning", "venue": "CoRR, abs/1811.12560,", "year": 2018 }, { "authors": [ "Karlis Freivalds", "Renars Liepins" ], "title": "Improving the neural GPU architecture for algorithm learning", "venue": "CoRR, abs/1702.08727,", "year": 2017 }, { "authors": [ "Nancy Fulda", "Daniel Ricks", "Ben Murdoch", "David Wingate" ], "title": "What can you do with a rock? affordance extraction via word embeddings", "venue": "CoRR, abs/1703.03429,", "year": 2017 }, { "authors": [ "Thibault Gauthier", "Cezary Kaliszyk" ], "title": "Premise selection and external provers for HOL4", "venue": "Proceedings of the 2015 Conference on Certified Programs and Proofs,", "year": 2015 }, { "authors": [ "Thibault Gauthier", "Cezary Kaliszyk", "Josef Urban", "Ramana Kumar", "Michael Norrish" ], "title": "Learning to prove with tactics", "venue": "CoRR, abs/1804.00596,", "year": 2018 }, { "authors": [ "Georges Gonthier", "Andrea Asperti", "Jeremy Avigad", "Yves Bertot", "Cyril Cohen", "François Garillot", "Stéphane Le Roux", "Assia Mahboubi", "Russell O’Connor", "Sidi Ould Biha", "Ioana Pasca", "Laurence Rideau", "Alexey Solovyev", "Enrico Tassi", "Laurent Théry" ], "title": "A machine-checked proof of the odd order theorem", "venue": "Interactive Theorem Proving - 4th International Conference,", "year": 2013 }, { "authors": [ "Adam Grabowski", "Artur Kornilowicz", "Adam Naumowicz" ], "title": "Mizar in a nutshell", "venue": "J. Formalized Reasoning,", "year": 2010 }, { "authors": [ "William H. Guss", "Cayden Codel", "Katja Hofmann", "Brandon Houghton", "Noburu Kuno", "Stephanie Milani", "Sharada Prasanna Mohanty", "Diego Perez Liebana", "Ruslan Salakhutdinov", "Nicholay Topin", "Manuela Veloso", "Phillip Wang" ], "title": "The MineRL competition on sample efficient reinforcement learning using human priors", "venue": "URL http://arxiv.org/abs/ 1904.10079", "year": 1904 }, { "authors": [ "Thomas Hales", "Mark Adams", "Gertrud Bauer", "Dat Tat Dang", "John Harrison", "Truong Hoang", "Cezary Kaliszyk", "Victor Magron", "Sean McLaughlin", "Thang Tat Nguyen", "Truong Quang Nguyen", "Tobias Nipkow", "Steven Obua", "Joseph Pleso", "Jason Rute", "Alexey Solovyev", "An Ta", "Tran Trung", "Diep Thi Trieu", "Roland Zumkeller" ], "title": "A formal proof of the Kepler conjecture", "venue": "Forum of Mathematics,", "year": 2015 }, { "authors": [ "Matan Haroush", "Tom Zahavy", "Daniel J. 
Mankowitz", "Shie Mannor" ], "title": "Learning how not to act in text-based games", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "John Harrison" ], "title": "HOL Light: A tutorial introduction", "venue": "Palo Alto, California,", "year": 1996 }, { "authors": [ "Ji He", "Jianshu Chen", "Xiaodong He", "Jianfeng Gao", "Lihong Li", "Li Deng", "Mari Ostendorf" ], "title": "Deep reinforcement learning with an unbounded action", "venue": "space. CoRR,", "year": 2015 }, { "authors": [ "Daniel Huang", "Prafulla Dhariwal", "Dawn Song", "Ilya Sutskever" ], "title": "GamePad: A Learning Environment for Theorem Proving", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jan Jakubuv", "Josef Urban" ], "title": "ENIGMA: efficient learning-based inference guiding machine", "venue": "Edinburgh, UK,", "year": 2075 }, { "authors": [ "Jan Jakubuv", "Josef Urban" ], "title": "Hammering Mizar by learning clause guidance", "venue": "CoRR, abs/1904.01677,", "year": 2019 }, { "authors": [ "Arthur Juliani", "Ahmed Khalifa", "Vincent-Pierre Berges", "Jonathan Harper", "Hunter Henry", "Adam Crespi", "Julian Togelius", "Danny Lange" ], "title": "Obstacle tower: A generalization challenge in vision, control, and planning", "venue": "URL http://arxiv.org/abs/1902.01378", "year": 1902 }, { "authors": [ "Łukasz Kaiser", "Ilya Sutskever" ], "title": "Neural GPUs learn algorithms", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Cezary Kaliszyk", "Josef Urban" ], "title": "Learning-assisted automated reasoning with Flyspeck", "venue": "J. Autom. Reasoning,", "year": 2014 }, { "authors": [ "Cezary Kaliszyk", "Josef Urban" ], "title": "FEMaLeCoP: Fairly efficient machine learning connection prover", "venue": "20th International Conference,", "year": 2015 }, { "authors": [ "Cezary Kaliszyk", "Josef Urban", "Jiří Vyskočil" ], "title": "Efficient semantic features for automated reasoning over large theories", "venue": "Proc. of the 24th International Joint Conference on Artificial Intelligence", "year": 2015 }, { "authors": [ "Cezary Kaliszyk", "Josef Urban", "Jiři Vyskočil" ], "title": "Certified connection tableaux proofs for HOL Light and TPTP", "venue": "In Proceedings of the 2015 Conference on Certified Programs and Proofs,", "year": 2015 }, { "authors": [ "Cezary Kaliszyk", "François Chollet", "Christian Szegedy" ], "title": "HolStep: A machine learning dataset for higher-order logic theorem proving", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Cezary Kaliszyk", "Josef Urban", "Henryk Michalewski", "Miroslav" ], "title": "Olsák. Reinforcement learning of theorem proving", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Michael K. Kinyon", "Robert Veroff", "Petr Vojtechovský" ], "title": "Loops with abelian inner mapping groups: An application of automated deduction", "venue": "Automated Reasoning and Mathematics - Essays in Memory of William W. McCune, volume 7788 of LNCS,", "year": 2013 }, { "authors": [ "Bartosz Kostka", "Jaroslaw Kwiecien", "Jakub Kowalski", "Pawel" ], "title": "Rychlikowski. Text-based adventures of the golovin AI agent", "venue": "CoRR, abs/1705.05637,", "year": 2017 }, { "authors": [ "Laura Kovács", "Andrei Voronkov" ], "title": "First-order theorem proving and Vampire", "venue": "In CAV,", "year": 2013 }, { "authors": [ "Sarah M. 
Loos", "Geoffrey Irving", "Christian Szegedy", "Cezary Kaliszyk" ], "title": "Deep network guided proof search", "venue": "In 21st International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR),", "year": 2017 }, { "authors": [ "Jacob Menashe", "Peter Stone" ], "title": "Escape room: A configurable testbed for hierarchical reinforcement learning", "venue": "CoRR, abs/1812.09521,", "year": 2018 }, { "authors": [ "Karthik Narasimhan", "Tejas D. Kulkarni", "Regina Barzilay" ], "title": "Language understanding for textbased games using deep reinforcement learning", "venue": "CoRR, abs/1506.08941,", "year": 2015 }, { "authors": [ "Alex Nichol", "Vicki Pfau", "Christopher Hesse", "Oleg Klimov", "John Schulman" ], "title": "Gotta learn fast: A new benchmark for generalization in RL", "venue": "CoRR, abs/1804.03720,", "year": 2018 }, { "authors": [ "Jens Otten", "Wolfgang Bibel" ], "title": "leanCoP: lean connection-based theorem proving", "venue": "J. Symb. Comput.,", "year": 2003 }, { "authors": [ "Aditya Paliwal", "Sarah M. Loos", "Markus N. Rabe", "Kshitij Bansal", "Christian Szegedy" ], "title": "Graph representations for higher-order logic and theorem proving", "venue": "CoRR, abs/1905.10006,", "year": 2019 }, { "authors": [ "Kate Rakelly", "Aurick Zhou", "Deirdre Quillen", "Chelsea Finn", "Sergey Levine" ], "title": "Efficient off-policy meta-reinforcement learning via probabilistic context variables", "venue": "CoRR, abs/1903.08254,", "year": 2019 }, { "authors": [ "Cinjon Resnick", "Roberta Raileanu", "Sanyam Kapoor", "Alex Peysakhovich", "Kyunghyun Cho", "Joan Bruna" ], "title": "Backplay: \"Man muss immer umkehren", "venue": "CoRR, abs/1807.06919,", "year": 2018 }, { "authors": [ "Alan Robinson", "Andrei Voronkov (eds" ], "title": "Handbook of Automated Reasoning", "venue": "Elsevier Science Publishers B. V.,", "year": 2001 }, { "authors": [ "Raphael M. Robinson" ], "title": "An essentially undecidable axiom system", "venue": "Proceedings of the International Congress of Mathematics, pp", "year": 1950 }, { "authors": [ "Melrose Roderick", "Christopher Grimm", "Stefanie Tellex" ], "title": "Deep abstract q-networks", "venue": "CoRR, abs/1710.00459,", "year": 2017 }, { "authors": [ "Tim Salimans", "Richard Chen" ], "title": "Learning Montezuma’s Revenge from a single demonstration", "venue": "CoRR, abs/1812.03381,", "year": 2018 }, { "authors": [ "Adam Santoro", "Sergey Bartunov", "Matthew Botvinick", "Daan Wierstra", "Timothy P. Lillicrap" ], "title": "Meta-learning with memory-augmented neural networks", "venue": "Proceedings of the 33nd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "David Saxton", "Edward Grefenstette", "Felix Hill", "Pushmeet Kohli" ], "title": "Analysing mathematical reasoning abilities of neural models", "venue": "CoRR, abs/1904.01557,", "year": 2019 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": null, "year": 2017 }, { "authors": [ "Stephan Schulz" ], "title": "System Description: E 1.8", "venue": "Proc. of the 19th LPAR,", "year": 2013 }, { "authors": [ "Daniel Selsam", "Nikolaj Bjørner" ], "title": "Neurocore: Guiding high-performance SAT solvers with unsat-core predictions", "venue": "CoRR, abs/1903.04671,", "year": 2019 }, { "authors": [ "Daniel Selsam", "Matthew Lamm", "Benedikt Bünz", "Percy Liang", "Leonardo de Moura", "David L. 
Dill" ], "title": "Learning a SAT solver from single-bit supervision", "venue": "CoRR, abs/1802.03685,", "year": 2018 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton", "Yutian Chen", "Timothy Lillicrap", "Fan Hui", "Laurent Sifre", "George van den Driessche", "Thore Graepel", "Demis Hassabis" ], "title": "Mastering the game of go without human knowledge", "venue": "URL http: //dx.doi.org/10.1038/nature24270", "year": 2017 }, { "authors": [ "Konrad Slind", "Michael Norrish" ], "title": "A brief overview of HOL4", "venue": "Theorem Proving in Higher Order Logics, 21st International Conference,", "year": 2008 }, { "authors": [ "G. Sutcliffe" ], "title": "The TPTP Problem Library and Associated Infrastructure. From CNF to TH0, TPTP v6.4.0", "venue": "Journal of Automated Reasoning,", "year": 2017 }, { "authors": [ "Geoff Sutcliffe", "Josef Urban" ], "title": "The CADE-25 automated theorem proving system competition - CASC-25", "venue": "AI Commun.,", "year": 2016 }, { "authors": [ "Richard S. Sutton", "Andrew G. Barto" ], "title": "Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018", "venue": "URL http://incompleteideas.net/book/the-book-2nd. html", "year": 2018 }, { "authors": [ "Josef Urban" ], "title": "MPTP 0.2: Design, implementation, and initial experiments", "venue": "J. Autom. Reasoning,", "year": 2006 }, { "authors": [ "Josef Urban" ], "title": "The MPTP Challenge", "venue": "http://www.tptp.org/Seminars/ MizarVerification/TheMPTPChallenge.html,", "year": 2006 }, { "authors": [ "Josef Urban" ], "title": "MaLARea: a Metasystem for Automated Reasoning in Large Theories", "venue": "Proceedings of the CADE-21 Workshop on Empirically Successful Automated Reasoning in Large Theories, Bremen, Germany,", "year": 2007 }, { "authors": [ "Josef Urban", "Geoff Sutcliffe", "Petr Pudlák", "Jirí Vyskocil" ], "title": "MaLARea SG1- machine learner for automated reasoning with semantic guidance", "venue": "Automated Reasoning, 4th International Joint Conference,", "year": 2008 }, { "authors": [ "Josef Urban", "Jirí Vyskocil", "Petr Stepánek" ], "title": "MaLeCoP: Machine learning connection prover", "venue": "July 4-8,", "year": 2011 }, { "authors": [ "Robert Veroff" ], "title": "Using hints to increase the effectiveness of an automated reasoning program: Case studies", "venue": "J. Autom. Reasoning,", "year": 1996 }, { "authors": [ "Mingzhe Wang", "Yihe Tang", "Jian Wang", "Jia Deng" ], "title": "Premise selection for theorem proving by deep graph embedding", "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Chiyuan Zhang", "Oriol Vinyals", "Rémi Munos", "Samy Bengio" ], "title": "A study on overfitting in deep reinforcement learning", "venue": "CoRR, abs/1804.06893,", "year": 2018 } ]
[ { "heading": null, "text": "We present a reinforcement learning (RL) based guidance system for automated theorem proving geared towards Finding Longer Proofs (FLoP). FLoP focuses on generalizing from short proofs to longer ones of similar structure. To achieve that, FLoP uses state-of-the-art RL approaches that were previously not applied in theorem proving. In particular, we show that curriculum learning significantly outperforms previous learning-based proof guidance on a synthetic dataset of increasingly difficult arithmetic problems." }, { "heading": "1 INTRODUCTION", "text": "In 1958 B. F. Skinner, a pioneer of modern behaviorism, in the article “Teaching Machines” (Skinner, 1958) noticed that “in acquiring complex behavior the student must pass through a carefully designed sequence of steps, often of considerable length. Each step must be so small that it can always be taken, yet in taking it the student moves somewhat closer to fully competent behavior”. His study extended also to the teaching of arithmetic: “The student is expected to arrive at 9 · 7 = 63, not by memorizing it as he would memorize a line of poetry, but by putting into practice such principles as that nine times a number is the same as ten times the number minus the number . . . ”. The idea of learning using a curriculum of problems is also widely used in machine learning (Bengio et al., 2009; Elman, 1993; Resnick et al., 2018; Salimans & Chen, 2018) and in this work we apply curriculum learning to automatic theorem proving focusing on arithmetic.\nOur work has the following contributions. (1) We introduce a new theorem proving algorithm FLoP (Section 4) based on reinforcement learning and the connection tableau calculus. FLoP uses a meta-learning variation of the curriculum learning algorithms presented by Resnick et al. (2018) and Salimans & Chen (2018). (2) We introduce a synthetic dataset of increasingly difficult arithmetic problems organized as RL environments (Section 5). (3) We use this benchmark to compare (Section 6) the performance of our system with state-of-the-art saturation provers Vampire (Kovács & Voronkov, 2013) and E (Schulz, 2013) guided by human-designed strategies, and with rlCoP (Kaliszyk et al., 2018) – a recently developed RL-based connection tableau prover. FLoP significantly outperforms the other provers on harder problems, demonstrating its ability to find longer proofs.\nOur datasets presented in Section 5 seem to be particularly suited for machine learning methods: problems are simple, solutions are long, repetitive and rather predictable for humans. Still, state-ofthe-art systems struggle with solving some of the problems – see Section 6 for details.\nOther works using machine learning to guide a prover (Chvalovský et al., 2019; Jakubuv & Urban, 2017; Kaliszyk & Urban, 2015b; Kaliszyk et al., 2018; Loos et al., 2017; Urban et al., 2011) usually deal with large mathematical corpora, while we focus on a fragment of Robinson Arithmetic, which is a limited and simple theory. Our reasons behind this narrower focus: (a) We wanted to create a scalable RL benchmark with emphasis on the length of proofs. (b) A symbolic method based on human-designed sets of hints (Veroff, 1996) was previously successfully applied in abstract algebra by Kinyon et al. (2013) to discover long proofs and we wanted to check whether learning of long proofs is feasible using the state-of-the-art ML toolset. (c) We wanted interpretable failure modes. 
In the case of large mathematical corpora, the interpretation of failures may be a hard task because of multiple failures and the complicated structure of the corpora, requiring specialized domain knowledge both in mathematics and with regard to the inner workings of the proof system.
Our code, datasets and all experiment configuration files are available at http://bit.ly/code_atpcurr (this distribution does not include the fCoP theorem prover, which cannot yet be publicly released; however, a binary can be obtained upon request). Supplementary materials including screencasts with gameplays performed in our environments are available at the project webpage http://bit.ly/site_atpcurr." }, { "heading": "2 RELATED WORK", "text": "Machine learning datasets and RL environments involving mathematics and logic. The arithmetic dataset which we introduce in Section 5 is geared towards longer proofs and is structurally much simpler than other theorem proving datasets which we list below. One can think about this suite of RL problems as gridworlds of theorem proving (see (Sutton & Barto, 2018, Example 3.5) for a broader explanation of the importance of gridworlds in RL). Our dataset is intended to become a general purpose testing ground for theorem proving and reinforcement learning methods, in particular for meta-learning and hierarchical learning algorithms.
TPTP (Sutcliffe, 2017) consists of 22507 problems in 53 domains collected over several decades. A large dataset for developing machine learning for theorem proving based on the Mizar Mathematical Library (MML) (Grabowski et al., 2010) was introduced by Urban (2006a) in the MPTP project. The dataset was used e.g. by Alemi et al. (2016); Kaliszyk & Urban (2015a); Urban (2007); Urban et al. (2008). Similar datasets based on the Isabelle/HOL, HOL Light/Flyspeck and HOL4/CakeML systems and projects (Blanchette et al., 2016; Gauthier & Kaliszyk, 2015; Kaliszyk & Urban, 2014) were introduced in the last decade and used for the CASC LTB (large theory) ATP competition (Sutcliffe & Urban, 2016) and other system evaluations. Such datasets cover large areas of mathematics and computer science and contain diverse axioms, lemmas, theorems, definitions, and symbols. Smaller subsets of lemmas leading to the Bolzano-Weierstrass theorem were selected from the MPTP dataset to form the MPTP Challenge (Urban, 2006b) and the MPTP2078 benchmark (Alama et al., 2014). HOLStep (Kaliszyk et al., 2017) introduced a dataset based on 11400 proofs, including a proof of the Kepler Conjecture (Hales et al., 2015), formalized using HOL Light (Harrison, 1996). In HOLStep and in FormulaNet (Wang et al., 2017) the dataset was used as a benchmark for various neural architectures. The recent HOList project (Bansal et al., 2019a;b; Paliwal et al., 2019) uses 29462 theorems formalized in HOL Light and instruments them for experiments oriented towards tactic selection, where a tactic is a human-designed program which aggregates multiple proof steps. GamePad (Huang et al., 2019) introduced a dataset based on a formalization of the Feit-Thompson Theorem (Gonthier et al., 2013) along with generated algebra problems. It is intended for learning tactic selection together with an auxiliary task of predicting the number of proof steps left. A dataset based on theorems proved in HOL4 (Slind & Norrish, 2008) was used for developing the TacticToe (Gauthier et al., 2018) learning-guided tactical prover. Saxton et al. (2019) proposed a dataset of simple algebraic problems expressed in English.
Arithmetic problems without a natural language context were tackled by Neural GPUs (Kaiser & Sutskever, 2016) and its improved successors (Freivalds & Liepins, 2017). Supervised learning was also applied to various instances of propositional satisfiability in NeuroSAT (Selsam et al., 2018) and NeuroCore (Selsam & Bjørner, 2019), as well as in (Chvalovsky, 2019; Evans et al., 2018). Datasets introduced in Section 5 are OpenAI-gym (Brockman et al., 2016) compliant and can be tested with modern RL algorithms. Previous work on theorem proving and RL includes TacticToe, HOList and rlCoP (Kaliszyk et al., 2018). TacticToe and rlCoP use guided Monte Carlo Tree Search (MCTS) and HOList proposes a custom RL algorithm.
Machine learning systems for guidance of theorem provers. Our current work focuses on providing guidance for the fCoP (Kaliszyk et al., 2015b) theorem prover. fCoP is an OCaml implementation of the very compact lean connection tableau prover (Otten & Bibel, 2003). fCoP was used as the proof engine in the guided provers FEMaLeCoP (Kaliszyk & Urban, 2015b) and rlCoP (Kaliszyk et al., 2018). FEMaLeCoP learns only from positive data using two simple, but fast machine learning models (custom nearest neighbour and naive Bayes). In rlCoP, the value and policy functions of the guided MCTS algorithm are learned similarly to (Anthony et al., 2017; Silver et al., 2017), using gradient boosted trees as implemented in the XGBoost (Chen & Guestrin, 2016) library. In contrast, we use neural network models instead of trees and the Proximal Policy Optimization (PPO) (Schulman et al., 2017) algorithm instead of MCTS. In the longer run we believe that these methods should be combined, see (François-Lavet et al., 2018, Section 6.2), but in this work we propose to investigate how much can be achieved directly via rollouts and without a search algorithm like MCTS. A distinctive feature of our approach is the ability to perform very long rollouts both in training and evaluation. We demonstrate this in Section 6, see Figure 4. Chvalovský et al. (2019); Jakubuv & Urban (2017; 2019); Loos et al. (2017) added learning-based guidance to E prover (Schulz, 2013). These are supervised experiments which learn from saturation-style proof traces.
Meta-learning suites of RL environments. Meta-learning algorithms in the context of RL can be tested using a suite of simulated robotic tasks (Finn et al., 2017; Rakelly et al., 2019), one of the discrete environments proposed in (Cobbe et al., 2018; Juliani et al., 2019; Menashe & Stone, 2018; Nichol et al., 2018; Roderick et al., 2017), or the new MineRL (Guss et al., 2019) suite of problems with mixed continuous and discrete actions. Our suite of tasks involves discrete actions.
Curriculum learning and reinforcement learning. Our algorithm FLoP is an adaptation of the curriculum learning algorithms presented in (Resnick et al., 2018; Salimans & Chen, 2018) to the context of a suite of reinforcement learning environments presented in Section 5.
3 FCOP AND THE CONNECTION TABLEAU CALCULUS
In this section, we give a brief overview of the connection tableau method, as implemented by the fCoP system. We assume basic first-order logic and theorem proving terminology (Robinson & Voronkov, 2001). The input is a (mathematical) problem consisting of axioms and conjectures formally stated in first-order logic (FOL).
The calculus searches for refutational proofs, i.e. proofs showing that the axioms together with the negated conjectures are unsatisfiable. The FOL formulas are first translated to clause normal form (CNF), producing a set of first-order clauses consisting of literals (atoms or their negations). Figure 1 shows a closed connection tableau, i.e., a finished proof tree where every branch contains complementary literals (literals with opposite polarity). Since all branches contain a pair of contradictory literals, this shows that the set of clauses is unsatisfiable. Proof search starts with a start clause as a goal and proceeds by building a connection tableau by repeatedly applying extension steps and reduction steps.\nThe extension step connects (unifies) the current goal (a selected tip of a tableau branch) with a complementary literal of a new clause. This extends the current branch, possibly splitting it into several branches if there are more literals in the new clause, and possibly instantiating some variables in the tableau. The reduction step connects the current goal to a complementary literal of the active path, thus closing the current branch. The proof is finished when all branches are closed. The extension and reduction steps are nondeterministic, requiring backtracking in the standard connection calculus. Brute force search such as iterative deepening can be used to ensure completeness, i.e., making sure that the proof search finds a proof if there is any.\nfCoP represents theorem proving as a one-person game. The game ends with a success if a proof is found. The prover has many choices to make along the way, hence it typically has to explore a search space that is exponentially large in the length of the proof. In fCoP, the action space is roughly correlated with the size of the axiom set. While this can be large for large problems, typically only a few actions are available in any particular state.\n4 FLOP – THE MAIN ALGORITHM\nThe FLoP algorithm combines the connection tableau calculus with guidance based on PPO and curriculum learning Resnick et al. (2018); Salimans & Chen (2018). Actions in our theorem proving game consist of selecting an extension step as defined in Section 3 (reduction steps are performed automatically by the game engine). Figures 2 and 3 show how actions interact with other components of FLoP. Each extension step involves selecting one of the clauses, however, not all clauses are applicable as actions at a given proof step, due to the unification condition. The full information about the game state consists of all previous proof steps, the partial proof tree (proof state) and the current goal. The state and actions (formulas) are represented using previously developed features Kaliszyk & Urban (2015b); Kaliszyk et al. (2015a; 2018). The features mainly include (suitably hashed) triples of adjacent nodes in the formula trees and in the partial proof trees. This means that the proof states and the actions are presented as (sparse) fixed length vectors, see the inputs to the policy and value networks in Figure 2. These features have proved useful but are not free from problems. See the discussion in Appendix A.\nCurriculum Learning on Proofs. In theorem proving we are dealing with sparse rewards and we tackle this with the help of curriculum learning as implemented in Algorithm 1.\nFirst, in line 6 of Algorithm 1 we sample a problem. 
In lines 7-20 we play an episode: if we have a proof then we start from the state dictated by the global curriculum (lines 7-9). If\nwe do not have a proof then we start from the beginning. If the policy succeeds in finding a proof of a yet unproven problem then we reset the global curriculum to 1 in line 20. We sample k episodes repeating k times the loop in lines 6-20 and finally decide whether to increase the global curriculum in lines 23-24. We can advance curriculum globally (as in Algorithm 1) or independently for each problem. We found that global advancement makes learning more stable, so that is our default approach. We can start learning with or without training proofs. It does not change the processing of Algorithm 1. In Section 6 we provide experimental evidence with regard to both approaches." }, { "heading": "5 DATASETS", "text": "We introduce a suite\nRobinson Arithmetic defines basic properties of arithmetic expressions. The signature of the language contains an atom ’o’ (representing 0), functions ’s’, ’plus’ and ’mul’ (representing +1, + and ·,\nAlgorithm 1 FLoP: Curriculum Learning on Proofs Input: problem set P , policy π, progress threshold ∈ [0..1]\ntrain steps ∈ N, episodes between updates: k ∈ N Output: trained policy π, possible new proofs for problems in P\n1: steps← 0 2: curriculum← 1 3: while steps < train steps do 4: successes← 0 5: for j in 1..k do 6: p← random problem from problem set P . An episode corresponds to a problem 7: if p has stored proof then . Determine initial state 8: Take proof steps according to stored proof until curriculum number of steps remain 9: s0 ← state of problem p after initial proof steps taken 10: else 11: s0 ← starting state of problem p 12: while not episode over do 13: Take action according to policy π(ai|si), observe next state si+1 and reward ri+1 14: steps← steps + 1 15: if proof is found for p then 16: successes← successes + 1 17: if found proof is shorter than previous proof then 18: store proof as new proof for p 19: if no proof of p was known before then 20: curriculum← 1 . Restart curriculum learning 21: Update policy π 22: success rate← successes / k 23: if success rate > progress threshold then 24: curriculum← curriculum + 1 . Advance curriculum\nrespectively), and the equality predicate ’=’. For example, formula 3 · 1 + 2 = 4 + 1 is written as\nplus(mul(s(s(s(o))), s(o)), s(s(o))) = plus(s(s(s(s(o)))), s(o)).\nWe use the axioms provided in Table 1. The unary representation of numbers (e.g., s(s(s(o))) represents 3) results in large expressions and long proofs as the numbers increase. For example, ((8 + 5) · 8) · 5 = 520 takes over 16000 steps to prove in fCoP. We show an example of such a proof on the project website. In Table 2 we identify three problem sets of increasing complexity that we use in Section 6 to evaluate FLoP. For Stage 1, a good ordering of the available inference actions is sufficient to find a proof. Stage 2 is harder, as the current goal is also important for selecting an action. For example, the equality reflexivity A = A is usually not needed, except for some cases, due to the unification it triggers. So the system has to learn that this action is useful in particular situations. Stage 3 is much harder, because some of the “rare” actions are tied to global progress in the proof, for example, when we move focus from the left side of the equation to the right side." }, { "heading": "6 EXPERIMENTS", "text": "In this Section we present six experiments. 
Experiment 1 demonstrates that FLoP can learn when no proof is provided, only the training problems. In Experiment 2 we compare FLoP with other provers and show that it performs very well on the arithmetic datasets. In Experiment 3 we show that FLoP tends to solve problems in fewer steps than rlCoP and also solves problems that require significantly longer proofs. Experiment 4 shows that FLoP can generalize from a single training problem that is provided with a proof. In Experiment 5 we show that proofs of harder problems provide more valuable training signal and we obtain the best generalization when we learn from some longer proofs. Finally Experiment 6 shows that FLoP is more robust than supervised learning on training proofs.\nIn total we used around 2.5M core-hours on Xeon E5-2680v3 processors, approximately 250-300 core-years. Our hyperparameters were selected using small grid searches. We checked standard\nRL parameters (e.g., the discount factor) parameters related to curriculum scheduling (e.g., local vs. global), neural network architectures (1–5 layers with 128–1024 neurons), feature sizes (64–1024) and training steps (105 – 108). Parameters used in the experiments are described in configuration files which are accessible along with the shared codebase.\nEach model was trained to achieve 100% accuracy on the training set. During evaluation, the system was allowed to make 100 attempts per problem, each with 60 sec. time limit, without backtracking. We report two evaluation metrics: 1) Succ.: percentage of proofs found and 2) Len.: average length of proofs found. We have trained 5 models per a set of hyperparameters (unless otherwise noted). Reported numbers are means, with standard deviations in parenthesis. We discuss some failure modes in Appendix A.\nExperiment 1: Learning without proofs. We train FLoP with the training problems defined in Section 5, without proofs. The system can find training proofs through exploration and learn from them as we show in Table 3. Curriculum learning performs well for Stage 1 and 2, however in Stage 3 it only solves 3%. This massive overfitting, addressed in Experi-\nments 4 and 5, happens because the system tends to overuse equality congruence axioms, i.e. when trying to prove a+ b = c+ d or a · b = c · d, it often proceeds by reducing the problem to proving a = c and b = d, which does not work in general. In the training set, all numbers are 0 and 1 and this approach works more often. We also compare curriculum learning with learning based on exploration only. As Figure 5 in the Appendix shows, curriculum learning yields more rewards. This makes no difference in the setting of Stage 1, helps greatly in Stage 2 and results in overfitting in Stage 3.\nExperiment 2: Comparison with other Provers. We compare FLoP with two state-of-the-art saturation-style theorem provers (E, Vampire), a strong connection tableau prover (leanCoP (Otten &\nBibel, 2003)) and one connection tableau prover using learning-based guidance (rlCoP (Kaliszyk et al., 2018)).\nVampire, E and leanCoP use human-designed strategies instead of learning. In our tests we use the casc mode for Vampire, the auto and auto-schedule modes for E and the default collection of 40 strategies for leanCoP (the standard casc setup), each with a timeout of 60 sec. per problem. For rlCoP we used the same hyperpa-\nrameters as those described in Kaliszyk et al. (2018), only modifying the policy temperature from 2.5 to 1.5. The number of inferences in the MCTS was limited to 200000. 
For Stage 1 and 2 rlCoP was run directly on the evaluation set. For Stage 3 all problems were too hard to solve without guidance within the inference limit, so we started with the version trained on the solutions of Stage 2. For FLoP we report the best models trained without proofs.\nThe success ratios are given in Table 4. E’s auto-schedule tries multiple strategies and finds one with the left-to-right ordering of all the addition and multiplication axioms. This solves all of our problems immediately without any proof search by only using rewriting to a normal form (Baader & Nipkow, 1998). This demonstrates the power of equational theorem proving when a suitable term ordering exists and can be found by human-designed heuristics. This is however far from guaranteed in general and nontrivial even in such simple domains, as witnessed by Vampire’s failure to find this ordering. To evaluate E without access to its built-in rewriting capability, we have renamed the equality to a new predicate ‘eq’ axiomatized exactly in the same way as in fCoP. The auto-schedule mode then solves 54% problems in Stage 1, comparable to the auto mode. Overall, FLoP solves the most problems in all stages if we count systems that rely on search space exploration.\nExperiment 3: FLoP vs. rlCoP with Respect to Proof Lengths. Due to the same underlying calculus, the proofs found by rlCoP and FLoP are directly comparable and it is insightful to compare them with respect to the length\nof proofs. Figure 4 shows that FLoP manages to solve more problems, and even finds some very long proofs. This is, however, not because FLoP’s proofs are unnecessarily long: we demonstrate in Table 5 that FLoP tends to find shorter proofs for problems solved by both systems. It is interesting to note that out of the 351 problems solved by both, none had the same length, which suggests that the provers acquired different strategies.\nExperiment 4: Learning from a Single Problem with Proof Provided. When the proof of training problems is available, FLoP can use it to make learning more efficient. In this experiment, FLoP is restricted to learn from a single, rather simple training problem, but we also provide its\nproof. Table 6 shows that FLoP generalizes very well in this setup, expecially in Stage 1 and 2.\nTraining problem Succ. Len.\n1 · 2 + 1 + 1 = (1 + 1) · 1 · 2 0.32(0.05) 566(14) 1 · 2 + 1 + 1 = (1 + 1) · 1 · 2 (1 + 1 + 1) · 2 = 2 · 1 + 2 + 2 0.51 (0.03) 590(54)\nby reducing the problem to proving a = c and b = d, which does not work in general. In the training set, all numbers are 0 and 1 and this approach works more often. The harder the problems, the less likely they can be solved with such heuristic approaches, hence harder training problems promise more valuable training signal. We demonstrate this by training FLoP on a few selected harder problems with proofs provided. A single longer training proof (113 steps) is sufficient to discourage the overuse of equality axioms. Adding one more training problem (108 steps) helps even more and\nwe obtain the best model for Stage 3, see Table 7. Also note the huge increase in the length of proofs: this is partly because we solve new hard problems and partly because the system resorts to longer but safer proof strategies.\nStage Proof Lengths Supervised Curriculum\nSucc. Len. Succ. Len.\n1 5, 9 0.98(0.04) 327(58) 1 (0.01) 363(5) 1 7, 10 1 (0) 359 (0) 0.98(0.01) 327(18) 1 9, 11 0.52(0.08) 54(11) 0.98 (0.01) 340(18) 2 5, 9, 23 0.85 (0.04) 377(47) 0.76(0.02)? 291(16)? 
2 7, 10, 24 0.74 (0.04) 433(110) 0.71(0.01)? 311(61)? 2 9, 11, 25 0.59(0.08) 193(49) 0.76 (0.01)? 267(109)?\nTable 8 that it greatly depends on the quality of the given proof. For the three problems in the training set of Stage 1 and 2, we take the shortest proofs (5, 9 and 23 steps) and construct variants with 1-3 extra steps added. We observe that supervised learning degrades as superfluous steps are introduced, while FLoP’s exploration allows the system to recover and find the original proofs." }, { "heading": "7 CONCLUSION AND FUTURE WORK", "text": "We have built FLoP, a proof guidance system based on reinforcement learning addressing the problem of finding long proofs in an exponential search space. Previous work (Kinyon et al., 2013; Veroff, 1996) focused on finding long proofs with the help of human-designed heuristics. We find that curriculum learning is a suitable approach as it strikes a good balance between exploration and memorization on a suite of arithmetic RL problems introduced in Section 5, allowing our system to generalize from small training problems to larger ones with similar structure. We have created a suite of RL environments based on Robinson arithmetic that contains problems of highly related structure.\nOverfitting in Reinforcement Learning is a a well known and largely unsolved problem and we believe that our work offers an interesting new angle on the problem. We show that the greater reward efficiency provided by curriculum learning can result in catastrophic overfitting. Figure 5 and Table 3 show that allowing or disallowing the curriculum can be considered as a trade-off between more efficient training and a higher risk of overfitting. Furthermore, we also show in Table 7 that when we have some training proofs of harder problems, we can greatly reduce overfitting.\nThis work uses a human-designed representation of formulas similar to one used earlier in Kaliszyk & Urban (2015b); Kaliszyk et al. (2018); Urban et al. (2011) and a straightforward encoding of actions. We believe that learned embeddings as well as more sophisticated RL ideas employed before in the context of text games (Côté et al., 2018; Fulda et al., 2017; Haroush et al., 2018; He et al., 2015; Kostka et al., 2017; Narasimhan et al., 2015) will positively impact the performance of FLoP. We also see potential in exploring other ways of curriculum learning: while in this paper curriculum is on the number of steps to the end of the proof, one could order training problems from easier to more complex ones.\nWe believe that the learning loop implemented in Algorithm 1 can benefit from integration with memory-based meta-learning methods (Ortega et al., 2019; Santoro et al., 2016). One can also look for inspiration from robotics (Florensa et al., 2017) with regard to automatic discovery of curricula of easier problems. In mathematical practice it is quite often the case that instead of proving a conjecture, a similar, but simpler problem is solved, which eventually – maybe over a longer span of time – contributes to the solution of the main conjecture. 
This encourages us to combine FLoP with Hindsight Experience Replay (Andrychowicz et al., 2017), which utilizes a similar strategy of allowing easier goals in order to ultimately solve a harder goal in the domain of robotics.\nFinally we find it interesting to instrument the Bolzano-Weierstrass theorem and its 252 auxiliary lemmas as an RL challenge where the system would be supposed to derive the theorem and all lemmas from scratch using in each derivation only basic axioms, hence forcing long proofs. This can be considered as an RL follow-up to the earlier MPTP Challenge (Urban, 2006b)." }, { "heading": "APPENDIX A FAILURE MODES", "text": "Despite the apparent simplicity of our arithmetic learning environments, a learning system aiming to solve them has to overcome some hard challenges. We have decided to describe these challenges in detail as they are present in other domains as well, even if it may be harder to detect.\nFailure type 1. The reward mechanism of our RL system is biased towards shorter proofs. However, many problems have “shortcuts” that allow for shorter proofs, but that do not generalize well. Consider formula (1 + 1) + (2 · 2) = (0 + 2) + 4. There are two ways to prove this equality: 1) compute the values of the expressions on both sides of the equation and notice that they are the same or 2) show that 1 + 1 = 0 + 2 and 2 · 2 = 4. The former generalizes better, but the latter results in a shorter proof. Hence, training on this problem might negatively affect the performance of the prover. This is what causes the failure in Experiment 3: through manual inspections of discovered proofs we have concluded that curriculum learning is more efficient at finding and learning shorter proofs of the training problems and it overfits to them.\nFailure mode 2. fCoP features do not take into account the order of the arguments of a function, hence f(a, b) and f(b, a) have the same features. This is particularly problematic for Stage 3, since A = B and B = A require different inferences. We addressed this problem by 1) extending state features with those of the preceding action as a substitute of a memory, 2) modified the features to include argument order.\nFailure mode 3. Some ”rare” events are hard to generalize, because the system sees very few relevant samples during training. This is the case with applying commutativity of equality (replacing A = B with B = A), which is only required in Stage 3 and ideally only once per proof, when we move focus from one side of the equation to the other. In Experiment 4, when we trained on a single longer proof, we have noticed that the system was very unsure about this action which resulted in many failed proof attempts. Adding another training proof as enough to overcome this and success score increased from 32% to 51%." }, { "heading": "APPENDIX B THE EFFECT OF CURRICULUM LEARNING", "text": "Figure 5 shows that curriculum learning yields more rewards. This makes no difference in the simple setting of Stage 1, helps greatly in Stage 2 and results in fatal overfitting in Stage 3." } ]
2,019
null
SP:6a4302d604b03b5c7ce0c30450808705348d4e9c
[ "The paper presents an approach to learning user representations based on activity patterns on e-commerce websites and a user profile. The method turns activity patterns into a sequence of discrete tokens based on the action type and attributes that correspond to a certain action. A self-supervised transformer is trained on this data with a masked language modeling (MLM) objective. Data is compartmentalized as long-term patterns such as a purchase or the use of reward points or short-term such as clickthrough data or user profile information such as user age, gender, or location. Separate segment and position embeddings are used within each compartment. Since each masked token is a high-level action type that may have many attributes, predicting a masked-token is cast as a multi-label classification problem over attributes." ]
This paper extends the BERT model to user data for pretraining user representations in a self-supervised way. By viewing actions (e.g., purchases and clicks) in behavior sequences (i.e., usage history) in an analogous way to words in sentences, we propose methods for the tokenization, the generation of input representation vectors and a novel pretext task to enable the pretraining model to learn from its own input, omitting the burden of collecting additional data. Further, our model adopts a unified structure to simultaneously learn from long-term and short-term user behavior as well as user profiles. Extensive experiments demonstrate that the learned representations result in significant improvements when transferred to three different real-world tasks, particularly in comparison with task-specific modeling and representations obtained from multi-task learning.
[]
[ { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework", "venue": "perspectives. IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2013 }, { "authors": [ "Heng-Tze Cheng", "Levent Koc", "Jeremiah Harmsen", "Tal Shaked", "Tushar Chandra", "Hrishi Aradhye", "Glen Anderson", "Greg Corrado", "Wei Chai", "Mustafa Ispir", "Rohan Anil", "Zakaria Haque", "Lichan Hong", "Vihan Jain", "Xiaobing Liu", "Hemal Shah" ], "title": "Wide & deep learning for recommender systems", "venue": "In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems,", "year": 2016 }, { "authors": [ "Paul Covington", "Jay Adams", "Emre Sargin" ], "title": "Deep neural networks for youtube recommendations", "venue": "In Proceedings of the 10th ACM Conference on Recommender Systems,", "year": 2016 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "In NAACL-HLT,", "year": 2019 }, { "authors": [ "Carl Doersch", "Abhinav Gupta", "Alexei A. Efros" ], "title": "Unsupervised visual representation learning by context prediction", "venue": "In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Jeff Donahue", "Yangqing Jia", "Oriol Vinyals", "Judy Hoffman", "Ning Zhang", "Eric Tzeng", "Trevor Darrell" ], "title": "Decaf: A deep convolutional activation feature for generic visual recognition", "venue": "In Proceedings of the 31st International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Deepti Ghadiyaram", "Matt Feiszli", "Du Tran", "Xueting Yan", "H. Wang", "D. Mahajan" ], "title": "Large-scale weakly-supervised pre-training for video action recognition", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Mihajlo Grbovic", "Haibin Cheng" ], "title": "Real-time personalization using embeddings for search ranking at airbnb", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Michelle Guo", "Albert Haque", "De-An Huang", "Serena Yeung", "Li Fei-Fei" ], "title": "Dynamic task prioritization for multitask learning", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In 2015 IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Dan Hendrycks", "Kimin Lee", "Mantas Mazeika" ], "title": "Using pre-training can improve model robustness and uncertainty", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Philip Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Alex Kendall", "Yarin Gal", "Roberto Cipolla" ], "title": "Multi-task learning using uncertainty to weigh losses for scene geometry and semantics", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Alexander Kolesnikov", "Xiaohua Zhai", "Lucas Beyer" ], "title": "Revisiting self-supervised visual representation learning", "venue": "IEEE/CVF Conference 
on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "Albert: A lite bert for self-supervised learning of language representations", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Zhouhan Lin", "Minwei Feng", "C.D. Santos", "Mo Yu", "B. Xiang", "Bowen Zhou", "Yoshua Bengio" ], "title": "A structured self-attentive sentence embedding", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Xiaodong Liu", "Jianfeng Gao", "Xiaodong He", "Li Deng", "Kevin Duh", "Ye-Yi Wang" ], "title": "Representation learning using multi-task deep neural networks for semantic classification and information retrieval", "venue": "In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2015 }, { "authors": [ "Xiaodong Liu", "Pengcheng He", "Weizhu Chen", "Jianfeng Gao" ], "title": "Multi-task deep neural networks for natural language understanding", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Jiasen Lu", "Dhruv Batra", "Devi Parikh", "Stefan Lee" ], "title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Proceedings of the 26th International Conference on Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Yabo Ni", "Dan Ou", "Shichen Liu", "Xiang Li", "Wenwu Ou", "Anxiang Zeng", "Luo Si" ], "title": "Perceive your users in depth: Learning universal user representations from multiple e-commerce tasks", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representions by solving jigsaw puzzles", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Maxime Oquab", "Leon Bottou", "Ivan Laptev", "Josef Sivic" ], "title": "Learning and transferring mid-level image representations using convolutional neural networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2014 }, { "authors": [ "Deepak Pathak", "Philipp Krähenbühl", "Jeff Donahue", "Trevor Darrell", "Alexei A. Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Changhua Pei", "Yi Zhang", "Yongfeng Zhang", "Fei Sun", "Xiao Lin", "Hanxiao Sun", "Jian Wu", "Peng Jiang", "Junfeng Ge", "Wenwu Ou", "Dan Pei" ], "title": "Personalized re-ranking for recommendation", "venue": "In Proceedings of the 13th ACM Conference on Recommender Systems,", "year": 2019 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D. Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Matthew Peters", "Sebastian Ruder", "Noah Smith" ], "title": "To tune or not to tune? 
adapting pretrained representations to diverse tasks", "venue": "In Proceedings of the 4th Workshop on Representation Learning for NLP,", "year": 2019 }, { "authors": [ "Di Qi", "Lin Su", "Jia Song", "Edward Cui", "Taroon Bharti", "Arun" ], "title": "Sacheti. Imagebert: Cross-modal pre-training with large-scale weak-supervised image-text data", "venue": null, "year": 2001 }, { "authors": [ "Matthew Richardson", "Ewa Dominowska", "Robert Ragno" ], "title": "Predicting clicks: Estimating the click-through rate for new ads", "venue": "In Proceedings of the 16th International World Wide Web Conference(WWW-2007),", "year": 2007 }, { "authors": [ "Sebastian Ruder" ], "title": "An overview of multi-task learning", "venue": "in deep neural networks. ArXiv,", "year": 2017 }, { "authors": [ "Pierre Sermanet", "Corey Lynch", "Yevgen Chebotar", "Jasmine Hsu", "Eric Jang", "Stefan Schaal", "Sergey Levine" ], "title": "Time-contrastive networks: Self-supervised learning from video", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Ali Sharif Razavian", "Hossein Azizpour", "Josephine Sullivan", "Stefan Carlsson" ], "title": "CNN features off-the-shelf: an astounding baseline for recognition", "venue": "In CVPR DeepVision workshop,", "year": 2014 }, { "authors": [ "Edgar Simo-Serra", "Eduard Trulls", "Luis Ferraz", "Iasonas Kokkinos", "Pascal Fua", "Francesc MorenoNoguer" ], "title": "Discriminative Learning of Deep Convolutional Feature Point Descriptors", "venue": "In Proceedings of the International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Trevor Scott Standley", "Amir Roshan Zamir", "Dawn Chen", "Leonidas J. Guibas", "Jitendra Malik", "Silvio Savarese" ], "title": "Which tasks should be learned together in multi-task learning? ArXiv", "venue": null, "year": 1905 }, { "authors": [ "Weijie Su", "Xizhou Zhu", "Yue Cao", "Bin Li", "Lewei Lu", "Furu Wei", "Jifeng Dai" ], "title": "Vl-bert: Pretraining of generic visual-linguistic representations", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Chen Sun", "Austin Myers", "Carl Vondrick", "Kevin Murphy", "Cordelia Schmid" ], "title": "Videobert: A joint model for video and language representation learning", "venue": "IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Fei Sun", "Jun Liu", "Jian Wu", "Changhua Pei", "Xiao Lin", "Wenwu Ou", "Peng Jiang" ], "title": "Bert4rec: Sequential recommendation with bidirectional encoder representations from transformer", "venue": "In Proceedings of the 28th ACM International Conference on Information and Knowledge Management,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. 
Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Proceedings of the 31st International Conference on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jiawei Wu", "Xin Wang", "William Yang Wang" ], "title": "Self-supervised dialogue learning", "venue": "In ACL,", "year": 2019 }, { "authors": [ "Guorui Zhou", "Xiaoqiang Zhu", "Chenru Song", "Ying Fan", "Han Zhu", "Xiao Ma", "Yanghui Yan", "Junqi Jin", "Han Li", "Kun Gai" ], "title": "Deep interest network for click-through rate prediction", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Guorui Zhou", "Na Mou", "Ying Fan", "Qi Pi", "Weijie Bian", "Chang Zhou", "Xiaoqiang Zhu", "Kun Gai" ], "title": "Deep interest evolution network for click-through rate prediction", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Zeyuan Allen Zhu", "Weizhu Chen", "Tom Minka", "Chenguang Zhu", "Zheng Chen" ], "title": "A novel click model and its applications to online advertising", "venue": "In Proceedings of the Third ACM International Conference on Web Search and Data Mining,", "year": 2010 } ]
[ { "heading": "1 INTRODUCTION", "text": "The choice of data representations, i.e., how to create meaningful features, imposes tremendous impact on the performance of machine learning applications (Bengio et al., 2013). Therefore, data processing and feature engineering have been the decisive steps in developing machine learning models. To extend the applicability of the models, recent research on representation learning aims to discover the underlying explanatory factors hidden in raw data. With rapid advances in this direction, we have witnessed many breakthroughs in the areas of computer vision (CV) (Doersch et al., 2015; Sharif Razavian et al., 2014; Simo-Serra et al., 2015) and natural language processing (NLP) (Mikolov et al., 2013; Pennington et al., 2014; Lin et al., 2017).\nSimilarly, for building user-oriented industrial applications like next purchase prediction and recommendation, much effort has been spent on understanding business models and user behavior for creating useful features (Richardson et al., 2007; Covington et al., 2016). This is a time-consuming and application-specific process. Also, it is challenging to reuse these features or share gained knowledge between different services and applications.\nTo solve the issues of isolated feature engineering and task-oriented pipeline design, the pretrainingtransfer learning paradigm has been explored. For example, multi-task learning (MTL) has shown promising results (Ni et al., 2018). Nevertheless, MTL has its intrinsic challenges, e.g., deciding which tasks to learn jointly (Standley et al., 2019), or how to weigh tasks (Kendall et al., 2018), to achieve optimal performance. More importantly, the learning still hinges on large amounts of well-annotated user labels.\nInspired by the BERT model and its variations (Devlin et al., 2019; Lan et al., 2020), this paper explores the feasibility of understanding users in a similar way to how language is understood. We think it is conceptually intuitive to make such an analogy since understanding language and users share a similar goal, i.e., understanding a conveyed message, but with different mediums. The former models what is said (sentences) while the latter learns from what is done (behavior). The syntax and semantics of a sentence are comparable with the behavioral patterns and the characteristics of a user. Hence, we hypothesize the learning procedure can be consistent in methodology as well, and propose to build upon BERT for pretraining user representations on unlabeled behavior data.\nOur proposal, UserBERT, simultaneously learns from three categories of user data, i.e., long-term and short-term behavior as well as user profiles, via a unified architecture. In particular, different action types (e.g., page views, clicks and purchases) and attributes (e.g., shop and item genre)\nare chosen to represent long-term and short-term user behavior. For these two behavior types, we first present distinct strategies to discretize them into a sequence of behavioral words. Instead of modeling single user actions sequentially, the applied discretization leads to better generalization. The token representation of these behavioral words is computed by the concatenation and mean calculation of the word embeddings of the attribute IDs in each action, and this is followed by the summation of token, position and segment embeddings. These representation vectors are finally aligned with the word embeddings of user categorical profiles as the input to UserBERT. 
With this input, we design a novel pretext task, masked multi-label classification, and the UserBERT model is pretrained via optimizing the multi-label classifications of the multiple attributes in the masked behavioral words.\nDespite the parallels between user behavior and sentences, there are substantial differences and challenges in designing the learning procedure in a coherent way. Our model is able to deal with heterogeneous user behavior data, and achieve generalization via effective tokenization and the pretraining task. While there is prior work applying BERT to task-specific user modeling (Sun et al., 2019b), this paper is built upon the assumption that behavioral patterns can be understood like the structure of a language. The UserBERT model explores integrating various types of user data in a unified architecture and learning generic representations with self-supervised signals. In our experiments, the pretrained model is fine-tuned on three different real-world tasks, and the results show that UserBERT outperforms task-specific modeling and multi-task learning based pretraining.\nOur contributions are summarized as follows:\n• We propose UserBERT, a self-supervised learning model, to pretrain user representations via analogizing actions in a user behavior sequence to words in sentence. It eliminates the needs of previous approaches for collecting additional user annotated labels.\n• We design the discretization of user raw data sequences, the generation of the input representation and a novel pretext task for pretraining.\n• UserBERT adopts a unified model architecture to enable the simultaneous learning from heterogeneous data including long, short-term behavior as well as demographics.\n• We demonstrate the empirical power of UserBERT with extensive experiments. Our model is compared with task-specific models without pretraining and multi-task learning based pretraining models, and achieves performance gains on three real-world applications." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 PRETRAINING AND TRANSFER LEARNING", "text": "Recent studies have demonstrated that pretraining on large, auxiliary datasets followed by finetuning on target tasks is a promising paradigm for boosting performance (Oquab et al., 2014; Donahue et al., 2014; Hendrycks et al., 2019; Ghadiyaram et al., 2019). Multi-task learning has been one of the commonly adopted approaches for pretraining due to its ability to improve generalization (Zhang & Yang, 2017; Ruder, 2017). It is shown that the pretrained MTL models can boost performance even when transferred to unseen tasks (Liu et al., 2015; Ni et al., 2018). Despite its success, MTL still has many challenges, such as negative transfer and the learning adjustment between different tasks (Guo et al., 2018). Also, MTL requires large amounts of well-annotated labels to produce satisfying outputs. There are two common forms of adaptation when transferring the pretrained models to a given target task, i.e., feature-based in which the pretrained weights are frozen, and directly fine-tuning the pretrained model (Peters et al., 2019). We fine-tune pretrained models in our experiments." }, { "heading": "2.2 SELF-SUPERVISED LEARNING", "text": "Deep learning models can already compete with humans on challenging tasks like semantic segmentation in the CV area (He et al., 2015) as well as a few language understanding tasks (Liu et al., 2019). 
However, such success relies on adequate amounts of quality training data, which can be extremely expensive or even impossible to obtain (Kolesnikov et al., 2019). As a result, a lot of\nresearch efforts aim to liberate learning from the heavy dependency on supervised signals. Selfsupervised learning (SSL), a subclass of unsupervised learning, has been drawing more attention since the recent advances in the NLP field. Instead of using supervision signals, SSL only requires unlabeled data and trains models via formulating a pretext learning task. There are two main types of pretext tasks: context-based (Pathak et al., 2016; Noroozi & Favaro, 2016; Sermanet et al., 2018; Wu et al., 2019) and contrastive-based (Hjelm et al., 2019; Chen et al., 2020)." }, { "heading": "2.3 USER MODELING", "text": "To build user-oriented machine learning applications, the key challenge is finding an expressive representation of user data so that the followed modeling can effectively extract useful information to produce good performance. For that reason, much effort has been going towards data preprocessing and transformations, such as converting user categorical attributes to embeddings and aggregating user activities like total number of visits, clicks or amount of money spent over certain time interval or a particular product genre (Richardson et al., 2007; Zhu et al., 2010). Deep learning models have successfully mitigated the dependency on human efforts due to its ability to capture underlying representations in raw data (Cheng et al., 2016; Covington et al., 2016; Zhou et al., 2018). However, these models need massive supervision signals for training, and they are mostly designed for specific tasks like recommendation (Pei et al., 2019) and click-through rate prediction (Zhou et al., 2019).\nDespite the success of these deep learning models, they fail to generate promising results for realworld industrial tasks with limited labeled data. To deal with this issue, the methodology that pretraining universal user representations on massive user data, and then fine-tuning them for downstream tasks is explored. The goal is to learn a universal and effective representation for each user which can be transferred to new tasks (Ni et al., 2018). However, MTL-based pretraining still requires the collection of user labels. Also, it is limited by inherent shortcomings to achieve optimal results (Kendall et al., 2018; Guo et al., 2018). It is highly desirable for user applications to have a learning paradigm that does not require large amounts of manually annotated data. Our work is inspired by the BERT model which pretrains representations for language understanding. We aim to pretrain universal user representations by analogizing actions in a user behavior sequence to words in sentence, and apply transfer learning to downstream tasks, especially those with few labeled data, for boosting performance." }, { "heading": "3 THE PROPOSED APPROACH", "text": "In this section, we first review the BERT model in brief, and then elaborate on how to extend it to user data including behavior sequences and demographic profiles." }, { "heading": "3.1 THE BERT MODEL", "text": "BERT is a language representation model that pretrains deep bidirectional representations by jointly conditioning on both left and right contexts in all encoding layers (Devlin et al., 2019). The input of the BERT model is a sequence of tokens that can represent both a single text sentence and a pair of sentences. 
These discrete tokens consist of words and a set of special tokens: separation tokens (SEP), classifier tokens (CLS) and tokens for masking values (MASK). For a token in the sequence, its input representation is a sum of a word embedding, the embeddings for encoding position and segment.\nThe BERT model is pretrained with two tasks, masked language modeling (MLM) and next sentence prediction. In MLM, the input tokens are randomly masked and the BERT model is trained to reconstruct these masked tokens. In detail, a linear layer is learned to map the final output features of the masked tokens to a distribution over the vocabulary and the model is trained with a crossentropy loss. In next sentence prediction, the inputs are two sampled sentences with a separator token SEP between them. The model learns to predict whether the second sentence is the successor of the first. A linear layer connecting the final output representations of the CLS token is trained to minimize a cross-entropy loss on binary labels. Many recent research works focus on extending the BERT model to areas beyond NLP, and successfully achieved state-of-the-art results (Sun et al., 2019a; Lu et al., 2019; Su et al., 2020; Qi et al., 2020)." }, { "heading": "3.2 USERBERT", "text": "Tokenization of user behavior sequences. Our goal is to learn generic user representations that characterize users based on their preferences and recent interests. We decide not to sequentially model single actions in long-term and short-term user data. While such modeling is suitable for certain tasks, it is susceptible to overfitting when learning generic user representations. Instead, we learn from a sequence of clustered user actions, in which a cluster represents a routine or a spontaneous interest. Customers often make online purchases with specific intentions, e.g., shopping for a shirt, cartoon books or a gift for Mother’s Day. Also, many customers have long-standing preferences for particular stores and sales are heavily impacted by seasonality. These continuous or related actions form a ‘word’ in a behavior sequence. Similarly, we consider the same regarding short-term user behavior. Users commonly browse web content, moving between pages on an ecommerce site. During this time period, in order to capture the user’s interest, we aim to estimate the theme or product genre rather than the specific order of individual actions.\nTherefore, we first need to segment raw action data into a sequence of ’behavioral words’ for each user, analogous to words in a sentence. In detail, we adopt different approaches for long-term and short-term data. Data representing long-standing user preferences is discretized into 24-hour intervals from 4 AM of one day to 4 AM of the next day. Short-term data is discretized if there is a time interval larger than 30 minutes between two actions, similar to the processing steps in Grbovic & Cheng (2018).\nInput representations. In order to enable bidirectional representation learning, we transform the behavioral word sequence into a sequence of input embeddings. We first introduce the concept of action type and attribute in user actions: The action type indicates what a user does, e.g., making a purchase or obtaining points for using a service, while the attribute of an action includes the shop name, the item genre and price range, etc, as shown in Figure 1. 
We choose different action types and attributes in our dataset to represent long-term and short-term user behavior, and propose separate tokenization strategies for them since we expect to extract inherent user preferences from regular routines over longer time periods, and short-term interests from recent, temporary interactions. In combination with demographic data, we consider the learned representations comprehensive and expressive.\nTo generate input representations, all attribute IDs are first mapped to fixed-length word embeddings via look-up tables. Then, the attribute embeddings of each action are concatenated. Subsequently, the token representation is constructed by the mean of all action embeddings. Finally, the input embedding vector is obtained by summing the token embeddings and the embeddings for encoding\nposition and segment. Long and short-term user data share the same processing steps above, but each has their own definitions for token position. While the position of a token in long-term sequences is the number of days counted from the starting point of the collected training data, for short-term data it is the number of hours. The segment embedding is used to differentiate the given types of user behavior. In order to incorporate non-temporal user profile data to our modeling, we consider categorical attributes like gender as tokens in the user input sequence. For the continuous-valued attributes like age, we segment them by heuristics and convert them to categorical attributes. After mapping attributes to word embedding vectors, these are summed to the segment embedding. Note that there is no position embedding for profile embeddings since no order information needs to be captured for these user attributes. The input sequence for each user is formed by aligning the generated representation vectors of user behavior as well as the embeddings of user profiles, see Figure 1 for illustration.\nPretraining tasks. The generated input sequences allow us to make minimal changes to the BERT architecture and follow the practice in Devlin et al. (2019). We then pretrain our model to learn bidirectional representations. While the MLM task seems to naturally apply to our modeling, reconstructing the masked ‘behavioral words’ requires modification since these words contain an assembly of user actions rather than individual words used in the original BERT model. We implement masked multi-label classification to predict the multiple attributes in the masked behavioral words. More precisely, for each target attribute in a masked token, a linear layer is connected to the final representations and learned to map a distribution over the vocabulary of the attribute, as illustrated in Figure 2. For one masked token, the training loss is the sum of cross-entropy losses of all the attribute predictions, e.g., the prediction of the shop IDs and genre IDs, etc. The final loss for one input sequence is the sum of the losses of all masked tokens.\nFor masking input tokens, we follow a similar process as BERT: 15% of tokens are selected uniformly, where 80% of the time the token is zeroed-out and remains unchanged otherwise. We distinguish between three segments of behavioral words from the three types of user data, i.e., long-term, short-term and user profiles. For long and short-term segments, we apply the masking-prediction for pretraining our model, while we do not mask user profiles. To pretrain UserBERT, we first randomly sample a mini-batch of raw user sequences. 
Then, they are tokenized and transformed to input representations, which is followed by the masking step. In the end, the masked sequences are passed through the model, and the model is trained by minimizing the prediction error for reconstructing what attributes are inside the masked tokens. For each attribute type, a linear layer is learned to map the hidden representations of masked tokens to distributions over its vocabulary for conducting the multi-label classification.\nLet i be a randomly sampled index for masking, wi and w\\i be the masked behavioral word and the input after masking to the UserBERT. Also, let n be the number of target attributes for reconstruction prediction, and f k(w\\i|θ) be the final output vector after softmax layer for k-th attribute in the masked wi. The loss of the UserBERT model is:\nL(θ) = −Ew∼D,i∼{1,..,t} n∑\nk=1\nLCE(y k i , f k(w\\i|θ)), (1)\nwhere w is a uniformly sampled input representation sequence from the training dataset D, yki is the ground truth binary vector for the k-th attribute with its corresponding vocabulary size in the masked wi and LCE is the cross entropy loss for the multi-label classification. Note that long-term and short-term user behavior have different types and numbers of attribute in actions. With the pretrained models, we leverage them for fine-tuning on downstream tasks." }, { "heading": "4 EXPERIMENTS", "text": "We experimentally verify whether the proposed UserBERT model is able to yield generic user representations, and evaluate the performance when applying to different tasks via transfer learning." }, { "heading": "4.1 DATASETS", "text": "Datasets are collected from a multitude of online ecosystem of services, including an e-commerce platform, a travel booking service, a golf booking service and others. Customers can access all services via their unique ID, and their activities across the ecosystem are linked together.\nWe consider two action types as long-term user behavior. The first one is the purchase action on the e-commerce platform, and the second one is the point usage history. Points are earned whenever purchases are made or when certain services are used and can be spent on any service within the ecosystem. The ‘channel’ attribute represents from which service users obtain points or where they spend points. We collected the purchase and point history data over a time period of three months for our experiments. For short-term behavior, we mainly focus on recent customer activities on the e-commerce website, i.e., browsing and search history. The collected actions are clicks, page views and searches over a shorter time period of seven days. The detailed information on action types and attributes in the experimental data are shown in Table 1.\nThe user profile data is registered customer information such as age and gender. The unique number of users in the dataset is 22.5 million, the number of daily purchase and point usage samples is approximately 5 million, and the number of short-term data samples is approximately 50 million. The data is preprocessed to generate user action sequences." }, { "heading": "4.2 TARGET TASKS", "text": "We transfer pretrained models to three downstream tasks that aim to improve the customer experience. The user targeting task is to identify potential new customers for certain services or products, and it is formulated as a binary classification problem. 
The seed users who responded positively to the target service/product are positive labels, while negative ones are uniformly sampled from the rest of the users with a 3:1 ratio. The dataset is collected after the time period of the data used for pretraining. The second task, next genre prediction, is a multi-class prediction problem with\nthe aim to predict the next genre that a user will purchase from. The dataset is created from the one-month user history following the pretraining time period. The final attribute prediction task is predicting different user attributes such as whether a customer owns a pet. It is also a classification problem, where ground truth labels are obtained through questionnaires. The datasets of the three target tasks are split 80-20 to create training and testing datasets for fine-tuning." }, { "heading": "4.3 BASELINES", "text": "The UserBERT is compared to direct modeling without pretraining and to MTL-based pretraining. The MTL models apply a multi-tower architecture in which each tower encodes one type of user data in our experiments. For the MTL-based baselines, different types of user data are passed through corresponding encoders, and the encoded representations are combined at the last layer before connecting to multiple training tasks. The dimension of the combined representations is set to 128 for all MTL models.\nWe collect user labels across the services in the ecosystem and pretrain MTL models with 12 multiclass classification tasks. These pretraining tasks classify the categories of user activities such as the usage frequency of certain services or attributes like type of occupation. By learning and sharing across multiple tasks, the yielded user representations are considered to be generalized and applicable for transferring to downstream tasks.\nWide&Deep+MTL: We generate fixed-length (1130-d) embeddings by aggregating behavior data and input them to the deep part of the model (Cheng et al., 2016). Categorical user profile data is mapped to word embeddings and concatenated before feeding it into the wide part of the model. The wide part is a linear model, while the dimensions of the 4 hidden layers for the deep side are 512, 256, 256 and 128, respectively.\nLSTM+MTL: The same discretization and input generation is applied to long-term and short-term user behavior for this model. It is a 3-tower model, in which two LSTMs model the two types of user behavior and user profiles are modeled in the same way as the Wide&Deep model. The dimension of the hidden state in all LSTM encoders and the length limitation of both long-term and short-term data are set to 128.\nTransformer+MTL: The architecture is the same as the LSTM+MTL model above but with two different Transformer encoders (Vaswani et al., 2017) to model long and short-term user data separately. The length of input user behavior sequence to the encoders is limited to 128 as well. We pretrain the model via minimizing the summed cross-entropy loss of the multiple training tasks.\nUserBERT: The proposed self-supervised learning based pretraining model. It enables a simultaneous learning from long, short user behavior and user profiles. Its pretraining is done by reconstructing attributes in masked tokens via multi-label classifications." }, { "heading": "4.4 EXPERIMENTAL SETUP", "text": "For UserBERT, we use the same notations as BERT, and set the number of Transformer blocks L to 4, the hidden sizeH to 128 and the number of self-attention headsA to 4. 
The input sequence length of both long-term and short-term data is limited to 128 in the experiments. For fair comparison, we pretrain all models using the Adam optimizer with a learning rate of 1e-4 and a batch size of 16. We fine-tune models using the same learning rate and a batch size of 128. Pretraining of 400K batches of the UserBERT model takes approximately 12 hours using our PyTorch implementation, run on two GeForce RTX 2080 Ti GPUs.\nFor fine-tuning each target task, the combined encoder representations of the MTL-based models are fed to an output layer, while the fine-tuning of UserBERT is done by connecting the hidden representations of the first token to an output layer for each task. After plugging in task-specific inputs and outputs, we fine-tune the parameters of pretrained models in an end-to-end way." }, { "heading": "4.5 EXPERIMENT RESULTS", "text": "User Targeting. We show the results for two different services. The sizes of the datasets are 30,204 samples and 31,106, respectively. Compared to the size of the pretraining dataset, the use cases of\nFigure 4: ROC AUC comparison between Transformer-based MTL models with different numbers of labeled data.\nFigure 5: Performance comparison between UserBERT with and without pretraining on user targeting task\nthis task only have few labeled data. Classification performance in terms of accuracy and ROC AUC are shown in Figure 3. The LSTM model, which sequentially models user behavior, has relatively low accuracy. One possible explanation is that the sequential order of user actions does not provide useful information for this task. From our experience the user targeting task focuses on patterns from relatively static user preferences. The Wide&Deep model shows competitive performance, which is reasonable since our exploratory analysis indicates that user profiles are important features. The performance of the Transformer-based models reveal that the underlying explanatory factors for this task can be captured by attention networks. UserBERT outperforms other models in both use cases by a substantial margin. We hypothesize that, compared to Transformer-based MTL, the learning of the UserBERT is not limited by the multiple training tasks and is able to learn more expressive and generic representations from the input.\nTo further demonstrate the advantage of the proposed method over MTL-based pretraining, we pretrain Transformer-based MTL models with different numbers of labels before fine-tuning. We evaluate three models: without pretraining, trained on 30% of the available labels and trained using all labels. The comparison indicates that the performance of MTL is significantly affected by the number of training samples. As shown in Figure 4, more annotated training data contributes to performance gain. The model without the pretraining step shows the worst performance. In contrast, the pretraining of the UserBERT does not require the additional collection of supervision signals, and therefore is not impacted by either the quantity or the quality of user annotations.\nWe also directly apply UserBERT to these two use cases without pretraining to verify whether the user targeting task benefits from the pretraining step. The ROC AUC comparison between UserBERT with and without pretraining is shown in Figure 5. The pretrained models outperform the direct modeling significantly. This indicates that the pretraining step can extract useful information and enables the followed fine-tuning to boost performance for downstream tasks. 
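The fine-tuning scheme described above, connecting the hidden representation of the first token to a task-specific output layer and training end-to-end, can be sketched as follows; the encoder construction and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FineTuneHead(nn.Module):
    """Sketch: attach a task-specific output layer to the first token's
    hidden state of a pretrained encoder, then train all parameters."""
    def __init__(self, encoder, hidden_size=128, num_classes=2):
        super().__init__()
        self.encoder = encoder
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                          # x: (batch, seq_len, hidden)
        hidden = self.encoder(x)
        return self.classifier(hidden[:, 0, :])    # first-token representation

layer = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
model = FineTuneHead(nn.TransformerEncoder(layer, num_layers=4))
print(model(torch.randn(8, 128, 128)).shape)       # torch.Size([8, 2])
```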
From the error curves during training, we also observe that models tend to overfit quickly without pretraining. The pretrained UserBERT model achieves more generic user representations and yields significant accuracy improvements when adapted to new downstream tasks.\nNext Genre Prediction. The test dataset contains 586,130 users, and we run 10 epochs of finetuning for each pretrained model. The mean average precision (mAP) comparison is shown in Table 2. The UserBERT model outperforms baseline models by a large margin. This task requires understanding of both long-term preferences as well as recent interests of customers. Prediction models should be able to pick out candidate genres from user habits over a longer time range,\nTable 2: mAP@10 comparison after 10-epoch fine-tuning on next genre prediction task.\nModel mAP(%) Wide&Deep+MTL 7.65 LSTM+MTL 6.90 Transformer+MTL 7.10 UserBERT 8.97 Figure 6: ROC AUC comparison on attribute pre-\ndiction task.\nand then identify likely ones as prediction results from latest interest trends. More specifically, a model should understand how users typically use services in the ecosystem as well as what they are currently interested in. The architecture of the baseline models learns from different types of user data separately and combines the last-layer representations for training. It fails to sufficiently capture the correlations. On the contrary, UserBERT benefits from the unified structure of the user data and captures more accurate correlations, not only within certain types of user behavior, but also between different behavior types via attention networks.\nSince it is common that users make purchases from only a subset of genres, we also built an intuitive but strong baseline that sets predictions as the most popular genres ranked in descending order by the total number of purchases, and compared it against all pretrained models. The mAP@10 is 4.22%, demonstrating the effectiveness of the pretrained models.\nAttribute Prediction. In general, it is challenging to predict user attributes because the predictive signals in the behavior data are very sparse. In other words, the target user attributes may not be strongly correlated to behavior data. Therefore, this prediction task evaluates the model’s ability to discover hidden explanatory factors in the raw data. We show experimental results of two use cases: one is to predict whether a user has a car while the other one is to predict if a user is a parent. These two tasks are denoted as has car and is parent.\nThe dataset for the has car task contains 448,501 samples and the one for the is parent task contains 400,268. The classification results of 10-epoch fine-tuning are shown in Figure 6. From the has car results, we observe that the Wide&Deep model shows good performance, although other models eventually reach similar accuracy. We believe this is due to the fact that user demographics like age and living area are important features for this task. It seems challenging for models to extract other decisive patterns from either long-term or short-term user behavior. On the other hand, whether a user is a parent or not seems to present different characteristics in terms of how they behave on an ecommerce or travel booking platform. These patterns can be captured by deep learning models like UserBERT and Transformer-based models. UserBERT is able to match and eventually outperform the baseline models." 
}, { "heading": "5 CONCLUSIONS", "text": "This paper introduces a novel paradigm to understand user behavior by using the analogy to language understanding. We present UserBERT, an extension of the BERT model to user data, for pretraining user representations in a self-supervised way. It explores and demonstrates the possibility for useroriented machine learning tasks to alleviate the dependency on large annotated datasets. Extensive experiments show that a well-designed pretrained model with self-supervision is able to outperform fully supervised learning models when transferred to downstream applications." } ]
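For the next genre prediction evaluation reported above, mAP@10 could be computed as in the simplified sketch below, which assumes a single relevant (ground-truth) genre per user; under that assumption AP@10 reduces to the reciprocal rank of the correct genre within the top 10. The paper's exact protocol may differ. The popularity baseline (genres ranked in descending order by total purchases) is included for comparison.

```python
import numpy as np

def map_at_k(ranked_lists, ground_truths, k=10):
    """mAP@k sketch assuming one relevant genre per user; the exact
    evaluation protocol is an assumption, not taken from the paper."""
    scores = []
    for ranked, truth in zip(ranked_lists, ground_truths):
        top_k = list(ranked[:k])
        scores.append(1.0 / (top_k.index(truth) + 1) if truth in top_k else 0.0)
    return float(np.mean(scores))

# Popularity baseline: predict genres ranked by total number of purchases.
purchases = [3, 0, 0, 1, 2, 0]                    # toy purchased genre ids
popularity_rank = sorted(set(purchases), key=purchases.count, reverse=True)
preds = [popularity_rank] * len(purchases)
print(map_at_k(preds, purchases, k=10))
```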
2020
null
SP:4c72c81f76d16b52fbef2e1804d913d0fbd61b2c
[ "The authors study how to improve prediction and pruning performance using the additional information provided by labels in the shared-label classification problem. As a starting point, the authors consider a simple scenario where side information can be extracted from a batch whose instances share the same label. To train the neural network, the authors use a balanced loss consisting of a weighted sum of the standard cross-entropy and the cross-entropy of the average batch prediction. The authors also suggest a new CNN-LSTM architecture that improves predictive performance by exploiting the side information. The experiments section shows the proposed method performs well and achieves a high compression rate. " ]
Pruning of neural networks, also known as compression or sparsification, is the task of replacing a given network, which may be too expensive to use (in prediction) on low-resource platforms, with another ’lean’ network which performs almost as well as the original one, while using considerably fewer resources. By turning the compression ratio knob, the practitioner can trade off the information gain versus the necessary computational resources, where information gain is a measure of reduction of uncertainty in the prediction. In certain cases, however, the practitioner may readily possess some information on the prediction from other sources. The main question we study here is whether it is possible to take advantage of the additional side information, in order to further reduce the computational resources, in tandem with the pruning process. Motivated by a real-world application, we distill the following elegantly stated problem. We are given a multi-class prediction problem, combined with a (possibly pre-trained) network architecture for solving it on a given instance distribution, and also a method for pruning the network to allow trading off prediction speed with accuracy. We assume the network and the pruning methods are state-of-the-art, and it is not our goal here to improve them. However, instead of being asked to predict a single drawn instance x, we are being asked to predict the label of an n-tuple of instances (x1, . . . , xn), with the additional side information that all tuple instances share the same label. The shared label distribution is identical to the distribution on which the network was trained. One trivial way to do this is by obtaining individual raw predictions for each of the n instances (separately), using our given network, pruned for a desired accuracy, then taking the average to obtain a single more accurate prediction. This is simple to implement but intuitively sub-optimal, because the n independent instantiations of the network do not share any information, and would probably waste resources on overlapping computation. We propose various methods for performing this task, and compare them using extensive experiments on public benchmark data sets for image classification. Our comparison is based on measures of relative information (RI) and n-accuracy, which we define. Interestingly, we empirically find that i) sharing information between the n independently computed hidden representations of x1, . . . , xn, using an LSTM based gadget, performs best among all methods we experiment with, and ii) for all methods studied, we exhibit a sweet spot phenomenon, which sheds light on the compression-information trade-off and may assist a practitioner to choose the desired compression ratio.
[]
[ { "authors": [ "Madhu Advani", "Andrew Saxe" ], "title": "High-dimensional dynamics of generalization error in neural networks", "venue": null, "year": 2017 }, { "authors": [ "M. Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machine-learning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Guillaume Bellec", "David Kappel", "Wolfgang Maass", "Robert Legenstein" ], "title": "Deep rewiring: Training very sparse deep networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Davis Blalock", "Jose Javier Gonzalez Ortiz", "Jonathan Frankle", "John Guttag" ], "title": "What is the state of neural network", "venue": "pruning? arXiv,", "year": 2020 }, { "authors": [ "Kyunghyun Cho", "Bart van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using RNN encoder–decoder for statistical machine translation", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jonathan Frankle", "G Karolina Dziugaite", "DM Roy", "M Carbin" ], "title": "Stabilizing the lottery ticket hypothesis", "venue": null, "year": 2020 }, { "authors": [ "M. Geiger", "S. Spigler", "Stéphane d’Ascoli", "Levent Sagun", "M. Baity-Jesi", "G. Biroli", "M. Wyart" ], "title": "The jamming transition as a paradigm to understand the loss landscape of deep neural networks", "venue": "Physical review. E,", "year": 2019 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": null, "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2015 }, { "authors": [ "Max Jaderberg", "A. Vedaldi", "Andrew Zisserman" ], "title": "Speeding up convolutional neural networks with low rank expansions", "venue": "ArXiv, abs/1405.3866,", "year": 2014 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Y. Lecun", "L. Bottou", "Y. Bengio", "P. Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Yann LeCun", "John S. Denker", "Sara A. 
Solla" ], "title": "Optimal brain damage", "venue": "Advances in Neural Information Processing Systems", "year": 1990 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Philip Torr" ], "title": "Snip: Single-shot network pruning based on connection sensitivity", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Stephen Gould", "Philip H.S. Torr" ], "title": "A signal propagation perspective for pruning neural networks at initialization", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Decebal Constantin Mocanu", "Elena Mocanu", "Peter Stone", "Phuong H. Nguyen", "Madeleine Gibescu", "Antonio Liotta" ], "title": "Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science", "venue": "Nature communications,", "year": 2018 }, { "authors": [ "Michael C Mozer", "Paul Smolensky" ], "title": "Skeletonization: A technique for trimming the fat from a network via relevance assessment", "venue": "Advances in Neural Information Processing Systems", "year": 1989 }, { "authors": [ "Preetum Nakkiran", "Gal Kaplun", "Yamini Bansal", "Tristan Yang", "Boaz Barak", "Ilya Sutskever" ], "title": "Deep double descent: Where bigger models and more data hurt", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Alexander Novikov", "Dmitrii Podoprikhin", "Anton Osokin", "Dmitry P Vetrov" ], "title": "Tensorizing neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "David Page" ], "title": "How to train your resnet. 2018", "venue": "URL https://myrtle.ai/ how-to-train-your-resnet-4-architecture/", "year": 2018 }, { "authors": [ "Razvan Pascanu", "Tomas Mikolov", "Y. Bengio" ], "title": "On the difficulty of training recurrent neural networks", "venue": "30th International Conference on Machine Learning, ICML 2013,", "year": 2012 }, { "authors": [ "R. Reed" ], "title": "Pruning algorithms-a survey", "venue": "IEEE Transactions on Neural Networks,", "year": 1993 }, { "authors": [ "F. Scarselli", "M. Gori", "A.C. Tsoi", "M. Hagenbuchner", "G. Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "V. Sze", "Y. Chen", "T. Yang", "J.S. Emer" ], "title": "Efficient processing of deep neural networks: A tutorial and survey", "venue": "Proceedings of the IEEE,", "year": 2017 }, { "authors": [ "Hidenori Tanaka", "Daniel Kunin", "Daniel L.K. Yamins", "Surya Ganguli" ], "title": "Pruning neural networks without any data by iteratively conserving synaptic flow", "venue": null, "year": 2020 }, { "authors": [ "Chaoqi Wang", "Guodong Zhang", "Roger Grosse" ], "title": "Picking winning tickets before training by preserving gradient flow", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Fengwen Chen", "Guodong Long", "C. Zhang", "Philip S. Yu" ], "title": "A comprehensive survey on graph neural networks", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2020 }, { "authors": [ "Haoran You", "Chaojian Li", "Pengfei Xu", "Yonggan Fu", "Yue Wang", "Xiaohan Chen", "Richard G. 
Baraniuk", "Zhangyang Wang", "Yingyan Lin" ], "title": "Drawing early-bird tickets: Toward more efficient training of deep networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "Proceedings of the British Machine Vision Conference (BMVC),", "year": 2016 }, { "authors": [ "Jie Zhou", "Ganqu Cui", "Zhengyan Zhang", "Cheng Yang", "Zhiyuan Liu", "Maosong Sun" ], "title": "Graph neural networks: A review of methods and applications", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Pruning neural networks, the task of compressing a network by removing parameters, has been an important subject both for practical deployment and theoretical research. Some pruning algorithms have focused on manipulating pre-trained models (Mozer & Smolensky, 1989; LeCun et al., 1990; Reed, 1993; Han et al., 2015), while recent work has identified that there exist sparse subnetworks (also called winning tickets) in randomly-initialized neural networks that, when trained in isolation, can match and often even surpass the test accuracy of the original network (Frankle & Carbin, 2019; Frankle et al., 2020). There is a vast literature on network pruning, and we refer the reader to Blalock et al. (2020); Sze et al. (2017); Reed (1993) for an excellent survey. In this work, we adopt the pruning methods of Tanaka et al. (2020); Lee et al. (2019); Wang et al. (2020); Han et al. (2015), which have been influential in our experiments.\nMore crucially, most literature on pruning has been focused on designing a machine that converts a fixed deep learning solution to a prediction problem into a more efficient version thereof. The pruning machine has a compression knob which trades off the level of pruning with accuracy of the prediction. The more resources we are willing to expend in prediction (measured here using floating-point operations (FLOPs)), the more information we can obtain, where information here is measured as prediction accuracy, or as reduction of uncertainty (defined below).\nWe now ask what happens when we want to prune a network, but also possess information on the prediction coming from another source. Intuitively, given some form of additional side information, we should be able to prune our network with a higher compression ratio to reach the same level of accuracy for the prediction task, compared with a scenario with no additional side information. But how can we take the side information into account when pruning?" }, { "heading": "1.1 MOTIVATION", "text": "This question was motivated by an actual real-life scenario. We describe the scenario in detail, although the actual problem we thoroughly study in what follows is much simpler.\nImagine a database retrieval system with a static space of objects X. Given a query object q, the goal is to return an object x from X that maximizes a ground-truth retrieval value function fq(x). We have access to a function f̃q(x) expressed as a deep network, which approximates fq, and was trained using samples thereof. The function f̃q is very expensive to compute. (Note that we keep q fixed here, as part of the definition of fq(·), although in an actual setting both q and x would be input to a bivariate retrieval function f̃.) Computing f̃q(x) for all x ∈ X is infeasible. One way to circumvent this is by computing a less accurate, but efficient, function f̃q^(2)(·), defined by the network resulting from a pruning of the network defining f̃q. Then compute f̃q^(2)(·) on all x ∈ X to obtain a shortlist of candidates X′, and then compute f̃q(x) on x ∈ X′ only. This idea can also be bootstrapped, using rougher, more aggressively pruned estimates f̃q^(3), f̃q^(4), f̃q^(5), . . . and increasingly shorter shortlists. However, an important point is ignored in this approach: The space X is structured, and we expect there to be prior connections between its elements. This is the side information.
Such connections can be encoded, for example, as a similarity graph over X where it is expected that fq(x1) is close to fq(x2) whenever there is an edge between x1, x2. There is much work on deep networks over graphs (Zhou et al., 2018; Kipf & Welling, 2017; Wu et al., 2020). But how can the extra information, encoded as a graph, be used in conjunction with the pruning process?\nLet us simplify the information retrieval scenario. First, assume that we are in a classification and not in a regression scenario, so that fq(x) can take a finite set of discrete values, and f̃q(x) returns a vector of logits, one coordinate per class. Second, assume the side information on X is a partitioning of X into cliques, or clusters X1...Xk where on each clique the value of fq(·) is fixed, and written as fq(Xi), i = 1..k. Now the problem becomes that of estimating the fq(Xi)’s using n random samples xi1...xin ∈ Xi, i = 1..k. 1\nFixing the cluster Xi, one obvious thing to do in order to estimate fq(Xi) is to take an average of the logit vectors f̃q(xi1)...f̃q(xin), where f̃q is some fixed (possibly pruned) network, and use the argmax coordinate as prediction. Assuming each pruned network f̃q outputs a prediction vector with a certain level of uncertainty, the averaged vector should have lower uncertainty, and this can be quantified using simple probabilistic arguments. This will henceforth be called the baseline method. Intuitively the baseline method, though easy to do using out-of-the-box pruning libraries, cannot possibly be optimal given the side information of same label across Xi. Indeed, the baseline method feeds all the examples xi1...xin independently through separate instantiations of f̃q, and nothing\n1Continuing the retrieval story , the practitioner would now find the Xi that maximizes fq , and then further focus the search in that cluster.\nprevents the different instantiations to learn overlapping pieces of information. Hence it makes sense to somehow interconnect these networks as a meta-network, and possibly do the pruning on the meta-network. In this work, we experiment with several methods for performing this task, and compare our results with the baseline." }, { "heading": "1.2 THE SHARED-LABEL PREDICTION PROBLEM", "text": "We depart from the original motivating information retrieval scenario, and henceforth consider a simpler, toy problem which we call the shared-label prediction problem. We are given an underlying space of instances X and an unknown ground truth labelling function f : X 7→ Y for some discrete set Y of labels. The goal is to train a classifier that, given a random n-tuple of instances x1...xn ∈ Xn sharing the same unknown label y (so that f(x1) = · · · = f(xm) = y), outputs a prediction of y. This is the shared prediction problem.\nOur work is empirical, and the goal is to develop general methods for the shared prediction problem, given a base network, designed for the standard (non-shared) prediction problem, and a base pruning method, we ask: How do we reuse and rewire these readily available tools to effectively solve the shared-label prediction problem on tuples of n-instances?" }, { "heading": "2 OUR CONTRIBUTION", "text": "Below in Section 2.1 we present four methods. Each method uses a baseline CNN model, together with a pruning method with a compression ratio knob ρ, and creates a meta-network that is parameterized by the information size n and by ρ, designed to solve the shared classification problem. 
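To make the shared-label setup concrete, a small sketch of drawing an n-tuple of same-label instances from a labeled dataset is given below; the function and variable names are illustrative assumptions.

```python
import random
from collections import defaultdict

def sample_shared_label_tuple(dataset, n, rng=random):
    """Sketch: draw n instances that share one (hidden) label.
    `dataset` is an iterable of (x, y) pairs; illustrative only."""
    by_label = defaultdict(list)
    for x, y in dataset:
        by_label[y].append(x)
    label = rng.choice([y for y, xs in by_label.items() if len(xs) >= n])
    instances = rng.sample(by_label[label], n)
    return instances, label   # the learner only observes `instances`

toy = [(i, i % 3) for i in range(30)]   # 30 toy instances, 3 classes
xs, y = sample_shared_label_tuple(toy, n=5)
print(xs, "share label", y)
```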
To measure our success, we will use both a measure of accuracy and a measure of relative information, which we define below. We will compute these measures extensively over a grid of possible pairs (n, ρ), for each method. Visualization of the results highlights an interesting invariant that is worth studying.\nIntuitively, the measure of relative information tells us how efficiently each method uses its computational resources, without wasting time on computing the same pieces of information over and over on the n-tuple of instances. Therefore, it allows us to obtain a quantitative comparison between the methods. To define the measure, we first recall some information theory.\nGiven a random variable Y over a discrete space, the Shannon entropy, or uncertainty, of Y is $H[Y] = -\sum_y \Pr[Y = y] \log \Pr[Y = y]$, where the sum ranges over possible values of Y. In our case, we will use H(Y) to measure the uncertainty in the label of a randomly drawn instance, which is also the uncertainty in the label of a randomly drawn n-tuple in the shared-label setting.\nGiven a random variable Ỹ (an estimate of Y), the information gain measures the difference between the entropy of Y and the expectation with respect to Ỹ of H(Y | Ỹ). More precisely, $IG(Y; \tilde{Y}) = H(Y) - \mathbb{E}_{\tilde{Y}}\left[-\sum_y \Pr[Y = y \mid \tilde{Y}] \log \Pr[Y = y \mid \tilde{Y}]\right]$. Note that information gain is symmetric, that is, IG(Ỹ; Y) = IG(Y; Ỹ). Therefore it is also called mutual information and denoted I(Ỹ; Y). In our setting, Ỹ will be an estimator of Y obtained using the output of the network on an n-tuple of instances in the shared-label setting, and I(Ỹ; Y) will measure the expected amount of information we learn about Y using the network output on that tuple. For a given network, we will be computing IG(Ỹ; Y) empirically in what follows, by taking Ỹ to be the prediction obtained by selecting the argmax coordinate (logit) of the output of the network.\nGiven a method for the shared-label scenario, we define the relative information (RI) to be\n$$RI(\tilde{Y}, Y, n, \rho) = \frac{IG(\tilde{Y}; Y)}{n/\rho}.$$\nIn words, this is a measure of the information that the network learns per unit of computational cost. The denominator n/ρ is a reasonable measure of computational cost for the methods we study, because for these methods, the amount of computational effort we expend for a shared-label instance x1, . . . , xn is proportional to n, and inversely proportional to the compression ratio ρ. We believe it is also a reasonable measure of computational cost for other natural methods.\nFor all methods we study, fixing the information size n, our experiments suggest that there exists a sweet spot phenomenon, or a "compression threshold", in the sense that RI, as a function of ρ, has a global maximum ρ∗. If the compression ratio ρ is smaller than ρ∗, then we are in the under-compressed regime, where we can still save computational resources without deteriorating the results, or the information, to a large extent. On the other side, if the compression ratio ρ is bigger than ρ∗, then we are in the over-compressed regime, where we can gain a lot more information by expending only a relatively small amount of additional computational resources. We believe that a better understanding of this phenomenon can shed light on the interaction between different compression ratios, information sizes, and the information gains achieved by the methods (which is equivalent to test performances, as our experiments show). We show that the above is a robust phenomenon that occurs in a variety of settings."
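Since IG(Ỹ; Y) is computed empirically from paired samples of true labels and argmax predictions, both it and RI can be estimated with a plug-in estimate over the joint counts, as in the following NumPy sketch; the exact estimator used in the paper is an assumption here.

```python
import numpy as np

def mutual_information(y_true, y_pred):
    """Plug-in estimate of IG(Y; Y~) in bits from paired samples of true
    labels and argmax predictions (an assumption, not the paper's code)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    joint = np.zeros((y_true.max() + 1, y_pred.max() + 1))
    for t, p in zip(y_true, y_pred):
        joint[t, p] += 1
    joint /= joint.sum()
    py, pyh = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (py @ pyh)[nz])).sum())

def relative_information(y_true, y_pred, n, rho):
    # RI = IG(Y~; Y) / (n / rho): information learned per unit of compute.
    return mutual_information(y_true, y_pred) / (n / rho)

y = np.random.randint(0, 10, size=5000)
y_hat = np.where(np.random.rand(5000) < 0.8, y, np.random.randint(0, 10, 5000))
print(relative_information(y, y_hat, n=5, rho=2.0))
```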
}, { "heading": "2.1 OUR METHODS", "text": "1. Baseline method (Section 4.1) - Use a fixed model, with a fixed pruning method. For prediction, run the pruned model on the n instances x1...xn, and use the average of the corresponding logit vectors for the shared prediction.\n2. Balanced method (Section 4.2) - The same as the baseline method, except that the training set is organized such that each batch of images consists of k random n-tuples, such that instances of each n-tuple share the same unknown label y. The model is trained using the balanced loss, which is a convex combination of a loss defined for n-tuples, and the standard loss on the individual instances.\n3. Graph method (Section 4.3) - Inspired by work on Graph neural networks (GNNs), we propose an architecture consisting of n duplicates of a base CNN, with information passage between neurons of the different copies of the CNN. The training set is organized in the same way as in the balanced method.\n4. Unified CNN-LSTM method (Section 4.4) - We propose a model that combines a truncated version of a base CNN, giving a latent representation of the inputs, and then connecting the n representations to each other, sequentially, using LSTM (Long Short-Term Memory) gadgets. Intuitively, this architecture uses information learned from instances x1...xi−1, encoded inside the LSTM, to assist in predicting xi for i = 2, . . . n. The training set is organized in the same way as in the balanced method.\nIn Section 5.1 we validate the above sweet spot phenomenon under a variety of benchmark datasets, architectures, compression ratios and information lengths. We report the results of the baseline method for its simplicity (the same results hold for all other shared-label prediction methods as well).\nIn Section 5.2 we compare the differences between the baseline methods and the balanced method both qualitatively and quantitatively.\nIn Section 5.3 we compare all the proposed methods for shared-label prediction across different benchmark data sets for image classification and different evaluation metrics. The proposed unified CNN-LSTM method achieves significantly better performance compared to the other methods." }, { "heading": "3 RELATED WORK", "text": "There is a variety of approaches to compressing neural networks, such as neural network pruning (Mozer & Smolensky, 1989; LeCun et al., 1990; Sze et al., 2017; Reed, 1993; Han et al., 2015; Blalock et al., 2020; Frankle & Carbin, 2019; Frankle et al., 2020), training of dynamic sparse networks (Bellec et al., 2018; Mocanu et al., 2018) dimensionality reduction of network parameters (Jaderberg et al., 2014; Novikov et al., 2015), and many more. Nonetheless, these results do not mention how the new compressed, efficient network, benefit from additional side information.\nMoreover, there is much work on the \"double-descent\" phenomenon (Belkin et al., 2019; Advani & Saxe, 2017; Geiger et al., 2019). In a work by Nakkiran et al. (2020), it is shown that a variety of modern deep learning tasks exhibit a \"double-descent\", and that it occurs not just as a function\nof model size. Therefore, it is an interesting question to ask whether this also occurs in the case of relative information, and our experimental results validate that this is not the case.\nThe concept of graph neural network (GNN) was first proposed by Scarselli et al. (2009), who extended existing neural networks for processing the data represented in graph domains graph papers. 
The first motivation for GNNs is rooted in convolutional neural networks (CNNs) (Lecun et al., 1998). Recent works on GNNs (Zhou et al., 2018; Kipf & Welling, 2017; Wu et al., 2020) inspired us to extend this idea as one of our methods for the task of shared-label prediction.\nLastly, RNNs are interesting for our purposes because they equip neural networks with memory, and the introduction of gating units such as LSTM and GRU (Hochreiter & Schmidhuber, 1997; Cho et al., 2014) has greatly helped in making the learning of these networks manageable. The LSTM based architecture has yielded the most promising results throughout our experiments." }, { "heading": "4 OUR METHODS", "text": "In order to describe our four methods in detail, we will need to present some standard terminology from the network pruning literature.\nLayer-collapse - Pruning neural networks is usually done in two steps: The first step scores the parameters of a network according to some metric and the second step eliminates parameters based on their scores. This process can be applied both globally (on the network as a whole) and locally (separately on each layer). Recent work (Wang et al., 2020; Lee et al., 2020; You et al., 2020) has identified a key failure mode, layer-collapse, for the global version. Layer-collapse occurs when an algorithm prunes all parameters in a layer, rendering the network disconnected (and untrainable).\nCompression ratio (ρ) - Logarithm to the base 10 of the number of parameters in the original network divided by the number of parameters remaining after pruning. In our experiments we use (not necessarily integer) powers of 10 for the compression ratios. For example, compression = 2.5 means that the number of parameters in the original network divided by the number of parameters remaining after pruning equals $10^{2.5}$.\nMax compression (ρmax) - The maximal possible compression ratio for a network that doesn’t lead to layer-collapse.\nWe further define an accuracy-based evaluation metric for a shared-label prediction method, n-accuracy, which highly correlates with information gain, as our experiments show. We denote by ℓ the number of classes in the data set and, without loss of generality, let the labels be {1, . . . , ℓ}. Moreover, ỹi is a vector of size ℓ that contains raw, unnormalized scores for each class, predicted by a given model.\nn-accuracy - the percentage of correctly classified n-tuples. Formally, the n-accuracy of a shared-label prediction method is defined to be\n$$\frac{1}{T} \sum_{i=1}^{T} X_i$$\nwhere Xi is an indicator for the event that the i’th n-tuple was classified correctly. In our experiments we take T = ℓ · 100. Namely, we test on 100 random n-tuples from each class and report the average accuracy." }, { "heading": "4.1 BASELINE METHOD", "text": "In the baseline method, each model is simply trained in a standard fashion for image classification, with a randomly shuffled training set with batch size B and the Cross Entropy Loss defined as:\n$$\text{Cross entropy Loss}(\tilde{y}, \text{class}) = -\log\left(\frac{\exp(\tilde{y}[\text{class}])}{\sum_j \exp(\tilde{y}[j])}\right) = -\tilde{y}[\text{class}] + \log\left(\sum_j \exp(\tilde{y}[j])\right)$$\nFinally, the losses are averaged over observations for each batch:\n$$\text{Standard Loss}(\text{batch}) = \frac{1}{B} \sum_{i=1}^{B} \text{Cross entropy Loss}(\tilde{y}_i, \text{class}_i)$$\nwhere ỹi contains raw, unnormalized scores for each class, predicted by the model for the i-th data point in the batch, and classi is its corresponding label. For this method, evaluation is done by simply taking an average of the predicted logit vectors ỹ1, . . . , ỹn, and then taking the argmax as the shared-label prediction.
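The baseline evaluation just described (feed each of the n instances through the pruned network independently, average the logit vectors, take the argmax) amounts to a few lines; `model` and the toy usage below are stand-ins for a pruned image classifier and a same-label tuple.

```python
import torch

@torch.no_grad()
def baseline_shared_prediction(model, x_tuple):
    """Sketch of the baseline evaluation: run the (pruned) model separately on
    each of the n instances, average the predicted logit vectors, and take the
    argmax as the shared-label prediction."""
    logits = torch.stack([model(x.unsqueeze(0)).squeeze(0) for x in x_tuple])
    return logits.mean(dim=0).argmax().item()

# Toy usage with a stand-in classifier; in the paper's setting `model` would
# be a pruned CNN and `x_tuple` a list of n images sharing one label.
model = torch.nn.Linear(10, 3)
x_tuple = [torch.randn(10) for _ in range(5)]
print(baseline_shared_prediction(model, x_tuple))
```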
In this way, we can take advantage of the probability scores in each logit vector." }, { "heading": "4.2 BALANCED METHOD", "text": "Recall that our task is to classify n different data points that share the same class label. Thus, the motivation will be to optimize directly for that purpose. The training set is organized such that each batch of images of size B consists of k n-tuples, such that each n-tuple shares the same unknown label y (B = k · n). For a batch of size B, the average batch prediction is defined to be:\n$$\text{Average Batch Prediction}(\text{batch}) = \frac{1}{B} \sum_{i=1}^{B} \tilde{y}_i$$\nFurthermore, let batchi be the subset of the current batch that only contains data points corresponding to label i (batchi is of size k). Denote ȳi ≡ Average Batch Prediction(batchi). Then, the Average Same Label Loss is defined to be:\n$$\text{Average Same Label Loss}(\text{batch}) = \frac{1}{\ell} \sum_{i=1}^{\ell} \text{Cross entropy Loss}(\bar{y}_i, i)$$\nIntuitively, this loss function encourages the model to do well on each n-tuple rather than doing well on each specific data point. As a result, using this loss on its own in our experiments does not lead to a model that generalizes well. Therefore, we offer a natural trade-off between the Standard Loss and the Average Same Label Loss, as the first is often used for standard multi-class classification, and the latter may help to perform better at the shared-label prediction task. With that in mind, the idea is to balance between these two losses using a hyper-parameter λ. The balanced loss is defined to be:\n$$\text{Balanced Loss}(\text{batch}) = (1 - \lambda) \cdot \text{Average Same Label Loss}(\text{batch}) + \lambda \cdot \text{Standard Loss}(\text{batch})$$\nIntuitively, this loss function encourages the model to do well both on the n-tuple as a whole and on each specific image (controlled by the hyper-parameter λ). When λ = 1, this is equivalent to the baseline method. Throughout our experiments, we use λ = 1/2. For this method, evaluation is done in the same way as in the baseline method." }, { "heading": "4.3 GRAPH METHOD", "text": "In the graph method, we propose a duplicated convolutional neural network architecture with information passage between the different copies of the CNN. This is inspired by recent work on Graph Neural Networks (GNNs) (Zhou et al., 2018; Kipf & Welling, 2017; Wu et al., 2020). We investigate two kinds of architectures; see Appendix B for further information." }, { "heading": "4.4 UNIFIED CNN-LSTM METHOD", "text": "" }, { "heading": "4.4.1 ARCHITECTURE", "text": "We propose a unified CNN-LSTM architecture, which effectively learns both the embedding of the data points into low-dimensional vectors and the dependency between the embeddings of different data points in the same sequence. An illustration of this architecture is shown in Appendix A.1. The CNN part extracts semantic representations from images, whereas the shared-label dependency between data points in the same sequence in this low-dimensional space is modeled with long short-term memory (LSTM) recurrent neurons, which maintain the information of the label context in their internal memory states (for more information on LSTM, see Appendix A.2). The LSTM part computes the probability of a shared-label prediction sequentially as an ordered prediction path, where the a posteriori probability of the single true label can be computed again at each time step, based on the image embedding at the current time step and the output of the recurrent neuron from the previous time step.
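Returning to the balanced method of Section 4.2, a sketch of the balanced loss is given below. It assumes the batch is laid out as k consecutive n-tuples, each with one shared label, so that averaging predictions per tuple coincides with the per-label Average Batch Prediction above; these batching details are assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def balanced_loss(logits, labels, n, lam=0.5):
    """Sketch of the Balanced Loss (Sec. 4.2). Assumes the batch is arranged
    as k consecutive n-tuples, each tuple sharing one label (B = k * n)."""
    standard = F.cross_entropy(logits, labels)                 # Standard Loss
    k = logits.shape[0] // n
    tuple_logits = logits.view(k, n, -1).mean(dim=1)           # avg prediction per tuple
    tuple_labels = labels.view(k, n)[:, 0]                     # shared label of each tuple
    same_label = F.cross_entropy(tuple_logits, tuple_labels)   # Average Same Label Loss
    return (1 - lam) * same_label + lam * standard

logits = torch.randn(20, 10, requires_grad=True)   # k=4 tuples of n=5, 10 classes
labels = torch.arange(4).repeat_interleave(5)      # each tuple shares a label
print(balanced_loss(logits, labels, n=5))
```

Setting `lam=1.0` recovers the plain per-instance objective of the baseline method, as noted in the text.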
The proposed CNN-LSTM model is a unified framework combining the advantages of both learning an effective image embedding using a deep CNN, while also taking into account the label sharing." }, { "heading": "4.4.2 TRAINING", "text": "Training with the unified CNN-LSTM method is done by using the cross entropy loss on the soft-max normalization of the average of the outputs of the linear layer following the LSTM (see Figure 10), and employing back-propagation through time algorithm. Although it is possible to train the model in an end-to-end way, our experiments show that it is much more preferable to train the CNN part separately, using the balanced method (Section 4.2), and truncate the final classification layer to achieve the desired embedding. Although it is possible to fine-tune the convolutional neural network afterward, we keep it unchanged in our implementation for simplicity (we noticed that it doesn’t make any considerable differences). Both parts of the model are also pruned separately.\nWhen using this method, an important decision is to determine the order of the sequence (as the LSTM part is not symmetric). For further information on the different order techniques that we have experimented with and their motivation, please refer to Appendix A.3." }, { "heading": "5 EXPERIMENTS", "text": "In this section we report the results of our experiments based on the ideas presented in Sections 1.2 and 2. Full experimental details are in Appendix C." }, { "heading": "5.1 COMPARISON USING THE BASELINE METHOD", "text": "In this section, we use the baseline method and a Conv model on the CIFAR-10 data set (see Appendix C for more information). We study how variation in the floating-point operations (FLOPs) due to altering values of n ∈ {1, 2, 3, 4, 5, 7, 10, 15, 40, 60} and ρ effect on different evaluation measures. The results are presented in Figures 1 - 7. Similar results for different combinations of models and data sets are presented in Appendix E.1." }, { "heading": "5.2 BASELINE AND BALANCED METHODS COMPARISON", "text": "In this section, we compare the performances of the baseline method and the balanced method, and report the n-accuracy on various data sets, models, and values of n and ρ. In all cases, it is\nobserved that the balanced method (with λ = 12 ) outperforms the baseline method in the shared-label prediction task for sufficiently large values of n. Nevertheless, it is interesting to observe that the baseline method still almost always outperforms the balanced method in the normal classification task (equivalent to shared-label prediction with n = 1). The results are summarized in Table 1 in Appendix F. For further discussion on the comparison between the two methods, please see Appendix D." }, { "heading": "5.3 SHARED-LABEL PREDICTION METHODS COMPARISON", "text": "In this section, we compare the performances of all the shared-label prediction baseline methods discussed above, and report the n-accuracy on various data sets, models, and values of n and ρ. In all cases, it is observed that the unified CNN-LSTM with balanced trained CNN highly outperforms all the other methods for every value of n ∈ (2, 5, 7, 15, 40) , even though it has less remaining parameters, and uses fewer FLOPs. The results for the Conv model and Tiny-Imagenet data set in measures of n-accuracy and relative information are presented in Figure 8 and Figure 9 respectively. A similar figure for the MNIST data set is presented in Appendix E.2. 
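The unified CNN-LSTM of Section 4.4 can be sketched as follows: a truncated CNN embeds each instance, an LSTM passes information along the n-tuple, and the per-step outputs of the linear layer are averaged before the final prediction, mirroring the training described in Section 4.4.2. All layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTMShared(nn.Module):
    """Sketch of the unified CNN-LSTM (Sec. 4.4): a truncated CNN embeds each
    instance; an LSTM propagates information along the n-tuple; per-step
    outputs are averaged for the shared-label prediction."""
    def __init__(self, embed_dim=64, hidden_dim=128, num_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(                 # truncated CNN embedding
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x_tuple):                   # x_tuple: (batch, n, 3, H, W)
        b, n = x_tuple.shape[:2]
        emb = self.cnn(x_tuple.flatten(0, 1)).view(b, n, -1)
        out, _ = self.lstm(emb)                   # information flows along the tuple
        logits = self.classifier(out)             # (batch, n, num_classes)
        return logits.mean(dim=1)                 # average the per-step outputs

model = CNNLSTMShared()
print(model(torch.randn(2, 5, 3, 32, 32)).shape)  # torch.Size([2, 10])
```

In line with Section 4.4.2, the CNN could be trained first with the balanced method and then frozen, with only the LSTM and classifier trained on ordered tuples.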
Further results for the higher n-accuracy methods, with different combinations of models and data sets, are presented in Table 2 in Appendix F." }, { "heading": "6 CONCLUSION AND DISCUSSION", "text": "We introduce a real-world motivated problem and investigate how to take advantage of additional side information in order to reduce computational efforts. We study a simple scenario which we coin the shared-label prediction problem, and suggest various methods, based on different architectures in deep learning, to perform it. We conduct extensive experiments to improve our understanding of (i) the advantages and disadvantages of each method, and the differences between them, and (ii) the vast connection between measures of accuracy, information, compression ratio, and FLOPs in our settings, and how they interact with each other, and (iii) we introduce relative information as a generalized measure of the information that the network learns per computational cost, which, to the best of our knowledge, has not been previously proposed. We further suggest that it enjoys a sweet spot phenomenon, which leads to a regime where, in certain scenarios, increasing or decreasing the compression ratio knob ρ can deteriorate the relative information. Therefore, we also believe our characterization of the sweet spot provides a useful way of thinking for practitioners.\nThroughout our research, we have used common pruning algorithms as a black box. It is an interesting future research question to ask whether it is possible to design a pruning algorithm (or somehow incorporate an existing one as part of the prediction method) that is better suited for the task of shared-label prediction, namely, one that takes advantage of the side information scenario, in order to gain higher performance." }, { "heading": "A UNIFIED CNN-LSTM", "text": "" }, { "heading": "A.1 ARCHITECTURE ILLUSTRATION", "text": "The architecture of the proposed unified CNN-LSTM model for shared-label prediction is presented in Figure 10. The convolutional neural network is employed to compute the image representation, and the recurrent layer captures the information of the previously predicted labels. The outputs of the LSTM are fed through a linear classifier to compute the output label probability.\nA.2 LONG SHORT TERM MEMORY NETWORKS (LSTM)\nAs mentioned earlier, since the objective is to characterize the high-order label dependency in the same sequence (data point embeddings in the same sequence share the same label), we employ long short term memory (LSTM) neurons (Hochreiter & Schmidhuber, 1997) as our recurrent neurons. This approach has been demonstrated to be a powerful model of long-term dependency. RNN is a class of neural network that maintains internal hidden states to model the dynamic temporal behavior of sequences with arbitrary lengths through directed cyclic connections between its units. It can be considered as a hidden Markov model extension that employs a nonlinear transition function and is capable of modeling long term temporal dependencies. LSTM extends RNN by adding three gates to an RNN neuron: a forget gate f to control whether to forget the current state; an input gate i to indicate if it should read the input; an output gate o to control whether to output the state. These gates enable LSTM to learn long-term dependency in a sequence and make it easier to optimize, because these gates help the input signal to effectively propagate through the recurrent hidden states r(t) without affecting the output.
LSTM also effectively deals with the gradient vanishing/exploding issues that commonly appear during RNN training (Pascanu et al., 2012).\nA.3 SEQUENCE ORDER OF THE LSTM\nIn the experiments of this paper, we tested both random ordering and confidence based ordering - the sequence order during training (and inference) is determined according to the confidence of the corresponding data points by the CNN model. Data points that have higher confidence in the prediction by the CNN model (trained separately with the balanced method) appear earlier than the less confident ones. This corresponds to the intuition that easier data points should be predicted first to help predict more difficult data points (one data points is classified with higher confidence than the other data point if the largest entry in its logit vector is higher than the largest entry in the other data point logit vector). In particular, the first data point in the sequence will not have a prediction from earlier time to rely on, and we would like this prediction to be as easy as possible. Otherwise, we may face a problem - if the first predicted label is wrong, it is possible that the whole sequence will not be correctly predicted. In our experiments, confidence based ordering usually gained better performances than random ordering, especially for lower values of n. We further attempted to randomly permute the label orders in each mini-batch, repeat multiple times, and then taking the average prediction, but this does not have notable effects on the performance and it makes the training harder to converge." }, { "heading": "B THE GRAPH METHOD", "text": "In the graph method, we investigate the following two architectures:\n1. Each of the n data points in the n-tuple goes through a copy of the CNN, and their n corresponding embeddings are fed through another classifier. The whole architecture is trained end-to-end using the cross entropy loss.\n2. Each of the n data points in the n-tuple goes through a copy of the CNN, but now, information passes between different copies of each neuron. This is similar to the architecture used in GNN’s (Graphical Neural Networks)." }, { "heading": "C EXPERIMENTAL DETAILS", "text": "C.1 MODELS\nWe use the following architectures as the model/CNN for each method throughout our experiments.\nFC. Standard fully-connected network designed as follows for input x:\nx← Flatten[x] (Flattens a tensor of dimensions C ×H ×W to a vector of size C ·H ·W ) x← ReLU(Linear(C ·H ·W, 100)) x← ReLU(Linear(100, 100)[x]) (repeat 4 times) x← ReLU(Linear(100, `)[x]) Conv. Standard CNN. We consider a simple 5-layer CNN which is based on the “backbone” architecture from Page (2018), designed as follows for input x:\nx← Conv2d(in channels, out channels = 32, kernel size=(3, 3), stride=(1, 1), padding=(1, 1))[x] (in channels = 1 for MNIST, in channels = 3 for CIFAR10, CIFAR100, Tiny ImageNet) x← ReLU[x] x←Conv2d(in channels = 32, out channels = 32, kernel size=(3, 3), stride=(1, 1), padding=(1, 1))[x] x← Flatten(ReLU[x]) x← Linear(in features ,`)[x]) ResNet18. (He et al., 2015)\nWideResNet20. (Zagoruyko & Komodakis, 2016)\nC.2 OTHER EXPERIMENTAL SETUP\nOptimization. We used the Adam optimizer (Kingma & Ba, 2014), learning rate was set at constant to 10−4 and all other parameters were set to their default PyTorch values.\nData sets. 
We conducted our experiments on several public benchmark data sets for image classification:\n• MNIST (LeCun & Cortes, 2010) • CIFAR-10 (Krizhevsky, 2009) • CIFAR-100 (Krizhevsky, 2009) • Tiny-ImageNet (Tiny-ImageNet)\nPruning algorithms. All pruning algorithms considered here use the following two steps: (i) scoring parameters, and (ii) masking parameters globally across the network with the lowest scores. description of how we compute scores used in each of the pruning algorithms:\n• Random: We sampled independently from a standard Gaussian. • Magnitude: We computed the absolute value of the parameters. (Han et al., 2015) • SNIP: As done in (Lee et al., 2019) • GraSP: As done in (Wang et al., 2020) • SynFlow: As done in (Tanaka et al., 2020)\nWe report the results using the SynFlow pruning algorithm as it achieved the best results for all methods tested. We run the pruning algorithm for 100 iterations before the training phase (our comparisons hold for other pruners as well)." }, { "heading": "D EXTENDED DISCUSSION ON THE BALANCED METHOD", "text": "It is further observed from our research that it is possible to improve the results of the balanced method. Higher n-accuracy measures were achieved using the following procedure:\n• Train a model with the baseline method until no further improvement is gained. • Freeze all the layers in the model, except the last few layers. • Retrain the mode with the balanced method until no further improvement is gained.\nThe method achieves even higher n-accuracy than the baseline method in the shared-label prediction task for sufficiently large values of n , which in turn is better than the baseline method, as reported in Table 1 in Appendix F.\nThe intuition behind this method is similar to standard transfer learning: Training the model initially with the baseline method yields a model with better representation of both the lower-level features and the higher-level features of the data. Then, tuning it at the end by retraining the final layers only with balanced training yields a model with more adequate high-level features for the shared-label prediction task in one hand, and a better understanding of how to use these features to make a better decision, based on various data points containing the same label.\nConsider the following example when classifying MNIST digits: A standard model in the highlyparametrized regime would learn to detect basic features such as small curved lines in the shallow part of the neural network, and at a deeper stage of the network it may learn to detect more complex features such as circles. When we are at the low-parametrized regime, we have to make a compromise, deteriorating the quality of the high-level features. Using this method may benefit the model by helping it to learn better \"low quality\" high-level features that may not be \"good enough\" for normal classification, yet are sufficient for the shared-label prediction task." }, { "heading": "E ADDITIONAL PLOTS", "text": "E.1 PLOTS FROM SECTION 5.1\nFigures 11-20 describe the results of the comparison done as part of the experiments in Section 5.1. The results presented in Figures 11-15 were generated using a Wide-ResNet20 model on the TinyImageNet data set (n ∈ {1, 2, 3, 4, 5, 7, 10, 15, 40}), and the results presented in Figures 16-20 were generated using a ResNet18 model on the CIFAR-100 data set (n ∈ {1, 2, 3, 4, 5, 7, 10, 15, 40, 60}).\nFigure 11: n-accuracy and FLOPs comparison. 
It is observed that different compression ratios are optimal (in terms of FLOPs) for different desired n-accuracy levels.\n\nFigure 12: Log error and FLOPs comparison, where the error is simply 1 − n-accuracy/100 and the binary logarithm is used.\n\nE.2 PLOTS FROM SECTION 5.3\n\nFigures 21-22 describe the results of the comparison done as part of the experiments in Section 5.3. The results presented in these figures were generated using a Conv model on the MNIST data set (n ∈ {1, 2, 3, 4, 5, 7, 10, 15, 40, 60}).\n\nF TABLES" } ]
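The two-step pruning scheme of Appendix C.2 (score parameters, then globally mask the lowest-scoring ones) is sketched below for the magnitude criterion of Han et al. (2015), using the paper's log10 compression-ratio convention; the in-place masking and the toy network are illustrative assumptions.

```python
import torch

def global_magnitude_prune(model, rho):
    """Sketch of two-step pruning: (i) score parameters, here by absolute
    magnitude as in Han et al. (2015); (ii) mask the globally lowest-scoring
    ones. `rho` follows the log10 convention: keep a 10**(-rho) fraction."""
    scores = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    keep = max(1, int(scores.numel() * 10 ** (-rho)))
    threshold = scores.topk(keep).values.min()
    masks = []
    for p in model.parameters():
        mask = (p.detach().abs() >= threshold).float()
        p.data.mul_(mask)      # zero out pruned weights
        masks.append(mask)     # reuse the masks to keep them zero during training
    return masks

net = torch.nn.Sequential(torch.nn.Linear(100, 50), torch.nn.ReLU(),
                          torch.nn.Linear(50, 10))
masks = global_magnitude_prune(net, rho=1.0)   # keep roughly 10% of parameters
print(sum(int(m.sum()) for m in masks), "parameters remain")
```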
2020
null
SP:32040641c0cbdc186c2db90470bec7856c89cb38
[ "This paper identifies a problem in imitation learning when an expert has access to privileged information that is not available to the learner. When a decision has to be made based on the privileged information, the learner tends to choose average or uniformly random actions of the expert due to the lack of important information, which is called the \"imitation gap\". Therefore, in such cases, learning from the expert can actually harm the training of the learner.", "The paper approaches the problem of imitation gap. In situations where the teacher (an expert) has access to privileged information, the student may not have such comfort. Therefore, blindly imitating the teacher may lead to inefficient, unwanted or incorrect behavior of the student. The paper provides examples of such situations. Actually, sometimes the student may not even be able to imitate the teacher in some states as not all the information is available. This leads to an imitation gap. Performing classic imitation learning in states where the imitation gap is significant can be pointless. The authors propose to try existing RL methods in such situations. Specifically, the imitation loss and the RL loss are weighted by a normalized coefficient. One of the main contributions of the paper is how to dynamically determine the value of the coefficient. The idea is to in some sense 'measure' or 'quantify' the imitation gap in a given state. The authors follow the intuition that for states where the imitation gap is substantial, learning well the teacher's distribution over actions may not be achievable. It means the divergence between the teacher's distribution and the imitation distribution of actions will be relatively large. This quantity is used as an estimation of imitation gap. To compute that quantity an approximation to the imitation policy is introduced. The divergence between the teacher and this approximation is used to estimate the imitation gap. ", "This work introduces ADVISOR, a simple yet effective approach to adaptively combining RL with supervision from imitation learning. The approach is motivated by the issue of *information gaps,* which describes the case where the policy generating the supervision has more information about the state than does the policy being trained. The authors motivate their approach as a way to address this issue and contribute a thorough set of empirical analyses to demonstrate its effectiveness. In addition, this work provides a concrete demonstration that the information gap is an obstacle.", "This paper points out the existence of \"imitation gap\" when a teacher and student policy has different observations. Due to this imitation gap, the student policy cannot determine which action leads to a higher reward, and thus ends up outputting a uniformly random action or average actions of the teacher policy. The proposed method, ADVISOR, tackles such imitation gap by balancing between learning from reward and expert. When the limited observation is enough to predict the optimal action, the student policy imitates the demonstrations; otherwise, it learns from reward. As the weight function, ADVISOR uses the divergence between the teacher policy and the auxiliary policy, which solely learns from the demonstrations." ]
In practice, imitation learning is preferred over pure reinforcement learning whenever it is possible to design a teaching agent to provide expert supervision. However, we show that when the teaching agent makes decisions with access to privileged information that is unavailable to the student, this information is marginalized during imitation learning, resulting in an “imitation gap” and, potentially, poor results. Prior work bridges this gap via a progression from imitation learning to reinforcement learning. While often successful, gradual progression fails for tasks that require frequent switches between exploration and memorization. To better address these tasks and alleviate the imitation gap we propose ‘Adaptive Insubordination’ (ADVISOR). ADVISOR dynamically weights imitation and reward-based reinforcement learning losses during training, enabling on-the-fly switching between imitation and exploration. On a suite of challenging tasks set within gridworlds, multi-agent particle environments, and high-fidelity 3D simulators, we show that on-the-fly switching with ADVISOR outperforms pure imitation, pure reinforcement learning, as well as their sequential and parallel combinations.
[ { "affiliations": [], "name": "Luca Weihs" }, { "affiliations": [], "name": "Unnat Jain" }, { "affiliations": [], "name": "Iou-Jen Liu" }, { "affiliations": [], "name": "Jordi Salvador" }, { "affiliations": [], "name": "Svetlana Lazebnik" }, { "affiliations": [], "name": "Aniruddha Kembhavi" }, { "affiliations": [], "name": "Alexander Schwing" } ]
[ { "authors": [ "P. Anderson", "A. Chang", "D.S. Chaplot", "A. Dosovitskiy", "S. Gupta", "V. Koltun", "J. Kosecka", "J. Malik", "R. Mottaghi", "M. Savva" ], "title": "On evaluation of embodied navigation agents", "venue": "arXiv preprint arXiv:1807.06757,", "year": 2018 }, { "authors": [ "M. Bain", "C. Sammut" ], "title": "A framework for behavioural cloning", "venue": "Machine Intelligence,", "year": 1995 }, { "authors": [ "M.G. Bellemare", "G. Ostrovski", "A. Guez", "P.S. Thomas", "R. Munos" ], "title": "Increasing the action gap: New operators for reinforcement learning", "venue": "AAAI,", "year": 2016 }, { "authors": [ "T. Brys", "A. Harutyunyan", "H.B. Suay", "S. Chernova", "M.E. Taylor", "A. Nowé" ], "title": "Reinforcement learning from demonstration through shaping", "venue": "Q. Yang and M. J. Wooldridge, editors, IJCAI,", "year": 2015 }, { "authors": [ "K.-W. Chang", "A. Krishnamurthy", "A. Agarwal", "H. Daume", "J. Langford" ], "title": "Learning to search better than your teacher", "venue": "ICML,", "year": 2015 }, { "authors": [ "J. Chemali", "A. Lazaric" ], "title": "Direct policy iteration with demonstrations", "venue": "Q. Yang and M. J. Wooldridge, editors, IJCAI,", "year": 2015 }, { "authors": [ "D. Chen", "B. Zhou", "V. Koltun", "P. Krähenbühl" ], "title": "Learning by cheating", "venue": "Conference on Robot Learning,", "year": 2020 }, { "authors": [ "M. Chevalier-Boisvert", "D. Bahdanau", "S. Lahlou", "L. Willems", "C. Saharia", "T.H. Nguyen", "Y. Bengio" ], "title": "Babyai: A platform to study the sample efficiency of grounded language learning", "venue": "ICLR,", "year": 2018 }, { "authors": [ "M. Chevalier-Boisvert", "L. Willems", "S. Pal" ], "title": "Minimalistic gridworld environment for openai gym", "venue": "https://github.com/maximecb/gym-minigrid,", "year": 2018 }, { "authors": [ "F. Codevilla", "M. Müller", "A.M. López", "V. Koltun", "A. Dosovitskiy" ], "title": "End-to-end driving via conditional imitation learning", "venue": "ICRA,", "year": 2018 }, { "authors": [ "A. Das", "S. Kottur", "J.M. Moura", "S. Lee", "D. Batra" ], "title": "Learning cooperative visual dialog agents with deep reinforcement learning", "venue": "ICCV,", "year": 2017 }, { "authors": [ "A. Das", "S. Datta", "G. Gkioxari", "S. Lee", "D. Parikh", "D. Batra" ], "title": "Embodied Question Answering", "venue": "CVPR,", "year": 2018 }, { "authors": [ "A. Das", "G. Gkioxari", "S. Lee", "D. Parikh", "D. Batra" ], "title": "Neural Modular Control for Embodied Question Answering", "venue": "CoRL,", "year": 2018 }, { "authors": [ "M. Deitke", "W. Han", "A. Herrasti", "A. Kembhavi", "E. Kolve", "R. Mottaghi", "J. Salvador", "D. Schwenk", "E. VanderBilt", "M. Wallingford", "L. Weihs", "M. Yatskar", "A. Farhadi" ], "title": "RoboTHOR: An Open Simulation-to-Real Embodied AI Platform", "venue": "CVPR,", "year": 2020 }, { "authors": [ "J. Dodge", "S. Gururangan", "D. Card", "R. Schwartz", "N.A. Smith" ], "title": "Show your work: Improved reporting of experimental results", "venue": "EMNLP,", "year": 2019 }, { "authors": [ "T. Gangwani", "J. Peng" ], "title": "State-only imitation with transition dynamics mismatch", "venue": "ICLR,", "year": 2020 }, { "authors": [ "T. Gangwani", "J. Lehman", "Q. Liu", "J. Peng" ], "title": "Learning belief representations for imitation learning in pomdps", "venue": "A. Globerson and R. Silva, editors, UAI,", "year": 2019 }, { "authors": [ "A. Gupta", "C. Devin", "Y. Liu", "P. Abbeel", "S. 
Levine" ], "title": "Learning invariant feature spaces to transfer skills with reinforcement learning", "venue": "ICLR,", "year": 2017 }, { "authors": [ "S. Gupta", "J. Davidson", "S. Levine", "R. Sukthankar", "J. Malik" ], "title": "Cognitive Mapping and Planning for Visual Navigation", "venue": "CVPR,", "year": 2017 }, { "authors": [ "J. Hawke", "R. Shen", "C. Gurau", "S. Sharma", "D. Reda", "N. Nikolov", "P. Mazur", "S. Micklethwaite", "N. Griffiths", "A. Shah", "A. Kendall" ], "title": "Urban driving with conditional imitation learning", "venue": "ICRA,", "year": 2020 }, { "authors": [ "D. He", "Y. Xia", "T. Qin", "L. Wang", "N. Yu", "T.-Y. Liu", "W.-Y. Ma" ], "title": "Dual learning for machine translation", "venue": "NeurIPS,", "year": 2016 }, { "authors": [ "N. Heess", "D. TB", "S. Sriram", "J. Lemmon", "J. Merel", "G. Wayne", "Y. Tassa", "T. Erez", "Z. Wang", "S. Eslami" ], "title": "Emergence of locomotion behaviours in rich environments", "venue": "arXiv preprint arXiv:1707.02286,", "year": 2017 }, { "authors": [ "T. Hester", "M. Vecerik", "O. Pietquin", "M. Lanctot", "T. Schaul", "B. Piot", "D. Horgan", "J. Quan", "A. Sendonaris", "I. Osband" ], "title": "Deep q-learning from demonstrations", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "J. Ho", "S. Ermon" ], "title": "Generative adversarial imitation learning", "venue": "NeurIPS,", "year": 2016 }, { "authors": [ "M. Jaderberg", "V. Mnih", "W.M. Czarnecki", "T. Schaul", "J.Z. Leibo", "D. Silver", "K. Kavukcuoglu" ], "title": "Reinforcement learning with unsupervised auxiliary tasks", "venue": "ICLR,", "year": 2017 }, { "authors": [ "N. Jiang" ], "title": "On value functions and the agent-environment boundary", "venue": "arXiv preprint arXiv:1905.13341,", "year": 2019 }, { "authors": [ "M. Jing", "X. Ma", "W. Huang", "F. Sun", "C. Yang", "B. Fang", "H. Liu" ], "title": "Reinforcement learning from imperfect demonstrations under soft expert guidance", "venue": "AAAI,", "year": 2020 }, { "authors": [ "B. Kang", "Z. Jie", "J. Feng" ], "title": "Policy optimization with demonstrations", "venue": "ICML,", "year": 2018 }, { "authors": [ "D. Kingma", "J. Ba" ], "title": "A method for stochastic optimization", "venue": "CVPR,", "year": 2017 }, { "authors": [ "J. Kober", "J.R. Peters" ], "title": "Policy search for motor primitives in robotics", "venue": "NeurIPS,", "year": 2009 }, { "authors": [ "E. Kolve", "R. Mottaghi", "W. Han", "E. VanderBilt", "L. Weihs", "A. Herrasti", "D. Gordon", "Y. Zhu", "A. Gupta", "A. Farhadi" ], "title": "AI2-THOR: an interactive 3d environment for visual AI", "venue": "arXiv preprint arXiv:1712.05474,", "year": 2019 }, { "authors": [ "I. Kostrikov" ], "title": "Pytorch implementations of reinforcement learning algorithms", "venue": "https://github. com/ikostrikov/pytorch-a2c-ppo-acktr-gail,", "year": 2018 }, { "authors": [ "H. Le", "N. Jiang", "A. Agarwal", "M. Dudik", "Y. Yue", "H. Daumé" ], "title": "Hierarchical imitation and reinforcement learning", "venue": "ICML,", "year": 2018 }, { "authors": [ "S. Levine", "C. Finn", "T. Darrell", "P. Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": "JMLR,", "year": 2016 }, { "authors": [ "T.P. Lillicrap", "J.J. Hunt", "A. Pritzel", "N. Heess", "T. Erez", "Y. Tassa", "D. Silver", "D. Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "ICLR,", "year": 2016 }, { "authors": [ "I.-J. Liu", "R. Yeh", "A.G. 
Schwing" ], "title": "PIC: Permutation Invariant Critic for Multi-Agent Deep Reinforcement Learning", "venue": "CORL,", "year": 2019 }, { "authors": [ "R. Lowe", "Y. Wu", "A. Tamar", "J. Harb", "P. Abbeel", "I. Mordatch" ], "title": "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "R. Lowe", "A. Gupta", "J.N. Foerster", "D. Kiela", "J. Pineau" ], "title": "On the interaction between supervision and self-play in emergent communication", "venue": "ICLR,", "year": 2020 }, { "authors": [ "A.R. Mahmood", "H.P. van Hasselt", "R.S. Sutton" ], "title": "Weighted importance sampling for off-policy learning with linear function approximation", "venue": "NeurIPS,", "year": 2014 }, { "authors": [ "P. Mirowski", "R. Pascanu", "F. Viola", "H. Soyer", "A. Ballard", "A. Banino", "M. Denil", "R. Goroshin", "L. Sifre", "K. Kavukcuoglu" ], "title": "Learning to navigate in complex environments", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "V. Mnih", "K. Kavukcuoglu", "D. Silver", "A.A. Rusu", "J. Veness", "M.G. Bellemare", "A. Graves", "M. Riedmiller", "A.K. Fidjeland", "G. Ostrovski", "S. Petersen", "C. Beattie", "A. Sadik", "I. Antonoglou", "H. King", "D. Kumaran", "D. Wierstra", "S. Legg", "D. Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature,", "year": 2015 }, { "authors": [ "V. Mnih", "A.P. Badia", "M. Mirza", "A. Graves", "T. Lillicrap", "T. Harley", "D. Silver", "K. Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "ICML,", "year": 2016 }, { "authors": [ "I. Mordatch", "P. Abbeel" ], "title": "Emergence of Grounded Compositional Language in Multi-Agent Populations", "venue": "AAAI,", "year": 2018 }, { "authors": [ "A. Nair", "B. McGrew", "M. Andrychowicz", "W. Zaremba", "P. Abbeel" ], "title": "Overcoming exploration in reinforcement learning with demonstrations", "venue": "ICRA,", "year": 2018 }, { "authors": [ "T. Osa", "J. Pajarinen", "G. Neumann", "J.A. Bagnell", "P. Abbeel", "J. Peters" ], "title": "An algorithmic perspective on imitation learning", "venue": "Foundations and Trends in Robotics,", "year": 2018 }, { "authors": [ "D. Pathak", "P. Agrawal", "A.A. Efros", "T. Darrell" ], "title": "Curiosity-driven exploration by selfsupervised prediction", "venue": "ICML,", "year": 2017 }, { "authors": [ "X.B. Peng", "P. Abbeel", "S. Levine", "M. van de Panne" ], "title": "Deepmimic: Example-guided deep reinforcement learning of physics-based character skills", "venue": "ACM Trans. Graph.,", "year": 2018 }, { "authors": [ "J. Peters", "S. Schaal" ], "title": "Reinforcement learning of motor skills with policy gradients", "venue": "Neural networks,", "year": 2008 }, { "authors": [ "D.A. Pomerleau" ], "title": "Efficient training of artificial neural networks for autonomous navigation", "venue": "Neural computation,", "year": 1991 }, { "authors": [ "A. Rajeswaran", "V. Kumar", "A. Gupta", "G. Vezzani", "J. Schulman", "E. Todorov", "S. Levine" ], "title": "Learning complex dexterous manipulation with deep reinforcement learning and demonstrations", "venue": "RSS,", "year": 2018 }, { "authors": [ "S. Ross", "D. Bagnell" ], "title": "Efficient reductions for imitation learning", "venue": "AISTATS,", "year": 2010 }, { "authors": [ "S. Ross", "G. Gordon", "D. Bagnell" ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "AISTATS,", "year": 2011 }, { "authors": [ "C. Sammut", "S. 
Hurst", "D. Kedzier", "D. Michie" ], "title": "Learning to fly", "venue": "Machine Learning Proceedings,", "year": 1992 }, { "authors": [ "M. Savva", "A. Kadian", "O. Maksymets", "Y. Zhao", "E. Wijmans", "B. Jain", "J. Straub", "J. Liu", "V. Koltun", "J. Malik", "D. Parikh", "D. Batra" ], "title": "Habitat: A Platform for Embodied AI Research", "venue": "ICCV,", "year": 2019 }, { "authors": [ "T. Schaul", "J. Quan", "I. Antonoglou", "D. Silver" ], "title": "Prioritized Experience Replay", "venue": "ICLR,", "year": 2016 }, { "authors": [ "J. Schulman", "S. Levine", "P. Abbeel", "M. Jordan", "P. Moritz" ], "title": "Trust region policy optimization", "venue": "ICML,", "year": 2015 }, { "authors": [ "J. Schulman", "P. Moritz", "S. Levine", "M. Jordan", "P. Abbeel" ], "title": "High-dimensional continuous control using generalized advantage estimation", "venue": "arXiv preprint arXiv:1506.02438,", "year": 2015 }, { "authors": [ "J. Schulman", "F. Wolski", "P. Dhariwal", "A. Radford", "O. Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "T.T. Shi", "A. Karpathy", "L.J. Fan", "J. Hernandez", "P. Liang" ], "title": "World of bits: An open-domain platform for web-based agents", "venue": "ICML,", "year": 2017 }, { "authors": [ "M. Shridhar", "J. Thomason", "D. Gordon", "Y. Bisk", "W. Han", "R. Mottaghi", "L. Zettlemoyer", "D. Fox" ], "title": "Alfred: A benchmark for interpreting grounded instructions for everyday tasks", "venue": "CVPR,", "year": 2020 }, { "authors": [ "D. Silver", "A. Huang", "C.J. Maddison", "A. Guez", "L. Sifre", "G. van den Driessche", "J. Schrittwieser", "I. Antonoglou", "V. Panneershelvam", "M. Lanctot", "S. Dieleman", "D. Grewe", "J. Nham", "N. Kalchbrenner", "I. Sutskever", "T. Lillicrap", "M. Leach", "K. Kavukcuoglu", "T. Graepel", "D. Hassabis" ], "title": "Mastering the game of Go with deep neural networks and tree search. Nature, 2016", "venue": null, "year": 2016 }, { "authors": [ "W. Sun", "J.A. Bagnell", "B. Boots" ], "title": "Truncated horizon policy search: Combining reinforcement learning & imitation learning", "venue": "ICLR,", "year": 2018 }, { "authors": [ "M. van der Laan", "S. Gruber" ], "title": "One-step targeted minimum loss-based estimation based on universal least favorable one-dimensional submodels", "venue": "The international journal of biostatistics,", "year": 2016 }, { "authors": [ "A. van der Vaart" ], "title": "Asymptotic Statistics. Asymptotic Statistics", "venue": "URL https://books.google.com/books?id=UEuQEM5RjWgC", "year": 2000 }, { "authors": [ "H. van Hasselt", "A. Guez", "D. Silver" ], "title": "Deep reinforcement learning with double q-learning", "venue": "In AAAI,", "year": 2016 }, { "authors": [ "M. Vecerik", "T. Hester", "J. Scholz", "F. Wang", "O. Pietquin", "B. Piot", "N. Heess", "T. Rothörl", "T. Lampe", "M. Riedmiller" ], "title": "Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards", "venue": "arXiv preprint arXiv:1707.08817,", "year": 2017 }, { "authors": [ "X. Wang", "W. Chen", "J. Wu", "Y.-F. Wang", "W. Yang Wang" ], "title": "Video captioning via hierarchical reinforcement learning", "venue": "CVPR,", "year": 2018 }, { "authors": [ "Z. Wang", "V. Bapst", "N. Heess", "V. Mnih", "R. Munos", "K. Kavukcuoglu", "N. de Freitas" ], "title": "Sample efficient actor-critic with experience replay", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "A. Warrington", "J.W. Lavington", "A. Ścibior", "M. 
Schmidt", "F. Wood" ], "title": "Robust asymmetric learning in pomdps", "venue": "CoRR, abs/2012.15566,", "year": 2020 }, { "authors": [ "L. Wasserman" ], "title": "All of Nonparametric Statistics (Springer Texts in Statistics)", "venue": "Springer-Verlag, Berlin, Heidelberg,", "year": 2006 }, { "authors": [ "F. Xia", "A.R. Zamir", "Z. He", "A. Sax", "J. Malik", "S. Savarese" ], "title": "Gibson env: Real-world perception for embodied agents", "venue": "CVPR,", "year": 2018 }, { "authors": [ "Y. Zhu", "Z. Wang", "J. Merel", "A. Rusu", "T. Erez", "S. Cabi", "S. Tunyasuvunakool", "J. Kramár", "R. Hadsell", "N. de Freitas", "N. Heess" ], "title": "Reinforcement and imitation learning for diverse visuomotor skills", "venue": "In Proceedings of Robotics: Science and Systems,", "year": 2018 } ]
[ { "heading": "1 Introduction", "text": "Imitation learning (IL) can be remarkably successful in settings where reinforcement learning (RL) struggles. For instance, IL has been shown to succeed in complex tasks with sparse rewards [8, 47, 44], and when the observations are high-dimensional, e.g., in visual 3D environments [31, 54]. To succeed, IL provides the agent with consistent expert supervision at every timestep, making it less reliant on the agent randomly attaining success. To obtain this expert supervision, it is often convenient to use “privileged information,” i.e., information that is unavailable to the student at inference time. This privileged information takes many forms in practice. For instance, in navigational tasks, experts are frequently designed using shortest path algorithms which access the environment’s connectivity graph [e.g., 19]. Other forms of privilege include semantic maps [e.g., 60, 13], the ability to see into “the future” via rollouts [61], and ground-truth world layouts [7]. The following example shows how this type of privileged information can result in IL dramatically failing. Example 1 (Poisoned Doors). Suppose an agent is presented with N 3 doors d1, . . . , dN . As illustrated in Fig. 1 (for N = 4), opening d1 requires entering an unknown fixed code of length M . Successful code entry results in a reward of 1, otherwise the reward is 0. Since the code is unknown to the agent, it would need to learn the code by trial and error. All other doors can be opened without a code. For some randomly chosen 2 j N (sampled each episode), the reward behind dj is 2 but for all i 2 {2, . . . , N} \\ {j} the reward behind di is 2. Without knowing j, the optimal policy is to always enter the correct code to open d1 obtaining an expected reward of 1. In contrast, if the expert\n⇤denotes equal contribution by LW and UJ; †work done, in part, as an intern at Allen Institute for AI\n35th Conference on Neural Information Processing Systems (NeurIPS 2021).\nis given the privileged knowledge of the door dj with reward 2, it will always choose to open this door immediately. It is easy to see that an agent without knowledge of j attempting to imitate such an expert will learn to open a door among d2, . . . , dN uniformly at random obtaining an expected return of 2 · (N 3)/(N 1). In this setting, training with reward-based RL after a ‘warm start’ with IL is strictly worse than starting without it: the agent needs to unlearn its policy and then, by chance, stumble into entering the correct code for door d1, a practical impossibility when M is large.\nTo characterize this imitation failure, we show that training a student to imitate a teacher who uses privileged information results in the student learning a policy which marginalizes out this privileged information. This can result in a sub-optimal, even uniformly random, student policy over a large collection of states. We call the discrepancy between the teacher’s and student’s policy the imitation gap. To bridge the imitation gap, we introduce Adaptive Insubordination (ADVISOR). ADVISOR adaptively weights imitation and RL losses. Specifically, throughout training we use an auxiliary actor which judges whether the current observation is better treated using an IL or a RL loss. 
For this, the auxiliary actor attempts to reproduce the teacher’s action using the observations of the student at every step.\nIntuitively, the weight corresponding to the IL loss is large when the auxiliary actor can reproduce the teacher’s action with high confidence.\nWe study the benefits of ADVISOR on thirteen tasks, including ‘POISONEDDOORS’ from Ex. 1, a 2D “lighthouse” gridworld, a suite of tasks set within the MINIGRID environment [8, 9], Cooperative Navigation with limited range (COOPNAV) in the multi-agent particle environment (MPE) [43, 38], and two navigational tasks set in 3D, high visual fidelity, simulators of real-world living environments (POINTNAV in AIHABITAT [54] and OBJECTNAV in ROBOTHOR [31, 14]). Our results show that, • the imitation gap’s size directly impacts agent performance when using modern learning methods, • ADVISOR is performant (outperforming IL and RL baselines), robust, and sample efficient, • ADVISOR can succeed even when expert supervision is partially corrupted, and • ADVISOR can be easily integrated in existing pipelines spanning diverse observations (grids and pixels), action spaces (discrete and continuous), and algorithms (PPO and MADDPG)." }, { "heading": "2 Related Work", "text": "A series of methods [e.g., 41, 65, 3, 55] have made off-policy deep Q-learning stable for complex environments like Atari Games. Several high-performance (on-policy) policy-gradient methods for deep-RL have also been proposed [56, 42, 34, 68, 61]. For instance, Trust Region Policy Optimization (TRPO) [56] improves sample-efficiency by safely integrating larger gradient steps. Proximal Policy Optimization (PPO) [58] employs a clipped variant of TRPO’s surrogate objective and is widely adopted in the deep RL community. We use PPO as a baseline in our experiments.\nAs environments get more complex, navigating the search space with only deep RL and simple heuristic exploration (such as ε-greedy) is increasingly difficult. Therefore, methods that imitate expert (i.e., teacher) supervision were introduced. A popular approach to imitation learning (IL) is Behaviour Cloning (BC), i.e., use of a supervised classification loss between the policy of the student and expert agents [53, 2]. However, BC suffers from compounding errors. Namely, a single mistake of the student may lead to settings that have never been observed in training [51]. To address this, Data Aggregation (DAgger) [52] trains a sequence of student policies by querying the expert at states beyond those that would be reached by following only expert actions. IL is further enhanced by, e.g., hierarchies [33], improving over the expert [5, 4, 27], bypassing any intermediate reward function inference [24], and/or learning from experts that differ from the student [18, 26, 16]. Importantly, a sequential combination of IL and RL, i.e., pre-training a model on expert data before letting the agent interact with the environment, performs remarkably well. This strategy has been applied in a wide range of applications – the game of Go [61], robotic and motor skills [49, 30, 48, 50], navigation in visually realistic environments [19, 12], and web & language based tasks [21, 11, 59, 67].\nMore recent methods mix expert demonstrations with the agent’s own rollouts instead of using a sequential combination of IL followed by RL. Chemali and Lazaric [6] perform policy iteration from expert and on-policy demonstrations. DQfD [23] initializes the replay buffer with expert episodes and adds rollouts of (a pretrained) agent. 
They weight experiences based on the previous temporal difference errors [55] and use a supervised loss to learn from the expert. For continuous action spaces, DDPGfD [66] analogously incorporates IL into DDPG [35]. POfD [28] improves by adding a demonstration-guided exploration term, i.e., the Jensen-Shannon divergence between the expert’s and the learner’s policy (estimated using occupancy measures). THOR uses suboptimal experts to reshape rewards and then searches over a finite planning horizon [62]. Zhu et al. [72] show that a combination of GAIL [24] and RL can be highly effective for difficult manipulation tasks.\nCritically, the above methods have, implicitly or explicitly, been designed under certain assumptions (e.g., the agent operates in an MDP) which imply the expert and student observe the same state. Different from the above methods, we investigate the difference of privilege between the expert policy and the learned policy. Contrary to a sequential, static, or rule-based combination of supervised loss or divergence, we train an auxiliary actor to adaptively weight IL and RL losses. To the best of our knowledge, this hasn’t been studied before. In concurrent work, Warrington et al. [69] address the imitation gap by jointly training their teacher and student to adapt the teacher to the student. For our applications of interest, this work is not applicable as our expert teachers are fixed.\nOur approach attempts to reduce the imitation gap directly, assuming the information available to the learning agent is fixed. An indirect approach to reduce this gap is to enrich the information available to the agent or to improve the agent’s memory of past experience. Several works have considered this direction in the context of autonomous driving [10, 20] and continuous control [17]. We expect that these methods can be beneficially combined with the method that we discuss next." }, { "heading": "3 ADVISOR", "text": "We first introduce notation to define the imitation gap and illustrate how it arises due to ‘policy averaging.’ Using an ‘auxiliary policy’ construct, we then propose ADVISOR to bridge this gap. Finally, we show how to estimate the auxiliary policy in practice using deep networks. In what follows we will use the terms teacher and expert interchangeably. Our use of “teacher” is meant to emphasize that these policies are (1) designed for providing supervision for a student and (2) need not be optimal among all policies." }, { "heading": "3.1 Imitation gap", "text": "We want an agent to complete task T in environment E. The environment has states s ∈ S and the agent executes an action a ∈ A at every discrete timestep t ≥ 0. For simplicity and w.l.o.g. assume both A and S are finite. For example, let E be a 1D-gridworld in which the agent is tasked with navigating to a location by executing actions to move left or right, as shown in Fig. 2a. Here and below we assume states s ∈ S encapsulate historical information so that s includes the full trajectory of the agent up to time t ≥ 0. The objective is to find a policy π, i.e., a mapping from states to distributions over actions, which maximizes an evaluation criterion. Often this policy search is restricted to a set of feasible policies Π^feas., for instance Π^feas. may be the set {π(·; θ) : θ ∈ R^D} where π(·; θ) is a deep neural network with D-dimensional parameters θ. 
In classical (deep) RL [41, 42], the evaluation criterion is usually the expected γ-discounted future return.\nWe focus on the setting of partially-observed Markov decision processes (POMDPs) where an agent makes decisions without access to the full state information. We model this restricted access by defining a filtration function f : S → O_f and limiting the space of feasible policies to those policies Π^feas._f for which the value of π(s) depends on s only through f(s), i.e., so that f(s) = f(s′) implies π(s) = π(s′). We call any π satisfying this condition an f-restricted policy and the set of feasible f-restricted policies Π^feas._f. In a gridworld example, f might restrict s to only include information local to the agent’s current position as shown in Figs. 2c, 2d. If an f-restricted policy is optimal among all other f-restricted policies, we say it is f-optimal. We call o ∈ O_f a partial observation and for any f-restricted policy π_f we write π_f(o) to mean π_f(s) if f(s) = o. It is frequently the case that, during training, we have access to a teacher policy which is able to successfully complete the task T. This teacher policy may have access to the whole environment state and thus may be optimal among all policies. Alternatively, the teacher policy may, like the student, only make decisions given partial information (e.g., a human who sees exactly the same inputs as the student). For flexibility we will define the teacher policy as π^teach_{f^teach}, denoting it is an f^teach-restricted policy for some filtration function f^teach.\nFigure 2: Effect of partial observability in a 1-dimensional gridworld environment. (a) The two start states and action space for 1D-Lighthouse with N = 4. (b) A trajectory of the agent following a hypothetical random policy. At every trajectory step we display output probabilities as per the shortest-path expert (π^teach) for each state. (c/d) Using the same trajectory from (b) we highlight the partial observations available to the agent (shaded gray) under different filtration functions f1, f2. Notice that, under f1, the agent does not see the goal within its first four steps. The policies π^IL_{f1}, π^IL_{f2}, learned by imitating π^teach, show that imitation results in sub-optimal policies, i.e., π^IL_{f1}, π^IL_{f2} ≠ π^teach.\nFor simplicity, we will assume that π^teach_{f^teach} is f^teach-optimal. 
Subsequently, we will drop the subscript f^teach unless we wish to explicitly discuss multiple teachers simultaneously.\n\nIn IL [45, 52], π_f is trained to mimic π^teach by minimizing the (expected) cross-entropy between π_f and π^teach over a set of sampled states s ∈ S:\nmin_{π_f ∈ Π^feas._f} E_µ[CE(π^teach, π_f)(S)] , (1)\nwhere CE(π^teach, π_f)(S) = −π^teach(S) · log π_f(S), · denotes the usual dot-product, and S is a random variable taking values s ∈ S with probability measure µ : S → [0, 1]. Often µ(s) is chosen to equal the frequency with which an exploration policy (e.g., random actions or π^teach) visits state s in a randomly initialized episode. When it exists, we denote the policy minimizing Eq. (1) as π^{µ,π^teach}_f. When µ and π^teach are unambiguous, we write π^IL_f = π^{µ,π^teach}_f.\nWhat happens when there is a difference of privilege (or filtration functions) between the teacher and the student? Intuitively, if the information that a teacher uses to make a decision is unavailable to the student then the student has little hope of being able to mimic the teacher’s decisions. As we show in our next example, even when optimizing perfectly, depending on the choice of f and f^teach, IL may result in π^IL_f being uniformly random over a large collection of states. We call the phenomenon that π^IL_f ≠ π^teach the imitation gap. Example 2 (1D-Lighthouse). We illustrate the imitation gap using a gridworld spanning {−N, . . . , N}. The two start states correspond to the goal being at either −N or N, while the agent is always initialized at 0 (see Fig. 2a). Clearly, with full state information, π^teach maps states to an ‘always left’ or ‘always right’ probability distribution, depending on whether the goal is on the left or right, respectively. Suppose now that the agent’s visibility is constrained to a radius of i (Fig. 2c shows i = 1), i.e., an f^i-restricted observation is accessible. An agent following an optimal policy with a visibility of radius i will begin to move deterministically towards any corner, w.l.o.g. assume right. When the agent sees the rightmost edge (from position N − i), it will either continue to move right if the goal is visible or, if it’s not, move left until it reaches the goal (at −N). Now we may ask: what is the best f^i-restricted policy that can be learnt by imitating π^teach (i.e., what is π^IL_{f^i})? Tragically, the cross-entropy loss causes π^IL_{f^i} to be uniform in a large number of states. In particular, an agent following policy π^IL_{f^i} will execute left (and right) actions with probability 0.5, until it is within a distance of i from one of the corners. Subsequently, it will head directly to the goal. See the policies highlighted in Figs. 2c, 2d. The intuition for this result is straightforward: until the agent observes one of the corners it cannot know if the goal is to the right or left and, conditional on its observations, each of these events is equally likely under µ (assumed uniform). Hence for half of these events the teacher will instruct the agent to go right. For the other half the instruction is to go left. See App. A.1 for a rigorous treatment of this example. In Sec. 4 and Fig. 6, we train f^i-restricted policies with f^j-optimal teachers for a 2D variant of this example. We empirically verify that a student learns a better policy when imitating teachers whose filtration function is closest to their own.
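To make the policy-averaging intuition behind Example 2 concrete, the short self-contained Python sketch below (our own illustration, not code accompanying the paper) averages the privileged teacher over all states sharing an observation, using a uniform measure over goal/position pairs in place of the exploration-induced µ; it prints a uniform 0.5/0.5 imitation policy wherever the goal is hidden.
```python
# Illustrative sketch of Example 2 / policy averaging. Environment layout
# and helper names are our own simplifying assumptions (e.g., observations
# here are (position, visible goal) rather than full trajectory histories).
from collections import defaultdict

N, i = 4, 1  # gridworld spans {-N, ..., N}; the agent sees i cells away

def teacher(goal, pos):
    # Privileged shortest-path teacher: always heads toward the goal.
    return {"L": 1.0, "R": 0.0} if goal < pos else {"L": 0.0, "R": 1.0}

def observe(goal, pos):
    # f^i-restricted observation: the position, plus the goal if visible.
    return (pos, goal if abs(goal - pos) <= i else None)

# pi_IL(o) = E[pi_teach(S) | f(S) = o] under a uniform measure.
sums = defaultdict(lambda: {"L": 0.0, "R": 0.0})
counts = defaultdict(int)
for goal in (-N, N):
    for pos in range(-N + 1, N):  # interior positions the agent can occupy
        o = observe(goal, pos)
        for a, p in teacher(goal, pos).items():
            sums[o][a] += p
        counts[o] += 1

for o in sorted(sums, key=str):
    print(o, {a: p / counts[o] for a, p in sums[o].items()})
```
Running this prints 0.5/0.5 action probabilities at every position where neither goal is visible, matching the uniform behavior of π^IL_{f^i} described above.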
The above example shows: when a student attempts to imitate an expert that is privileged with information not available to the student, the student learns a version of π^teach in which this privileged information is marginalized out. We formalize this intuition in the following proposition. Proposition 1 (Policy Averaging). In the setting of Section 3.1, suppose that Π^feas. contains all f-restricted policies. Then, for any s ∈ S with o = f(s), we have that π^IL_f(o) = E_µ[π^teach(S) | f(S) = o].\nGiven our definitions, the proof of this proposition is quite straightforward, see Appendix A.2.\nThe imitation gap provides theoretical justification for the common practical observation that an agent trained via IL can often be significantly improved by continuing to train the agent using pure RL (e.g., PPO) [38, 13]. Obviously training first with IL and then via pure RL is ad hoc and potentially sub-optimal as discussed in Ex. 1 and empirically shown in Sec. 4. To alleviate this problem, the student should imitate the teacher’s policy only in settings where the teacher’s policy can, in principle, be exactly reproduced by the student. Otherwise the student should learn via ‘standard’ RL. To achieve this we introduce ADVISOR." }, { "heading": "3.2 Adaptive Insubordination (ADVISOR) with Policy Gradients", "text": "To close the imitation gap, ADVISOR adaptively weights reward-based and imitation losses. Intuitively, it supervises a student by asking it to imitate a teacher’s policy only in those states s ∈ S for which the imitation gap is small. For all other states, it trains the student using reward-based RL. To simplify notation, we denote the reward-based RL loss via E_µ[L(θ, S)] for some loss function L.2 This loss formulation is general and spans all policy gradient methods, including A2C and PPO. The imitation loss is the standard cross-entropy loss E_µ[CE(π^teach(S), π_f(S; θ))]. Concretely, the ADVISOR loss is:\nL_ADV(θ) = E_µ[w(S) · CE(π^teach(S), π_f(S; θ)) + (1 − w(S)) · L(θ, S)] . (2)\nOur goal is to find a weight function w : S → [0, 1] where w(s) ≈ 1 when the imitation gap is small and w(s) ≈ 0 otherwise. For this we need an estimator of the distance between π^teach and π^IL_f at a state s and a mapping from this distance to weights in [0, 1].\nWe now define a distance estimate d′(π, π_f)(s) between a policy π and an f-restricted policy π_f at a state s. We can use any common non-negative distance (or divergence) d between probability distributions on A, e.g., in our experiments we use the KL-divergence. While there are many possible strategies for using d to estimate d′(π, π_f)(s), perhaps the simplest of these strategies is to define d′(π, π_f)(s) = d(π(s), π_f(s)). Note that this quantity does not attempt to use any information about the fiber f⁻¹(f(s)) which may be useful in producing more holistic measures of distances.3 Appendix A.3 considers how those distances can be used in lieu of d′. Next, using the above, we need to estimate the quantity d′(π^teach, π^IL_f)(s).\nUnfortunately it is, in general, impossible to compute d′(π^teach, π^IL_f)(s) exactly as it is intractable to compute the optimal minimizer π^IL_f. Instead we leverage an estimator of π^IL_f which we term π^aux_f, and which we will define in the next section.\nGiven π^aux_f we obtain the estimator d′(π^teach, π^aux_f) of d′(π^teach, π^IL_f). 
Additionally, we make use of the monotonically decreasing function m_α : R_{≥0} → [0, 1], where α ≥ 0. We define our weight function w(s) for s ∈ S as:\nw(s) = m_α(d′(π^teach, π^aux_f)(s)) with m_α(x) = e^{−αx} . (3)\n2For readability, we implicitly make three key simplifications. First, computing the expectation E_µ[. . .] is generally intractable, hence we cannot directly minimize losses such as E_µ[L(θ, S)]. Instead, we approximate the expectation using rollouts from µ and optimize the empirical loss. Second, recent RL methods adjust the measure µ over states as optimization progresses while we assume it to be static for simplicity. Our final simplification regards the degree to which any loss can be, and is, optimized. In general, losses are often optimized by gradient descent and generally no guarantees are given that the global optimum can be found. Extending our presentation to encompass these issues is straightforward but notationally dense.\n3Measures using such information include max_{s′ ∈ f⁻¹(f(s))} d(π(s′), π_f(s)) or a corresponding expectation instead of the maximization, i.e., E_µ[d(π(S), π_f(S)) | f(S) = o]." }
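To make Eqs. (2)–(3) concrete, here is a minimal sketch of how the ADVISOR loss might be computed for a batch of states, assuming PyTorch; the variable names, the choice α = 4, and the extra term training the auxiliary head purely by imitation are our own illustrative assumptions, not the paper's released implementation.
```python
# Sketch of the ADVISOR loss (Eqs. 2-3), assuming PyTorch tensors.
import torch
import torch.nn.functional as F

def advisor_loss(teacher_logits, main_logits, aux_logits, rl_loss, alpha=4.0):
    """teacher/main/aux logits: [batch, num_actions]; rl_loss: [batch]."""
    teacher_logp = F.log_softmax(teacher_logits, dim=-1)
    aux_logp = F.log_softmax(aux_logits, dim=-1)
    main_logp = F.log_softmax(main_logits, dim=-1)
    teacher_p = teacher_logp.exp()

    # d'(pi_teach, pi_aux)(s): KL divergence between teacher and aux policy.
    kl = (teacher_p * (teacher_logp - aux_logp)).sum(-1)
    # Eq. (3): w(s) = exp(-alpha * d'); treat the weight as a constant.
    w = torch.exp(-alpha * kl).detach()

    # CE(pi_teach, pi_f)(s) = -pi_teach(s) . log pi_f(s), as in Eq. (1).
    ce = -(teacher_p * main_logp).sum(-1)
    # The auxiliary head itself is trained by pure imitation of the teacher.
    aux_ce = -(teacher_p * aux_logp).sum(-1)

    # Eq. (2): adaptively weighted combination, plus the auxiliary head loss.
    return (w * ce + (1.0 - w) * rl_loss).mean() + aux_ce.mean()
```
Detaching w reflects that the weight is an estimate of the imitation gap rather than a quantity to be optimized directly.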
, { "heading": "3.3 The Auxiliary Policy π^aux: Estimating π^IL_f in Practice", "text": "In this section we describe how we can, during training, obtain an auxiliary policy π^aux_f which estimates π^IL_f. Given this auxiliary policy we estimate d′(π^teach, π^IL_f)(s) using the plug-in estimator d′(π^teach, π^aux_f)(s). While plug-in estimators are intuitive and simple to define, they need not be statistically efficient. In Appendix A.4 we consider possible strategies for improving the statistical efficiency of our plug-in estimator via prospective estimation.\nIn Fig. 3 we provide an overview of how we compute the estimator π^aux_f via deep nets. As is common practice [42, 22, 25, 46, 40, 8], the policy net π_f(·; θ) is composed via a_ν ∘ r_ψ with θ = (ν, ψ), where a_ν is the actor head (possibly complemented in actor-critic models by a critic head v_ν) and r_ψ is called the representation network. Generally a_ν is lightweight, for instance a linear layer or a shallow MLP followed by a soft-max function, while r_ψ is a deep, and possibly recurrent neural, net. We add another actor head a_ν′ to our existing network which shares the underlying representation r_ψ, i.e., π^aux_f = a_ν′ ∘ r_ψ. We also experiment with the actors not sharing their representation and instead estimating π^IL_f via two separate networks, i.e., θ′ = (ν′, ψ′). In practice we train π_f(·; θ) and π^aux_f jointly using stochastic gradient descent, as summarized in Alg. A.1." }
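The shared-representation factorization just described (r_ψ feeding a main actor a_ν, a critic v_ν, and the auxiliary actor a_ν′) might look as follows; this is a sketch under assumed layer sizes and names, not the paper's Alg. A.1 or model code.
```python
# Sketch of the two-headed policy of Sec. 3.3 / Fig. 3, assuming PyTorch.
# Layer sizes, a feedforward (rather than recurrent) r_psi, and all names
# are illustrative assumptions.
import torch.nn as nn

class AdvisorPolicy(nn.Module):
    def __init__(self, obs_dim, num_actions, hidden=128):
        super().__init__()
        self.repr = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())  # r_psi
        self.actor = nn.Linear(hidden, num_actions)      # a_nu: main head
        self.critic = nn.Linear(hidden, 1)               # v_nu: critic head
        self.aux_actor = nn.Linear(hidden, num_actions)  # a_nu': auxiliary head

    def forward(self, obs):
        h = self.repr(obs)
        # Main logits, auxiliary logits (supervised only by imitation of the
        # teacher), and the value estimate, all from one shared representation.
        return self.actor(h), self.aux_actor(h), self.critic(h)
```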
, { "heading": "4 Experiments", "text": "We rigorously compare ADVISOR to IL methods, RL methods, and popularly-adopted (but often ad hoc) IL & RL combinations. In particular, we evaluate 15 learning methods. We do this over thirteen tasks – realizations of Ex. 1 & Ex. 2, eight tasks of varying complexity within the fast, versatile MINIGRID environment [8, 9], Cooperative Navigation (COOPNAV) with reduced visible range in the multi-agent particle environment (MPE) [43, 37], PointGoal navigation (POINTNAV) using the Gibson dataset in AIHABITAT [71, 54], and ObjectGoal Navigation (OBJECTNAV) in ROBOTHOR [14].4 Furthermore, to probe robustness, we train 50 hyperparameter variants for each of the 15 learning methods for our MINIGRID tasks. We find ADVISOR-based methods outperform or match performance of all baselines.\n4The ROBOTHOR environment is a sub-environment of AI2-THOR [31].\nAll code to reproduce our experiments will be made public under the Apache 2.0 license.5 The environments used are public for academic and commercial use under the Apache 2.0 (MINIGRID and ROBOTHOR) and MIT licence (MPE and AIHABITAT).\n5See https://unnat.github.io/advisor/ for an up-to-date link to this code." }, { "heading": "4.1 Tasks", "text": "Detailed descriptions of our tasks (and teachers) are deferred to Appendix A.5. See Fig. 4 for a high-level overview of 5 representative tasks." }, { "heading": "4.2 Baselines and ADVISOR-based Methods", "text": "We briefly introduce baselines and variants of our ADVISOR method. Further details of all methods are in Appendix A.7. For fairness, the same model architecture is shared across all methods (recall Fig. 3, Sec. 3.3). We defer implementation details to Appendix A.8.\n• RL only. Proximal Policy Optimization [58] serves as the pure RL baseline for all our tasks with a discrete action space. For the continuous and multi-agent COOPNAV task, we follow prior work and adopt MADDPG [37, 36]. • IL only. IL baselines where supervision comes from an expert policy with different levels of teacher-forcing (tf), i.e., tf=0, tf annealed from 1→0, and tf=1. This leads to Behaviour Cloning (BC), Data Aggregation (DAgger or †), and BCtf=1, respectively [53, 2, 52]. • IL & RL. Baselines that use a mix of IL and RL losses, either in sequence or in parallel. These are popularly adopted in the literature to warm-start agent policies. Sequential combinations include BC then PPO (BC→PPO), DAgger then PPO (†→PPO), and BCtf=1→PPO. The parallel combination of BC + PPO(static) is a static analog of our adaptive combination of IL and RL losses. • Demonstration-based. These agents imitate expert demonstrations and hence get no supervision beyond the states in the demonstrations. We implement BCdemo, its combination with PPO (BCdemo + PPO), and Generative Adversarial Imitation Learning (GAIL) [24]. • ADVISOR-based (ours). Our Adaptive Insubordination methodology can learn from an expert policy and can be given a warm-start via BC or DAgger. This leads to the ADVISOR (ADV), BCtf=1→ADV, and †→ADV baselines. Similarly, ADVdemo + PPO employs Adaptive Insubordination to learn from expert demonstrations while training with PPO on on-policy rollouts." }, { "heading": "4.3 Evaluation", "text": "Fair Hyperparameter Tuning. Often unintentionally done, extensively tuning the hyperparameters (hps) of a proposed method and not those of the baselines can introduce unfair bias into evaluations. We avoid this by considering two strategies. For PD and all MINIGRID tasks, we follow recent best practices [15]. Namely, we tune each method by randomly sampling a fixed number of hps and reporting, for each baseline, an estimate of\nRobustReward@K = E[Val. reward of best model from k random hyperparam. evaluations] (4)\nfor 1 ≤ k ≤ 45. For this we must train 50 models per method, i.e., 750 for each of these nine tasks. In order to show learning curves over training steps we also report RobustReward@10 at 5 points during training. More details in Appendix A.9. For 2D-LH, we tune the hps of a competing method and use these hps for all other methods. Training. For the eight MINIGRID tasks, we train each of the 50 training runs for 1 million steps. For 2D-LH/PD, models saturate much before 3 · 10^5 steps. POINTNAV, OBJECTNAV, and COOPNAV are trained for standard budgets of 50Mn, 100Mn, and 1.5Mn steps. Details are in Appendix A.10. Metrics. We record standard metrics for each task. This includes avg. rewards (PD, MINIGRID tasks, and OBJECTNAV), and avg. episode lengths (2D-LH). Following visual navigation works [1, 54, 14], we report success rates and success-weighted path length (SPL) for POINTNAV and OBJECTNAV. In the following, we report a subset of the above and defer additional plots to Appendix A.11." }
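Eq. (4) is the expected-best-of-K estimator advocated by [15]; under its standard order-statistics form it might be computed as follows (a sketch; function and variable names are our own).
```python
# Sketch of the RobustReward@K estimator (Eq. 4): the expected best
# validation reward over K random hyperparameter evaluations, estimated
# from n observed runs via the empirical CDF (Dodge et al. [15]).
import numpy as np

def robust_reward_at_k(val_rewards, k):
    """val_rewards: rewards of n models trained with random hyperparameters."""
    x = np.sort(np.asarray(val_rewards, dtype=float))  # ascending order stats
    n = len(x)
    i = np.arange(1, n + 1)
    # P(max of k i.i.d. draws <= x_(i)) = (i/n)^k under the empirical CDF;
    # the probability that the max equals x_(i) is the successive difference.
    p_max_is_xi = (i / n) ** k - ((i - 1) / n) ** k
    return float(np.sum(p_max_is_xi * x))

# e.g., one curve per method: [robust_reward_at_k(rewards, k) for k in range(1, 46)]
```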
, { "heading": "4.4 Results", "text": "In the following, we include takeaways based on the results in Fig. 5, Fig. 6, Tab. 1, and Tab. 2.\nSmaller imitation gap ⇒ better performance. A central claim of our paper is that the imitation gap is not merely a theoretical concern: the degree to which the teacher is privileged over the student has significant impact on the student’s performance. To study this empirically, we vary the degree to which teachers are privileged over their students in our 2D-LH task. In particular, we use behavior cloning to train an f^i-restricted policy (i.e., an agent that can see i grid locations away) using an f^j-optimal teacher 25 times. Each policy is then evaluated on 200 random episodes and the average episode length (lower being better) is recorded. For select i, j pairs we show boxplots of the 25 average episode lengths in Fig. 6. See our appendix for similar plots when using other training routines (e.g., ADVISOR).\nGrey vertical lines show optimal average episode lengths for f^i-restricted policies. We find that training an f^i-restricted policy with an f^j-expert results in a near optimal policy when i = j but even small increases in j dramatically decrease performance. While performance tends to drop with increasing j, the largest i, j gaps do not consistently correspond to the worst performing models. While this seems to differ from our results in Ex. 2, recall that there the measure µ was fixed while here it varies through training, resulting in complex learning dynamics. Surprisingly we also find that, even when there is no imitation gap (e.g., the i = j case), ADVISOR can outperform BC, see App. A.6.\nADVISOR outperforms, even in complex visual environments. Across all of our tasks, ADVISOR-based methods perform as well or better than competing methods. In particular, see Tab. 1 for our results on the POISONEDDOORS (PD) and MINIGRID tasks and Tab. 2 for our results on the POINTNAV, OBJECTNAV, and COOPNAV tasks. 2D-LH results are deferred to the Appendix.\nWhile the strong performance of ADVISOR is likely expected on our PD, MINIGRID, and 2D-LH tasks (indeed we designed a subset of these with the explicit purpose of studying the imitation gap), it is nonetheless surprising to see that in the PD and LC ONCE SWITCH tasks, all non-ADVISOR methods completely fail. Moreover, it is extremely promising to see that ADVISOR can provide substantial benefits in a variety of standard tasks, namely OBJECTNAV, POINTNAV, and COOPNAV with limited visible range. Note that OBJECTNAV and POINTNAV are set in 3D high-fidelity visual environments while COOPNAV requires multi-agent collaboration in a continuous space.\nADVISOR is sample efficient. To understand the sample efficiency of ADVISOR, we plot validation set performance over training of select tasks (see Figures 5a to 5d) and, in Table 2, we show performance of our models after 10% of training has elapsed for the OBJECTNAV, POINTNAV, and COOPNAV tasks. Note that in Table 2, ADVISOR-trained models frequently reach better performance after 10% of training than other methods manage to reach by the end of training.\nADVISOR is robust. Rigorously studying sensitivity to hyperparameter choice requires retraining every method under consideration tens to hundreds of times. This computational task can make evaluating our methods on certain tasks infeasible (training a single POINTNAV or OBJECTNAV model can easily require a GPU-week of computation). Because of these computational constraints, we limit our study of robustness to the PD and MINIGRID tasks. In Figures 5e to 5h (additional results in Appendix) we plot, for each of the 15 evaluated methods, how the expected performance of each method behaves as we increase the budget of random hyperparameter evaluations. In general, relatively few hyperparameter evaluations are required for ADVISOR before a high performance model is expected to be found.\nExpert demonstrations can be critical to success. While it is frequently assumed that on-policy expert supervision is better than learning from off-policy demonstrations, we found several instances in our MINIGRID experiments where demonstration-based methods outperformed competing methods. See, for example, Figures 5b and 5f. In such cases our demonstration-based ADVISOR variant (see Appendix A.7 for details) performed very well.\nADVISOR helps even when the expert is corrupted. 
In LC CORRUPT EXPERT and WC CORRUPT EXPERT, the expert is designed to be corrupted (outputting random actions as supervision) when the agent gets sufficiently close to the goal. While ADVISOR was not designed with the possibility of corrupted experts in mind, Figures 5d and 5h (see also Table 1) show that ADVISOR can succeed despite this corruption." }, { "heading": "5 Conclusion", "text": "We propose the imitation gap as one explanation for the empirical observation that imitating “more intelligent” teachers can lead to worse policies. While prior work has, implicitly, attempted to bridge this gap, we introduce a principled adaptive weighting technique (ADVISOR), which we test on a suite of thirteen tasks. Due to the fast rendering speed of MINIGRID, PD, and 2D-LH, we could undertake a study where we trained over 6 billion steps, allowing us to draw statistically significant inferences." }, { "heading": "6 Limitations and Societal Impact", "text": "While we have attempted to robustly evaluate our proposed ADVISOR methodology, we have primarily focused our experiments on navigational tasks where shortest path experts can be quickly computed. Further work is needed to validate that ADVISOR can be successful in other domains, e.g., imitation in interactive robotic tasks or natural language applications.\nWhile the potential for direct negative societal impact of this work is small, it is worth noting that, in enabling agents to learn more effectively from expert supervision, this work makes imitation learning a more attractive option to RL researchers. If expert supervision is obtained from humans, RL agents trained with such data will inevitably reproduce any (potentially harmful) biases of these humans." }, { "heading": "Acknowledgements", "text": "This material is based upon work supported in part by the National Science Foundation under Grants No. 1563727, 1718221, 1637479, 165205, 1703166, 2008387, 2045586, 2106825, MRI #1725729, NIFA award 2020-67021-32799, Samsung, 3M, Sloan Fellowship, NVIDIA Artificial Intelligence Lab, Allen Institute for AI, Amazon, AWS Research Awards, and Siebel Scholars Award. We thank Nan Jiang and Tanmay Gangwani for feedback on this work." } ]
2,021
Bridging the Imitation Gap by Adaptive Insubordination
SP:40b48e4e0455356fe1dd476f4515a1811af9d0bf
[ "The authors propose a beta-VAE network to learn EEG representation as biomarkers for diagnosing depression from EEG data. They show improved performance compared to an off-the shelf linear classifier. The paper is well-written but lacks a description of related work in the field and also a detailed analysis of the results to support the claims. " ]
Despite extensive standardization, diagnostic interviews for mental health disorders encompass substantial subjective judgment. Previous studies have demonstrated that EEG-based neural measures can function as reliable objective correlates of depression, or even predictors of depression and its course. However, their clinical utility has not been fully realized because of 1) the lack of automated ways to deal with the inherent noise associated with EEG data at scale, and 2) the lack of knowledge of which aspects of the EEG signal may be markers of a clinical disorder. Here we adapt an unsupervised pipeline from the recent deep representation learning literature to address these problems by 1) learning a disentangled representation using β-VAE to denoise the signal, and 2) extracting interpretable features associated with a sparse set of clinical labels using a Symbol–Concept Association Network (SCAN). We demonstrate that our method is able to outperform the canonical baseline classification method on a number of factors, including participant age and depression diagnosis. Furthermore, our method recovers a representation that can be used to automatically extract denoised Event Related Potentials (ERPs) from novel, single EEG trajectories, and supports fast supervised re-mapping to various clinical labels, allowing clinicians to re-use a single EEG representation regardless of updates to the standardized diagnostic system. Finally, single factors of the learned disentangled representations often correspond to meaningful markers of clinical factors, as automatically detected by SCAN, allowing for human interpretability and post-hoc expert analysis of the recommendations made by the model.
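The denoising step rests on the β-VAE objective (Higgins et al., 2017); a minimal sketch of that loss is below, assuming PyTorch, with the encoder/decoder details, the Gaussian reconstruction term, and the value of β all illustrative assumptions rather than the paper's EEG configuration.
```python
# Minimal sketch of the beta-VAE objective used for disentangled
# representation learning; all specifics here are assumptions.
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Reconstruction term: Gaussian likelihood up to constants (MSE).
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta > 1 pressures the posterior toward the isotropic prior,
    # encouraging disentangled factors at some cost in reconstruction.
    return recon + beta * kl
```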
[ { "affiliations": [], "name": "Garrett Honke" }, { "affiliations": [], "name": "Irina Higgins" }, { "affiliations": [], "name": "Nina Thigpen" }, { "affiliations": [], "name": "Vladimir Miskovic" }, { "affiliations": [], "name": "Sunny Duan" }, { "affiliations": [], "name": "Pramod Gupta" }, { "affiliations": [], "name": "Julia Klawohn" } ]
[ { "authors": [ "Pierre Baldi" ], "title": "Autoencoders, unsupervised learning, and deep architectures", "venue": "In Proceedings of ICML workshop on Unsupervised and transfer learning,", "year": 2012 }, { "authors": [ "Pouya Bashivan", "Irina Rish", "Mohammed Yeasin", "Noel Codella" ], "title": "Learning representations from EEG with deep recurrent-convolutional neural networks", "venue": null, "year": 2016 }, { "authors": [ "Yoshua Bengio", "Grégoire Mesnil", "Yann Dauphin", "Salah Rifai" ], "title": "Better mixing via deep representations", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "K.H. Brodersen", "C.S. Ong", "K.E. Stephan", "J.M. Buhmann" ], "title": "The balanced accuracy and its posterior distribution", "venue": "In 2010 20th International Conference on Pattern Recognition,", "year": 2010 }, { "authors": [ "Tom B Brown", "Benjamin Mann", "Nick Ryder", "Melanie Subbiah", "Jared Kaplan", "Prafulla Dhariwal", "Arvind Neelakantan", "Pranav Shyam", "Girish Sastry", "Amanda Askell" ], "title": "Language models are few-shot learners", "venue": null, "year": 2005 }, { "authors": [ "Christopher J Brush", "Peter J Ehmann", "Greg Hajcak", "Edward A Selby", "Brandon L Alderman" ], "title": "Using multilevel modeling to examine blunted neural responses to reward in major depression", "venue": "Biological Psychiatry: Cognitive Neuroscience and Neuroimaging,", "year": 2018 }, { "authors": [ "Christopher P Burgess", "Loic Matthey", "Nicholas Watters", "Rishabh Kabra", "Irina Higgins", "Matt Botvinick", "Alexander Lerchner" ], "title": "MoNET: Unsupervised scene decomposition and representation", "venue": null, "year": 1901 }, { "authors": [ "Henry Carrillo", "Kay H Brodersen", "José A Castellanos" ], "title": "Probabilistic performance evaluation for multiclass classification using the posterior balanced accuracy", "venue": "In ROBOT2013: First Iberian Robotics Conference,", "year": 2014 }, { "authors": [ "Avshalom Caspi", "Renate M Houts", "Daniel W Belsky", "Sidra J Goldman-Mellor", "HonaLee Harrington", "Salomon Israel", "Madeline H Meier", "Sandhya Ramrakha", "Idan Shalev", "Richie Poulton" ], "title": "The p factor: one general psychopathology factor in the structure of psychiatric disorders", "venue": "Clinical Psychological Science,", "year": 2014 }, { "authors": [ "Hubert Cecotti", "Axel Graser" ], "title": "Convolutional neural networks for P300 detection with application to brain-computer interfaces", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2011 }, { "authors": [ "Hubert Cecotti", "Miguel P Eckstein", "Barry Giesbrecht" ], "title": "Single-trial classification of event-related potentials in rapid serial visual presentation tasks using supervised spatial filtering", "venue": "IEEE transactions on Neural Networks and Learning Systems,", "year": 2030 }, { "authors": [ "Mark Chen", "Alec Radford", "Rewon Child", "Jeff Wu", "Heewoo Jun", "Prafulla Dhariwal", "David Luan", "Ilya Sutskever" ], "title": "Generative pretraining from pixels", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Arnaud Delorme", "Scott Makeig" ], "title": "EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics 
including independent component analysis", "venue": "Journal of Neuroscience Methods,", "year": 2004 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Sunny Duan", "Loic Matthey", "Andre Saraiva", "Nicholas Watters", "Christopher P Burgess", "Alexander Lerchner", "Irina Higgins" ], "title": "Unsupervised model selection for variational disentangled representation learning", "venue": null, "year": 1905 }, { "authors": [ "Amr Farahat", "Christoph Reichert", "Catherine M Sweeney-Reed", "Hermann Hinrichs" ], "title": "Convolutional neural networks for decoding of covert attention focus and saliency maps for EEG feature visualization", "venue": "Journal of Neural Engineering,", "year": 2019 }, { "authors": [ "Dan Foti", "Doreen M Olvet", "Daniel N Klein", "Greg Hajcak" ], "title": "Reduced electrocortical response to threatening faces in major depressive disorder", "venue": "Depression and Anxiety,", "year": 2010 }, { "authors": [ "Eiko I Fried", "Randolph M Nesse" ], "title": "Depression is not a consistent syndrome: an investigation of unique symptom patterns in the STAR* D study", "venue": "Journal of Affective Disorders,", "year": 2015 }, { "authors": [ "Gabriele Gratton", "Michael GH Coles", "Emanuel Donchin" ], "title": "A new method for off-line removal of ocular artifact", "venue": "Electroencephalography and Clinical Neurophysiology,", "year": 1983 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre H Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap your own latent: A new approach to self-supervised learning", "venue": "arXiv preprint arXiv:2006.07733,", "year": 2020 }, { "authors": [ "Inan Güler", "Elif Derya Übeyli" ], "title": "Multiclass support vector machines for EEG-signals classification", "venue": "IEEE transactions on Information Technology in Biomedicine,", "year": 2007 }, { "authors": [ "Nihal Fatma Güler", "Elif Derya Übeyli", "Inan Güler" ], "title": "Recurrent neural networks employing Lyapunov exponents for EEG signals classification", "venue": "Expert Systems with Applications,", "year": 2005 }, { "authors": [ "Greg Hajcak", "Julia Klawohn", "Alexandria Meyer" ], "title": "The utility of event-related potentials in clinical psychology", "venue": "Annual Review of Clinical Psychology,", "year": 2019 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "β-VAE: Learning basic visual concepts with a constrained variational framework", "venue": null, "year": 2017 }, { "authors": [ "Irina Higgins", "Nicolas Sonnerat", "Loic Matthey", "Arka Pal", "Christopher P Burgess", "Matko Bosnjak", "Murray Shanahan", "Matthew Botvinick", "Demis Hassabis", "Alexander Lerchner" ], "title": "SCAN: Learning hierarchical compositional visual concepts", "venue": null, "year": 2018 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "Thomas Insel", "Bruce Cuthbert", "Marjorie Garvey", "Robert Heinssen", "Daniel S Pine", "Kevin Quinn", "Charles Sanislow", "Philip Wang" ], "title": "Research domain 
criteria (RDoC): toward a new classification framework for research on mental disorders", "venue": null, "year": 2010 }, { "authors": [ "Tzyy-Ping Jung", "Scott Makeig", "Colin Humphries", "Te-Won Lee", "Martin J Mckeown", "Vicente Iragui", "Terrence J Sejnowski" ], "title": "Removing electroencephalographic artifacts by blind source separation", "venue": null, "year": 2000 }, { "authors": [ "Ioannis Karakis", "Keith H Chiappa", "Marta San Luciano", "Kenneth C Sassower", "John W Stakes", "Andrew J Cole" ], "title": "The utility of routine EEG in the diagnosis of sleep disordered breathing", "venue": "J Clin Neurophysiol,", "year": 2012 }, { "authors": [ "Ronald C Kessler", "Katherine A McGonagle", "Marvin Swartz", "Dan G Blazer", "Christopher B Nelson" ], "title": "Sex and depression in the national comorbidity survey i: Lifetime prevalence, chronicity and recurrence", "venue": "Journal of affective disorders,", "year": 1993 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "Proceedings of the 2nd International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Julia Klawohn", "Kreshnik Burani", "Alec Bruchnak", "Nicholas Santopetro", "Greg Hajcak" ], "title": "Reduced neural response to reward and pleasant pictures independently relate to depression", "venue": "Psychological Medicine,", "year": 2020 }, { "authors": [ "Peter J Lang", "Margaret M Bradley", "Bruce N Cuthbert" ], "title": "International affective picture system (IAPS): Technical manual and affective ratings. The Center for Research in Psychophysiology", "venue": "University of Florida,", "year": 2008 }, { "authors": [ "Grace Y Lim", "Wilson W Tam", "Yanxia Lu", "Cyrus S Ho", "Melvyn W Zhang", "Roger C Ho" ], "title": "Prevalence of depression in the community from 30 countries", "venue": "Scientific Reports,", "year": 2018 }, { "authors": [ "Jill Lobbestael", "Maartje Leurgans", "Arnoud Arntz" ], "title": "Inter-rater reliability of the Structured Clinical Interview for DSM-IV Axis I disorders (SCID I) and Axis II disorders (SCID II)", "venue": "Clinical psychology & psychotherapy,", "year": 2011 }, { "authors": [ "Rafael Lozano", "Mohsen Naghavi", "Kyle Foreman", "Mohammad A AlMazroa", "Ziad A Memish" ], "title": "Global and regional mortality from 235 causes of death for 20 age groups in 1990 and 2010: a systematic analysis for the Global Burden of Disease Study 2010", "venue": "The Lancet,", "year": 2012 }, { "authors": [ "Steven J Luck" ], "title": "Event-related potentials", "venue": "American Psychological Association,", "year": 2012 }, { "authors": [ "Annmarie MacNamara", "Roman Kotov", "Greg Hajcak" ], "title": "Diagnostic and symptom-based predictors of emotional processing in generalized anxiety disorder and major depressive disorder: An event-related potential study", "venue": "Cognitive Therapy and Research,", "year": 2016 }, { "authors": [ "Scott Makeig", "Stefan Debener", "Julie Onton", "Arnaud Delorme" ], "title": "Mining event-related brain dynamics", "venue": "Trends in Cognitive Sciences,", "year": 2004 }, { "authors": [ "Sally McManus", "Howard Meltzer", "Traolah Brugha", "Paul E Bebbington", "Rachel Jenkins" ], "title": "Adult Psychiatric Morbidity in England 2007: results of a household survey",
"venue": "NHS Information Centre for Health and Social Care,", "year": 2009 }, { "authors": [ "Sally McManus", "Paul E Bebbington", "Rachel Jenkins", "Traolah Brugha" ], "title": "Mental health and wellbeing in England: Adult Psychiatric Morbidity Survey 2014", "venue": "Leeds: NHS Digital,", "year": 2016 }, { "authors": [ "Piotr Mirowski", "Deepak Madhavan", "Yann LeCun", "Ruben Kuzniecky" ], "title": "Classification of patterns of EEG synchronization for seizure prediction", "venue": "Clinical Neurophysiology,", "year": 2009 }, { "authors": [ "Gernot Müller-Putz", "Reinhold Scherer", "Clemens Brunner", "Robert Leeb", "Gert Pfurtscheller" ], "title": "Better than random: a closer look on bci results", "venue": "International Journal of Bioelectromagnetism,", "year": 2008 }, { "authors": [ "Hugh Nolan", "Robert Whelan", "Richard B Reilly" ], "title": "Faster: fully automated statistical thresholding for eeg artifact rejection", "venue": "Journal of neuroscience methods,", "year": 2010 }, { "authors": [ "Hossein Parvar", "Lauren Sculthorpe-Petley", "Jason Satel", "Rober Boshra", "Ryan CN D'Arcy", "Thomas P Trappenberg" ], "title": "Detection of event-related potentials in individual subjects using support vector machines", "venue": "Brain Informatics,", "year": 2015 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "arXiv preprint arXiv:1401.4082,", "year": 2014 }, { "authors": [ "Yannick Roy", "Hubert Banville", "Isabela Albuquerque", "Alexandre Gramfort", "Tiago H Falk", "Jocelyn Faubert" ], "title": "Deep learning-based electroencephalography analysis: a systematic review", "venue": "Journal of neural engineering,", "year": 2019 }, { "authors": [ "David E Rumelhart", "Geoffrey E Hinton", "Ronald J Williams" ], "title": "Learning representations by back-propagating errors", "venue": "
Nature,", "year": 1986 }, { "authors": [ "SJM Smith" ], "title": "EEG in the diagnosis, classification, and management of patients with epilepsy", "venue": "Journal of Neurology, Neurosurgery & Psychiatry,", "year": 2005 }, { "authors": [ "Amelia J Solon", "Vernon J Lawhern", "Jonathan O Touryan", "Jonathan R McDaniel", "Anthony J Ries", "Stephen M Gordon" ], "title": "Decoding P300 variability using convolutional neural networks", "venue": "Frontiers in Human Neuroscience,", "year": 2019 }, { "authors": [ "Kees J Stam", "Dénes LJ Tavy", "Brechtje Jelles", "Herbert AM Achtereekte", "Joris PJ Slaets", "Ruud WM Keunen" ], "title": "Non-linear dynamical analysis of multichannel EEG: clinical applications in dementia and Parkinson’s disease", "venue": "Brain Topography,", "year": 1994 }, { "authors": [ "Abdulhamit Subasi", "M Ismail Gursoy" ], "title": "EEG signal classification using PCA, ICA, LDA and support vector machines", "venue": "Expert Systems with Applications,", "year": 2010 }, { "authors": [ "Ryota Tomioka", "Kazuyuki Aihara", "Klaus-Robert Müller" ], "title": "Logistic regression for single trial EEG classification", "venue": "In NeurIPS, pp", "year": 2007 }, { "authors": [ "Theo Vos", "Ryan M Barber", "Jashua A Salomon", "Christopher J L Murray" ], "title": "Global, regional, and national incidence, prevalence, and years lived with disability for 301 acute and chronic diseases and injuries in 188 countries, 1990–2013: a systematic analysis for the Global Burden of Disease Study", "venue": null, "year": 2013 }, { "authors": [ "Haofei Wang", "Bertram E. Shi", "Yiwen Wang" ], "title": "Convolutional neural network for target face detection using single-trial EEG signal", "venue": "40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC),", "year": 2018 }, { "authors": [ "Lilian A Weber", "Andreea O Diaconescu", "Christoph Mathys", "André Schmidt", "Michael Kometer", "Franz Vollenweider", "Klaas E Stephan" ], "title": "Ketamine affects prediction errors about statistical regularities: a computational single-trial analysis of the mismatch negativity", "venue": "Journal of Neuroscience,", "year": 2020 }, { "authors": [ "Anna Weinberg", "Stewart A Shankman" ], "title": "Blunted reward processing in remitted melancholic depression", "venue": "Clinical Psychological Science,", "year": 2017 }, { "authors": [ "Anna Weinberg", "Greg Perlman", "Roman Kotov", "Greg Hajcak" ], "title": "Depression and reduced neural response to emotional images: Distinction from anxiety, and importance of symptom dimensions and age of onset", "venue": "Journal of abnormal psychology,", "year": 2016 }, { "authors": [ "Harvey A Whiteford", "Louisa Degenhardt", "Christopher J L Murray", "Theo Vos" ], "title": "Global burden of disease attributable to mental and substance use disorders: findings from the Global Burden of Disease", "venue": "Study", "year": 2010 } ]
[ { "heading": "1 INTRODUCTION", "text": "Mental health disorders make up one of the main causes of the overall disease burden worldwide (Vos et al., 2013), with depression (e.g., Major Depressive Disorder, MDD) believed to be the second leading cause of disability (Lozano et al., 2013; Whiteford et al., 2013), and around 17% of the population experiencing its symptoms at some point throughout their lifetime (McManus et al., 2016; 2009; Kessler et al., 1993; Lim et al., 2018). At the same time diagnosing mental health disorders has many well-identified limitations (Insel et al., 2010). Despite the existence of diagnostic manuals ∗Equal contribution †{ghonk,irinah,nthigpen,vmiskovic,katielink,sunnyd,pramodg}@google.com ‡JK is now at Humboldt-Universität zu Berlin, Berlin, Germany; julia.klawohn@hu-berlin.de §greg.hajcak@med.fsu.edu\nlike Structured Clinical Interview for Diagnostic and Statistical Manual of Mental Disorders (SCID) (DSM-V, 2013), diagnostic consistency between expert psychiatrists and psychologists with decades of professional training can be low, resulting in different diagnoses in upwards of 30% of the cases (Cohen’s Kappa = 0.66) (Lobbestael et al., 2011). Even if higher inter-rater reliability was achieved, many psychological disorders do not have a fixed symptom profile, with depression alone having many hundreds of possible symptom combinations (Fried & Nesse, 2015). This means that any two people with the same SCID diagnosis can exhibit entirely different symptom expressions. This is a core challenge for developing an objective, symptom-driven diagnostic tool in this domain.\nElectroencephalography (EEG) is a measurement of post-synaptic electrical potentials that can be taken non-invasively at the scalp. EEG signals can function as important biomarkers of clinical disorders (Hajcak et al., 2019) but they are difficult to clean and interpret at scale. For example, components of the EEG signal can often significantly overlap or interfere with each other. Furthermore, nearby electronics, line noise, hardware quality, signal drift and other variations in the electrode–scalp connection can all distort the recorded EEG signal. Hence, the extraction of EEG data of sufficient quality is usually a laborious, semi-automated process executed by lab technicians with extensive training. A typical EEG analysis pipeline consists of collecting EEG recordings evoked from a large number of stimulus presentations (trials) in order to have sufficient data to average out the noise. Independent Components Analysis (ICA) is often used to visually identify and remove the component that corresponds to eye blinks (Delorme & Makeig, 2004; Makeig et al., 2004; Jung et al., 2000) (although see Weber et al. (2020); Nolan et al. (2010) as examples of fully automated artifact removal pipelines) . This can be followed by a trial rejecton stage where anomalous trials are identified and removed from the EEG data scroll, sometimes also through visual examination. The cleaned up EEG recordings from a large number of trials are then averaged to produce an Event Related Potential (ERP) (Luck, 2012). This allows a clinician to extract specific ERP components relevant to the clinical factor of interest, average out the event-locked activity within them, and then either perform a statistical group comparison, or—in\nthe case of the diagnosis classification goal—apply an off-the-shelf classifier, like Logistic Regression (LR) to obtain the final diagnostic results. 
Some more advanced classification approaches might include Support Vector Machines (SVM), Linear Discriminant Analysis (LDA), or Random Forest (RF) (Parvar et al., 2015; Güler & Übeyli, 2007; Subasi & Gursoy, 2010; Tomioka et al., 2007; Bashivan et al., 2016).
To summarise, EEG recordings are noisy measures of electric activity from across the brain. There is evidence that these signals are useful as markers of depression, but we lack understanding of what aspects of depression they index. Furthermore, the field of clinical psychopathology still lacks consensus on the etiopathogenesis of mental health disorders, which means that there are no "ground truth" diagnostic labels. Hence, while EEG is routinely used to diagnose conditions like epilepsy (Smith, 2005), memory disorders (Stam et al., 1994), or sleep disorders (Karakis et al., 2012), its promise as a reliable diagnostic tool for clinical conditions like depression has not been fully realised so far. In order to make EEG a viable diagnostic tool for a broader set of clinical conditions, it is important to have an automated pipeline for extracting the relevant interpretable biomarker correlates from the (preferably individual-trial) EEG data in a robust manner. Furthermore, this process should not depend fully on diagnostic labels, which are often subjective and at best represent highly heterogeneous classes.
Recent advances in deep learning have prompted research into end-to-end classification of the EEG signal using convolutional and/or recurrent neural networks (Bashivan et al., 2016; Mirowski et al., 2009; Cecotti & Graser, 2011; Güler et al., 2005; Wang et al., 2018; Farahat et al., 2019; Solon et al., 2019; Cecotti et al., 2014), holding the promise of automated extraction of relevant biomarker correlates. However, deep classifiers operate best in the big data regime with clean, well-balanced ground truth classification targets. In contrast, even the largest of EEG datasets typically contain only a few hundred datapoints, and the classification labels are subjective, noisy and unbalanced, with the majority of the data coming from healthy control participants. Hence, in order to utilise the benefits of deep learning but avoid the pitfalls of over-reliance on the classification labels, we propose a two-step pipeline consisting of unsupervised representation learning, followed by supervised mapping of the pre-trained representation to the latest version of the available diagnostic labels. The hope is that the unsupervised learning step would denoise the input signal and extract the broad statistical regularities hidden in it, thus serving as an alternative to the existing automatic EEG pre-processing pipelines (Weber et al., 2020; Nolan et al., 2010) while minimising the need for a priori knowledge, resulting in a representation that can continue to be useful even if the label taxonomy evolves.
Recently, great progress has been made in the field of deep unsupervised representation learning (Roy et al., 2019; Devlin et al., 2018; Brown et al., 2020; Chen et al., 2020b; Grill et al., 2020; Chen et al., 2020a; Higgins et al., 2017; Burgess et al., 2019). Disentangled representation learning is a branch of deep unsupervised learning that produces interpretable factorised low-dimensional representations of the training data (Bengio et al., 2013; Higgins et al., 2017).
Given the requirement for model interpretability in our use-case, we use Beta Variational Autoencoders (β-VAE) (Higgins et al., 2017)—one of the state-of-the-art unsupervised disentangled representation learning methods—to discover low-dimensional disentangled representations of the EEG data. We then train the recently proposed Symbol–Concept Association Network (SCAN) (Higgins et al., 2018) to map the available classification labels to the representations learnt by the β-VAE (see Fig. 1). We demonstrate that our proposed pipeline results in better classification accuracy than the typical approach for extracting a known ERP pattern for use as a biomarker—a process that is often heavily influenced by a priori knowledge. This holds true when predicting a number of factors, including age, gender, and depression diagnosis. Furthermore, SCAN is able to produce arguably interpretable classification recommendations, whereby its decisions on different clinical factors are grounded in a small number of (often single) latent dimensions of the β-VAE, allowing the clinicians an opportunity to interpret the recommendations produced by SCAN, and to visualise what aspects of the EEG signal are associated with the classification decision post-hoc. This opens up the opportunity to use our proposed pipeline as a tool for discovering new EEG biomarkers. We validate this by "re-discovering" a known biomarker for depression. Finally, we demonstrate that once a β-VAE is pre-trained on ERP signals, it can often produce ERP-like reconstructions even when presented with single noisy EEG trajectories. Furthermore, the representations inferred from single EEG trials produce good classification results, still outperforming the canonical baseline method. This suggests that once a good disentangled representation is learnt, the model can be used online as new EEG data is being recorded, thus lowering the burden of keeping potentially vulnerable participants in the lab for extended recording sessions." }, { "heading": "2 METHODS", "text": "" }, { "heading": "2.1 DATA", "text": "Anhedonia. This work targets one of the two cardinal symptoms of depression—anhedonia. Anhedonia is the lack of pleasure and/or interest in previously pleasurable stimuli and activities (DSM-V, 2013). One established approach for objectively quantifying this symptom is the use of EEG to measure neural responses elicited by emotionally salient visual stimuli. Research in this domain has uncovered a stereotyped neural activation pattern in healthy control participants, where emotionally-salient stimuli evoke a Late Positive Potential (LPP) in ERPs—the averaged timeseries of stimulus time-locked EEG recordings. This pattern has been identified as a potential biomarker for depression because (on average) this positive deflection in amplitude is attenuated or absent in individuals who exhibit symptoms of anhedonia (Brush et al., 2018; Foti et al., 2010; Klawohn et al., 2020; MacNamara et al., 2016; Weinberg et al., 2016; Weinberg & Shankman, 2017).
Participants. The data were collected as part of a mental health study across multiple laboratory sites. The multi-site aspect of the study meant that more data could be pooled together; however, it also meant that the data were noisier. Participants (N = 758, mean age = 16.7, age range = [11.0, 59.8], 398 female) were samples of healthy controls (n_HC = 485) and people diagnosed with depression (among other mental illnesses) (see Sec. A.1.1 and Tbl. A1 for further breakdown).
Stimuli and Experimental Design.
Participants were shown a series of 80 images from the International Affective Picture System (IAPS) (Lang et al., 2008), presented in random order up to 40 times each. The images varied in valence: either positive (affiliative scenes or cute animals designed to elicit the LPP ERP component) or neutral (objects or scenes with people). Each image trial consisted of a white fixation cross presented for a random duration between 1000-2000 ms (square window), followed by a black and white image presented for 1000 ms.
EEG Preprocessing. While participants completed the picture viewing task, EEG was continuously recorded. Each picture trial was then segmented to contain a 200 ms pre-stimulus baseline and a 1200 ms post-stimulus interval. The raw EEG signal was digitized, bandpass filtered, and cleared of the eye movement artifacts and anomalous trials as described in Sec. A.1.2.
Classification labels. The following classification labels were used in this study: age (adult or child), gender (male or female), study site, and the presence or absence of two clinical conditions: depression diagnosis and a broader Axis 1 disorder diagnosis. All classification labels were binary, apart from study site, which contained four possible values corresponding to four different sites where the data were collected. Participants 18 years of age and older were classified as adults. Gender was classified based on self-reported values. Positive depression labels (n=110) include all participants that were diagnosed with Major Depressive Disorder (MDD), Persistent Depressive Disorder (PDD), and depressive disorder NOS (not-otherwise-specified) by expert clinicians through a clinical interview (e.g., SCID for adults, KSADS for children). Axis 1 is a broad category consisting of the most prevalent psychological disorders in the population (now discontinued in DSM-V) that excludes intellectual disabilities and personality disorder (DSM-V, 2013). Positive Axis 1 labels (n=273) encompassed all participants with positive depression labels plus individuals diagnosed with Cyclothymia, Dysthymia, anxiety disorders, mood disorders (e.g., Bipolar Disorder), panic disorders, and substance and eating disorders (all of which are sparse). The Axis 1 class is provided to compare model behavior on a transdiagnostic measurement of psychopathology.1 While recruitment for the study was primarily focused on depression, the SCID produces a large set of diagnostic decisions, and we collapsed this sparser set of positive diagnoses into the existing Axis 1 DSM-IV superordinate category. We include this label in modeling and analysis to maximize the number of positive labels for training and evaluation and to give the reader a sense of what the algorithm may have learned that is generalizable across disorders—akin to the P factor (Caspi et al., 2014).
1Disorders in this class that are present in the data include Major Depressive Disorder, Persistent Depressive Disorder, Depression NOS, Cyclothymia, Dysthymia, Bipolar I, Bipolar II, Bipolar NOS, Mania, Hypomania, Agoraphobia, Social Phobia, Separation Anxiety, Generalized Anxiety Disorder, Panic Disorder, Panic Disorder with Agoraphobia, Anorexia, Bulimia, Eating disorder NOS, Alcohol Abuse, Alcohol Dependence, and Substance Abuse and Substance Dependence disorders." }, { "heading": "2.2 REPRESENTATION LEARNING", "text": "Canonical LPP analysis baseline. The canonical approach for extracting the Late Positive Potential (LPP) effect serves as the baseline for this work.
The LPP effect is calculated as the average amplitude difference between ERP waveforms (i.e., averaged stimulus time-locked EEG segments) evoked from emotionally-salient and neutral stimuli. Before the delta was calculated, each ERP was normalised with respect to the baseline average activity within the 100 ms window preceding stimulus onset. Finally, the normalised delta signal within the 300–700 ms window after stimulus onset was averaged to provide the canonical baseline LPP representation.
Autoencoder. An AutoEncoder (AE) is a deep neural network approach for non-linear dimensionality reduction (Hinton & Salakhutdinov, 2006; Baldi, 2012). A typical architecture consists of an encoder network, which projects the input high-dimensional data x into a low-dimensional representation z, and a decoder network that is the inverse of the encoder, projecting the representation z back into the reconstruction of the original high-dimensional data f(x; φ, θ), where φ and θ are the parameters of the encoder and decoder respectively (see Fig. 1 for model schematic). The AE is trained through backpropagation (Rumelhart et al., 1986) using the reconstruction objective:
$\mathcal{L}_{AE} = \mathbb{E}_{p(x)}\left[ \lVert f(x;\phi,\theta) - x \rVert^2 \right]$
The input to the AE in our case is a 256×6 "image" of the EEG signal (see Fig. 1), where 256 corresponds to the 1024 ms of the recorded EEG trajectory sampled at 250 Hz and pre-processed as described in Sec. A.1.2, and 6 corresponds to three electrodes per each of the two image valence conditions (i.e., stimulus classes): neutral and positive. These input images were further normalised to the [0, 1] range across all channels before being presented to the AE.
The AE was parametrised to have two convolutional encoding layers with 32 filters each of size 6, with strides of step size 2 along the time axis, followed by a single fully connected layer of size 128, projecting into a 10-dimensional representation z. The decoder was the inverse of the encoder. Overall, the model consisted of around 106,752 parameters. The model had ReLU activations throughout, and was optimised using the Adam optimizer with a learning rate of 1e-4 over 1 million iterations with batch size 16.
β-Variational Autoencoder. A β-Variational Autoencoder (β-VAE) (Higgins et al., 2017) is a generative model that aims to learn a disentangled latent representation z of input data x by augmenting the Variational Autoencoder (VAE) framework (Rezende et al., 2014; Kingma & Welling, 2014) (see Sec. A.2.1 for more details) with an additional β hyperparameter. Intuitively, a disentangled representation is a factorised latent distribution where each factor corresponds to an interpretable transformation of the training data (e.g., in a disentangled representation of a visual scene, individual factors may represent a change in lighting or object position). A neural network implementation of a β-VAE consists of an inference network (equivalent to the AE encoder) that takes inputs x and parameterises the (disentangled) posterior distribution q(z|x), and a generative network (equivalent to the AE decoder) that takes a sample from the inferred posterior distribution $\hat{z} \sim \mathcal{N}(\mu(z|x), \sigma(z|x))$ and attempts to reconstruct the original image (see Fig. 1).
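For concreteness, below is a minimal PyTorch sketch of the shared encoder layout just described (two convolutional layers with 32 filters of size 6 and stride 2 along time, a 128-unit fully connected layer, and a 10-dimensional latent). The padding, the reading of the 256×6 input as a 6-channel 1-D signal, and the Gaussian output heads are illustrative assumptions rather than the authors' exact configuration; the plain AE uses the mean head alone as its code, while the β-VAE uses both heads, and the decoder mirrors this layout with transposed convolutions.

```python
# Illustrative sketch of the shared AE / beta-VAE encoder (assumed padding and
# 1-D reading of the 256x6 input; not the authors' exact configuration).
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, z_dim=10):
        super().__init__()
        self.conv = nn.Sequential(          # two conv layers, 32 filters of
            nn.Conv1d(6, 32, kernel_size=6, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=6, stride=2, padding=2), nn.ReLU(),
        )                                   # size 6, stride 2 along time
        self.fc = nn.Sequential(nn.Linear(32 * 64, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)     # AE code / beta-VAE posterior mean
        self.logvar = nn.Linear(128, z_dim) # beta-VAE posterior log-variance

    def forward(self, x):                   # x: (batch, 6, 256), values in [0, 1]
        h = self.fc(self.conv(x).flatten(1))
        return self.mu(h), self.logvar(h)
```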
The model is trained through a two-part loss objective:
$\mathcal{L}_{\beta\text{-VAE}} = \mathbb{E}_{p(x)}\big[ \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - \beta\,\mathrm{KL}(q_\phi(z|x)\,\|\,p(z)) \big]$
where p(x) is the probability of the input data, q(z|x) is the learnt posterior over the latent units given the data, p(z) is the unit Gaussian prior with a diagonal covariance matrix $\mathcal{N}(0, I)$, φ and θ are the parameters of the inference (encoder) and generative (decoder) networks respectively, and β is a hyperparameter that controls the degree of disentangling achieved by the model during training. Intuitively, the objective consists of a reconstruction term (which aims to increase the log-likelihood of the observations) and a compression term (which aims to reduce the Kullback-Leibler (KL) divergence between the inferred posterior and the prior). Typically a β > 1 is necessary to achieve good disentangling; however, the exact value differs for different datasets. In order to find a good value of β to disentangle our EEG dataset, we perform a hyperparameter search, training ten models with different random initialisations for each of ten values of β ∈ [0.075, 2.0] sampled uniformly. Well disentangled β-VAE models were selected using the Unsupervised Disentanglement Ranking (UDR) score (Duan et al., 2019) described in Sec. A.4 (see Fig. A1 for a visualisation of the resulting UDR scores). All β-VAE models had the same architecture as the AE (note that the AE can be seen as a special case of the β-VAE with β = 0), and were trained in the same manner and on the same data, pre-processed in the same way to lie in the [0, 1] range." }, { "heading": "2.3 CLASSIFICATION", "text": "Baseline classifiers. To evaluate the quality of a representation in terms of how useful it is for classifying different clinical factors, we applied a range of baseline classification algorithms: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR) and Linear Discriminant Analysis (LDA) (see Sec. A.3 for details). For all classification results we report the posterior distribution of balanced accuracy (Carrillo et al., 2014; Brodersen et al., 2010). Balanced accuracy was chosen because it correctly matches chance accuracy even for unbalanced datasets. Chance accuracy and its confidence bounds were calculated according to Müller-Putz et al. (2008).
Symbol–Concept Association Network. The baseline classifiers described above produced uninterpretable decisions. This is undesirable if we were to have a chance at discovering new clinical biomarkers in the EEG data. To address this interpretability challenge, we leverage a recent model proposed for visual concept learning in the machine learning literature—the Symbol–Concept Association Network (SCAN) (Higgins et al., 2018). While SCAN was not originally developed with the classification goal in mind, it has desirable properties to utilise for the current application. In particular, it is able to automatically discover sparse associative relationships between discrete symbols (5-hot classification labels in our case, see Fig. 1) and continuous probability distributions of the disentangled posterior representation discovered by a trained β-VAE model. Furthermore, the associative nature of the grounding used to train SCAN allows it to deal with noisy data gracefully, and to learn successfully from a small number of positive examples and from highly unbalanced datasets. SCAN is in effect another VAE model. In our case it takes 5-hot classification labels y as input, and aims to reconstruct them from the inferred posterior q(z|y) (see Fig. 1 for more details).
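Before turning to SCAN's training objective, the two-part β-VAE loss above can be made concrete. The following PyTorch sketch is illustrative only: the variable names, the Bernoulli reconstruction term for [0, 1]-scaled inputs, and the assumption that the encoder returns the posterior mean and log-variance are ours, not the authors'. Setting β = 0 recovers the plain AE objective, and SCAN's loss below adds one further KL term to the same template.

```python
# Illustrative sketch of the beta-VAE objective (assumes a decoder ending in a
# sigmoid so that binary cross-entropy is a valid reconstruction term).
import torch
import torch.nn.functional as F

def beta_vae_loss(x, encoder, decoder, beta=1.0):
    mu, logvar = encoder(x)                               # q(z|x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterised sample
    x_hat = decoder(z)
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum") / x.size(0)
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl
```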
To train SCAN, the original VAE objective is augmented with an additional KL term that aims to ground the SCAN posterior in the posterior of a pre-trained β-VAE model:
$\mathcal{L}_{SCAN} = \mathbb{E}_{p(y)}\big[ \mathbb{E}_{q_\psi(z_y|y)}[\log p_\gamma(y|z_y)] - \mathrm{KL}(q_\psi(z_y|y)\,\|\,p(z_y)) - \mathrm{KL}(q_\phi(z_x|x)\,\|\,q_\psi(z_y|y)) \big]$
where ψ and γ are the parameters of the SCAN encoder and decoder respectively, q_ψ(z_y|y) is the posterior of SCAN inferred from a 5-hot label y, and q_φ(z_x|x) is the posterior of the pre-trained β-VAE model inferred from an EEG "image" corresponding to the label presented to SCAN. Note that the β-VAE weights are not updated during SCAN training. The extra grounding term KL(q_φ(z_x|x) || q_ψ(z_y|y)) allows SCAN to discover an associative relationship between a subset of disentangled factors and each 1-hot dimension of the given label.
We parametrised the SCAN encoder and decoder as MLPs with two hidden layers of size 128, ReLU non-linearities, and a 10-dimensional posterior to match the dimensionality of the β-VAE representation. The model resulted in around 22,016 parameters. Like the other models, SCAN was trained with batch size 16, over 1 million iterations, with the Adam optimizer and a learning rate of 1e-4." }, { "heading": "3 RESULTS", "text": "Deep representation learning improves classification accuracy. We evaluated the classification accuracy for predicting participant age, gender, study site, and the presence or absence of two clinical labels: depression and Axis 1. We first obtained the canonical LPP representations, and applied Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR) and Linear Discriminant Analysis (LDA) classifiers to obtain balanced classification accuracy. Table A2 and Figure A2 show that LR produced some of the best results overall for the baseline, hence we report only LR results for the β-VAE and AE representations in the main text (see Tables A3-A4 and Figures A3-A4 for the other classifiers applied to the β-VAE and AE representations). Figures 2 and 5 (ERP/ERP train/test condition) demonstrate that representations extracted through β-VAE and AE pretraining resulted in higher overall classification accuracy than those obtained through the baseline canonical LPP pipeline (see also Table A5, ERP/ERP cells). This effect holds across all classification tasks, including depression and Axis 1 diagnoses. Furthermore, the pattern of classification results is in line with what might have been expected by the expert clinicians, whereby age and depression classification accuracy is significantly higher than chance, while gender is harder to decode from the EEG signal.
A similar pattern appears to hold in most cases when all the models are trained on single EEG trials instead of ERPs (SMPL/ERP train/test in Figures 2 and 5, see also Table A5). While the maximum classification accuracy drops for all models when trained from the noisier single EEG trials rather than ERPs, the classification accuracy obtained from β-VAE and AE representations is still often higher than that obtained from the LPP baseline.
Classification based on deep representations generalises better to single EEG trials. One possible clinical application of the proposed classification pipeline is to enable online diagnosis recommendations from single EEG trials. Hence, we tested how well the classification accuracy of the different representations generalises to novel single EEG trial trajectories (*/SMPL in Figures 2 and 5, see also SMPL columns in Tbl. A5).
As expected, the performance often drops when classifying the noisy single EEG trajectories, regardless of whether the representations were pre-trained using ERPs (ERP/SMPL) or single EEG samples (SMPL/SMPL; note that we tested the models with different single EEG samples to those used for training). However, the classification accuracy is still often significantly higher for deep representations compared to the LPP baseline. This suggests that replacing the more manual canonical LPP pipeline with deep representation learning can allow for both better training data efficiency and a reduction in time that the (potentially vulnerable) participants have to spend in the lab by up to 37x, which is the average number of trials per condition that made it through EEG pre-processing in our dataset and were used for generating ERPs.
Deep representation learning reconstructs ERPs from single EEG trials. Since the β-VAE and AE had good classification accuracy when presented with single EEG samples, we tested whether they could also reconstruct ERPs from single EEG trials. Figure 3 shows that this is indeed the case—reconstructions produced by the pre-trained models from single noisy EEG samples look remarkably similar to those produced from the corresponding ERPs (also see Figs. A5-A7 for more examples). Note that this only holds true for models trained using ERP data and not those trained on single EEG samples.
Disentangled representations allow for interpretable classification. To obtain SCAN classification results, we inferred the posterior q_φ(z_x|x) of a pre-trained β-VAE in response to an EEG "image" x. We then used the pre-trained SCAN decoder to obtain the reconstructed label logits p_γ(y|µ(z_x)) using the β-VAE posterior mean as input (Fig. 1, red path). Finally, we applied a softmax over the produced logits to obtain the predicted 5-hot label for the EEG "image". When SCAN was trained on top of a well disentangled β-VAE, it was often able to outperform the canonical LPP baseline in terms of classification accuracy (see Figure 2 and Table A5, SCAN+β-VAE). This is despite the fact that SCAN was not developed with the classification goal in mind. Note, however, that SCAN can only work when trained with disentangled representations. Indeed, SCAN classification performance was in most conditions not distinguishable from chance when trained on top of entangled AE representations (see Figure 2 and Table A5, SCAN+AE). To further confirm the role of disentanglement, we calculated the Spearman correlation between the quality of disentanglement as measured by the UDR score and the final SCAN classification accuracy. Table 1 shows significant correlations for age, depression and Axis 1 diagnoses, suggesting that on average these factors were classified better if representations were more disentangled. The same, however, is not the case for gender or study site. This implies that some of the better disentangled models did not contain information necessary for classifying these factors. Such information loss is in fact a known trade-off of β-VAE training, with more disentangled β-VAE models often compromising on the informativeness of the learnt representation due to the increased compression pressure induced by the higher β values necessary to achieve disentanglement (Higgins et al., 2017; Duan et al., 2019).
When trained on top of well disentangled β-VAE models, SCAN decisions were not only accurate, but they were also based on a small number of disentangled dimensions.
This is unlike the LR classifier, which obtained an average sparsity of just 3.25% compared to the 87.5% for SCAN, even when L1 regularisation was used. Furthermore, the disentangled dimensions used by SCAN were often arguably interpretable, hence making SCAN classification decisions amenable to post-hoc analysis. Figure 4B visualises the inferred SCAN posterior when presented with 1-hot labels corresponding to the male or female gender, one of the four study sites, and the presence or absence of depression and Axis 1 diagnoses. In most cases, SCAN was able to associate the label with a single latent dimension in the pre-trained β-VAE (e.g., gender is represented by latent dimension z_5), and the different values of the same class label corresponded to disjoint narrow Gaussians on those latents (e.g., male gender is represented by a Gaussian $\mathcal{N}(0.89, 0.49)$, while female gender is represented by a Gaussian $\mathcal{N}(-0.82, 0.52)$, both on z_5). We can also visualise what the different β-VAE latents have learnt to represent by plotting their traversals, as shown in Figure 4A (see more in Figs. A8-A10).
Finally, we can also sample from the SCAN posterior $\hat{z}_y \sim \mathcal{N}(\mu(z_y|y), \sigma(z_y|y))$ and reconstruct the resulting ERPs using the β-VAE decoder $p_\theta(x|\hat{z}_y)$ (Fig. 1, purple path). Such samples for the depression label are visualised in Figure 4C. It is clear that the reconstructed ERP samples corresponding to participants with a positive depression diagnosis contain no difference between the positive and neutral trials (overlapping blue and red lines), while those corresponding to participants without a positive depression diagnosis do have an obvious gap between the responses to positive and neutral trials. Hence, we were able to "re-discover" an objective biomarker for the symptom of anhedonia (and depression generally), thus opening up the potential for discovering new biomarkers hidden in EEG data in the future.
4 CONCLUSION
This work provides the first evidence that disentanglement-focused representation learning and the resulting models are powerful measurement tools with immediate applicability to applied and basic research for clinical psychology and electrophysiology. One caveat, however, should be considered: more work is needed to further explore how this approach could be used to produce truly meaningful clinical inferences. Toward that goal, this study is limited because of (1) the choice to focus analysis on the superordinate depression class as opposed to its constituent disorders, (2) the use of Axis 1 as a point of comparison as opposed to a similarly represented comorbid disorder, and (3) the choice to train models without between-participant cross-validation. Clearly, there is a need for follow-up work to examine factors that have more clinical impact, such as differences in types of depression (persistent vs. major depressive disorder) and treatment response.
We have demonstrated that disentangled representation learning can be successfully applied to EEG data, resulting in representations that can be re-used to predict multiple clinical factors through fast supervised re-mapping, outperforming a baseline typical for the field. Our method recovers a representation that can be used to automatically extract denoised Event Related Potentials (ERPs) from novel, single EEG trajectories.
Finally, single factors of the learned disentangled representations often correspond to meaningful markers of clinical factors (as automatically detected by SCAN), allowing for human interpretability and potentially providing novel insight into new biomarkers and the neurofunctional alterations underlying mental disorders. While SCAN does not always produce statistically-reliable accuracy advantages over more traditional methods (all using β-VAE encoding), the ability to show the user exactly how patterns in the raw data manifest across clinical groups of interest is a considerable advantage." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Sarah Laszlo, Gabriella Levine, Phil Watson, Obi Felten, Mustafa Ispir, Edward De Brouwer, the Amber team, Nader Amir and the Center for Understanding and Treating Anxiety, Dr. Kristen Schmidt, Alec Bruchnak, and Nicholas Santopetro." } ]
2021
REPRESENTATION LEARNING FOR IMPROVED INTERPRETABILITY AND CLASSIFICATION ACCURACY OF CLINICAL FACTORS FROM EEG
SP:c1089bb29c0bac6e75d163ef843098a1d8c008da
[ "To train a model with a noisy weakly supervised training set, this paper proposed a momentum prototypes method for label noise correction and OOD sample removal. Noise correction is done by a heuristic rule, that if the prediction is confident enough or the prediction on original label is higher than uniform probability, the label will be kept otherwise it is considered as OOD sample. For training the model, this paper jointly optimizes cross entropy loss on the corrected labels, as well as contrastive loss using prototypical examples and instances. " ]
We propose a webly-supervised representation learning method that does not suffer from the annotation unscalability of supervised learning, nor the computation unscalability of self-supervised learning. Most existing works on webly-supervised representation learning adopt a vanilla supervised learning method without accounting for the prevalent noise in the training data, whereas most prior methods in learning with label noise are less effective for real-world large-scale noisy data. We propose momentum prototypes (MoPro), a simple contrastive learning method that achieves online label noise correction, out-of-distribution sample removal, and representation learning. MoPro achieves state-of-the-art performance on WebVision, a weakly-labeled noisy dataset. MoPro also shows superior performance when the pretrained model is transferred to down-stream image classification and detection tasks. It outperforms the ImageNet supervised pretrained model by +10.5 on 1-shot classification on VOC, and outperforms the best self-supervised pretrained model by +17.3 when finetuned on 1% of ImageNet labeled samples. Furthermore, MoPro is more robust to distribution shifts. Code and pretrained models are available at https://github.com/salesforce/MoPro.
[ { "affiliations": [], "name": "Junnan Li" }, { "affiliations": [], "name": "Caiming Xiong" }, { "affiliations": [], "name": "Steven C.H. Hoi" } ]
[ { "authors": [ "Eric Arazo", "Diego Ortego", "Paul Albert", "Noel E. O’Connor", "Kevin McGuinness" ], "title": "Unsupervised label noise modeling and loss correction", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Mathilde Caron", "Ishan Misra", "Julien Mairal", "Priya Goyal", "Piotr Bojanowski", "Armand Joulin" ], "title": "Unsupervised learning of visual features by contrasting cluster assignments", "venue": "arXiv preprint arXiv:2006.09882,", "year": 2020 }, { "authors": [ "Pengfei Chen", "Benben Liao", "Guangyong Chen", "Shengyu Zhang" ], "title": "Understanding and utilizing deep neural networks trained with noisy labels", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Xinlei Chen", "Abhinav Gupta" ], "title": "Webly supervised learning of convolutional networks", "venue": "In ICCV, pp", "year": 2015 }, { "authors": [ "Xinlei Chen", "Haoqi Fan", "Ross Girshick", "Kaiming He" ], "title": "Improved baselines with momentum contrastive learning", "venue": "arXiv preprint arXiv:2003.04297,", "year": 2020 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Fei-Fei Li" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In CVPR, pp", "year": 2009 }, { "authors": [ "Santosh Kumar Divvala", "Ali Farhadi", "Carlos Guestrin" ], "title": "Learning everything about anything: Webly-supervised visual concept learning", "venue": "In CVPR, pp", "year": 2014 }, { "authors": [ "Mark Everingham", "Luc Van Gool", "Christopher K.I. Williams", "John M. Winn", "Andrew Zisserman" ], "title": "The pascal visual object classes (VOC) challenge", "venue": "International Journal of Computer Vision,", "year": 2010 }, { "authors": [ "Rong-En Fan", "Kai-Wei Chang", "Cho-Jui Hsieh", "Xiang-Rui Wang", "Chih-Jen Lin" ], "title": "LIBLINEAR: A library for large linear classification", "venue": "JMLR, 9:1871–1874,", "year": 2008 }, { "authors": [ "Priya Goyal", "Dhruv Mahajan", "Abhinav Gupta", "Ishan Misra" ], "title": "Scaling and benchmarking selfsupervised visual representation learning", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre H. Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar", "Bilal Piot", "Koray Kavukcuoglu", "Rémi Munos", "Michal Valko" ], "title": "Bootstrap your own latent: A new approach to self-supervised learning", "venue": "arXiv preprint arXiv:2006.07733,", "year": 2020 }, { "authors": [ "Sheng Guo", "Weilin Huang", "Haozhi Zhang", "Chenfan Zhuang", "Dengke Dong", "Matthew R. Scott", "Dinglong Huang" ], "title": "Curriculumnet: Weakly supervised learning from large-scale web images", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Bo Han", "Quanming Yao", "Xingrui Yu", "Gang Niu", "Miao Xu", "Weihua Hu", "Ivor W. 
Tsang", "Masashi Sugiyama" ], "title": "Co-teaching: Robust training of deep neural networks with extremely noisy labels", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": null, "year": 1911 }, { "authors": [ "Dan Hendrycks", "Kevin Zhao", "Steven Basart", "Jacob Steinhardt", "Dawn Song" ], "title": "Natural adversarial examples", "venue": "arXiv preprint arXiv:1907.07174,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Steven Basart", "Norman Mu", "Saurav Kadavath", "Frank Wang", "Evan Dorundo", "Rahul Desai", "Tyler Zhu", "Samyak Parajuli", "Mike Guo" ], "title": "The many faces of robustness: A critical analysis of out-of-distribution generalization", "venue": "arXiv preprint arXiv:2006.16241,", "year": 2020 }, { "authors": [ "Lu Jiang", "Zhengyuan Zhou", "Thomas Leung", "Li-Jia Li", "Li Fei-Fei" ], "title": "Mentornet: Learning datadriven curriculum for very deep neural networks on corrupted labels", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Lu Jiang", "Di Huang", "Mason Liu", "Weilong Yang" ], "title": "Beyond synthetic noise: Deep learning on controlled noisy labels", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Armand Joulin", "Laurens van der Maaten", "Allan Jabri", "Nicolas Vasilache" ], "title": "Learning visual features from large weakly supervised data", "venue": null, "year": 2016 }, { "authors": [ "Bingyi Kang", "Saining Xie", "Marcus Rohrbach", "Zhicheng Yan", "Albert Gordo", "Jiashi Feng", "Yannis Kalantidis" ], "title": "Decoupling representation and classifier for long-tailed recognition", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Alexander Kolesnikov", "Lucas Beyer", "Xiaohua Zhai", "Joan Puigcerver", "Jessica Yung", "Sylvain Gelly", "Neil Houlsby" ], "title": "Large scale learning of general visual representations for transfer", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Ananya Kumar", "Percy Liang", "Tengyu Ma" ], "title": "Verified uncertainty calibration", "venue": null, "year": 2019 }, { "authors": [ "Kuang-Huei Lee", "Xiaodong He", "Lei Zhang", "Linjun Yang" ], "title": "Cleannet: Transfer learning for scalable image classifier training with label noise", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Junnan Li", "Yongkang Wong", "Qi Zhao", "Mohan S. Kankanhalli" ], "title": "Learning to learn from noisy labeled data", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Junnan Li", "Richard Socher", "Steven C.H. Hoi" ], "title": "Dividemix: Learning with noisy labels as semisupervised learning", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Junnan Li", "Pan Zhou", "Caiming Xiong", "Richard Socher", "Steven C.H. Hoi" ], "title": "Prototypical contrastive learning of unsupervised representations", "venue": "arXiv preprint arXiv:2005.04966,", "year": 2020 }, { "authors": [ "Wen Li", "Limin Wang", "Wei Li", "Eirikur Agustsson", "Luc Van Gool" ], "title": "Webvision database: Visual learning and understanding from web data", "venue": "arXiv preprint arXiv:1708.02862,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge J. Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C. 
Lawrence Zitnick" ], "title": "Microsoft COCO: common objects in context", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "Tsung-Yi Lin", "Piotr Dollár", "Ross B. Girshick", "Kaiming He", "Bharath Hariharan", "Serge J. Belongie" ], "title": "Feature pyramid networks for object detection", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Xingjun Ma", "Yisen Wang", "Michael E. Houle", "Shuo Zhou", "Sarah M. Erfani", "Shu-Tao Xia", "Sudanthi N.R. Wijewickrema", "James Bailey" ], "title": "Dimensionality-driven learning with noisy labels", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Dhruv Mahajan", "Ross B. Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Scott E. Reed", "Honglak Lee", "Dragomir Anguelov", "Christian Szegedy", "Dumitru Erhan", "Andrew Rabinovich" ], "title": "Training deep neural networks on noisy labels with bootstrapping", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Chen Sun", "Abhinav Shrivastava", "Saurabh Singh", "Abhinav Gupta" ], "title": "Revisiting unreasonable effectiveness of data in deep learning era", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Daiki Tanaka", "Daiki Ikami", "Toshihiko Yamasaki", "Kiyoharu Aizawa" ], "title": "Joint optimization framework for learning with noisy labels", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Yi Tu", "Li Niu", "Dawei Cheng", "Liqing Zhang" ], "title": "Protonet: Learning from web data with memory", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Arash Vahdat" ], "title": "Toward robustness against label noise in training deep discriminative neural networks", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Andreas Veit", "Neil Alldrin", "Gal Chechik", "Ivan Krasin", "Abhinav Gupta", "Serge J. Belongie" ], "title": "Learning from noisy large-scale datasets with minimal supervision", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Yisen Wang", "Weiyang Liu", "Xingjun Ma", "James Bailey", "Hongyuan Zha", "Le Song", "Shu-Tao Xia" ], "title": "Iterative learning with open-set noisy labels", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X. 
Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via nonparametric instance discrimination", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Tong Xiao", "Tian Xia", "Yi Yang", "Chang Huang", "Xiaogang Wang" ], "title": "Learning from massive noisy labeled data for image classification", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Jingkang Yang", "Litong Feng", "Weirong Chen", "Xiaopeng Yan", "Huabin Zheng", "Ping Luo", "Wayne Zhang" ], "title": "Webly supervised image classification with self-contained confidence", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Kun Yi", "Jianxin Wu" ], "title": "Probabilistic end-to-end noise correction for learning with noisy labels", "venue": null, "year": 2019 }, { "authors": [ "Xiaohua Zhai", "Avital Oliver", "Alexander Kolesnikov", "Lucas Beyer" ], "title": "S4l: Self-supervised semisupervised learning", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Weihe Zhang", "Yali Wang", "Yu Qiao" ], "title": "Metacleaner: Learning to hallucinate clean representations for noisy-labeled visual recognition", "venue": null, "year": 2019 }, { "authors": [ "Zizhao Zhang", "Han Zhang", "Sercan Ömer Arik", "Honglak Lee", "Tomas Pfister" ], "title": "Distilling effective supervision from severe label noise", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Bolei Zhou", "Àgata Lapedriza", "Jianxiong Xiao", "Antonio Torralba", "Aude Oliva" ], "title": "Learning deep features for scene recognition using places database", "venue": "In NIPS, pp", "year": 2014 }, { "authors": [ "C APPENDIX" ], "title": "TRANSFER LEARNING IMPLEMENTATION DETAILS For low-shot image classification on Places and VOC, we follow the procedure in Li et al. (2020b) and train linear SVMs on the global average pooling features of ResNet-50. We preprocess all images by resizing to 256 pixels along the shorter side and taking a 224 × 224 center crop", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Large-scale datasets with human-annotated labels have revolutionized computer vision. Supervised pretraining on ImageNet (Deng et al., 2009) has been the de facto formula of success for almost all state-of-the-art visual perception models. However, it is extremely labor intensive to manually annotate millions of images, which makes it a non-scalable solution. One alternative to reduce annotation cost is self-supervised representation learning, which leverages unlabeled data. However, self-supervised learning methods (Goyal et al., 2019; He et al., 2019; Chen et al., 2020a; Li et al., 2020b) have yet consistently shown superior performance compared to supervised learning, especially when transferred to downstream tasks with limited labels.\nWith the help of commercial search engines, photo-sharing websites, and social media platforms, there is near-infinite amount of weakly-labeled images available on the web. Several works have exploited the scalable source of web images and demonstrated promising results with weblysupervised representation learning (Mahajan et al., 2018; Sun et al., 2017; Li et al., 2017; Kolesnikov et al., 2020). However, there exists two competing claims on whether weakly-labeled noisy datasets lead to worse generalization performance. One claim argues that the effect of noise can be overpowered by the scale of data, and simply applies standard supervised learning method on web datasets (Mahajan et al., 2018; Sun et al., 2017; Li et al., 2017; Kolesnikov et al., 2020). The other claim argues that deep models can easily memorize noisy labels, resulting in worse generalization (Zhang et al., 2017; Ma et al., 2018). In this paper, we show that both claims are partially true. While increasing the size of data does improve the model’s robustness to noise, our method can substantially boost the representation learning performance by addressing noise.\nThere exists a large body of literature on learning with label noise (Jiang et al., 2018; Han et al., 2018; Guo et al., 2018; Tanaka et al., 2018; Arazo et al., 2019; Li et al., 2020a). However, existing methods have several limitations that make them less effective for webly-supervised representation learning. First, most methods do not consider out-of-distribution (OOD) samples, which is a major\nsource of noise in real-world web datasets. Second, many methods perform computation-heavy procedures for noise cleaning (Jiang et al., 2018; Li et al., 2019; 2020a), or require access to a set of samples with clean labels (Vahdat, 2017; Veit et al., 2017; Lee et al., 2018), which limit their scalability in practice.\nWe propose a new method for efficient representation learning from weakly-labeled web images. Our method is inspired by recent developments in contrastive learning for self-supervised learning (He et al., 2019; Chen et al., 2020a; Li et al., 2020b) We introduce Momentum Prototypes (MoPro), a simple component which is effective in label noise correction, OOD sample removal, and representation learning. A visual explanation of our method is shown in Figure 1. We use a deep network to project images into normalized low-dimensional embeddings, and calculate the prototype for a class as the moving-average embedding for clean samples in that class. We train the network such that embeddings are pulled closer to their corresponding prototypes, while pushed away from other prototypes. 
Images with corrupted labels are corrected either as another class or as an OOD sample based on their distance to the momentum prototypes.\nWe experimentally show that:\n• MoPro achieves state-of-the-art performance on the upstream weakly-supervised learning task. • MoPro substantially improves representation learning performance when the pretrained model is\ntransferred to downstream image classification and object detection tasks. For the first time, we show that weakly-supervised representation learning achieves similar performance as supervised representation learning, under the same data and computation budget. With a larger web dataset, MoPro outperforms ImageNet supervised learning by a large margin.\n• MoPro learns a more robust and calibrated model that generalizes better to distribution variations." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 WEBLY-SUPERVISED REPRESENTATION LEARNING", "text": "A number of prior works exploit large web datasets for visual representation learning (Divvala et al., 2014; Chen & Gupta, 2015; Joulin et al., 2016; Mahajan et al., 2018; Sun et al., 2017; Li et al., 2017; Kolesnikov et al., 2020). These datasets contain a considerable amount of noise. Approximately 20% of the labels in the JMT-300M dataset (Sun et al., 2017) are noisy, whereas 34% of images in the WebVision dataset (Li et al., 2017) are considered outliers. Surprisingly, most prior works have chosen to ignore the noise and applied vanilla supervised method, with the claim that the scale of data can overpower the noise (Mahajan et al., 2018; Sun et al., 2017; Li et al., 2017). However, we show that supervised method cannot fully harvest the power of large-scale weakly-labeled datasets.\nOur method achieves substantial improvement by addressing noise, and advances the potential of webly-supervised representation learning." }, { "heading": "2.2 LEARNING WITH LABEL NOISE", "text": "Learning with label noise has been widely studied. Some methods require access to a small set of clean samples (Xiao et al., 2015; Vahdat, 2017; Veit et al., 2017; Lee et al., 2018; Zhang et al., 2020), and other methods assume that no clean labels are available. There exist two major types of approaches. The first type performs label correction using predictions from the network (Reed et al., 2015; Ma et al., 2018; Tanaka et al., 2018; Yi & Wu, 2019; Yang et al., 2020). The second type separates clean samples from corrupted samples, and trains the model on clean samples (Han et al., 2018; Arazo et al., 2019; Jiang et al., 2018; Wang et al., 2018; Chen et al., 2019; Li et al., 2020a). However, existing methods have yet shown promising results for large-scale weakly-supervised representation learning. The main reasons include: (1) most methods do not consider OOD samples, which commonly occur in real-world web datasets; (2) most methods are computational-heavy due to co-training (Han et al., 2018; Li et al., 2020a; Jiang et al., 2018; 2020), iterative training (Tanaka et al., 2018; Yi & Wu, 2019; Wang et al., 2018; Chen et al., 2019), or meta-learning (Li et al., 2019; Zhang et al., 2019).\nDifferent from existing methods, MoPro achieves both label correction and OOD sample removal on-the-fly with a single step, based on the similarity between an image embedding and the momentum prototypes. MoPro also leverages contrastive learning to learn a robust embedding space." 
}, { "heading": "2.3 SELF-SUPERVISED REPRESENTATION LEARNING", "text": "Self-supervised methods have been proposed for representation learning using unlabeled data. The recent developments in self-supervised representation learning can be attributed to contrastive learning. Most methods (He et al., 2019; Chen et al., 2020a; Oord et al., 2018; Wu et al., 2018) leverage the task of instance discrimination, where augmented crops from the same source image are enforced to have similar embeddings. Prototypical contrastive learning (PCL) (Li et al., 2020b) performs clustering to find prototypical embeddings, and enforces an image embedding to be similar to its assigned prototypes. Different from PCL, we update prototypes on-the-fly in a weakly-supervised setting, where the momentum prototype of a class is the moving average of clean samples’ embeddings. Furthermore, we jointly optimize two contrastive losses and a cross-entropy loss.\nCurrent self-supervised representation learning methods are limited in (1) inferior performance in low-shot task adaptation, (2) huge computation cost, and (3) inadequate to harvest larger datasets. We show that weakly-supervised learning with MoPro addresses these limitations." }, { "heading": "3 METHOD", "text": "In this section, we delineate the details of our method. First, we introduce the components in our representation learning framework. Then, we describe the loss functions. Finally, we explain the noise correction procedure for label correction and OOD sample removal. A pseudo-code of MoPro is provided in appendix B." }, { "heading": "3.1 REPRESENTATION LEARNING FRAMEWORK", "text": "Our proposed framework consists of the following components. Figure 2 gives an illustration.\n• A noisy training dataset {(xi, yi)}ni=1, where xi is an image and yi ∈ {1, ...,K} is its class label. • A pseudo-label ŷi for each image xi, which is its corrected label. Details for generating the\npseudo-label is explained in Sec 3.3. • An encoder network, which maps an augmented image x̃i to a representation vector vi ∈ Rde .\nWe experiment with ResNet-50 (He et al., 2016) as the encoder, where the activations of the final global pooling layer (de = 2048) are used as the representation vector. • A classifier (a fully-connected layer followed by softmax) which receives the representation vi as input and outputs class predictions pi.\n• A projection network, which maps the representation vi into a low-dimensional embedding zi ∈ Rdp (dp = 128). zi is always normalized to the unit sphere. Following SimCLR (Chen et al., 2020a), we use a MLP with one hidden layer as the projection network. • Momentum embeddings z′i generated by a momentum encoder. The momentum encoder has the same architecture as the encoder followed by the projection network, and its parameters are the moving-average of the encoder’s and the projection network’s parameters. Same as in MoCo (He et al., 2019), we maintain a queue of momentum embeddings of past samples. • Momentum prototypes C ∈ Rdp×K . The momentum prototype of the k-th class, ck, is the normalized moving-average embedding for samples with pseudo-label ŷi = k." }, { "heading": "3.2 CONTRASTIVE LOSS", "text": "As illustrated in Figure 1, we aim to learn an embedding space where samples from the same class gather around its class prototype, while samples from different classes are seperated. 
We achieve it with two contrastive losses: (1) a prototypical contrastive loss Lpro which increases the similarity between an embedding and its corresponding class prototype, (zi, cŷi), in contrast to other prototypes; (2) an instance contrastive loss Lins which increases the similarity between two embeddings of the same source image, (zi, z′i), in contrast to embeddings of other images. Specifically, the contrastive losses are defined as:\nLipro = − log exp(zi · cŷi/τ)∑K k=1 exp(zi · ck/τ) , Liins = − log exp(zi · z′i/τ)∑R r=0 exp(zi · z′r/τ) , (1)\nwhere τ is a temperature parameter, and ŷi is the pseudo-label. We use R negative momentum embeddings to construct the denominator of the instance contrastive loss.\nWe train the classifier with cross-entropy loss, using pseudo-labels as targets.\nLice = − log(p ŷi i ) (2)\nWe jointly optimize the contrastive losses and the classification loss. The training objective is:\nL = n∑ i=1 (Lice + λproLipro + λinsLiins) (3)\nFor simplicity, we set λpro = λins = 1 for all experiments." }, { "heading": "3.3 NOISE CORRECTION", "text": "We propose a simple yet effective method for online noise correction during training, which cleans label noise and removes OOD samples. For each sample, we generate a soft pseudo-label qi by\ncombining the classifier’s output probability pi with si, a class probability distribution calculated using the sample’s similarity w.r.t the momentum prototypes:\nqi = αpi + (1− α)si, ski = exp(zi · ck/τ)∑K k=1 exp(zi · ck/τ) . (4)\nwhere the combination weight is simply set as α = 0.5 in all experiments.\nWe convert qi into a hard pseudo-label ŷi based on the following rules: (1) if the highest score of qi is above certain threshold T , use the class with the highest score as the pseudo-label; (2) otherwise, if the score for the original label yi is higher than uniform probability, use yi as the pseudo-label; (3) otherwise, label it as an OOD sample.\nŷi = argmaxk q k i if maxk q k i > T , yi elseif q yi i > 1/K,\nOOD otherwise. (5)\nWe remove OOD samples from both the cross-entropy loss and the prototypical contrastive loss so that they do not affect class-specific learning, but include them in the instance contrastive loss to further separate them from in-distribution samples. Examples of OOD images and corrected pseudo-labels are shown in the appendices." }, { "heading": "3.4 MOMENTUM PROTOTYPES", "text": "For each class k, we calculate its momentum prototype as a moving-average of the normalized embeddings for samples with pseudo-label k. Specifically, we update ck by:\nck ← Normalize(mck + (1−m)zi), ∀i ∈ {i | ŷi = k}, (6) where Normalize(c) = c/ ‖c‖2. The momentum coefficient m is set 0.999 in our experiments." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASET FOR UPSTREAM TRAINING", "text": "We use the WebVision (Li et al., 2017) dataset as the noisy training data. It consists of images automatically crawled from Google and Flickr, using visual concepts from ImageNet as queries. We experiment with three versions of WebVision with different sizes: (1) WebVision-V1.0 contains 2.44m images with the same classes as the ImageNet-1k (ILSVRC 2012) dataset; (2) WebVisionV0.5 is a randomly sampled subset of WebVision-V1.0, which contains the same number of images (1.28m) as ImageNet-1k; (3) WebVision-V2.0 contains 16m images with 5k classes." 
}, { "heading": "4.2 IMPLEMENTATION DETAILS", "text": "We follow standard settings for ImageNet training: batch size is 256; total number of epochs is 90; optimizer is SGD with a momentum of 0.9; initial learning rate is 0.1, decayed at 40 and 80 epochs; weight decay is 0.0001. We use ResNet-50 (He et al., 2016) as the encoder. For MoProspecific hyperparameters, we set τ = 0.1, α = 0.5, T = 0.8 (T = 0.6 for WebVision-V2.0). The momentum for both the momentum encoder and momentum prototypes is set as 0.999. The queue to store momentum embeddings has a size of 8192. We apply standard data augmentation (crop and horizontal flip) to the encoder’s input, and stronger data augmentation (color changes in MoCo (He et al., 2019)) to the momentum encoder’s input. We warm-up the model for 10 epochs by training on all samples with original labels, before applying noise correction." }, { "heading": "4.3 UPSTREAM TASK PERFORMANCE", "text": "In Table 1, we compare MoPro with existing weakly-supervised learning methods trained on WebVision-V1.0, where MoPro achieves state-of-the-art performance. Since the training dataset\nhas imbalanced number of samples per-class, inspired by Kang et al. (2020), we perform the following decoupled training steps to re-balance the classifier: (1) pretrain the model with MoPro; (2) perform noise correction on the training data using the pretrained model, following the method in Section 3.3; (3) keep the pretrained encoder fixed and finetune the classifier on the cleaned dataset, using square-root data sampling (Mahajan et al., 2018) which balances the classes. We retrain the classifier for 15 epochs, using a learning rate of 0.01 which is decayed at 5 and 10 epochs. Surprisingly, we also find that a vanilla cross-entropy method with decoupled classifier re-balancing can also achieve competitive performance, outperforming most existing baselines." }, { "heading": "5 TRANSFER LEARNING", "text": "In this section, we transfer weakly-supervised learned models to a variety of downstream tasks. We show that MoPro yields superior performance in image classification, object detection, instance segmentation, and obtains better robustness to domain shifts. Implementation details for the transfer learning experiments are described in appendix C." }, { "heading": "5.1 LOW-SHOT IMAGE CLASSIFICATION ON FIXED REPRESENTATION", "text": "First, we transfer the learned representation to downstream tasks with few training samples. We perform low-shot classification on two datasets: PASCAL VOC2007 (Everingham et al., 2010) for object classification and Places205 (Zhou et al., 2014) for scene recognition. Following the setup by Goyal et al. (2019); Li et al. (2020b), we train linear SVMs using fixed representations from pretrained models. We vary the number k of samples per-class and report the average result\nacross 5 independent runs. Table 2 shows the results. When pretrained on weakly-labeled datasets, MoPro consistently outperforms the vanilla CE method. The improvement of MoPro becomes less significant when the number of web images increases from 2.4m to 16m, suggesting that increasing dataset size is a viable solution to combat noise.\nWhen compared with ImageNet pretrained models, MoPro substantially outperforms self-supervised learning (MoCo v2 (Chen et al., 2020b) and PCL v2 (Li et al., 2020b)), and achieves comparable performance with supervised learning when the same amount of web images (i.e. WebVision-V0.5) is used. 
Our results for the first time show that weakly-supervised representation learning can be as powerful as supervised representation learning under the same data and computation budget." }, { "heading": "5.2 LOW-RESOURCE TRANSFER WITH FINETUNING", "text": "Next, we perform experiment to evaluate whether the pretrained model provides a good basis for finetuning when the downstream task has limited training data. Following the setup by Chen et al. (2020a), we finetune the pretrained model on 1% or 10% of ImageNet training samples. Table 3 shows the results. MoPro consistently outperforms CE when pretrained on Web datasets. Compared to self-supervised learning methods pretrained on ImageNet, weakly-supervised learning achieves significantly better performance with fewer number of epochs.\nSurprisingly, pretraining on the larger WebVision-V2 leads to worse performance compared to V0.5 and V1.0. This is because WebVision-V0.5 and V1.0 contain the same 1k class as ImageNet, whereas V2 also contains 4k extra classes. Hence, the representations learned from V2 are less task-specific and more difficult to adapt to ImageNet, especially with only 1% of samples for finetuning. This suggests that if the classes for a downstream task are known a priori, it is more effective to curate a task-specific weakly-labeled dataset with the same classes." }, { "heading": "5.3 OBJECT DETECTION AND INSTANCE SEGMENTATION", "text": "We further transfer the pretrained model to object detection and instance segmentation tasks on COCO (Lin et al., 2014). Following the setup by He et al. (2019), we use the pretrained ResNet-50 as the backbone for a Mask-RCNN (He et al., 2017) with FPN (Lin et al., 2017). We finetune all layers end-to-end, including BN. The schedule is the default 1× or 2× in Girshick et al. (2018) Table 4 shows the results. Weakly-supervised learning with MoPro outperforms both supervised learning on ImageNet and self-supervised learning on one billion Instagram images." }, { "heading": "5.4 ROBUSTNESS", "text": "It has been shown that deep models trained on ImageNet lack robustness to out-of-distribution samples, often falsely producing over-confident predictions. Hendricks et al. have curated two benchmark datasets to evaluate models’ robustness to real-world distribution variation: (1) ImageNetR (Hendrycks et al., 2020) which contains various artistic renditions of object classes from the\noriginal ImageNet dataset, and (2) ImageNet-A (Hendrycks et al., 2019) which contains natural images where ImageNet-pretrained models consistently fail due to variations in background elements, color, or texture. Both datasets contain 200 classes, a subset of ImageNet’s 1,000 classes.\nWe evaluate weakly-supervised trained models on these two robustness benchmarks. We report both accuracy and the `2 calibration error (Kumar et al., 2019). The calibration error measures the misalignment between a model’s confidence and its accuracy. Concretely, a well-calibrated classifier which give examples 80% confidence should be correct 80% of the time. Results are shown in Table 5. Webly-supervised learning show significantly higher accuracy and lower calibration error. The robustness to distribution shift could come from the higher diversity of samples in Web images. Compared to vanilla CE, MoPro further improves the model’s robustness on both datasets. Note that we made sure that the training data of WebVision does not overlap with the test data." 
}, { "heading": "6 ABLATION STUDY", "text": "We perform ablation study to verify the effectiveness of three important components in MoPro: (1) prototypical contrastive loss Lpro, (2) instance contrastive loss Lins, (3) prototypical similarity si used for noise correction (equation 4). We choose low-resource finetuning on 1% of ImageNet training data as the benchmark, and report the top-1 accuracy for models pretrained on WebVisionV0.5. As shown in Table 6, all of the three components contribute to the efficacy of MoPro." }, { "heading": "7 CONCLUSION", "text": "This paper introduces a new contrastive learning framework for webly-supervised representation learning. We propose momentum prototypes, a simple component that is effective in label noise\ncorrection, OOD sample removal, and representation learning. MoPro achieves state-of-the-art performance on the upstream task of learning from real-world noisy data, and superior representation learning performance on multiple down-stream tasks. Webly-supervised learning with MoPro does not require the expensive annotation cost in supervised learning, nor the huge computation budget in self-supervised learning. For future work, MoPro could be extended to utilize other sources of free Web data, such as weakly-labeled videos, for representation learning in other domains." }, { "heading": "APPENDIX A NOISY SAMPLE VISUALIZATION", "text": "In Figure 3, we show example images randomly chosen from the out-of-distribution samples filtered out by our method. In Figure 4, we show random examples where their pseudo-labels are different from the original training labels. By visual examination, we observe that our method can remove OOD samples and correct noisy labels at a high success rate." }, { "heading": "APPENDIX B PSEUDO-CODE OF MOPRO", "text": "Algorithm 1 summarizes the proposed method.\nAlgorithm 1: MoPro’s main algorithm. 1 Input: number of classes K, temperature τ , threshold T , momentum m, encoder network f(·),\nprojection network g(·), classifier h(·), momentum encoder g′(f ′(·)). 2 for {(xi, yi)}bi=1 in loader do // load a minibatch of noisy training data 3 for i ∈ {1, ..., b} do 4 x̃i = weak aug(xi) // weak augmentation 5 x̃′i = strong aug(xi) // strong augmentation 6 vi = f(x̃i) // representation 7 zi = g(vi) // normalized low-dimensional embedding 8 zi = g\n′(f ′(x̃′i)) // momentum embedding 9 pi = h(vi) // class prediction\n10 si = {ski }Kk=1, ski = exp(zi·ck/τ)∑K\nk=1 exp(zi·ck/τ) // prototypical score\n// noise correction 11 qi = (pi + si)/2 // soft pseudo-label 12 if maxk qki > T then 13 ŷi = argmaxk q k i 14 else if qyii > 1/K then 15 ŷi = yi 16 else 17 ŷi = OOD 18 end\n// calculate losses\n19 Liins = − log exp(zi·z′i/τ)∑R\nr=0 exp(zi·z′r/τ) // instance contrastive loss\n20 if ŷi is not OOD then 21 Lipro = − log exp(zi·cŷi/τ)∑K k=1 exp(zi·ck/τ)\n// prototypical contrastive loss\n22 Lice = − log(p ŷi i ) // cross entropy loss 23 else 24 Lipro = Lice = 0 25 end\n// update momentum prototypes 26 cŷi ← Normalize(mcŷi + (1−m)zi) 27 end 28 L = ∑b i=1(Lice + Lipro + Liins) // total loss 29 update networks f, g, h to minimize L. 30 end" }, { "heading": "APPENDIX C TRANSFER LEARNING IMPLEMENTATION DETAILS", "text": "For low-shot image classification on Places and VOC, we follow the procedure in Li et al. (2020b) and train linear SVMs on the global average pooling features of ResNet-50. We preprocess all images by resizing to 256 pixels along the shorter side and taking a 224 × 224 center crop. 
The SVMs are implemented in the LIBLINEAR (Fan et al., 2008) package.\nFor low-resource finetuning on ImageNet, we adopt different finetuning strategy for different versions of WebVision pretrained models. For WebVision V0.5 and V1.0, since they contain the same 1000 classes as ImageNet, we finetune the entire model including the classification layer. We train with SGD, using a batch size of 256, a momentum of 0.9, a weight decay of 0, and a learning rate of 0.005. We train for 40 epochs, and drop the learning rate by 0.2 at 15 and 30 epochs. For WebVision 2.0, since it contains 5000 classes, we randomly initialize a new classification layer with 1000 output\ndimension, and finetune the model end-to-end. We train for 50 epochs, using a learning rate of 0.01, which is dropped by 0.1 at 20 and 40 epochs.\nFor object detection and instance segmentation on COCO, we adopt the same setup in MoCo (He et al., 2019), using Detectron2 (Girshick et al., 2018) codebase. The image scale is in [640, 800] pixels during training and is 800 at inference. We fine-tune all layers end-to-end. We finetune on the train2017 set (∼118k images) and evaluate on val2017." }, { "heading": "APPENDIX D STANDARD DEVIATION FOR LOW-SHOT CLASSIFICATION", "text": "Table 7 reports the standard deviation for the low-shot image classification experiment in Section 5.1." } ]
2,021
MOPRO: WEBLY SUPERVISED LEARNING WITH MOMENTUM PROTOTYPES
SP:98f5d14f7167266f06fd7e2a30c93a20905e7a6c
[ "The authors identify putative clusters of units/neurons in deep networks using spectral clustering on a graph defined by synaptic weights. The authors then argue that these structurally defined clusters of neurons have similar *functional representations*. Finding interpretable relationships between weight matrices and functional modules is challenging, and the authors should be applauded for attempting to tackle this challenging problem that few research groups are devoting energy to." ]
As deep neural networks become more widely-used, it is important to understand their inner workings. Toward this goal, modular interpretations are appealing because they offer flexible levels of abstraction aside from standard architectural building blocks (e.g., neurons, channels, layers). In this paper, we consider the problem of assessing how functionally interpretable a given partitioning of neurons is. We propose two proxies for this: importance which reflects how crucial sets of neurons are to network performance, and coherence which reflects how consistently their neurons associate with input/output features. To measure these proxies, we develop a set of statistical methods based on techniques that have conventionally been used for the interpretation of individual neurons. We apply these methods on partitionings generated by a spectral clustering algorithm which uses a graph representation of the network’s neurons and weights. We show that despite our partitioning algorithm using neither activations nor gradients, it reveals clusters with a surprising amount of importance and coherence. Together, these results support the use of modular interpretations, and graph-based partitionings in particular, for interpretability.
[]
[ { "authors": [ "Vincent Vanhoucke", "Vijay Vasudevan", "Fernanda Viégas", "Oriol Vinyals", "Pete Warden", "Martin Wattenberg", "Martin Wicke", "Yuan Yu", "Xiaoqiang Zheng" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "venue": null, "year": 2015 }, { "authors": [ "Ferran Alet", "Tomás Lozano-Pérez", "Leslie P Kaelbling" ], "title": "Modular meta-learning", "venue": "arXiv preprint arXiv:1806.10166,", "year": 2018 }, { "authors": [ "Alice Anonymous", "Bob Anonymous", "Charlie Anonymous", "David Anonymous" ], "title": "Clusterability in neural networks (currently under submission)", "venue": "Association for the Advancement of Artificial Intelligence,", "year": 2021 }, { "authors": [ "David Bau", "Bolei Zhou", "Aditya Khosla", "Aude Oliva", "Antonio Torralba" ], "title": "Network dissection: Quantifying interpretability of deep visual representations", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Yoav Benjamini", "Yosef Hochberg" ], "title": "Controlling the false discovery rate: a practical and powerful approach to multiple testing", "venue": "Journal of the Royal statistical society: series B (Methodological),", "year": 1995 }, { "authors": [ "Alfio Borzı", "Giuseppe Borzı" ], "title": "Algebraic multigrid methods for solving generalized eigenvalue problems. International journal for numerical methods in engineering", "venue": null, "year": 2006 }, { "authors": [ "Stephen Casper", "Xavier Boix", "Vanessa D’Amario", "Ling Guo", "Kasper Vinken", "Gabriel Kreiman" ], "title": "Frivolous units: Wider networks are not really that wide", "venue": "arXiv preprint arXiv:1912.04783,", "year": 2020 }, { "authors": [ "Róbert Csordás", "Sjoerd van Steenkiste", "Jürgen Schmidhuber" ], "title": "Are neural nets modular? inspecting their functionality through differentiable weight masks", "venue": null, "year": 2020 }, { "authors": [ "Matthias De Lange", "Rahaf Aljundi", "Marc Masana", "Sarah Parisot", "Xu Jia", "Aleš Leonardis", "Gregory Slabaugh", "Tinne Tuytelaars" ], "title": "A continual learning survey: Defying forgetting in classification", "venue": null, "year": 1909 }, { "authors": [ "Richard E Fairley" ], "title": "Tutorial: Static analysis and dynamic testing of computer", "venue": "software. 
Computer,", "year": 1978 }, { "authors": [ "Michael Gazzaniga", "Richard B Ivry" ], "title": "Cognitive Neuroscience: The Biology of the Mind: Fourth International Student Edition", "venue": "WW Norton,", "year": 2013 }, { "authors": [ "Michelle Girvan", "Mark EJ Newman" ], "title": "Community structure in social and biological networks", "venue": "Proceedings of the national academy of sciences,", "year": 2002 }, { "authors": [ "Anirudh Goyal", "Alex Lamb", "Jordan Hoffmann", "Shagun Sodhani", "Sergey Levine", "Yoshua Bengio", "Bernhard Schölkopf" ], "title": "Recurrent independent mechanisms", "venue": null, "year": 1909 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Sture Holm" ], "title": "A simple sequentially rejective multiple test procedure", "venue": "Scandinavian journal of statistics,", "year": 1979 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Louis Kirsch", "Julius Kunze", "David Barber" ], "title": "Modular networks: Learning to decompose neural computation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Brenden M Lake", "Ruslan Salakhutdinov", "Joshua B Tenenbaum" ], "title": "Human-level concept learning through probabilistic program induction", "venue": null, "year": 2015 }, { "authors": [ "Brenden M Lake", "Tomer D Ullman", "Joshua B Tenenbaum", "Samuel J Gershman" ], "title": "Building machines that learn and think like people", "venue": "Behavioral and brain sciences,", "year": 2017 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Zachary C Lipton" ], "title": "The mythos of model", "venue": "interpretability. 
Queue,", "year": 2018 }, { "authors": [ "Shuying Liu", "Weihong Deng" ], "title": "Very deep convolutional neural network based image classification using small training sample size", "venue": "IAPR Asian Conference on Pattern Recognition,", "year": 2015 }, { "authors": [ "Spandan Madan", "Timothy Henry", "Jamell Dozier", "Helen Ho", "Nishchal Bhandari", "Tomotake Sasaki", "Frédo Durand", "Hanspeter Pfister", "Xavier Boix" ], "title": "On the capability of neural networks to generalize to unseen category-pose combinations", "venue": "arXiv preprint arXiv:2007.08032,", "year": 2020 }, { "authors": [ "Jesse Mu", "Jacob Andreas" ], "title": "Compositional explanations of neurons", "venue": "arXiv preprint arXiv:2006.14032,", "year": 2020 }, { "authors": [ "Mark EJ Newman", "Michelle Girvan" ], "title": "Finding and evaluating community structure in networks", "venue": "Physical review E,", "year": 2004 }, { "authors": [ "Stefano Panzeri", "Christopher D Harvey", "Eugenio Piasini", "Peter E Latham", "Tommaso Fellin" ], "title": "Cracking the neural code for sensory perception by combining statistics", "venue": "intervention, and behavior. Neuron,", "year": 2017 }, { "authors": [ "Giambattista Parascandolo", "Niki Kilbertus", "Mateo Rojas-Carulla", "Bernhard Schölkopf" ], "title": "Learning independent causal mechanisms", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Fabian Pedregosa", "Gaël Varoquaux", "Alexandre Gramfort", "Vincent Michel", "Bertrand Thirion", "Olivier Grisel", "Mathieu Blondel", "Peter Prettenhofer", "Ron Weiss", "Vincent Dubourg" ], "title": "Scikit-learn: Machine learning in python", "venue": "Journal of machine Learning research,", "year": 2011 }, { "authors": [ "Jianbo Shi", "Jitendra Malik" ], "title": "Normalized cuts and image segmentation", "venue": "IEEE Transactions on pattern analysis and machine intelligence,", "year": 2000 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Alberto Testolin", "Michele Piccolini", "Samir Suweis" ], "title": "Deep learning systems as complex networks", "venue": "Journal of Complex Networks,", "year": 2020 }, { "authors": [ "Ulrike von Luxburg" ], "title": "A tutorial on spectral clustering", "venue": "Statistics and computing,", "year": 2007 }, { "authors": [ "Chihiro Watanabe" ], "title": "Interpreting layered neural networks via hierarchical modular representation", "venue": "In International Conference on Neural Information Processing,", "year": 2019 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Jiaxuan You", "Jure Leskovec", "Kaiming He", "Saining Xie" ], "title": "Graph structure of neural networks", "venue": "arXiv preprint arXiv:2007.06559,", "year": 2020 }, { "authors": [ "Bolei Zhou", "Yiyou Sun", "David Bau", "Antonio Torralba" ], "title": "Revisiting the importance of individual units in CNNs via ablation", "venue": "arXiv preprint arXiv:1806.02891,", "year": 2018 }, { "authors": [ "Michael Zhu", "Suyog Gupta" ], "title": "To prune, or not to prune: exploring the efficacy of pruning for model compression", "venue": "arXiv preprint arXiv:1710.01878,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks have achieved state-of-the-art performance in a variety of applications, but this success contrasts with the challenge of making them more intelligible. As these systems become more advanced and widely-used, there are a number of reasons we may need to understand them more effectively. One reason is to shed light on better ways to build and train them. A second reason is the importance of transparency, especially in settings which involve matters of safety, trust, or justice (Lipton, 2018). More precisely, we want methods for analyzing a trained network that can be used to construct semantic and faithful descriptions of its inner mechanisms. We refer to this as mechanistic transparency.\nToward this goal, we consider modularity as an organizing principle to achieve mechanistic transparency. In the natural sciences, we often try to understand things by taking them apart. Aside from subdivision into the standard architectural building blocks (e.g., neurons, channels, layers), are there other ways a trained neural network be meaningfully “taken apart”? We aim to analyze a network via a partitioning of its neurons into disjoint sets with the hope of finding that these sets are “modules” with distinct functions. Since there are many choices for how to partition a network, we would like metrics for anticipating how meaningful a given partition might be.\nInspired by the field of program analysis (Fairley, 1978), we apply the concepts of “dynamic” and “static” analysis to neural networks. Dynamic analysis includes performing forward passes and/or computing gradients, while static analysis only involves analyzing architecture and parameters. In a concurrent submission (Anonymous et al., 2021), we use spectral clustering to study the extent to which networks form clusters of neurons that are highly connected internally but not externally and find that in many cases, networks are structurally clusterable. This approach is static because the partitioning is produced according to the network’s weights only, using neither activations nor gradients. Here, we build off of this concurrent submission by working to bridge graph-based clusterability and functional modularity.\nTo see how well neurons within each cluster share meaningful similarities, we introduce two proxies: importance and coherence. Importance refers to how crucial clusters are to the network’s perfor-\nmance overall and lends insight into how well a partition identifies clusters that are individually key to the network’s function. Coherence refers to how consistently the neurons within a cluster correspond in their activations to particular features in data. We analyze coherence both with respect to input features and output labels. To measure these proxies, we utilize dynamic interpretability methods that have been conventionally used for single-neuron analysis to the study of these partitions. We conduct a set of experiments and hypothesis tests in networks scaling from the MNIST to the ImageNet level. In doing so, we show that spectral clustering is capable of identifying functionally important and coherent clusters of neurons. This new finding the and methods we present for combining spectral clustering with dynamic methods supports the use of modular decompositions of neurons toward mechanistic transparency.\nOur key contributions are threefold:\n1. Introducing two proxies, importance and coherence, to assess whether a given partitioning of a network exhibits modularity. 
2. Quantifying these two proxies with interpretability methods equipped with statistical hypothesis testing procedures. 3. Applying our methods on the partitions produced by the spectral clustering technique of Anonymous et al. (2021) on a range of networks, and finding evidence of modularity among these clusters." }, { "heading": "2 GENERATING PARTITIONINGS WITH SPECTRAL CLUSTERING", "text": "In our concurrent submission, we introduce and study in-depth a procedure to partition a neural network into disjoint clusters of neurons (Anonymous et al., 2021) based only on its weights. We found that trained networks are more clusterable than randomly initialized ones, and they are also often more clusterable than similar networks with identical weight distributions. The experimental procedure consists of three steps: (1) “Graphification” - transforming the network into an undirected edge-weighted graph; (2) Spectral clustering - obtaining a partitioning via spectral clustering of the graph.\nGraphification: To perform spectral clustering, a network must be represented as an undirected graph with non-negative edges. For MLPs (multilayer perceptrons), each graph vertex corresponds to a neuron in the network including input and output neurons. If two neurons have a weight connecting them in the network, their corresponding vertices are connected by an edge giving its absolute value. For CNNs (convolutional neural networks), a vertex corresponds to a single feature map (which we also refer to as a “neuron”) in a convolutional layer. Here, we do not use input, output, or fully-connected layers. If two feature maps are in adjacent convolutional layers, their corresponding vertices are connected with an edge giving the L1 norm for the corresponding 2 dimensional kernel slice. If convolutional layers are separated by a batch normalization layer (Ioffe & Szegedy, 2015), we multiply weights by γ/(σ+ ε) where γ is the scaling factor, σ is the moving standard deviation, and ε is a small constant.\nSpectral Clustering: We run normalized spectral clustering on the resulting graph (Shi & Malik, 2000) to obtain a partition of the neurons into clusters. For all experiments, we set the number of clusters to 12 unless explicitly mentioned otherwise. We choose 12 because (1) it is computationally tractable, (2) it is larger than the number of classes in MNIST and CIFAR-10, and (3) it is small compared to the number of neurons in the layers of all of our networks. However, in Appendix A.6, we show results for k = 8 and k = 18 for a subset of experiments and find no major differences. We use the scikit-learn implementation (Pedregosa et al., 2011) with the ARPACK eigenvalue solver (Borzı̀ & Borzı̀, 2006). Refer to appendix A.1 for a complete description of the algorithm." }, { "heading": "3 EVALUATION OF MODULARITY USING IMPORTANCE AND COHERENCE", "text": "Clusters of neurons produced by spectral clustering span more than one layer. However, layers at different depths of a network tend to develop different representations. To control for these differences, we study the neurons in clusters separately per layer. We call these sets of neurons within the same cluster and layer “sub-clusters.” In our experiments, we compare these sub-clusters to other sets of random units of the same size and same layer. 
When discussing these experiments, we refer\nto the sub-clusters from the clustering algorithm as “true sub-clusters” and the sets composed of random neurons as “random sub-clusters.” Random sub-clusters form the natural control condition to test whether the specific partitioning of neurons exhibits importance or coherence compare to alternative partitions, while taking account location and size.\nAs outlined in the Introduction, we study importance: how crucial each sub-cluster is to the network; input coherence: how well neurons in a sub-cluster associate with similar input features; and output coherence, how well they associate with particular output labels, as proxies for modularity. In this section, we present two types of experiments. First, we use visualization techniques on sub-clusters to measure input and output coherence, and second, we use “lesion tests” based on dropping out neurons in a sub-cluster to measure output coherence and importance.\nThese techniques are scalable, and we experiment with a wide range of networks. For small-scale experiments, we train and analyze MLPs with four hidden layers of 256 neurons each and small convolutional networks with 3 layers of 64 neurons each followed by a dense layer of 128 neurons trained on the MNIST (LeCun et al., 1998) and Fashion-MNIST (Xiao et al., 2017) datasets. At a mid scale, we train and analyze VGG-style CNNs containing 13 convolutional layers using the architectures from Simonyan & Zisserman (2014) trained on CIFAR-10 (Krizhevsky et al., 2009) using the procedure from Liu & Deng (2015). Finally, for the ImageNet (Krizhevsky et al., 2009) scale, we analyze pretrained ResNet18, ResNet50, (He et al., 2016) VGG-16, and VGG-19 (Simonyan & Zisserman, 2014) models.\nIn our concurrent submission (Anonymous et al., 2021) we show that in some cases, weight pruning and dropout can each be used to promote graph-based clusterability. We use pruning in small MLPs but no other networks. We use dropout for MLPs in correlation-based visualization experiments in subsection 3.1.1 but no other MLPs. Also, for the mid-sized VGG-CNNs, we experiment both with versions that are unregularized and which are regularized using dropout and L2 regularization as done in Liu & Deng (2015). Complete training details including testing accuracies are in the appendix A.2." }, { "heading": "3.1 FEATURE VISUALIZATION", "text": "" }, { "heading": "3.1.1 CORRELATION-BASED VISUALIZATION", "text": "First, we introduce here a simple method to provide visual examples and build intuition. In later subsections, we present a quantitative approach with statistical hypothesis testing. A simple way to visualize a sub-cluster is to identify what input features each of its neurons respond to and then use these to create an aggregated visualization. We do this for small MLPs in which we construct visualizations of neurons using their correlations with the input pixels across the test dataset. We use their post-ReLU activations, and consider the activation of a convolutional feature map to be its L1 norm. Instead of linear correlation, we use the Spearman correlation (which is the linear correlation of ranks) because it is able to capture relationships which tend to monotonically increase even if they are nonlinear.\nAfter obtaining visualizations for each neuron in a sub-cluster, we do not directly take their average to visualize the entire sub-cluster. To see why, consider two neurons which are highly anticorrelated across the testing set. 
These neurons are highly coherent, but averaging together their visualizations would obscure this by cancellation. To fix this problem, we align the signs of the visualizations for individual neurons using a variant of an algorithm from Watanabe (2019). To visualize a sub-cluster, for a number of iterations (we use 20), we iterate over its neurons, and calculate for each the sum of cosines between its visualization and each of the other neurons’ visualizations in vector form. If this sum is negative, we flip the sign of this neuron’s visualization. Refer to appendix A.3 for a complete algorithmic description. After this procedure, we take the mean of the visualizations within a sub-cluster.\nTo see how much meaningful input coherence these sub-clusters exhibit, we compare them to random sub-clusters (recall each of these are randomly selected sets of neurons of the same size from the same layer as a true sub-cluster). Figure 1a-b shows results from MLPs trained on MNIST and Fashion-MNIST. Here, these MLPs are trained with dropout which we found to be helpful for clearer visualizations. In the first row of each image are visualizations for true sub-clusters, and the bottom four rows show visualizations for random ones. The true sub-clusters in the top row\nproduce more coherent visualizations with better-defined and higher-contrast features compared to the random ones in the bottom 4 rows.\nNext, we hypothesized that if we trained a network on a task that lent itself well to parallel processing, spectral clustering would capture specialized modules. To test this, we designed “halves-same” and “halves-diff” tasks for small MLPs based on the MNIST and Fashion-MNIST datasets. For the halves-same tasks, two images of the same class were resized to have half their original width and concatenated side-by-side in order to create a composite image of the same size as the originals. We gave these images the same label as their component halves. For the halves-diff tasks, this was done with two images from random classes, and the resulting image was labeled with the sum of their labels modulo 10. Example images from each of the the halves-same/diff MNIST and Fashion-MNIST datasets are shown in figure 3. We expected that the halves-diff task would be more economical to compute in a modular way by separately recognizing the two halves and computing their modular sum. In appendix A.3, we show that our networks can compute this modular sum.\nFigure 1c-d shows these visualizations for MLPs trained with dropout on halves-same MNIST and without dropout on halves-diff MNIST. We did not use dropout to train the halves-diff networks because it resulted in poor accuracy. This is likely because while amenable to image classification, dropout is not amenable to modulo arithmetic. Columns are arranged from left to right in the order of the layer in which they appear in the network. Visualizations for the halves-same networks tend to result in similar left and right halves, but in the early (leftmost) layers of the networks trained on the halves-diff tasks, there is a tendency for true sub-clusters to be selective to one half.\nThis method of understanding input coherence has the advantage of being able to provide intuitive visual examples and efficiently construct interpretable features for MLPs. However, it was not as effective for CNNs. In appendix A.3 we detail this process, and in figure 4, we show visualizations for small CNNs in which we find less evidence of coherence among sub-clusters. 
To expand on\nthe intuitive visual examples offered here, in the following section, we introduce a more versatile, scalable method along with hypothesis testing procedures for obtaining quantitative results." }, { "heading": "3.1.2 INPUT CONSTRUCTION", "text": "Another way to visualize neurons in a network is to use gradient-based optimization to create an input image which maximizes the activation of a neuron, or in our case, a sub-cluster of them. Patterns in the resulting visualizations can suggest what features the neurons respond to. We visualize sub-clusters with this method (Olah et al., 2017) using the Lucid1 package. Implementation details are in appendix A.8. Figure 5 gives example visualizations.\nTo obtain quantitative results, we used two techniques. First, we analyzed the value of the maximization objective for each image we produced, which we call the “score.” This gives of one notion of how coherent a sub-cluster may be with respect to input features, because if a single image can activate an entire sub-cluster well, this suggests that the neurons comprising it can be activated by similar features. Second, we analyze the entropy of the softmax outputs of the network when these images are passed through it. If the entropy of the softmax distribution is low, this suggests that a cluster is coherent with respect to outputs.\nWe then test the null hypothesis that these sub-clusters are equally coherent as random sets of neurons. For each sub-cluster in a network with at least three neurons and at most 80% of the neurons in a layer, we compare its visualization’s score and output entropy to those of 9 random sub-clusters. We then obtain one-sided p values by taking the percentiles for the true sub-cluster’s score and entropy relative to the random sub-clusters’ score and entropy. We take right-sided p values for scores and left-sided p values for output entropies so that lower p values indicate greater input/output coherence in both cases. We then use two different methods to combine all sub-cluster p values to obtain a combined p value for the entire network for either score or entropy. Both are presented here, but full details for both are in appendix A.4.\nFisher Method: First, we center the sub-cluster p values around 0.5 to obtain a granular approximation of the uniform distribution under the null, and then use the Fisher Method. The test statistic for a set of sub-cluster p values p1...pn is −2 ∑n i=1 log pi which takes a chi squared distribution with 2n degrees of freedom under the null hypothesis.\nChi Squared Method: Second, since there are only a set number, m, of values which the p values can take (in our case m = 10), we perform a Chi Squared categorical test to see whether their distribution among these discrete values is nonuniform. The test statistic is ∑m i=1 (xi−µi)2 µi\nin which each xi gives an observed count and each µi gives a expected one. It will have a chi squared distribution with m− 1 degrees of freedom under the null hypothesis. These methods test for different things. The Fisher method indicates how low the p values for subclusters tend to be across a network and tests whether the true sub-clusters are consistently more coherent than random ones. However, the distribution of sub-cluster p values may be nonuniform but in a way that the Fisher Method is not designed to detect. For example, they may tend to be very high or follow a U-shaped distribution. 
The Chi Squared method adds additional resolution by detecting cases like this.\nThe top section of table 1 summarizes these results. For each network which we perform this test on, we provide the Fisher and Chi Squared categorical p values for both the score (input coherence) and output entropy (output coherence). For the non-ImageNet networks, we report results for each measure (separately) as a median across 5 networks. We find strong evidence of significant levels of input coherence in the VGG family of networks, and find that the unregularized VGGs trained on CIFAR-10 also seems to exhibit a significant amount of output coherence. In Appendix A.8, we also present experiments for understanding variance of activations in true and random sub-clusters." }, { "heading": "3.2 LESION TESTS", "text": "Another set of tools that has been used for understanding both biological (Gazzaniga & Ivry, 2013) and artificial (Zhou et al., 2018; Casper et al., 2020) neural systems involves disrupting neurons dur-\n1https://github.com/tensorflow/lucid\ning inference. Whereas the images produced with feature visualization were optimized to maximally activate a sub-cluster, we perform a dual type of experiment with “lesion” tests in which we analyze network outputs when a sub-cluster is dropped out. When lesioning a sub-cluster, we set all weights incoming to the constituent neurons to 0, while leaving the rest of the network untouched. Refer to figure 6 for example plots of the accuracy drops for a small MLP and CNN trained on FashionMNIST. We then determine the damage to the network’s overall and per-class testing accuracy. This allows us to evaluate both importance and output coherence.\nImportance Importance allows us to identify which sub-clusters are key to the network and therefore of particular interest for scrutiny. To systematically quantify the importance of a network’s sub-clusters in aggregated way, we combine the right-sided p values of all the network’s true subclusters using the Fisher and Chi Squared categorical methods discussed above 3.1.2 and described in detail in appendix A.4. Note that these experiments are analogous to those in 3.1.2 The bottom section of table 1 gives results for these. We find strong evidence across the networks which we train that spectral clustering reveals important sub-clusters.\nThere is generally significant diversity among sub-clusters in their size, importance, and importance relative to random sub-clusters. To demonstrate this, we construct an example descriptive taxonomy, in which we consider three criteria for identifying particularly important sub-clusters. First, the subcluster should be at least 5% of the neurons of the layer. Second, the drop in accuracy under lesion should be greater than 1 percentage point; and third, the drop should not simply be due to the number of damaged neurons. To evaluate the third criterion, we generate random sub-clusters with the same number of neurons as the true sub-cluster from the same layer, and collect the distribution of accuracy drops. We say that this criterion is met if the accuracy drop for the true sub-cluster is greater than all of 20 random sub-clusters, i.e its p value is smaller than 1/20.\nIn figure 2, we plot sub-cluster size versus accuracy drop for an MLP trained on Fashion-MNIST and a VGG trained on CIFAR-10 that has been clustered into 8 clusters (we use 8 for the sake of visualization here, but we use 12 clusters for all quantitative experiments). 
Many sub-clusters are too small to be counted as important, and many are significantly impactful compared to random sub-clusters but not practically significant. However, some clearly are practically important for the functioning of the network.\nCoherence To measure the output coherence using lesions, we analyze the accuracy changes for each of the output classes. For ten classes, we define d = (d0, d1, . . . , d9), where di is the change in the i-th class accuracy due to the lesioning of a sub-cluster. In order to obtain a measurement independent of the overall importance, we divide these class-wise accuracy changes by their mean, d′ = d/d̄, and the then take their range ∆ = max d′ −min d′. We refer to this as the (normalized) class-wise range. We compare true and random sub-clusters to obtain a right-sided p value for each sub-cluster based on the p values of the true ∆. We then combine these for the entire network using the Fisher and Chi Squared categorical methods as discussed above and detailed in appendix A.4.\nThese results are in the bottom section of table 1. The Chi Squared p values demonstrate that spectral clustering usually identifies sub-clusters with a significantly different distribution of importances compared to random sub-clusters. Meanwhile, the Fisher tests suggests that at least in VGG networks trained on CIFAR-10, the sub-clusters exhibit more output coherence. Interestingly, for VGG-16s trained on ImageNet, the opposite seems to be the case. The Fisher p value is high, suggesting that the p values for its individual sub-clusters tend to be high. However, the Chi Squared p value is low, suggesting nonuniformity among the sub-cluster p values. Together, these indicate the the clusters are consistently less coherent than random ones." }, { "heading": "4 RELATED WORK", "text": "The most closely-related work to this is our paper under concurrent submission (Anonymous et al., 2021) which uses the same spectral clustering-based approach to establish that deep networks are in many cases clusterable and investigates in depth methods can be used to control the development of clusterability. Both of these works inherit insights from network science involving clustering in general (Girvan & Newman, 2002; Newman & Girvan, 2004), and spectral clustering (Shi & Malik, 2000; von Luxburg, 2007) in particular.\nOur experiments in which we combine spectral clustering with correlation-based visualization (Watanabe, 2019), feature visualization (Olah et al., 2017), and lesions (Zhou et al., 2018) highlight the usefulness of combining multiple interpretability methods in order to build an improved set of tools for more rigorously understanding systems. In a similar way, other dynamic techniques for interpretability such as analysis of selectivity (Madan et al., 2020), network “dissection” (Bau et al., 2017; Mu & Andreas, 2020), earth-mover distance (Testolin et al., 2020), or intersection information (Panzeri et al., 2017) could also be combined with static graph-based partitionings under a similar framework. There already exist examples of interpretability methods being used for the identification of unexpected adversarial weaknesses (Carter et al., 2019; Mu & Andreas, 2020). We expect that developing more powerful tools like these for scrutinizing networks will be helpful toward building more robust systems.\nThis work adds to a growing body of research focused on modularity and compositionality in neural systems (e.g. Lake et al. (2015; 2017); Csordás et al. (2020); You et al. 
These results are in the bottom section of table 1. The Chi Squared p values demonstrate that spectral clustering usually identifies sub-clusters with a significantly different distribution of importances compared to random sub-clusters. Meanwhile, the Fisher tests suggest that at least in VGG networks trained on CIFAR-10, the sub-clusters exhibit more output coherence. Interestingly, for VGG-16s trained on ImageNet, the opposite seems to be the case. The Fisher p value is high, suggesting that the p values for its individual sub-clusters tend to be high. However, the Chi Squared p value is low, suggesting nonuniformity among the sub-cluster p values. Together, these indicate that the clusters are consistently less coherent than random ones." }, { "heading": "4 RELATED WORK", "text": "The most closely-related work to this is our paper under concurrent submission (Anonymous et al., 2021), which uses the same spectral clustering-based approach to establish that deep networks are in many cases clusterable and investigates in depth methods that can be used to control the development of clusterability. Both of these works inherit insights from network science involving clustering in general (Girvan & Newman, 2002; Newman & Girvan, 2004), and spectral clustering (Shi & Malik, 2000; von Luxburg, 2007) in particular.\nOur experiments in which we combine spectral clustering with correlation-based visualization (Watanabe, 2019), feature visualization (Olah et al., 2017), and lesions (Zhou et al., 2018) highlight the usefulness of combining multiple interpretability methods in order to build an improved set of tools for more rigorously understanding systems. In a similar way, other dynamic techniques for interpretability such as analysis of selectivity (Madan et al., 2020), network “dissection” (Bau et al., 2017; Mu & Andreas, 2020), earth-mover distance (Testolin et al., 2020), or intersection information (Panzeri et al., 2017) could also be combined with static graph-based partitionings under a similar framework. There already exist examples of interpretability methods being used for the identification of unexpected adversarial weaknesses (Carter et al., 2019; Mu & Andreas, 2020). We expect that developing more powerful tools like these for scrutinizing networks will be helpful toward building more robust systems.\nThis work adds to a growing body of research focused on modularity and compositionality in neural systems (e.g. Lake et al. (2015; 2017); Csordás et al. (2020); You et al. (2020)). This paradigm is useful both for interpretability and for building better models. Neural circuits with distributed, non-modular representations pose a litany of challenges including non-interpretability, less useful representations, poorer generalization, catastrophic forgetting, and biological implausibility. One limitation of this work is a focus on clustering in models which have fairly monolithic architectures (e.g. all neurons/filters in one layer being connected to all neurons/filters in the next). However, there exists a body of research focused specifically on developing more modular networks which either have an explicitly-modular architecture (Alet et al., 2018; Parascandolo et al., 2018; Goyal et al., 2019) or are trained in a way that promotes modularity via regularization or parameter isolation (Kirsch et al., 2018; De Lange et al., 2019)." }, { "heading": "5 DISCUSSION", "text": "In this work, we introduce an approach for evaluating whether a partitioning of a network exhibits modular characteristics. Key to this is analyzing proxies: importance as a means of understanding what parts of a network are crucial for performance, and input/output coherence as measures of how specialized these parts are. We measure these proxies using statistical hypothesis testing procedures based on interpretability techniques which have conventionally been used for analyzing individual neurons. Though we analyze partitions produced by spectral clustering, a static method, we find that these clusters exhibit a significant amount of importance compared to random clusters. We also show that our networks in the VGG family tend to exhibit a significant level of input coherence, and in some cases, output coherence. By and large, these findings, and those of a concurrent submission (Anonymous et al., 2021), support the analysis of modules, and in particular graph-based clusters of neurons, for developing a better understanding of neural networks’ inner workings.\nBuilding a framework for evaluating modularity in neural networks can guide the development of new interpretability methods which examine networks at the module level. Toward this goal, compositionality, i.e. how modules are combined and interact together, can be another proxy of modularity. For evaluating this, some of our methods can be extended to study dependencies between clusters. In appendix A.10, we present exploratory lesion-based experiments for studying cluster interactions and constructing dependency graphs.\nWhile we make progress here toward mechanistic transparency, neural systems are still complex, and more insights are needed to develop richer understandings. The ultimate goal would be to master the process of building compositional systems which lend themselves to simple and faithful semantic interpretations. We hope that using modularity as an organizing principle to achieve mechanistic transparency and expanding our interpretability toolbox with combined static and dynamic methods will lead to a richer understanding of networks and better tools for building them to be reliable." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 SPECTRAL CLUSTERING ALGORITHM", "text": "The spectral clustering algorithm on the graph G = (V,E) produces a partition of its vertices, in which there are stronger connections within sets of vertices than between them (Shi & Malik, 2000). It does so by approximately minimizing the n-cut (normalized cut) of a partition. 
For disjoint, nonempty sets $X_1, \ldots, X_k$ where $\cup_{i=1}^k X_i = V$, it is defined by von Luxburg (2007) as:\n$$\text{n-cut}(X_1, \ldots, X_k) := \frac{1}{2} \sum_{i=1}^{k} \frac{W(X_i, \overline{X_i})}{\text{vol}(X_i)}$$\nwhere $A := (w_{ij})_{i,j=1..n}$ is the adjacency matrix of the graph $G$; for two sets of vertices $X, Y \subseteq V$, we define $W(X, Y) := \sum_{v_i \in X, v_j \in Y} w_{ij}$; the degree of a vertex $v_i \in V$ is $d_i = \sum_{j=1}^{n} w_{ij}$; and the volume of a subset $X \subseteq V$ is $\text{vol}(X) := \sum_{i \in X} d_i$.\nAlgorithm 1: Normalized spectral clustering according to Shi & Malik (2000), implemented in scikit-learn (Pedregosa et al., 2011), description taken from von Luxburg (2007).\nInput: Weighted adjacency matrix $W \in \mathbb{R}^{n \times n}$, number $k$ of clusters to construct\n1. Compute the unnormalized Laplacian $L$.\n2. Compute the first $k$ generalized eigenvectors $u_1, \ldots, u_k$ of the generalized eigenproblem $Lu = \lambda D u$.\n3. Let $U \in \mathbb{R}^{n \times k}$ be the matrix containing the vectors $u_1, \ldots, u_k$ as columns.\n4. For $i = 1, \ldots, n$, let $y_i \in \mathbb{R}^k$ be the vector corresponding to the $i$th row of $U$.\n5. Cluster the points $(y_i)_{i=1,\ldots,n}$ in $\mathbb{R}^k$ with the k-means algorithm into clusters $C_1, \ldots, C_k$.\nOutput: Clusters $A_1, \ldots, A_k$ with $A_i = \{j \mid y_j \in C_i\}$." },
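Since Algorithm 1 is stated to be the scikit-learn implementation, the partitioning step can be reproduced directly with a precomputed affinity matrix; a minimal sketch:

```python
from sklearn.cluster import SpectralClustering

def cluster_graph(W, k, seed=0):
    """Partition a symmetric, non-negative adjacency matrix W (n x n)
    into k clusters with normalized spectral clustering."""
    sc = SpectralClustering(n_clusters=k, affinity="precomputed",
                            assign_labels="kmeans", random_state=seed)
    return sc.fit_predict(W)  # array of cluster labels, one per vertex
```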
{ "heading": "A.2 NETWORK TRAINING DETAILS", "text": "We use Tensorflow's implementation of the Keras API (Abadi et al. (2015); Chollet et al. (2015)). When training all networks, we use the Adam algorithm (Kingma & Ba, 2014) with the standard Keras hyperparameters: learning rate 0.001, $\beta_1 = 0.9$, $\beta_2 = 0.999$, no amsgrad. The loss function was categorical cross-entropy.\nSmall MLPs (MNIST and Fashion-MNIST): We train MLPs with 4 hidden layers, each of width 256, for 20 epochs of Adam (Kingma & Ba, 2014) with batch size 128. We then prune on a polynomial decay schedule (Zhu & Gupta, 2017) up to 90% weight-sparsity for an additional 20 epochs after initial training. Initial and final sparsities were chosen due to their use in the TensorFlow Model Optimization Tutorial (https://web.archive.org/web/20190817115045/https://www.tensorflow.org/model_optimization/guide/pruning/pruning_with_keras). In cases where we use dropout (for correlation visualization experiments including halves-diff tasks), we apply it after each fully-connected layer with a rate of 0.5. All MLPs achieved a testing accuracy on the MNIST and Fashion-MNIST datasets of at least 97% and 86% respectively, except for the ones trained on the Halves-diff datasets, which all achieved an accuracy of at least 92% and 71% respectively.\nSmall CNNs (MNIST and Fashion-MNIST): These networks had 3 convolutional layers with 64 3 × 3 channels each, with the second and third hidden layers followed by max pooling with a 2 by 2 window. There was a final fully-connected hidden layer with 128 neurons. We train them with a batch size of 64 for 10 epochs with no dropout or pruning. All small CNNs achieved a testing accuracy on the MNIST and Fashion-MNIST datasets of at least 99% and 89% respectively, except for the ones trained on the Halves-diff datasets, which all achieved an accuracy of at least 89% and 67% respectively.\nMid-sized VGG CNNs (CIFAR-10): We implement a version of VGG-16 described by Simonyan & Zisserman (2014); Liu & Deng (2015). We train these with Adam, and L2 regularization with a coefficient of $5 \times 10^{-5}$, for 200 epochs with a batch size of 128. Training was done with data augmentation, which consisted of random rotations between 0 and 15 degrees, random shifts both vertically and horizontally of up to 10% of the side length, and random horizontal flipping. In cases where we use dropout, we use a per-layer dropout rate as specified in Liu & Deng (2015). All of these networks achieved testing accuracies of at least 87%.\nLarge CNNs (ImageNet): We experimented with VGG-16 and VGG-19 (Simonyan & Zisserman, 2014) and ResNet-18 and ResNet-50 (He et al., 2016) networks. Weights were obtained from the Python image-classifiers package, version 1.0.0." }, { "heading": "A.3 CORRELATION-BASED VISUALIZATION", "text": "Algorithm 2: Sign Alignment Algorithm (similar to Watanabe (2019))\nResult: Set of sign-aligned neuron visualizations.\nInput: Neuron visualizations $V_{1:n}$\nfor iter in num_iters do\nfor $v_i$ in $V$ do\ncalculate the sum of cosines $c = \sum_{j \neq i} \frac{v_i \cdot v_j}{\sqrt{v_i \cdot v_i}\sqrt{v_j \cdot v_j}}$\nif $c < 0$ then $v_i \leftarrow -v_i$\nend\nend\nAlgorithm 2 gives the sign alignment algorithm we use, which is based on a similar one from Watanabe (2019).\nFigure 3 shows examples from the ‘halves’ and ‘stack’ datasets which we use for MLPs and CNNs respectively to test whether a parallelizable task can cause a network to develop clusters that cohere with one portion of the inputs or another. The halves dataset experiments are detailed in Section 3. Analogous experiments for CNNs were done but with “stack” datasets. For CNNs with max pooling, object detection is insensitive to spatial location, so we design stack-same and stack-diff datasets in an analogous way using channels instead of image-halves.\nVisualizations for sub-clusters in the halves datasets are provided in section 3. However, here in figure 4 are visualization results for Small CNNs for the stack-same/diff datasets. Unlike for the small MLPs, these visualizations do show obvious coherence among clusters, which was part of our motivation for the subsequent input construction experiments.\nFor constructing all correlation-based visualizations, we use the Spearman correlation, which is defined as the linear (Pearson) correlation of ranks. This measures how well one series of values can be expressed as a monotonically increasing function of another. We used this rather than linear correlation because of the nonlinear nature of deep networks.\nNetworks can Compute Modular Sums: A network can do this for M values by using an intermediate layer of $M^2$ neurons, each of which serves as a detector of one of the possible combinations of inputs. Consider a ReLU MLP with 2M inputs, a single hidden layer with $M^2$ neurons, and then M outputs. Suppose that it is given the task of mapping datapoints in which the input nodes numbered $i$ and $M + j$ are activated with value 1 to an output in which the $((i + j) \bmod M)$-th node is active with value 1. It could do so if each hidden neuron with a ReLU activation detected one of the $M^2$ possible input combinations via a bias of -1 and two weights of 1 connecting it to each of the input nodes in the combination it detects. A single weight from each hidden neuron to its corresponding output node would allow the network to compute the modular sum. In our networks, we have M = 10 classes, and all MLPs and CNNs have a dense layer with $> 10^2$ neurons preceding the output layer. Thus, they are capable of computing a modular sum in the halves and stack-diff tasks we give to them." },
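A worked sketch of this construction in NumPy: explicit weights for a ReLU network with 2M inputs, M^2 hidden combination detectors, and M outputs. The −1 biases and unit weights are exactly those described above; the function and variable names are illustrative.

```python
import numpy as np

def modular_sum_mlp(M=10):
    """Hand-constructed weights: hidden unit (i, j) fires iff inputs i and
    M + j are both active, and it feeds output (i + j) mod M."""
    W1 = np.zeros((M * M, 2 * M))  # input -> hidden
    b1 = -np.ones(M * M)           # bias -1: both inputs must be on
    W2 = np.zeros((M, M * M))      # hidden -> output
    for i in range(M):
        for j in range(M):
            h = i * M + j
            W1[h, i] = 1.0         # weight from first-half input i
            W1[h, M + j] = 1.0     # weight from second-half input j
            W2[(i + j) % M, h] = 1.0
    return W1, b1, W2

def forward(x, W1, b1, W2):
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU hidden layer
    return W2 @ h

M = 10
W1, b1, W2 = modular_sum_mlp(M)
x = np.zeros(2 * M)
x[3], x[M + 9] = 1.0, 1.0              # i = 3, j = 9
assert forward(x, W1, b1, W2).argmax() == (3 + 9) % M  # class 2
```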
{ "heading": "A.4 HYPOTHESIS TESTING", "text": "Here, we provide details for the hypothesis testing methods used for input construction and lesion experiments in Section 3. In each of these experiments, for all sub-clusters in a network, we obtain quantities for the true sub-clusters and random sub-clusters. We compare these values to get a p value, in the form of a percentile, for each true sub-cluster by comparing it to the random ones. We then obtain a single combined p value for a network overall using two methods. Both methods involve constructing a test statistic which has a chi-squared distribution under the null hypothesis that true sub-clusters have the same properties as random ones.\nFisher Method: This measures how low the p values for the true sub-clusters are overall across a network. In our case, because we use 9 random sub-clusters, the p values for sub-clusters take values in {0.1, 0.2, ..., 1.0}. We subtract 0.05 from them to center their distribution around 0.5, so that they give a granular approximation to the continuous Uniform(0, 1) distribution under the null hypothesis that visualizations for true sub-clusters are as good as random. Then we obtain the Fisher method test statistic\n$$-2 \sum_{i=1}^{n} \log p_i$$\nwhich, for n p values, has a chi-squared distribution on 2n degrees of freedom under the null. We then conduct a right-sided test with respect to this distribution. The fact that we use a granular approximation of the uniform distribution makes this test conservative, because when −2 times the sum of the logs of the p values is taken during the calculation of the test statistic, the smallest p values will pull the test statistic toward the heavy tail of the Chi Squared distribution while the largest ones will pull it toward zero.\nChi Squared Categorical Method: This measures how nonuniform the distribution of p values for sub-clusters is across a network. The p values fall into discrete bins, so a standard Chi Squared categorical test can be used to test whether the assortment across the sub-clusters for a network is consistent with randomness or not. The test statistic is\n$$\sum_{i=1}^{m} \frac{(x_i - \mu_i)^2}{\mu_i}$$\nHere, this is a sum over the m discrete values which results can take; each $x_i$ gives the count of observations of each value, while $\mu_i$ gives the expected count under the null. This test statistic will take a Chi Squared distribution on m − 1 degrees of freedom under the null hypothesis. We conduct a right-sided test with respect to this distribution." },
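A sketch of both combination methods using SciPy; the 0.05 shift and the chi-squared degrees of freedom follow the description above, while the equal-width binning over [0, 1] is an assumption about the exact bins used.

```python
import numpy as np
from scipy.stats import chi2

def fisher_combined_p(p_values):
    """Fisher's method: -2 * sum(log p) ~ chi2 on 2n d.o.f. under the null."""
    p = np.asarray(p_values, dtype=float) - 0.05  # center {0.1,...,1.0} on Uniform(0,1)
    stat = -2.0 * np.sum(np.log(p))
    return chi2.sf(stat, df=2 * len(p))           # right-sided test

def chi2_categorical_p(p_values, n_bins=10):
    """Goodness-of-fit test of the p value assortment against uniformity."""
    p = np.asarray(p_values, dtype=float)
    counts, _ = np.histogram(p, bins=n_bins, range=(0.0, 1.0))
    expected = len(p) / n_bins
    stat = np.sum((counts - expected) ** 2 / expected)
    return chi2.sf(stat, df=n_bins - 1)           # right-sided test
```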
{ "heading": "A.5 COHERENCE IN UNTRAINED NETWORKS", "text": "In most cases, we find that trained networks exhibit significant levels of importance and/or coherence. However, in order to get a sense of how much importance and coherence result from the training process, it is also natural to ask to what extent untrained, randomly-initialized networks exhibit these. Here, we present the results for experiments with feature visualization as done in table 1a. We do not do this for lesion tests, though, because in expectation, any untrained network will have accuracy at the random-guess baseline whether intact or lesioned. Table 2 shows these results for untrained CIFAR-10 scale VGGs. Here, the p values for input coherence are not indicative of any sort of interesting phenomenon, which contrasts with the corresponding input coherence values from table 1, which are very low. These suggest that the training process promotes input coherence in these networks. For output coherence, the p values here are lower than the regularized VGGs but higher than the unregularized VGGs from table 1.\nA.6 LESION TESTS WITH ALTERNATE CHOICES OF k\nIn all quantitative experiments in the main paper, we present results for k = 12 clusters. However, to test the robustness of results to the choice of k, we present, in table 3, replicates of table 1b with 8 (33% fewer) and 18 (50% more) clusters. Overall, results are very similar with no apparent systematic differences. In table 3a, there are only 3 values which differ from table 1b in whether they are below the threshold of 0.05, and similarly, there is only 1 such value in 3b." }, { "heading": "A.7 MULTIPLE COMPARISON ADJUSTMENT", "text": "In table 1, we report various p values that summarize the degree to which statistics of sub-clusters vary from those of random groups of neurons within a network. For each network, one can use the p value to test whether the sub-cluster statistics are drawn from the same distribution as the statistics of random groups of neurons. However, when testing multiple networks, one might want to ensure that the experiment and significance-testing procedure are unlikely to generate false positives. In order to do this, a more complicated procedure to decide significance must be used.\nThe Benjamini-Hochberg procedure (Benjamini & Hochberg, 1995) controls the false discovery rate: that is, the expected proportion of rejections of the null hypothesis that are false positives, where the expectation is taken under the data-generating distribution. It relies on all experiments being independent, and therefore it was run separately on the Fisher combined p values and on the Chi squared p values. Results that are declared significant under this procedure, when the maximum acceptable false discovery rate is 1/20, are shown in table 4.\nThe Holm-Bonferroni method (Holm, 1979) controls the family-wise error rate: that is, the probability under the data-generating distribution that any null hypotheses are falsely rejected. Results that are declared significant by the Holm-Bonferroni method run on the whole of table 1, capping the family-wise error rate at 1/20, are shown in table 5.
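Both procedures are available in statsmodels; a minimal sketch with hypothetical per-network p values:

```python
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.20, 0.03, 0.04]  # hypothetical per-network combined p values

# Benjamini-Hochberg: false discovery rate capped at 1/20
bh_reject, *_ = multipletests(p_values, alpha=0.05, method="fdr_bh")

# Holm-Bonferroni: family-wise error rate capped at 1/20
holm_reject, *_ = multipletests(p_values, alpha=0.05, method="holm")
```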
A.8 INPUT CONSTRUCTION\nAll visualizations were created using the Lucid package (https://github.com/tensorflow/lucid). The optimization objective for visualizing sub-clusters was the mean post-ReLU activation for all neurons inside the cluster (it was a mean of means for convolutional feature maps). For small MLPs, small CNNs, and mid-sized CNNs, we generated images using random jittering and scaling, and for ImageNet models, we used Lucid’s default transformations, which consist of padding, jittering, rotation, and scaling with default hyperparameters. For all networks, we used the standard pixel-based parameterization of the image and no regularization on the Adam optimizer. For visualizations in small MLPs and CNNs, we used versions of these networks trained on 3-channel versions of their datasets in which the same inputs were stacked thrice, because Lucid requires networks to have 3-channel inputs. However, we show grayscaled versions of these in figure 5. Refer to the main text (section 3.1.2) for quantitative analysis of the optimization objective values.\nImportantly, these feature visualizations, while designed to maximally activate a sub-cluster, will not necessarily highly activate all of the neurons inside of it. In order to get a sense of how much variance there is among these activations, we analyze two properties of the distribution of sub-cluster activations when a visualization is passed through the network.\nFirst, we perform the same tests as for score and entropy in table 1, but with the variance among neuron activations. The “Activational Variance” columns in table 6 show these p values. Here, low Fisher p values reflect a low variance for unit activations in a true sub-cluster compared to the variance for unit activations in random sub-clusters. In table 6, there is significant evidence that some of the networks at the CIFAR-10 and ImageNet scale have lower variance among the activations of true sub-clusters than random ones when a sub-cluster’s visualization is passed through the network. This suggests that in the networks for which this is the case, neurons in true sub-clusters are more consistently activated by the same visualizations than those in random sub-clusters.\nSecond, we directly analyze the empirical coefficients of variation (CoVs) for the distributions of true sub-clusters. The CoV is the standard deviation of a distribution divided by its mean: $\hat{\sigma}/\hat{\mu}$. As such, a high CoV means that the distribution has a high standard deviation relative to the mean. For each sub-cluster of a network, we take the CoV of the distribution of post-ReLU activations. Then, for each network, we take the distribution of CoVs of its sub-clusters, and find the first quartile, median, and third quartile. For each training condition, we train five networks, rank the five by their median CoV, take the median network under this ranking, and report that network’s CoV quartiles in the final three columns of table 6. We find that in some cases the CoVs are relatively low, including in the ImageNet models, which indicates relatively consistent activations. In other networks, though, many of the CoVs are above 1." },
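A small sketch of the CoV summary, assuming `activations` is a hypothetical list of post-ReLU activation arrays, one per sub-cluster, collected from a forward pass of the visualization:

```python
import numpy as np

def cov_quartiles(activations):
    """First quartile, median, and third quartile of the coefficients of
    variation (std / mean) across a network's sub-clusters."""
    covs = [np.std(a) / np.mean(a) for a in activations if np.mean(a) > 0]
    return np.percentile(covs, [25, 50, 75])
```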
{ "heading": "A.9 LESION TESTS", "text": "Section 3.2 presents the lesion test experiments. Example accuracy-change profiles for an MLP and small CNN on the Fashion datasets are shown here in figure 6. Table 7 and Table 8 show data on the importance of sub-clusters in the single lesion experiments, which are plotted in figure 2. “Acc. diff.” means the difference in accuracy between the actual network and the network with that sub-cluster lesioned, while “Acc. diff. dist.” shows the mean and standard deviation of the distribution of accuracy differentials between the actual network and one with a random set of neurons lesioned.\nThe “Proportion” column denotes the proportion of the layer’s neurons that the sub-cluster represents. ‘Important’ means that the sub-cluster makes up at least 5% of the layer, that the drop in accuracy is greater than one percentage point, and that it was more important than all of the 20 random sub-clusters it was compared against. ‘Sig-but-not-diff’ means that the drop in accuracy is significant but less than 1 percentage point, ‘Diff-but-not-sig’ means that the lesioning damage was more than 1 percentage point but not significant, ‘Prunable’ means that the drop in accuracy is smaller than all random shuffles and smaller than 1 percentage point, ‘Complete’ means that the sub-cluster contains the whole layer, ‘Small’ means that the sub-cluster consists of less than 5% of the layer, and ‘Other’ means that the drop in accuracy is not statistically significant and less than 1 percentage point.\nOne detail not included in the main paper is that, for the sake of computational efficiency, two measures were used for lesion experiments in the ImageNet models we used (ResNet-18 and VGG-16). First, we used a downsampled version of the ImageNet2012 dataset (Krizhevsky et al., 2012) with 10,000 instead of 50,000 images. Second, we omitted sub-clusters with fewer than 5 neurons or more than 90% of the neurons in the layer (this is different from the thresholds of 3 units and 80% we used for input construction experiments)." }, { "heading": "A.10 EXPLORING THE “COMPOSABILITY” PROXY WITH DOUBLE LESION TEST", "text": "Given the lesion test presented in the main text, we know which sub-clusters are important, and it would be ideal to understand how the important sub-clusters depend on each other. To do this, we conduct experiments where we lesion two different important sub-clusters, which we’ll call X and Y, in different layers. First, we measure the loss in accuracy when both are lesioned, which we’ll call $\ell(X \cup Y)$. We then compare $\ell(X \cup Y)$ to the loss in accuracy $\ell(X \cup Y')$ if we take a random subset $Y'$ of neurons of size $|Y|$ from the same layer as Y, and check if $\ell(X \cup Y)$ is larger than 50 random samples of $\ell(X \cup Y')$. This tests whether the damage from lesioning Y is statistically significant, given how many neurons are contained in Y and given that we are already lesioning X. We also calculate $\delta(Y, X) := \ell(X \cup Y) - \ell(X)$, which is the additional damage from lesioning Y given that X has been lesioned. If $\ell(X \cup Y)$ is statistically significantly different from the distribution of $\ell(X \cup Y')$, and if $\delta(Y, X)$ is larger than one percentage point, we say that sub-cluster Y is important conditioned on sub-cluster X. Similarly, we test if X is important conditioned on Y by comparing $\ell(X \cup Y)$ to the distribution of $\ell(X' \cup Y)$, and by determining the size of $\delta(X, Y)$. Table 9 shows the $\delta$ values and importances of different pairs of sub-clusters for an MLP trained on Fashion-MNIST with pruning and dropout, when the number of clusters is set to 8 for visualization.\nBy examining the importances of sub-clusters conditioned on each other, we can attempt to construct a dependency graph of sub-clusters by determining which sub-clusters send information to which others. Consider a pair of sub-clusters (X, Y) where X is in an earlier layer than Y, and where both are individually important (refer to figure 7 for an elaborated visual illustration).\n• If X is not important conditioned on Y, and Y is not important conditioned on X, we reason that all of the information from X is sent to Y (since otherwise lesioning X would damage accuracy even conditioned on Y being lesioned), and that the only information that Y receives is sent via X (since otherwise lesioning Y would damage accuracy even conditioned on X being lesioned).\n• If X is not important conditioned on Y but Y is important conditioned on X, then we reason that X sends information to Y and also to other sub-clusters.\n• If Y is not important conditioned on X but X is important conditioned on Y, we reason that Y receives information from X and other sub-clusters.\n• We can draw no conclusion if both X and Y are important conditioned on the other.\nThese assumptions, together with the data shown in table 9, let us draw some edges in a dependency graph of sub-clusters, which is shown in figure 8. Note that sub-clusters of cluster 0 seem to send information to each other, which is what we would expect if modules were internally connected. The same holds for the sub-cluster of cluster 7." } ]
2,020
null
SP:e5719e04d242e5f1b4646cf4bfe43b8aeaa950ad
[ "The submission proposes a meta-learning algorithm attuned to the hierarchical structure of a dataset of tasks. Hierarchy is enforced in a set of synthetically-generated regression tasks via the data-sampling procedure, which is modified from the task-sampling procedure of [1] to include an additional source of randomness corresponding to which of a set of cluster components task parameters are generated from. The authors propose to adapt the model-agnostic meta-learning algorithm (MAML) of [1] to reflect this hierarchical structure by either observing (Section 4.1, FixedTree MAML) or inferring (Section 4.2, LearnedTree MAML) an assignment of tasks to clusters at each step of the inner loop (task-specific adaptation phase) of MAML; if tasks belong to the same cluster, the correspond task-parameters receive the same update at that step (in particular, the update direction is averaged). It is assumed that there are increasingly many clusters at each step, so that task-specific parameter updates are increasingly granular." ]
In meta-learning, the knowledge learned from previous tasks is transferred to new ones, but this transfer only works if tasks are related, and sharing information between unrelated tasks might hurt performance. A fruitful approach is to share gradients across similar tasks during training, and recent work suggests that the gradients themselves can be used as a measure of task similarity. We study the case in which datasets associated to different tasks have a hierarchical, tree structure. While a few methods have been proposed for hierarchical meta-learning in the past, we propose the first algorithm that is model-agnostic, a simple extension of MAML. As in MAML, our algorithm adapts the model to each task with a few gradient steps, but the adaptation follows the tree structure: in each step, gradients are pooled across task clusters, and subsequent steps follow down the tree. We test the algorithm on linear and non-linear regression on synthetic data, and show that the algorithm significantly improves over MAML. Interestingly, the algorithm performs best when it does not know in advance the tree structure of the data.
[]
[ { "authors": [ "Alessandro Achille", "Michael Lam", "Rahul Tewari", "Avinash Ravichandran", "Subhransu Maji", "Charless Fowlkes", "Stefano Soatto", "Pietro Perona" ], "title": "Task2Vec: Task Embedding for MetaLearning. arXiv:1902.03545 [cs, stat], February 2019", "venue": "URL http://arxiv.org/abs/ 1902.03545", "year": 1902 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "CVPR, pp", "year": 2009 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. arXiv:1703.03400 [cs], March 2017", "venue": "URL http://arxiv.org/ abs/1703.03400", "year": 2017 }, { "authors": [ "Timothy Hospedales", "Antreas Antoniou", "Paul Micaelli", "Amos Storkey" ], "title": "Meta-Learning in Neural Networks: A Survey. arXiv:2004.05439 [cs, stat], April 2020", "venue": "URL http://arxiv.org/ abs/2004.05439", "year": 2004 }, { "authors": [ "Ghassen Jerfel", "Thomas L Griffiths", "Erin Grant", "Katherine Heller" ], "title": "Reconciling meta-learning and continual learning with online mixtures of tasks", "venue": "NIPS, pp", "year": 2019 }, { "authors": [ "Sameeksha Katoch", "Kowshik Thopalli", "Jayaraman J. Thiagarajan", "Pavan Turaga", "Andreas Spanias" ], "title": "Invenio: Discovering Hidden Relationships Between Tasks/Domains Using Structured Meta Learning", "venue": "[cs],", "year": 2020 }, { "authors": [ "Lu Liu", "Tianyi Zhou", "Guodong Long", "Jing Jiang", "Chengqi Zhang" ], "title": "Learning to Propagate for Graph Meta-Learning. arXiv:1909.05024 [cs, stat], November 2019", "venue": "URL http://arxiv", "year": 1909 }, { "authors": [ "Aditya Krishna Menon", "Anand Rajagopalan", "Baris Sumengen", "Gui Citovsky", "Qin Cao", "Sanjiv Kumar" ], "title": "Online hierarchical clustering approximations", "venue": null, "year": 1909 }, { "authors": [ "Aniruddh Raghu", "Maithra Raghu", "Samy Bengio", "Oriol Vinyals" ], "title": "Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML. arXiv:1909.09157 [cs, stat], February 2020", "venue": "URL http://arxiv.org/abs/1909.09157", "year": 1909 }, { "authors": [ "Nitish Srivastava", "Russ R Salakhutdinov" ], "title": "Discriminative Transfer Learning with Treebased Priors", "venue": "Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Huaxiu Yao", "Ying Wei", "Junzhou Huang", "Zhenhui Li" ], "title": "Hierarchically Structured Meta-learning. arXiv:1905.05301 [cs, stat], November 2019", "venue": "URL http://arxiv.org/abs/1905", "year": 1905 }, { "authors": [ "Amir Zamir", "Alexander Sax", "William Shen", "Leonidas Guibas", "Jitendra Malik", "Silvio Savarese" ], "title": "Taskonomy: Disentangling Task Transfer Learning", "venue": "[cs],", "year": 2018 }, { "authors": [ "Yu Zhang", "Qiang Yang" ], "title": "A Survey on Multi-Task Learning", "venue": "[cs],", "year": 2018 }, { "authors": [ "Luisa M. Zintgraf", "Kyriacos Shiarlis", "Vitaly Kurin", "Katja Hofmann", "Shimon Whiteson" ], "title": "Fast Context Adaptation via Meta-Learning", "venue": "URL http: //arxiv.org/abs/1810.03642", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning models require a large amount of data in order to perform well when trained from scratch. When data is scarce for a given task, we can transfer the knowledge gained in a source task to quickly learn a target task, if the two tasks are related. The field of Multi-task learning studies how to learn multiple tasks simultaneously, with a single model, by taking advantage of task relationships (Ruder (2017), Zhang & Yang (2018)). However, in Multi-task learning models, a set of tasks is fixed in advance, and they do not generalize to new tasks. The field of of Meta-learning is inspired by the ability of humans to learn how to quickly learn new tasks, by using the knowledge of previously learned ones.\nMeta-learning has seen a widespread use in multiple domains, especially in recent years and after the advent of Deep Learning (Hospedales et al. (2020)). However, there is still a lack of methods for sharing information across tasks in meta-learning models, and the goal of our work is to fill this gap. In particular, a successful model for meta-learning, MAML (Finn et al. (2017)), does not diversify task relationships according to their similarity, and it is unclear how to modify it for that purpose.\nIn this work, we contribute the following:\n• We propose a novel modification of MAML to account for a hierarchy of tasks. The algorithm uses the tree structure of data during adaptation, by pooling gradients across tasks at each adaptation step, and subsequent steps follow down the tree (see Figure 1a).\n• We introduce new benchmarks for testing a hierarchy of tasks in meta-learning on a variety of synthetic non-linear (sinusoidal) and multidimensional linear regression tasks.\n• We compare our algorithm to MAML and a baseline model, where we train on all tasks but without any meta-learning algorithm applied. We show that the algorithm has a better performance with respect to both of these models in the sinusoidal regression task and the newly introduced synthetic task because it exploits the hierarchical structure of the data." }, { "heading": "2 RELATED WORK", "text": "The problem of quantifying and exploiting task relationships has a long history in Multi-task learning, and is usually approached by parameter sharing, see Ruder (2017), Zhang & Yang (2018) for\nreviews. However, Multi-task Learning is fundamentally different from Meta-learning as it does not consider the problem of generalizing to new tasks (Hospedales et al. (2020)). Recent work includes Zamir et al. (2018), who studies a large number of computer vision tasks and quantifies the transfer between all pairs of tasks. Achille et al. (2019) proposes a novel measure of task representation, by assigning an importance score to each model parameter in each task. The score is based on the gradients of each task’s loss function with respect to each model parameter. This work suggests that gradients can be used as a measure of task similarity, and we use this insight in our proposed algorithm.\nIn the context of Meta-learning, a few papers have been published on the problem of learning and using task relationships in the past months. The model of Yao et al. (2019) applies hierarchical clustering to task representations learned by an autoencoder, and uses those clusters to adapt the parameters to each task. The model of Liu et al. 
(2019) maps the classes of each task into the edges of a graph; it meta-learns relationships between classes and how to allocate new classes by using a graph neural network with attention. However, these algorithms are not model-agnostic; they have a fixed backbone and loss function, and are thus difficult to apply to new problems. Instead, we design our algorithm as a simple generalization of Model-agnostic meta-learning (MAML, Finn et al. (2017)), and it can be applied to any loss function and backbone.\nA couple of studies looked into modifying MAML to account for task similarities. The work of Jerfel et al. (2019) finds a different initial condition for each cluster of tasks, and applies the algorithm to the problem of continual learning. The work of Katoch et al. (2020) defines parameter updates for a task by aggregating gradients from other tasks according to their similarity. However, in contrast with our algorithm, both of these models are not hierarchical; tasks are clustered on one level only and cannot be represented by a tree structure. As far as we know, ours is the first model-agnostic algorithm for meta-learning that can be applied to a tree structure of tasks." }, { "heading": "3 THE META-LEARNING PROBLEM", "text": "We follow the notation of Hospedales et al. (2020). We assume the existence of a distribution over tasks $\tau$ and, for each task, a distribution over data points $D$ and a loss function $L$. The loss function of the meta-learning problem, $L_{meta}$, is defined as an average across both distributions of tasks and data points:\n$$L_{meta}(\omega) = \mathbb{E}_{\tau}\, \mathbb{E}_{D|\tau}\, L_{\tau}(\theta_{\tau}(\omega); D) \qquad (1)$$\nThe goal of meta-learning is to minimize the loss function with respect to a vector of meta-parameters $\omega$. The vector of parameters $\theta$ is task-specific and depends on the meta-parameters $\omega$. Different meta-learning algorithms correspond to a different choice of $\theta_{\tau}(\omega)$. We describe below the choice of TreeMAML, the algorithm proposed in this study.\nDuring meta-training, the loss is evaluated on a sample of $m$ tasks, and a sample of $n_v$ validation data points for each task:\n$$L_{meta}(\omega) = \frac{1}{m n_v} \sum_{i=1}^{m} \sum_{j=1}^{n_v} L_{\tau_i}(\theta_{\tau_i}(\omega); D_{ij}) \qquad (2)$$\nFor each task $i$, the parameters $\theta_{\tau_i}$ are learned from a set of $n_t$ training data points, distinct from the validation data. During meta-testing, a new (target) task is given and the parameters $\theta$ are learned from a set of $n_r$ target data points. In this work, we also use a batch of training data points to adapt $\theta$ at test time. No training data is used to compute the final performance of the model, which is computed on separate test data of the target task." }, { "heading": "3.1 MAML", "text": "MAML aims at finding the optimal initial condition $\omega$ from which a good parameter set can be found, separately for each task, after $K$ gradient steps (Finn et al. (2017)). For task $i$, we define the single gradient step with learning rate $\alpha$ as\n$$U_i(\omega) = \omega - \frac{\alpha}{n_t} \sum_{j=1}^{n_t} \nabla L(\omega; D_{ij}) \qquad (3)$$\nThen, MAML with $K$ gradient steps corresponds to $K$ iterations of this step (here we assume that the same batch of training data points is used at each step, because these are task-specific):\n$$\theta_{\tau_i}(\omega) = U_i(U_i(\ldots U_i(\omega))) \quad (K \text{ times}) \qquad (4)$$\nThis update is usually referred to as the inner loop, and is performed separately for each task, while optimization of the loss (2) is referred to as the outer loop." },
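A minimal sketch of this inner loop in PyTorch, written functionally so that the adapted parameters stay differentiable with respect to ω for the outer loop; `loss_fn(theta, X, y)` is a hypothetical task loss that takes an explicit parameter list and averages over the batch.

```python
import torch

def inner_loop(omega, data, loss_fn, alpha=0.01, K=3):
    """MAML task adaptation (eqs. 3-4): K gradient steps from the shared
    initialization `omega` on one task's training batch."""
    theta = [w.clone() for w in omega]
    X, y = data
    for _ in range(K):
        loss = loss_fn(theta, X, y)
        grads = torch.autograd.grad(loss, theta, create_graph=True)
        theta = [t - alpha * g for t, g in zip(theta, grads)]
    return theta  # still differentiable w.r.t. omega for the outer update
```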
{ "heading": "3.2 TREEMAML", "text": "We propose to modify MAML in order to account for a hierarchical structure of tasks. The idea is illustrated in Figure 1.\nAt each gradient step $k$, we assume that tasks are aggregated into $C_k$ clusters, and the parameters for each task are updated according to the average gradient across tasks within the corresponding cluster (in Fig. 1b, we use $K = 3$ steps and $C_1 = 2$, $C_2 = 4$, $C_3 = 8$). We denote by $T_c$ the set of tasks in cluster $c$. Then, the gradient update for the parameters of each task belonging to cluster $c$ is equal to\n$$U_c(\omega) = \omega - \frac{\alpha}{n_t |T_c|} \sum_{i \in T_c} \sum_{j=1}^{n_t} \nabla L(\omega; D_{ij}) \qquad (5)$$\nFurthermore, we denote by $c_i^k$ the cluster to which task $i$ belongs at step $k$. Then, TreeMAML with $K$ gradient steps corresponds to $K$ iterations of this step:\n$$\theta_{\tau_i}(\omega) = U_{c_i^K}(U_{c_i^{K-1}}(\ldots U_{c_i^1}(\omega))) \qquad (6)$$\nThe intuition is the following: if each task has scarce data, gradient updates for single tasks are noisy, and adding up gradients across similar tasks increases the signal. Note that we recover MAML if $C_k$ is equal to the total number of tasks $m$ at all steps. On the other hand, if $C_k = 1$ then the inner loop would take a step with a gradient averaged across all tasks.\nBecause at one specific step the weight updates are equal for all tasks within a cluster, it is possible to define the steps of the inner loop update per cluster $c$ instead of per task. Given a cluster $c$ and its parent cluster $p_c$ in the tree, the update at step $k$ is given by\n$$\theta_{c,k} = \theta_{p_c,k-1} - \frac{\alpha}{n_t |T_c|} \sum_{i \in T_c} \sum_{j=1}^{n_t} \nabla L(\theta_{p_c,k-1}; D_{ij}) \qquad (7)$$\nwhere $\theta_{c,k}$ is the parameter value for cluster $c$ at step $k$. In terms of the notation used in expression 6, we have the equivalence $\theta_{\tau_i}(\omega) = \theta_{c_i,K}$, which depends on the initial condition $\omega$. The full procedure is described in Algorithm 1.
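A sketch of the clustered update of equation 7 for a single cluster, reusing the functional conventions of the MAML sketch above; `task_batches` and `task_ids` are hypothetical containers holding the cluster's tasks.

```python
import torch

def cluster_step(theta_parent, task_batches, task_ids, loss_fn, alpha=0.01):
    """One TreeMAML inner step (eq. 7): average per-task gradients over the
    cluster's tasks, then take one step from the parent's parameters."""
    grads = None
    for i in task_ids:  # tasks i in T_c
        X, y = task_batches[i]
        loss = loss_fn(theta_parent, X, y)  # mean over the n_t data points
        g = torch.autograd.grad(loss, theta_parent, create_graph=True)
        grads = g if grads is None else [a + b for a, b in zip(grads, g)]
    return [t - alpha * g / len(task_ids) for t, g in zip(theta_parent, grads)]
```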
We consider two versions of the algorithm, depending on how we obtain the tree structure (similar to Srivastava & Salakhutdinov (2013)):\n• Fixed tree. The tree is fixed by the knowledge of the tree structure of tasks, when this structure is available. In that case, the values of $C_k$ are determined by that tree.\n• Learned tree. The tree is unknown a priori, and is learned using a hierarchical clustering algorithm. In that case, the values of $C_k$ are determined at each step as a result of the clustering algorithm.\nIn the latter case, we cluster tasks based on the gradients of each task loss, consistent with recent work (Achille et al. (2019)). After each step $k$ at cluster $c_i$, the clustering algorithm takes as input the gradient vectors of the children tasks $i$:\n$$g_{ik} = \frac{1}{n_t} \sum_{j=1}^{n_t} \nabla L(\theta_{c_i,k}; D_{ij}) \qquad (8)$$\nand these gradients are further allocated into clusters according to their similarity. The clustering algorithm is described in subsection 3.3.\nSimilar to MAML, adaptation to a new task is performed by computing $\theta_{\tau_i}(\omega)$ on a batch of data of the target task. In order to exploit task relationships, we first reconstruct the tree structure by using a batch of training data and then we introduce the new task.\nAlgorithm 1 TreeMAML\nRequire: distribution over tasks $p(\tau)$; distribution over data for each task $p(D|\tau)$;\nRequire: number of inner steps $K$; number of training tasks $m$; learning rates $\alpha$, $\beta$;\nRequire: number of clusters $C_k$ for each step $k$; loss function $L_\tau(\omega, D)$ for each task\nrandomly initialize $\omega$\nwhile not done do\nsample batch of $i = 1 : m$ tasks $\{\tau_i\} \sim p(\tau)$\nfor all tasks $i = 1 : m$, initialize a single cluster $c_i = 1$\ninitialize $\theta_{1,0} = \omega$\nfor steps $k = 1 : K$ do\nfor tasks $i = 1 : m$ do\nsample batch of $j = 1 : n_v$ data points $\{D_{ij}\} \sim p(D|\tau_i)$\nevaluate gradient $g_{ik} = \frac{1}{n_t} \sum_{j=1}^{n_t} \nabla L_{\tau_i}(\theta_{c_i,k-1}; D_{ij})$\nend for\nregroup tasks into $C_k$ clusters $T_c = \{i : c_i = c\}$ according to the similarity of $\{g_{ik}\}$ and parent clusters $\{p_c\}$\nupdate $\theta_{c,k} = \theta_{p_c,k-1} - \frac{\alpha}{|T_c|} \sum_{i \in T_c} g_{ik}$ for all clusters $c = 1 : C_k$\nend for\nupdate $\omega \leftarrow \omega - \beta \frac{1}{m n_v} \sum_{i=1}^{m} \sum_{j=1}^{n_v} \nabla_\omega L_{\tau_i}(\theta_{c_i,K}(\omega); D_{ij})$\nend while" }, { "heading": "3.3 CLUSTERING ALGORITHM", "text": "In the learned tree case we employ a hierarchical clustering algorithm to cluster the gradients of our model parameters. We specifically opt for an online clustering algorithm to maximise computational efficiency at test time and scalability. When a new task is evaluated, we reuse the tree structure that was generated for a training batch and add the new task. This saves us from computing a new task hierarchy from scratch for every new task. Moreover, with offline hierarchical clustering, all the data needs to be available to the clustering algorithm at the same time, which becomes a problem when dealing with larger batch sizes. Therefore online clustering favours scalability.\nWe follow the online top down (OTD) approach set out by Menon et al. (2019) and adapt it to approximate non-binary tree structures. Our clustering algorithm is shown in Algorithm 2. Specifically, we introduce two modifications to the original OTD algorithm:\n• Maximum Tree Depth Parameter D: This is equivalent to the number of inner steps to take in TreeMAML, since the tree is a representation of the inner loop, where each layer in the tree represents a single inner step.\n• Non-binary Tree Approximation: We introduce a hyperparameter $\xi$ which represents how far the similarity of a new task needs to be from the average cluster similarity in order to be considered a child of that same cluster. This is not an absolute value of distance, but a multiplicative factor of the standard deviation of the intracluster similarities. 
Introducing this factor allows clusters at any level to have a number of children greater than two.\nAlgorithm 2 Online top down (OTD) - Non-binary\nRequire: origin cluster node $C$ with a given set of children $A = \{x_1, x_2, \ldots, x_N\}$\nRequire: new task $x$; maximum depth allowed $D$; similarity metric $\omega(\cdot)$\nRequire: standard deviation multiplicative hyperparameter $\xi$\nif $|A| = 0$ then the new task becomes a new child: $A = \{x\}$\nelse if $|A| = 1$ then add the new task to the set of children: $A \leftarrow A \cup \{x\}$\nelse if $\omega(A \cup \{x\}) > \omega(A)$ then\nidentify the most similar child $x^* = \arg\max_{x_i} \omega(\{x_i, x\})$\nif the maximum depth is reached ($C_{depth} + 1 = D$) then\nadd the new task to the set of children: $A \leftarrow A \cup \{x\}$\nelse\nrecursively perform OTD to create a new node $C' = \text{OTD}(x^*, x)$\nadd the new node to the set of children: $A \leftarrow (A \setminus \{x^*\}) \cup C'$\nend if\nelse if $\omega(A \cup \{x\}) < \omega(A) - \xi\sigma_T$ then\nthe current node and the new task become children of a new cluster: $A \leftarrow \{C, x\}$\nelse\nadd the new task to the set of children: $A \leftarrow A \cup \{x\}$\nend if" }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 FIXED TREEMAML", "text": "In this section we report experiments where we assumed knowledge about the structure of the underlying tasks and used this specifically to aggregate the gradients.\nExperiment 1: Sinusoidal Regression Tasks We start with a modification of the regression problem used in the Finn et al. (2017) paper, the regression of a sine wave, where the amplitude and phase of the sinusoid are varied between tasks. We introduce a single modification to the experiment: in the original dataset the amplitude varies within [0.1, 5.0] and the phase varies within [0, π], but in our experiments we have selected subsets of these values to simulate structured data, as shown in Figure 2. To increase the difficulty of the task, we added different levels of noise to the datasets.\nAs in the original experiment, the data points for each task are sampled uniformly between [-5.0, 5.0]. During training and testing, the loss used is the mean-squared error. The model is a neural network with 2 hidden layers of size 40 with ReLU nonlinearities. We assume a relationship between tasks as the one represented in Figure 2 (c), where the total depth of the tree is 2, with 4 leaves in total. These leaves represent the finer-level clusters, and each task is fed to the TreeMAML algorithm with an assigned label corresponding to one of these.\nWe evaluate performance by fine-tuning the model learned by MAML, TreeMAML and baseline on K = {3, 5, 100} data points. We also evaluate the case where K = 5 data points were provided for training, but a single data point was provided at meta-testing (see the point marked as a star in Figure 3).\nExperiment 2a: Multidimensional Linear Regression Task Here we consider a multidimensional linear regression problem $y = \sum_{i=1}^{64} P_i x_i + \eta$, where the tasks are randomly sampled from a set of 4 defined clusters of multidimensional parameters $P$, and $\eta$ is randomly generated Gaussian noise. Even in this case, the parameter clusters are arranged hierarchically such that $C_1 = 2$, $C_2 = 4$. The data points for the tasks are sampled uniformly, $x_i \sim U[-5.0, 5.0]$, for all training and testing tasks. The models are then trained and tested on a set of tasks with K = 4, 8, 16, 32, 64 and 128 data points." }, { "heading": "4.2 LEARNED TREEMAML", "text": "In this section we report experiments where we assume no prior knowledge of the underlying structure of the data. Therefore the hierarchy of the data is learnt per-batch using the modified OTD algorithm described in section 3.3.
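A sketch of Algorithm 2's decision rule only (the recursive tree maintenance and depth handling are omitted), using mean pairwise cosine similarity as ω and returning which of the three main cases applies:

```python
import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def avg_pairwise_sim(vs):
    """Mean and standard deviation of pairwise cosine similarities."""
    sims = [cos(vs[i], vs[j]) for i in range(len(vs)) for j in range(i + 1, len(vs))]
    return (np.mean(sims), np.std(sims)) if sims else (1.0, 0.0)

def otd_decide(children, x, xi):
    """Decide where a new task gradient x goes relative to a cluster whose
    member gradients are `children`, following Algorithm 2."""
    mean_old, std_old = avg_pairwise_sim(children)
    mean_new, _ = avg_pairwise_sim(children + [x])
    if mean_new > mean_old:
        return "recurse_on_most_similar_child"  # split with argmax_j cos(x, child_j)
    if mean_new < mean_old - xi * std_old:
        return "new_parent"                     # node and x become siblings
    return "add_as_child"                       # non-binary case: widen the cluster
```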
Experiment 2b: Learned Tree with Multidimensional Linear Regression Tasks For the multidimensional linear regression task, we set the maximum depth of the clustering algorithm to 2 and we use the cosine similarity metric. This is equivalent to 3 inner steps, because there is one last step that is task-specific; therefore, for these experiments MAML is also set to perform 3 inner steps.\nIn Table 1 we show that TreeMAML outperforms the Baseline and MAML across all numbers of data points. Learned TreeMAML performs relatively better for a larger number of data points; this is expected because, as the number of data points increases, the gradients used to cluster the tasks will be less affected by the noise and become more accurate, leading also to better clustering.\nExperiment 3: Mixed Synthetic Regression Tasks Here we follow a similar setup as described in Yao et al. (2019); however, we choose different parameters for our tasks in order to have better defined clusters of tasks. Again, we sample data points for the regression tasks as $x \sim U[-5.0, 5.0]$. We define a total of 6 clusters as follows: (1) Linear - Positive Slopes: $y = a_{l+}x + b_{l+}$, $a_{l+} \sim U[1.0, 2.0]$ and $b_{l+} \sim U[0, 1.0]$; (2) Quadratic - Positive Slopes: $y = a_{q+}x^2 + b_{q+}x + c_{q+}$, $a_{q+} \sim U[0.1, 0.2]$, $b_{q+} \sim U[1.0, 2.0]$ and $c_{q+} \sim U[2.0, 3.0]$; (3) Linear - Negative Slopes: $y = a_{l-}x + b_{l-}$, $a_{l-} \sim U[-2.0, -1.0]$ and $b_{l-} \sim U[-1.0, 0]$; (4) Quadratic - Negative Slopes: $y = a_{q-}x^2 + b_{q-}x + c_{q-}$, $a_{q-} \sim U[-0.2, -0.1]$, $b_{q-} \sim U[-2.0, -1.0]$ and $c_{q-} \sim U[-3.0, -2.0]$; (5) Cubic: $y = a_c x^3 + b_c x^2 + c_c x + d_c$, $a_c \sim U[-0.1, 0.1]$, $b_c \sim U[-0.2, 0]$, $c_c \sim U[-2.0, -1.0]$ and $d_c \sim U[0, 3.0]$; (6) Sinusoidals: $y = a_s \sin(x) + b_s$, $a_s \sim U[4.0, 5.0]$ and $b_s \sim U[2.0, \pi]$. To each of these we add a noise variable, sampled for each data point from $U[-0.01, 0.01]$. We logically arranged this set of tasks in a tree structure like the one shown in Figure 5 and used this as reference for the Fixed TreeMAML experiments. We train all tasks with K=10 data points. The model used is a neural network with 2 hidden layers of size 40 with ReLU nonlinearities.\nThe results stated here are averaged over 600 test tasks (100 from each cluster). For these experiments we set the number of MAML inner steps to 4 and the depth of the Learnt TreeMAML to 3 in order to match the number of steps in the hierarchy in Figure 5. The results for the models are shown in Table 2 for a number of epochs E = {1, 5, 10}. The results confirm that TreeMAML has a clear advantage over the MAML and baseline algorithms." },
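A sketch of a sampler for these six clusters; all parameter ranges, the K = 10 points drawn from U[−5, 5], and the per-point noise are taken directly from the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task(cluster):
    """Sample one regression task (x, y) from one of the six clusters."""
    U = rng.uniform
    if cluster == "lin+":
        a, b = U(1.0, 2.0), U(0.0, 1.0)
        f = lambda x: a * x + b
    elif cluster == "lin-":
        a, b = U(-2.0, -1.0), U(-1.0, 0.0)
        f = lambda x: a * x + b
    elif cluster == "quad+":
        a, b, c = U(0.1, 0.2), U(1.0, 2.0), U(2.0, 3.0)
        f = lambda x: a * x**2 + b * x + c
    elif cluster == "quad-":
        a, b, c = U(-0.2, -0.1), U(-2.0, -1.0), U(-3.0, -2.0)
        f = lambda x: a * x**2 + b * x + c
    elif cluster == "cubic":
        a, b, c, d = U(-0.1, 0.1), U(-0.2, 0.0), U(-2.0, -1.0), U(0.0, 3.0)
        f = lambda x: a * x**3 + b * x**2 + c * x + d
    else:  # "sin"
        a, b = U(4.0, 5.0), U(2.0, np.pi)
        f = lambda x: a * np.sin(x) + b
    x = rng.uniform(-5.0, 5.0, size=10)                # K = 10 data points
    y = f(x) + rng.uniform(-0.01, 0.01, size=x.shape)  # per-point noise
    return x, y
```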
{ "heading": "5 DISCUSSION", "text": "We proposed a simple modification of MAML to address the problem of meta-learning hierarchical task distributions. The proposed algorithm is based on the intuitive notion that learning tasks by gradient descent may benefit from gradient sharing across similar tasks. Inspired by recent work (Achille et al. (2019)), we use the insight that the similarity of tasks can be measured by the similarity of the gradients themselves, thus reducing the problem of task transfer to gradient clustering.\nWe show that the new algorithm, which we term TreeMAML, performs better than MAML when the task structure is hierarchical. However, our tests are so far limited to synthetic data, and future work will have to validate our approach in more realistic settings. For example, some computer vision datasets have a hierarchical structure (Deng et al. (2009)) and thus may represent a good test bed for our algorithm.\nWe presented a very basic instance of our algorithm, that can be improved in several ways. For example, not all parameters need to be adapted, and recent work suggests that removing the inner loop for a subset of parameters increases performance, especially when data is scarce (Zintgraf et al. (2019), Raghu et al. (2020)). Another possible modification of the algorithm is to have a different number of clusters for different subsets of parameters. In general, given that our algorithm is a relatively simple modification of MAML, several tricks to improve training of the latter could be used for our algorithm as well." } ]
2,020
null
SP:18a31dc5f6d12d1d30a3d1e4698523336cd67eb1
[ "The authors present the split Poisson Gamma (SPG) distribution, an extension of the Poisson-Gamma distribution, to model a discrete non-stationary stochastic process. SPG has an analytical posterior allowing accurate prediction after the model parameters have been inferred a single time. The authors apply the SPG to model tumor mutation rates and show that model parameters can be accurately inferred from high-dimensional epigenetic data. This is achieved through a combination of CNNs, GPs and MLE. The results are promising in detecting tumor drivers such as genes, regulatory structures and base-pairs." ]
Detection of cancer-causing mutations within the vast and mostly unexplored human genome is a major challenge. Doing so requires modeling the background mutation rate, a highly non-stationary stochastic process, across regions of interest varying in size from one to millions of positions. Here, we present the split-Poisson-Gamma (SPG) distribution, an extension of the classical Poisson-Gamma formulation, to model a discrete stochastic process at multiple resolutions. We demonstrate that the probability model has a closed-form posterior, enabling efficient and accurate linear-time prediction over any length scale after the parameters of the model have been inferred a single time. We apply our framework to model mutation rates in tumors and show that model parameters can be accurately inferred from high-dimensional epigenetic data using a convolutional neural network, Gaussian process, and maximum-likelihood estimation. Our method is both more accurate and more efficient than existing models over a large range of length scales. We demonstrate the usefulness of multi-resolution modeling by detecting genomic elements that drive tumor emergence and are of vastly differing sizes.
[ { "affiliations": [], "name": "Adam Yaari" }, { "affiliations": [], "name": "Maxwell Sherman" }, { "affiliations": [], "name": "Oliver Priebe" }, { "affiliations": [], "name": "Po-Ru Loh" }, { "affiliations": [], "name": "Boris Katz" }, { "affiliations": [], "name": "Andrei Barbu" }, { "affiliations": [], "name": "Bonnie Berger" } ]
[ { "authors": [ "N. Abdennur" ], "title": "Python bindings to UCSC BigWig and BigBed library. Contribute to nvictus/pybbi development by creating an account on GitHub, November 2018", "venue": "URL https://github. com/nvictus/pybbi. original-date: 2016-05-16T19:18:58Z", "year": 2016 }, { "authors": [ "K.C. Akdemir" ], "title": "Somatic mutation distributions in cancer genomes vary with three-dimensional chromatin structure", "venue": "Nature Genetics,", "year": 2020 }, { "authors": [ "L.B. Alexandrov" ], "title": "Signatures of mutational processes in human cancer", "venue": "ISSN 1476-4687. doi: 10.1038/nature12477. URL https://www. nature.com/articles/nature12477", "year": 2013 }, { "authors": [ "J. Bertl", "Q. Guo", "M. Juul", "S. Besenbacher", "M.M. Nielsen", "H. Hornshøj", "J.S. Pedersen", "A. Hobolth" ], "title": "A site specific model and analysis of the neutral somatic mutation rate in whole-genome cancer data", "venue": "BMC Bioinformatics,", "year": 2018 }, { "authors": [ "J. Bradshaw", "A.G. d. G. Matthews", "Z. Ghahramani" ], "title": "Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks", "venue": "[stat],", "year": 2017 }, { "authors": [ "P.J. Campbell" ], "title": "Pan-cancer analysis of whole genomes", "venue": "Nature,", "year": 2020 }, { "authors": [ "R.G. Gallager" ], "title": "Stochastic Processes: Theory for Applications", "venue": "ISBN 978-1-107-43531-5", "year": 2013 }, { "authors": [ "J.R. Gardner", "G. Pleiss", "D. Bindel", "K.Q. Weinberger", "A.G. Wilson" ], "title": "GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration", "venue": "URL http://arxiv.org/abs/1809.11165", "year": 2019 }, { "authors": [ "B.S. Gloss", "M.E. Dinger" ], "title": "Realizing the significance of noncoding functionality in clinical genomics", "venue": "Experimental & Molecular Medicine,", "year": 2018 }, { "authors": [ "A. Gonzalez-Perez", "R. Sabarinathan", "N. Lopez-Bigas" ], "title": "Local Determinants of the Mutational Landscape of the Human Genome", "venue": "Cell, 177(1):101–114,", "year": 2019 }, { "authors": [ "D. Guo", "B. Chen", "H. Zhang", "M. Zhou" ], "title": "Deep poisson gamma dynamical systems", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "M. Juul", "J. Bertl", "Q. Guo", "M.M. Nielsen", "M. Świtnicki", "H. Hornshøj", "T. Madsen", "A. Hobolth", "J.S. Pedersen" ], "title": "Non-coding cancer driver candidates identified with a sample- and position-specific model of the somatic mutation rate", "venue": "eLife, 6. ISSN 2050-084X. doi: 10.7554/eLife.21778. URL https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5440169/", "year": 2050 }, { "authors": [ "M. Juul", "T. Madsen", "Q. Guo", "J. Bertl", "A. Hobolth", "M. Kellis", "J.S. Pedersen" ], "title": "ncdDetect2: improved models of the site-specific mutation rate in cancer and driver detection with robust significance evaluation", "venue": null, "year": 2019 }, { "authors": [ "E. Khurana", "Y. Fu", "D. Chakravarty", "F. Demichelis", "M.A. Rubin", "M. Gerstein" ], "title": "Role of non-coding sequence variants in cancer", "venue": "Nature Reviews Genetics,", "year": 2016 }, { "authors": [ "M.S. Lawrence" ], "title": "Mutational heterogeneity in cancer and the search for new cancer-associated genes", "venue": "Nature,", "year": 2013 }, { "authors": [ "J.K. Lindsey. Statistical Analysis of Stochastic Processes in Time. 
Cambridge University Press", "August" ], "title": "ISBN 978-1-139-45451-3", "venue": "Google-Books-ID: podDRPdOFTQC.", "year": 2004 }, { "authors": [ "L. Lochovsky", "J. Zhang", "Y. Fu", "E. Khurana", "M. Gerstein" ], "title": "LARVA: an integrative framework for large-scale analysis of recurrent variants in noncoding annotations", "venue": "Nucleic Acids Research,", "year": 2015 }, { "authors": [ "I. Martincorena", "P.J. Campbell" ], "title": "Somatic mutation in cancer and normal cells", "venue": "doi: 10.1126/science.aab4082. URL https://science.sciencemag.org/content/349/6255/1483. Publisher: American Association for the Advancement of Science Section: Review", "year": 2015 }, { "authors": [ "I. Martincorena", "K.M. Raine", "M. Gerstung", "K.J. Dawson", "K. Haase", "P. Van Loo", "H. Davies", "M.R. Stratton", "P.J. Campbell" ], "title": "Universal Patterns of Selection in Cancer and Somatic Tissues", "venue": "ISSN 00928674. doi: 10.1016/j.cell.2017.09.042. URL https://linkinghub.elsevier.com/retrieve/pii/S0092867417311364", "year": 2017 }, { "authors": [ "L. McInnes", "J. Healy", "N. Saul", "L. Grossberger" ], "title": "Umap: Uniform manifold approximation and projection", "venue": "The Journal of Open Source Software,", "year": 2018 }, { "authors": [ "L. Mularoni", "R. Sabarinathan", "J. Deu-Pons", "A. Gonzalez-Perez", "N. López-Bigas" ], "title": "OncodriveFML: a general framework to identify coding and non-coding regions with cancer driver mutations", "venue": "Genome Biology,", "year": 2016 }, { "authors": [ "S. Nik-Zainal" ], "title": "Landscape of somatic mutations in 560 breast cancer", "venue": "whole-genome sequences. Nature,", "year": 2016 }, { "authors": [ "C.J. Paciorek", "M.J. Schervish" ], "title": "Nonstationary Covariance Functions for Gaussian Process Regression", "venue": "Advances in Neural Information Processing Systems", "year": 2004 }, { "authors": [ "A. Paszke", "S. Gross", "S. Chintala", "G. Chanan", "E. Yang", "Z. DeVito", "Z. Lin", "A. Desmaison", "L. Antiga", "A. Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "P. Polak" ], "title": "Cell-of-origin chromatin organization shapes the mutational landscape of cancer", "venue": "Nature, 518(7539):360–364,", "year": 2015 }, { "authors": [ "K. Polimis", "A. Rokem", "B. Hazelton" ], "title": "Confidence intervals for random forests in python", "venue": "Journal of Open Source Software,", "year": 2017 }, { "authors": [ "E. Rheinbay" ], "title": "Analyses of non-coding somatic drivers in 2,658 cancer", "venue": "whole genomes. Nature,", "year": 2020 }, { "authors": [ "M.D. Risser" ], "title": "Review: Nonstationary Spatial Modeling, with Emphasis on Process Convolution and Covariate-Driven Approaches", "venue": "[stat],", "year": 2016 }, { "authors": [ "Roadmap Epigenomics Consortium" ], "title": "Integrative analysis of 111 reference human epigenomes", "venue": "Nature, 518(7539):317–330,", "year": 2015 }, { "authors": [ "A. Schein", "H. Wallach", "M. Zhou" ], "title": "Poisson-gamma dynamical systems", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "B. Schuster-Böckler", "B. Lehner" ], "title": "Chromatin organization is a major influence on regional mutation rates in human cancer", "venue": "cells. Nature,", "year": 2012 }, { "authors": [ "S. Seabold", "J. 
Perktold" ], "title": "statsmodels: Econometric and statistical modeling with python", "venue": "In 9th Python in Science Conference,", "year": 2010 }, { "authors": [ "M.R. Stratton", "P.J. Campbell", "P.A. Futreal" ], "title": "The cancer", "venue": "genome. Nature,", "year": 2009 }, { "authors": [ "J.G. Tate" ], "title": "COSMIC: the Catalogue Of Somatic Mutations In Cancer", "venue": "Nucleic Acids Research, 47(D1):D941–D947,", "year": 2018 }, { "authors": [ "L. Wadi" ], "title": "Candidate cancer driver mutations in superenhancers and long-range chromatin interaction networks. bioRxiv", "venue": "doi: 10.1101/236802", "year": 2017 }, { "authors": [ "J. Wang", "W. Feng", "Z. Yuan", "J.D. Weber", "Y. Zhang" ], "title": "Dhx33 interacts with ap-2β to regulate bcl-2 gene expression and promote cancer cell survival", "venue": "Molecular and Cellular Biology,", "year": 2019 }, { "authors": [ "D. Weghorn", "S. Sunyaev" ], "title": "Bayesian inference of negative and positive selection in human cancers", "venue": "Nature Genetics,", "year": 2017 }, { "authors": [ "C. Zhang" ], "title": "Mterfd1 functions as an oncogene", "venue": "Oncotarget,", "year": 2014 }, { "authors": [ "T. Šuštić", "S. van Wageningen", "E. Bosdriesz", "R.J.D. Reid", "J. Dittmar", "C. Lieftink", "R.L. Beijersbergen", "L.F.A. Wessels", "R. Rothstein", "R. Bernards" ], "title": "A role for the unfolded protein response stress sensor ERN1 in regulating the response to MEK inhibitors in KRAS mutant colon cancers", "venue": "Genome Medicine,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Numerous domains involve modeling highly non-stationary discrete-time and integer-valued stochastic processes where event counts vary dramatically over time or space. An important open problem of this nature in biology is understanding the stochastic process by which mutations arise across the genome. This is central to identifying mutations that drive cancer emergence (Lawrence et al., 2013).\nTumor drivers provide a cellular growth advantage to cells by altering the function of a genomic element such as a gene or regulatory feature (e.g. promoter). Drivers are identifiable because they reoccur across tumors, but there are two major challenges to detecting such recurrence. First, driver mutations are rare and their signal is hidden by the thousands of passenger mutations that passively and stochastically accumulate in tumors (Stratton et al., 2009; Martincorena & Campbell, 2015). Second, because functional elements vary dramatically in size (genes: 103-106 bases; regulatory elements: 101-103 bases; and single positions), driver mutations accumulate across regions that vary many orders of magnitude. Accurately predicting the stochastic accumulation of passenger mutations at multiple scales is necessary to reveal the subtle recurrence of driver mutations across the genome.\nHere, we introduce the split-Poisson Gamma (SPG) process, an extension of the Poisson-Gamma distribution, to efficiently model a non-stationary discrete stochastic process at numerous length scales. The model first approximates quasi-stationary regional rate parameters within small windows; it then projects these estimates to arbitrary regions in linear time (10-15 minutes for genome-wide inference). This approach is in contrast to existing efforts that model fixed regions and require computationally expensive retraining (e.g. over 5 hours) to predict over multiple scales of interest (Nik-Zainal et al., 2016; Martincorena et al., 2017). We apply our framework to model cancer-specific mutation patterns (fig. 1). We perform data-driven training of our model’s parameters and show that it more accurately captures mutation patterns than existing methods on simulated and real data. We demonstrate the power of our multi-resolution approach by identifying drivers across functional ∗Authors contributed equally to this work.\nelements: genes, regulatory features, and single base mutations. Despite the method having no knowledge of genome structure, it detects nearly all gene drivers present in over 5% of samples while making no false discoveries and detects all previously characterized regulatory drivers. Detected events also include novel candidate drivers, providing promising targets for future investigation." }, { "heading": "1.1 PREVIOUS WORK", "text": "Numerous methods exist for modeling stationary stochastic processes (Lindsey, 2004). Far fewer exist for non-stationary processes because they are difficult to capture with the covariance functions of parametric models (Risser, 2016). Non-stationary kernels have been introduced for Gaussian processes (Paciorek & Schervish, 2004), but these may not be tractable on large datasets due to their computational complexity. 
More recently, there has been work developing Poisson-gamma models for dynamical systems (Schein et al., 2016; Guo et al., 2018), but these methods have focused on learning relationships between count variables, not on predicting counts from continuous covariates.
In the particular case of modeling mutation patterns across the cancer genome, numerous computational methods exist to model mutation rates within well-understood genomic contexts such as genes (Lawrence et al., 2013; Martincorena et al., 2017; Wadi et al., 2017; Mularoni et al., 2016; Juul et al., 2017). These models account for < 4% of the genome (Rheinbay et al., 2020). They are not applicable in non-coding regions, where the majority of mutations occur (Gloss & Dinger, 2018). A handful of methods to model genome-wide mutation rates have been introduced (Polak et al., 2015; Nik-Zainal et al., 2016; Bertl et al., 2018). However, they operate on a single length-scale or set of regions and require computationally expensive retraining to predict over each new length-scale. Several methods rely on Poisson or binomial regression; however, previous work has extensively documented that mutation count data are over-dispersed, leading these models to underestimate variance and yield numerous false-positive driver predictions (Lochovsky et al., 2015; Martincorena et al., 2017; Juul et al., 2019). Negative binomial regression has recently been used to account for over-dispersion (Nik-Zainal et al., 2016) and perform genome-wide mutation modeling and driver detection. However, resolution was coarse, and it only found a few, highly recurrent driver mutations." }, { "heading": "1.2 OUR CONTRIBUTIONS", "text": "This work makes three key contributions: 1) we introduce an extension of the Poisson-Gamma distribution to model non-stationary discrete stochastic processes at any arbitrary length scale without retraining; 2) we apply the framework to capture cancer-specific mutation rates with unprecedented accuracy, resolution, and efficiency; and 3) we perform a multi-scale search for cancer driver mutations genome-wide, including the first-ever base-resolution scan of the whole genome. This search yields several new candidate driver events in the largely unexplored non-coding genome, which we are working on validating with experimental collaborators. Crucially, our approach allows fast, efficient, and accurate searches for driver elements and mutations anywhere in the genome without requiring arduous retraining of a model, a feat which is not possible with existing approaches." }, { "heading": "2 MULTI-RESOLUTION MODELING OF A NON-STATIONARY DISCRETE STOCHASTIC PROCESS", "text": "We consider a non-stationary discrete stochastic process {Mi; i = 1, 2, ...} where Mi is the integer-valued event count at position i. Associated with each position i is a real-valued, L-dimensional feature vector ηi that determines the instantaneous event rate λi via an unknown function. Thus a region R = {i, i+1, ..., i+N} of N contiguous positions is characterized by an L×N feature matrix ηR and an event count XR = ∑i∈R Mi. As training data, ηR, XR, and Mi are observed for some set of regions {R ∈ T}. Then, given a set of feature matrices from unobserved regions {ηR; R ∈ H}, the challenge is to predict the distribution of event counts over any arbitrary set I of unseen positions that may or may not be contiguous. Real-world examples include traders in a stock market, packets delivered to routers in a network, and mutations accumulating at positions in the genome."
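To make this setup concrete, the following minimal sketch builds a toy non-stationary process of exactly this form and assembles the per-region training objects ηR and XR (all names are hypothetical illustrations, not part of any released code):

```python
import numpy as np

rng = np.random.default_rng(0)
L, n_positions, N = 5, 10_000, 100   # L features per position, regions of N positions

# Per-position feature vectors eta_i and an (in practice unknown) rate function.
eta = rng.normal(size=(L, n_positions))
lam = np.exp(0.8 * eta[0] - 0.5 * eta[1])   # instantaneous event rate lambda_i
M = rng.poisson(lam)                        # integer event count M_i at each position

# A region R of contiguous positions is summarized by an L x N feature
# matrix eta_R and a single aggregate event count X_R = sum of M_i over R.
def region(r):
    s = slice(r * N, (r + 1) * N)
    return eta[:, s], int(M[s].sum())

eta_R, X_R = region(0)
print(eta_R.shape, X_R)   # (5, 100) and an integer count
```

The prediction task is then: given ηR for held-out regions, predict the count distribution over any subset I of their positions.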
}, { "heading": "2.1 THE SPLIT-POISSON-GAMMA PROCESS", "text": "We assume that the process is near-stationary within a small enough region R= {i, i+1, ..., i+N} and that the L×N covariate matrix ηR is observed. Thus the rate of events λR within R is approximately constant and associated with ηR, albeit in an unknown way. A number of events (XR) may occur within R dependent on λR and are then stochastically distributed to individual positions within R, implying a hierarchical factorization of the scalar random variables λR, XR, and Mi (fig. 1e) as\nPr(Mi = k,XR,λR;ηR) = Pr(Mi = k|XR;ηR)Pr(XR|λR;ηR)Pr(λR;ηR). (1)\nXR and λR are unknown nuisance variables and are marginalized in general as Pr(Mi = k|ηR) = ∫ ∞\n0 Pr(λR;ηR)\n∞\n∑ XR=k Pr(Mi = k|XR;ηR)Pr(XR|λR;ηR)dλR. (2)\nSince applications often require many posterior predictions over regions of varying sizes, we propose a prior parameterization that builds on the success and flexibility of the classical Poisson-Gamma distribution while ensuring the marginalization has an easy-to-compute posterior distribution:\nλR ∼ Gamma(αR,θR) (3) XR ∼ Poisson(λR) (4) Mi ∼ Binomial(XR, p̃i) (5)\nwhere αR and θR are shape and scale parameters dependent on ηR, pi is the time-averaged probability of an event at i and p̃i =\npi ∑ j∈R p j\n, the normalized probability within R. A plate diagram of the hierarchical model is presented in fig. 1e.\nThe above formulation provides a simple, closed form solution to eq. (2) as a negative binomial (NB) distribution (See Appendix for details):\nPr(Mi = k|αR,θR, p̃i;ηr) = NB ( k;αR, 1\n1+θR · p̃i\n) . (6)\nEq. 5 implicitly assumes that events are distributed independently to units within R. Exploiting this assumption, eq. (6) immediately generalizes to consider any set of units I ⊆ R as\nPr ( ∑ i∈I Mi = k|αR,θR,{ p̃i}i∈I ;ηR ) = NB ( k;αR, 1 1+θR ·∑i∈I p̃i ) . (7)\nThe above formulation is an extension of the classical Poisson-Gamma distribution whereby the Poisson is randomly split by a binomial. We term this a split-Poisson-Gamma (SPG) process. While\nthe derivation of the SPG solution makes simplifying assumptions, the benefit is that the parameters αR and θR need to be estimated only once for each non-overlapping region R. Estimates for a region of any other size can then be computed in constant time from eq. (7). If a new region R′ is larger than R, we approximate the gamma distribution in a super-region containing R′ as a superposition of the previously inferred parameters of each region of size R within the super-region (see section 2.2)." }, { "heading": "2.2 INFERRING REGIONAL RATE PARAMETERS", "text": "The statistical power of SPG depends on accurate estimation of the regional gamma rate parameters αR and θR. We propose a variational approach to enable flexible, accurate, and non-linear inference of these parameters from a set of covariates. Let G(α,θ) be a gamma distribution. By the central limit theorem, limα→∞ G(α,θ) = N(µ,σ2) where µ = αθ and σ2 = αθ 2. We thus use a Gaussian process (GP) to non-linearly map covariates to regional estimates for µR and σ2R. The variational estimates for the gamma parameters are then\nαR = µ2R/σ 2 R, θR = µR/σ 2 R (8)\nFor a super-region R′ = Ri +R j, µR′ = µRi +µR j and σ2R′ = σ 2 Ri +σ 2 R j .\nA limitation of this approach is that GPs can only operate on vectors of covariates. Thus a dimensionality reduction method must be applied to the input matrix ηR. 
In cases where ηR includes spatial relationships, a convolutional neural network can be a powerful approach to dimension-reduction; however, other approaches are feasible (see section 3.2 and section 5.1)." }, { "heading": "2.3 INFERRING TIME-AVERAGED EVENT PROBABILITIES", "text": "The time-averaged parameters {pi; i = 1,2, ...} must also be inferred. Crucially, as seen in eq. (5), these parameters are never used directly; instead, they are always renormalized to sum to one within a region of interest. Thus, estimates do not need to reflect the absolute probability of an event at i but merely the relative rate of events between positions. Indeed, because of the renormalization procedure, the estimates need not even be a true probability distribution. Estimating pi can thus be accomplished by clustering units with similar relative rates of events. How this clustering should be performed will depend on the application of interest (see section 3.3 for a concrete example)." }, { "heading": "3 FITTING PARAMETERS TO PREDICT CANCER MUTATION PATTERNS", "text": "We obtained publicly available mutation counts from four cancer cohorts previously characterized by the Pan-Cancer Analysis of Whole Genomes Consortium (PCAWG) (Campbell et al., 2020): esophageal adenocarcinoma (N = 98 tumors; n ≈ 2.7M mutations), skin melanoma (N = 70 tumors; n ≈ 7.8M mutations), stomach adenocarcinoma (N = 37 tumors; n ≈ 480k mutations), and liver hepatocellular carcinoma (N = 264 tumors; n ≈ 3.3M mutations). Crucially, these data contain only the total number of mutations at each position in the genome. We do not know a priori which mutations are background mutations and which are driver mutations. We also do not know the true mean and variance of the underlying mutation rate in any region.\nWe do know that the mutation rate is highly associated with chemical modifications of the DNA that set the way it is processed in a cell, collectively termed the epigenome (Schuster-Böckler & Lehner, 2012; Polak et al., 2015). We obtained 733 datasets characterizing the patterns of these chemical modifications in 111 human tissues from Roadmap Epigenomics (Roadmap Epigenomics Consortium et al., 2015). These data are the largest compendium of uniformly processed human epigenome sequencing currently available. Each track provides the -log10 P-value that a particular modification is present at each location of the genome in a given tissue type. We additionally created two tracks that provide the average nucleotide and GC content in a region based on the human reference genome GRCh37. See Appendix and supplementary data for additional information on the epigenetic tracks. The input matrix for each region ηR thus has 735 rows. We fixed the number of columns to be 100 irrespective of the size of R, where each column is the mean across R/100 adjacent positions." }, { "heading": "3.1 ARTIFICIAL DATASET", "text": "In order to evaluate the ability of SPG and other models to estimate the unknown mean and variance of regional rates, we created simulated datasets with known mean and variance parameters dependent\non the observed input matrix (fig. 2a). We created input matrices of size 735×100 from the epigenetic tracks (described above) for non-overlapping regions of 50,000 positions. 
To define a non-stationary mean and variance of the mutation rate dependent on each region's input matrix, we reduced ηR to a feature vector of size 735 by taking the mean across columns and used a k-nearest-neighbors (KNN) strategy to identify 500 regions with similar epigenetic feature vectors; we then defined µR and σ2R for each region as the mean and variance of the observed event counts across its 500 neighboring regions. The number of observed events for that region was then randomly drawn from a negative binomial distribution defined by those parameters (full technical details in Appendix). Models were trained on the randomly drawn counts and evaluated on their ability to accurately infer the true mean and variance. We simulated 50kb regions following previous work (Rheinbay et al., 2020)." }, { "heading": "3.2 ESTIMATING DYNAMIC REGIONAL RATES WITH UNCERTAINTY", "text": "The input matrices ηR ∈ R^{735×100} required significant dimension reduction before we could employ our GP-based variational strategy to infer SPG regional rate parameters. Columns encode the high-resolution spatial organization of the epigenome, which has recently been shown to be an important determinant of local mutation rate (Gonzalez-Perez et al., 2019; Akdemir et al., 2020). Therefore, we hypothesized that a convolutional neural network (CNN) would provide a powerful approach to produce a low-dimensional embedding that retains information about this local structure; the supervised nature of a CNN further enables the resulting embedding to be optimized for the cancer of interest, which is crucial to performance since the epigenetic determinants of mutation rate vary drastically between cancer types (Polak et al., 2015). We constructed a 1D CNN model with 4 residual blocks and 3 fully-connected layers to map mutation-rate-associated local epigenetic patterns to regional mutation rates. The CNN non-linearly reduces ηR ∈ R^{735×100} to a 16-dimensional feature vector in its last feature layer. The CNN was trained to minimize the mean squared error between observed and predicted mutation counts. Due to the interchangeable nature of the rows, the 1D kernels allow the network to identify arbitrary inter-track interactions. The final 16-dimensional feature vector was then passed as input to a sparse GP (Titsias), fit to maximize the likelihood of the observed mutation counts using 2000 inducing points and a radial basis function kernel (fig. 1b). We found that results were robust to the particular choice of kernel and hyperpriors placed over kernel parameters. While end-to-end training is possible (Bradshaw et al., 2017), we did not find it necessary to achieve high accuracy in this particular application. A CNN is not the only method available to reduce dimensionality prior to GP inference; we investigated numerous other methods, but found the CNN+GP to produce the most accurate results (see Appendix)." }, { "heading": "3.3 ESTIMATING TIME-AVERAGED EVENT PROBABILITIES", "text": "In the case of cancer mutation patterns, previous work showed that the mutation rate at any position i is heavily influenced by the nucleotide at i and the two nucleotides directly adjacent to i; positions with this same "trinucleotide context" will have similar mutation patterns (Alexandrov et al., 2013). Following previous work (Mularoni et al., 2016; Wadi et al., 2017; Martincorena et al., 2017; Weghorn & Sunyaev, 2017), we used trinucleotide context to estimate pi. Let ntn′ be the trinucleotide context centered at position i.
We estimate the probability that i is mutated using the ensemble maximum-likelihood estimate of its cluster

pi = p_{ntn′} = v_{ntn′} / N_{ntn′}.   (9)

where N_{ntn′} is the number of ntn′ trinucleotides in the genome and v_{ntn′} is the number of times t is mutated within ntn′. This approach alone explains little variance in sub-megabase regions (see Appendix) because it does not account for regional mutation rates." }, { "heading": "3.3.1 COMPARING TO BENCHMARK MODELS", "text": "We compared SPG to three alternative approaches that have previously been used to learn both the mean and variance of regional mutation patterns genome-wide. The alternative models are random forest (RF) regression (Polak et al., 2015), binomial regression (BR) (Bertl et al., 2018), and negative binomial regression (NBR) (Nik-Zainal et al., 2016; Martincorena et al., 2017). For the RF, we used the Jackknife method (Wager et al.) to estimate the variance; this method requires O(n) trees where n is the number of samples in the training set. BR and NBR directly specify the variance as a function of the mean: BR as σ2R = µR − µR²/n and NBR as σ2R = µR(1 + β µR), where β is an overdispersion parameter. Benchmarking comparisons were performed on the skin melanoma, esophageal adenocarcinoma, and stomach adenocarcinoma cohorts." }, { "heading": "3.4 MODEL TRAINING", "text": "For every region of size R, epigenetic features were extracted into matrices of size 735 tracks by 100 binned position columns, where each column was the mean across R/100 adjacent base-pairs. Regions with highly repetitive DNA sequence (<70% of 36mer sub-sequences being unique) were excluded from the training set to ensure high data quality, as in previous analyses (Polak et al., 2015). Before training, high-quality data regions were strictly split into train (64%), validation (16%), and test (20%) sets. Predictions for excluded regions and held-out test sets were obtained after model training. Genome-wide predictions were generated using 5-fold cross-validation. The CNN received the full 735×100 matrices as input. Vector-based methods (RF, NBR, BR) received the 735-dimension vector of epigenetic values averaged across position columns. Following previous work, we also included the expected number of mutations based on the trinucleotide composition of a region as an offset term in NBR and BR when predicting mutation counts (Nik-Zainal et al., 2016). Additional details on training (e.g. number of epochs) are in Appendix." }, { "heading": "4 IDENTIFYING GENETIC DRIVERS OF CANCER", "text": "Because cancer drivers reoccur across tumors, driver elements (genes, regulatory structures, and individual base-pairs) will contain an excess of mutations relative to the expected number of background mutations. The SPG model provides a simple, efficient, and accurate method to search for this recurrence. We first estimate the mean and variance of the background mutation rate using the CNN+GP estimation method. We then apply eq. (7) to search for statistical evidence that the number of observed mutations, k, exceeds expectation within every gene, known regulatory structure, and 50 bp window in the genome by changing the set of tested positions I. For a gene, k is the number of observed missense or nonsense mutations and I is the set of all possible mutations in the gene. For both a regulatory element and window of fixed size, k is the number of mutations observed in the element / window and I is the set of all positions within the element / window (a sketch of this test appears below).
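The per-element test just described is a direct application of eqs. (7) and (9). A minimal sketch (hypothetical names; the authors' actual pipeline may differ in detail) of both the trinucleotide estimate and the excess-mutation p-value:

```python
from collections import Counter
from scipy.stats import nbinom

def trinuc_probs(genome, mutated_positions):
    """Eq. (9): p_{ntn'} = v_{ntn'} / N_{ntn'} from genome-wide counts."""
    N = Counter(genome[i - 1:i + 2] for i in range(1, len(genome) - 1))
    v = Counter(genome[i - 1:i + 2] for i in mutated_positions)
    return {ctx: v[ctx] / count for ctx, count in N.items()}

def element_pvalue(k_obs, mu_R, var_R, p_element, p_region):
    """Eq. (7): Pr(count >= k_obs) for an element, with its p_i values
    renormalized (p-tilde) within the covering region R."""
    alpha, theta = mu_R ** 2 / var_R, var_R / mu_R
    s = sum(p_element) / sum(p_region)
    return nbinom(alpha, 1.0 / (1.0 + theta * s)).sf(k_obs - 1)
```

A Bonferroni-corrected significance threshold over all tested genes, elements, or windows then controls the family-wise error rate, as described next.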
If an element overlaps multiple 10kb regions, we merge the mean and variance estimates for the overlapped regions as described in section 2. To maintain strict train-test separation, both the rate parameters and pi are estimated excluding the element being tested. We controlled the family-wise error rate at the α = 0.05 level using a Bonferroni correction for the total number of tests in genes, regulatory elements, or 50bp windows.
Gene information was obtained from Martincorena et al. (2017) and regulatory element information from Rheinbay et al. (2020). Driver detection was performed in all four cancer cohorts." }, { "heading": "5 RESULTS", "text": "" }, { "heading": "5.1 ACCURACY OF REGIONAL RATE PARAMETER ESTIMATION", "text": "We first evaluated various methods' abilities to infer regional rate parameters, considering both new (CNN+GP) and existing (RF, NBR, BR) methods. We assessed each method's ability to learn the expected mutation rate by directly measuring the amount of variance (Pearson R2) it explained over observed mutation counts in 50kb real data windows (fig. 2b top), and found the CNN+GP estimation method performed the best, although random forest was a close second (results were similar when estimating the mean in simulated data; fig. 2b middle). We then evaluated each method's ability to capture the variance σ2R in the simulated data, quantified as the Pearson R2 to the true variance. The CNN+GP method again outperformed the others (fig. 2b bottom). Notably, RF was unable to infer the variance beyond chance level, and thus we did not consider this method further because its inability to infer variance precludes accurate driver detection.
We also considered other dimensionality reduction techniques, including both non-neural and neural approaches as well as supervised and unsupervised approaches, as alternatives to the CNN; no other approach achieved accuracy comparable to the CNN+GP over both mean and variance (see Appendix). Moreover, we validated the necessity of the GP by directly optimizing the CNN to predict both parameters and found it significantly reduced model performance (7% decrease over mutation counts and 13% over σ2R within 10kb windows in melanoma)." }, { "heading": "5.2 ACCURACY AND EFFICIENCY OF MUTATION RATE PREDICTION", "text": "To further compare the SPG performance to existing methods, we evaluated the accuracy and efficiency of each method over length scales ranging over 5 orders of magnitude (10–10^6 positions). To evaluate SPG, we estimated the background mutation rate parameters, µR and σ2R, in 10kb regions genome-wide using the CNN+GP estimation strategy; we then applied the SPG distribution to estimate mutation count distributions over all other region sizes. The existing methods with reasonable performance on both mean and variance prediction (BR and NBR) were trained to directly predict the count distribution in each region for each length scale genome-wide.
Across all tested window sizes and cancers, SPG outperformed existing methods, with performance particularly improved in esophageal adenocarcinoma and skin melanoma (fig. 3a), crucial for high-accuracy driver detection downstream. Across 1Mb windows, SPG explains > 95% of the variance in mutation density across all three cancers (fig. 3a,b); this is >15% more variance than both existing methods (Fig. 3a), highlighting the ability of SPG to accurately capture regional distribution parameters and project them upwards.
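Concretely, "projecting upwards" only requires summing the 10kb moment estimates before applying eq. (7), so no retraining is involved. A minimal sketch under that assumption (hypothetical names):

```python
import numpy as np
from scipy.stats import nbinom

def project_up(mu_10kb, var_10kb, k_obs, p_tilde_sum=1.0):
    """Count distribution for a large window built from several 10kb regions:
    superpose the gamma moments (section 2.2), then apply eq. (7).
    p_tilde_sum = 1.0 when the merged window is tested as a whole."""
    mu, var = float(np.sum(mu_10kb)), float(np.sum(var_10kb))
    alpha, theta = mu ** 2 / var, var / mu
    dist = nbinom(alpha, 1.0 / (1.0 + theta * p_tilde_sum))
    return dist.mean(), dist.sf(k_obs - 1)

# e.g. a 1Mb window assembled from one hundred 10kb estimates:
mean_1mb, p = project_up(np.full(100, 35.0), np.full(100, 80.0), k_obs=4200)
```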
The decrease in variance explained at smaller window sizes is expected because observed mutation counts become increasingly stochastic relative to the expected number of mutations predicted by each method. The theoretical foundations of negative binomial regression and SPG are similar, both built upon the classical Poisson-gamma model. SPG differs from NBR in three key ways that help explain its improved performance: 1) SPG models mutation patterns over arbitrary sets of positions, enabling it to dynamically pool information across positions after a single training; in contrast, NBR operates on fixed regions and must be retrained for every new region size. 2) SPG's variational inference method estimates the gamma parameters for each region independently; NBR estimates only the shape parameter independently for each window and uses a single scale parameter for all windows. 3) SPG's CNN data reduction enables non-linear mapping of spatial covariate information to mutation rate, whereas NBR can perform only linear inference and disregards the spatial organization of the genome.
SPG is also the most efficient method for multi-resolution search (appendix D.3). Initial training of parameters using the CNN+GP method for one fold of 10kb regions required 36 minutes using 1 GPU. Projection to each additional scale using 8 CPUs required at most 4 minutes (table 1). In contrast, training time for BR and NBR increases considerably as the window size decreases (i.e., as resolution becomes finer). Performing a search across resolutions of 50bp, 100bp, 500bp, 1kb, and 10kb would require >5h for NBR, >2h for BR, and only 52 minutes for SPG (Appendix). We have also found that parameter estimation on windows as large as 100kb does not significantly reduce accuracy across scale (Appendix), allowing SPG parameter estimation in considerably shorter time (e.g. only 8 minutes for 50kb)." }, { "heading": "5.3 IDENTIFICATION OF CANCER DRIVER MUTATIONS", "text": "We leveraged SPG's ability to model multiple resolutions to search the whole genome of each of the four cancer cohorts for gene drivers, non-coding regulatory drivers, and 50bp windows that may harbor a driver mutation. All significant results are provided as supplementary data tables. We compared our results to those obtained from a previous comprehensive characterization of these cohorts by Campbell et al. (2020), who used 13 different methods to identify drivers. Unlike the methods used in the previous characterization, our model had no access to information about gene structure or function. Nonetheless, the model's p-values were well calibrated (fig. 3c), and we identified 19 genes with a significant excess of missense or nonsense mutations. All 19 genes were previously reported as drivers by Campbell et al. (2020). We failed to detect only two known driver genes present in >5% of samples. This performance is on par with state-of-the-art methods specifically designed for driver gene identification (Rheinbay et al., 2020).
When analyzing non-coding regulatory elements, SPG's p-values were again well calibrated (fig. 3c), and it recovered all non-coding drivers (n=11) identified by Campbell et al. (2020). Moreover, SPG implicated several additional putative non-coding driver elements that had not been previously reported.
Examples include 1) the promoter of the gene MTERFD1 in esophageal cancer (P = 3.1×10^-8), whose over-expression has been observed in numerous cancers and shown to promote cell growth and decrease clinical survival (Zhang et al., 2014); 2) an enhancer of DHX33 in liver cancer (P = 4.8×10^-11), whose over-expression has been shown to promote cancer development (Wang et al., 2019); and 3) the 5' UTR of ERN1 in melanoma, which has been linked to cancer therapy resistance (Šuštić et al., 2018).
Finally, we performed the first, to our knowledge, genome-wide search for individual driver mutations. All significant genic hits fell within known driver genes whose functions have been experimentally validated, including TP53, BRAF, KRAS, PIK3CA, and CTNNB1. In addition, SPG identified two recurrent mutations in the genes GPR98 and KLB that had not been previously identified in Campbell et al. (2020)'s analysis of the data. These mutations are listed as driver mutations in the Catalogue of Somatic Mutations in Cancer (Tate et al., 2018). SPG implicated numerous hotspots in the mostly unexplored non-coding genome, including the well-known TERT promoter mutation (fig. 3d). These results are promising targets for future studies of non-coding drivers in cancer cell lines and organoids." }, { "heading": "6 DISCUSSION", "text": "We introduced an extension of the Poisson-Gamma distribution to model discrete-time, integer-valued stochastic processes at multiple scales. The split-Poisson-Gamma (SPG) model makes several simplifying assumptions, including: 1) the process is quasi-stationary in a small enough region; 2) events are distributed among the discrete units approximately independently; and 3) the behavior of the random variables can be captured by particular parametric distributions. The assumptions are necessary to derive a closed-form posterior distribution. This enables efficient prediction over multiple length-scales without having to re-estimate the model parameters. We additionally proposed a variational inference strategy to reduce input dimensionality and estimate the parameters of the model using a CNN coupled with a GP. Indeed, the use of a CNN+GP for variational inference of distributional parameters may be of use well beyond the SPG framework and discrete stochastic process modeling.
To demonstrate the utility of the SPG, we applied it to model mutation rates in cancer and identify genomic elements that drive tumor emergence. In the case of this application, previous work has established the validity of the above assumptions, demonstrating that the mutation rate is approximately constant within 50kb regions (Rheinbay et al., 2020) and that mutations occur approximately independently given each position's trinucleotide context (Martincorena et al., 2017). We demonstrated that the approach is more accurate than other methods on both real and synthetic data. We also demonstrated that multi-resolution prediction enables identification of both known and novel putative drivers of cancer, including in the non-coding genome, a crucial open problem in genomics (Khurana et al., 2016; Rheinbay et al., 2020).
SPG is also applicable to discrete stochastic challenges in other domains, particularly when anomaly detection is the goal. For example, cybersecurity practitioners are often interested in detecting malicious network activity that may occur over seconds, hours, or weeks. Such activity ought to appear as anomalous relative to the expected network traffic.
However, similarly to cancer drivers, detecting such anomalies is confounded by the fact that expected network traffic can vary dramatically over time. Thus detecting malicious activity requires modeling non-stationary event rates and searching for anomalous activity across multiple resolutions. SPG is highly suited for efficient execution of this task. While the details of parameter estimation will depend upon the application, we expect the variational Gaussian process approach will be broadly applicable and that, in the case of high-dimensional matrix input, a CNN will provide a powerful tool to reduce the data to an informative feature vector.
Another timely use-case is identifying infectious disease outbreaks. The task is to determine when a new infection hotspot is developing in order to implement containment measures. This task is challenging because infection rates vary by geography and by individuals (e.g. young vs. elderly). SPG provides a framework to identify infection hotspots while accounting for geographic and demographic risk. In this application, regional rate parameters would reflect geographic infection rates while individual people would be the "positions" of the stochastic process. The task would be to identify groups of people (e.g. schools, neighborhoods, cities, etc.) with more cases than expected. Local information (e.g. medical treatment availability, number of cases, population density, etc.) could serve as predictors to infer regional infection rates, while an individual's demographics could provide the clustering criteria to estimate pi. Such an application could be particularly useful for early identification of hotspot outbreaks of COVID-19. These examples highlight the diverse situations to which SPG could be applied, and we expect that the number of applications will continue to grow as collections of time- and space-varying data grow increasingly large." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We acknowledge the contributions of the many clinical networks across ICGC and TCGA who provided samples and data to the PCAWG Consortium and the contributions of the Technical Working Group for collation, realignment, and harmonized variant calling of the cancer genomes used in this study. We thank the patients and their families for their participation in the individual ICGC and TCGA projects." }, { "heading": "A APPENDIX", "text": "In this appendix, we provide detailed information on:
1. The data used in this work including its origin and all preprocessing steps.
2. Additional method details including:
• A derivation of the closed-form marginal distribution of the graphical model presented in the main text.
• Architecture and training details of all models.
• How the genome-wide search for driver mutations was performed.
The appendix also includes an analysis of the sensitivity of negative binomial regression to detect well-known drivers genome-wide and additional figures that provide context to results presented in the main paper." }, { "heading": "B DATA", "text": "" }, { "heading": "B.1 EPIGENETIC TRACKS", "text": "We obtained 733 −log10(P-value) chromatin tracks representing the epigenetic organization of 111 human tissues from Roadmap Epigenomics (Roadmap Epigenomics Consortium et al., 2015) (see Appendix table “predictor_track_descriptions.csv”). These tracks measure the abundance of a particular chromatin mark genome-wide, with smaller (more significant) p-values reflecting a greater abundance of the chromatin mark at a genomic position.
Chromatin marks are chemical modifications of histones, the proteins used to package DNA within a cell. We additionally obtained 10 replication timing tracks from the ENCODE consortium. Replication timing assays measure the relative time at which each position in the genome is replicated during cell division. For non-overlapping regions R of predefined size and location (see main text for more details), we extracted the signal for each epigenetic track using 100 bins per region with pybbi (Abdennur, 2018). We additionally calculated the average nucleotide content in each window by assigning each nucleotide a numeric value between 1 and 5 and taking the average across a bin (N [unspecified nucleotide] = 1, A = 2, C = 3, G = 4, T = 5), and we calculated the GC content as the percent of G and C nucleotides in a bin, resulting in a total of 735 epigenome tracks per region. The mean values for each region were calculated as the mean chromatin signal for each track in the region." }, { "heading": "B.2 MUTATION COUNT DATA", "text": "We downloaded somatic single-base substitution mutations identified in the ICGC subset of the Pan-Cancer Analysis of Whole Genomes Consortium cohorts of esophageal adenocarcinoma, skin melanoma, stomach adenocarcinoma, and liver hepatocellular carcinoma. These data are freely available for download from the International Cancer Genome Consortium data portal (see fig. 4). We excluded mutations on the sex chromosomes (X and Y) because males and females carry different sets of these chromosomes, leading to differential mutation patterns. We summarized the data as mutation counts per window for window sizes of 50bp, 100bp, 500bp, 1kb, 5kb, 10kb, 25kb, 50kb, 100kb, and 1Mb." }, { "heading": "B.3 RESTRICTION TO REGIONS OF HIGH MAPPABILITY", "text": "High-throughput genome sequencing works by randomly reading millions of short sequences of nucleotides (36-150 bases in length) from a target genome. These “reads” are then mapped to the human reference genome to reconstruct the target. A challenge is that short sequences of k nucleotides (kmers) can occur multiple times in the genome. This results in ambiguous mappings for some reads and thus a degradation of data quality in regions composed of many kmers that occur multiple times across the genome. Following previous work (Polak et al., 2015), we removed regions of the genome with low-quality data by calculating a mappability score for each region. Mappability scores reflect how many times a particular kmer occurs in the genome and have been pre-computed for the human reference genome GRCh37. We required that a region's average mappability score based on 36mers (e.g. average across all sequences of 36 nucleotides in the region) be >70%, i.e., that the 36mers in the region are on average >70% unique. The majority of the genome passed this threshold; for 10kb regions, for example, >75% of the genome passed this threshold. We chose to measure mappability with 36mers because this was the length of read used to generate the Roadmap Epigenomics sequencing data." }, { "heading": "B.4 SYNTHETIC DATA SIMULATION", "text": "We generated synthetic datasets for each of the cancers in order to have datasets with known mean and variance rate parameters. To generate the datasets, we used a k-nearest-neighbors strategy to identify the 500 nearest neighbors for each region. The mean and variance for that region were then taken to be the empirical mean and variance calculated from the 500 nearest neighbors.
The number of \"observed\" mutations was then randomly sampled from a binomial defined by the mean and variance parameters. It is important to note that these datasets are purely derived for the purpose of comparing methods over datasets with a known ground-truth. They do not reflect mutation patterns in the real datasets. The specific steps to generate the simulated data were:\n1. Generate vectors of the mean values for each of the 735 tracks (733 epigenetic tracks, GC content track, and average nucleotide content track) in 50kb regions of the genome with 36mer uniqueness >70%.\n2. Perform ordinary least-squares (OLS) regression of the mean vectors against the observed number of mutations in each 50kb window for that cancer.\n3. Scale each value in the feature vectors by its corresponding coefficient from OLS and compress the weighted mean vectors to 50 components using Principal Components Analysis (capturing >94% of the variance for each cancer).\n4. For each region R, perform k-nearest-neighbor clustering with Euclidean distance to identify its 500 nearest neighbors in the PC space. Define the mean µR and variance σ2R of the mutation rate in R to be the mean and variance of the KNN cluster.\n5. For region R, randomly draw a new “observed” number of mutations from a negative binomial distribution defined using the associated mean and variance. Specifically, XR ∼ NB(α,1/(θ +1)) where α = µ2R/σ2R and θ = σ2R/µR\nWe created two versions of the simulated data, one in which all regions in the genome were used to estimate the rate parameters and one in which rate parameters were estimated separately within independent train and test subsets. Results were qualitatively indistinguishable." }, { "heading": "C METHODS", "text": "" }, { "heading": "C.1 GRAPHICAL MODEL", "text": "Here we derive the closed form negative binomial distribution presented in the main text as the graphical model marginal distribution over events at some unit i in a region R. We use the following notation:\n• Mi: # mutations observed at pos i (observed) • pi: genome-wide probability of observing a mutation at the nucleotide context of i (inferred) • p̃i: normalized probability of observing a mutation at i in region R (inferred) • λR: the background mutation rate in region R (unobserved) • XR: # background mutations in region R (unobserved) • µR: the expected background mutation rate in region R (inferred) • σ2R: the variance of background mutation rate in region R (inferred). • ηR: covariates associated with the behavior of the stochastic process within R (observed)\nAs presented in the main text and main Figure 1, the graphical model implies the factorization\nPr(Mi,XR,λR|αR,θR, p̃i;ηR) = Pr(Mi = k|XR, p̃i;ηR) ·Pr(XR = x|λR;ηR) ·Pr(λR|αR,θR;ηR) (10)\nwhere\nαR = µ2R/σ 2 R θR = σ2R/µR.\nSince ηR is a given in each equation, we suppress it for notational ease.\nTo marginalize out XR, we note that\nPr(Mi = k|λR) = ∞\n∑ x=k Pr(Mi = k|XR, p̃i) ·Pr(XR = x|λR)\nis equivalent to a split Poisson process (Gallager, 2013). Thus\nPr(Mi = k|λR) = Possion(Mi = k; p̃iλR). (11)\nWe now marginalize out the unknown rate parameter λR. P(Mi = k|p̃i,αR,θR) = ∫ ∞\n0 P(Mi = k|λR; p̃i)P(λR|αR,θR)dλR\n= ∫ ∞\n0\n(p̃iλR)k\nk! 
e−p̃iλR 1 Γ(αR)θ αRR λ αR−1R e −λR/θR dλR\n= p̃ik\nk!Γ(αR)θ αRR ∫ ∞ 0 λ αR+k−1R e −λR(p̃i+1/θR)dλR.\nMaking the substitution t = λ (p̃i +1/θR) and noting that the resulting integrand is an unnormalized gamma distribution, we have:\nP(Mi = k|p̃i,αR,θR) = p̃ik\nk!Γ(αR)θ αRR Γ(αR + k)\n( 1\np̃i +1/θR )αR+k =\nΓ(αR + k) k!Γ(αR)\n( p̃iθR\np̃iθR +1 )k( 1 p̃iθR +1 )αR = NB ( Mi = k;αR,\n1 p̃iθR +1\n) ." }, { "heading": "C.2 OVERVIEW OF PARAMETER ESTIMATION PROCEDURE", "text": "Estimation of regional rate parameters: As training data, we use a set of input matrices {ηR; R ∈T } and associated mutation counts {XR; R ∈T }. First, a CNN is trained to take ηR as input and predict XR as output, using mean squared error loss. The final 16-dimension feature vector of the trained CNN is then used as input to train a Gaussian process to predict the mutation count XR and the associated estimation uncertainty by maximizing the likelihood of the observed data. The mean and variance output by the GP were used as estimates for µR and σ2R .\nEstimation of time-averaged event probabilities: the time-average probability of an event at pi was estimated based on it’s trinucleotide composition, n, t,n′ where n is the nucleotide at i− 1, t is the nucleotide at i and n′ is the nucleotide at i+ 1 in the reference genome. We first counted every occurrence of n, t,n′ in the human genome and then counted the number of times the middle nucleotide of the 3mer was mutated across the genome. The maximum likelihood estimate of pi is then the ratio of the number of observed mutations of the 3mer divided by the total occurrences of the 3mer." }, { "heading": "C.3 REGIONAL PARAMETERS ESTIMATION METHODS", "text": "To compute a model’s R2 accuracy to µR and σ2R for regions R of size S, the genome was divided into non-overlapping contiguous segments of size S. To assure high data quality, any region with mappability score < 70% was excluded from further analysis. The remaining windows (accounting for more than 75% of the genome) were randomly divided into train and test sets in an 80–20 split respectively. The test set was held-out and served solely for evaluation purposes. The train set was then divided into train and validation sets by another 80–20 split respectively (train set = 64%, validation = 16%, and test = 20% of the considered regions with mappability score < 70%, see appendix B.3)." }, { "heading": "C.3.1 GAUSSIAN PROCESS FEATURE VECTOR GENERATION", "text": "All networks were independently trained for 20 epochs with a batch size of 128 samples and using the Adam optimizer to minimize mean squared error loss to either the true mutation count (CNN and FCNN) or input tensor (AE). After training the model parameters using the train set, predictions over the held-out test set were computed by 1) extracting the last 16-dimensional feature layer (middle feature layer for AE) for all sets over the best performing model over the validation set across all epochs (according to the validation accuracy); 2) training multiple GPs (typically 10) to predict mutation counts using the 16 dimension feature vectors of the train set as input (see appendix C.3.2 for details); 3) taking the mean µR and σ2R of all 10 runs over the test set as the ensemble prediction of the model. All neural network models were implemented in Pytorch Paszke et al. (2017).\n1. Convolutional neural network (CNN): The CNN contains 4 convolutional blocks with 2 batch normalized convolutional layers and ReLU activation. 
The first block transforms the input tensor from 735×100 to 256×50 with 256 channels and a double stride. The other blocks are ResNet-style residual blocks that maintain their input dimension to facilitate residual connections, with 256, 512, and 1024 channels respectively. Between each of the 3 residual blocks there is a double-stride (ReLU-activated and batch-normalized) convolutional layer, which divides the tensor length by two and doubles its height with additional channels. The output of the last residual block is flattened and passed through 3 fully-connected layers. The first two are ReLU-activated and reduce the dimensionality of the tensor to 128 and 16 dimensions respectively. The last uses linear functions to reduce the tensor to a single cell holding the output of the regression. This forces a linear relation between the regression output and the last feature layer, thus simplifying the function the GP needs to learn, which we found empirically improves the GP's accuracy.
2. Fully-connected neural network (FCNN): The FCNN has an architecture similar to the CNN's 3 fully-connected layers but with an input space of the mean epigenetic vector (735 dimensions). Thus, the FCNN is computationally similar to the CNN but operates on the mean vector instead of the full matrix as input. The FCNN is designed to demonstrate the maximum performance possible when reducing the input tensor to an averaged feature vector.
3. Autoencoder neural network (AE): The encoder of the AE used the same architecture as the CNN, excluding the last linear fully-connected layer. The decoder has a mirror architecture with the same number of parameters but differs in the internal design of the convolutional blocks. Convolutional layers were replaced by 1D transpose convolutional layers with no batch normalization and no residual connections. The AE was designed to demonstrate the predictive power of a feature embedding that was not optimized to a specific task but was produced in a way comparable to the CNN.
4. Other dimensionality reduction methods: PCA was computed using the Python scikit-learn package with default settings, and UMAP was computed via Python's umap-learn package (McInnes et al., 2018) with 20 nearest neighbors and Euclidean distance. Both methods were computed over the entire training set (80%) with no validation set and reduced the mean epigenetic vector dimensionality (735 dimensions) to 16, just like all other models. Prior to processing, we log-transformed the epigenetic data, as we found this improved prediction accuracy downstream." }, { "heading": "C.3.2 GAUSSIAN PROCESS", "text": "We implemented a sparse, inducing-point Gaussian process (Titsias) with a radial basis function kernel using Python's GPyTorch package (Gardner et al., 2019). The GP was optimized with 2000 inducing points using the Adam optimizer for 100 steps. All features were mean-centered and standardized to unit variance prior to training. For each dataset, we ran the GP ten independent times and calculated the ensemble mean of the mean and variance predictions from the individual runs. We took these ensemble predictions as the mean and variance for each region." }, { "heading": "C.3.3 ALTERNATIVE MODELS", "text": "We implemented previously proposed alternative methods (Polak et al., 2015; Nik-Zainal et al., 2016; Martincorena et al., 2017) for the estimation of µR and σ2R without the use of a GP. These methods use the mean epigenetic vector as input.
1.
Random forest (RF): RF regression was implemented via the Ensemble Methods module in the Python scikit-learn package, with a maximum tree depth of 50. Since RF does not directly compute a variance, we implemented the Jackknife method as described in Wager et al. (we compared our implementation to Polimis et al. (2017) and found them highly correlated). Wager et al. suggest that the number of estimators, i.e., trees, must be linearly related to the number of samples to obtain reasonable estimates of the variance. We chose to have one tenth as many estimators as samples in an attempt to keep running time within a reasonable limit for datasets of smaller region sizes. Even so, for 10kb regions (approximately 300K regions genome-wide), RF required >24 hours to train.
2. Negative binomial regression (NBR): As described in section 3.3.1 of the main text, NBR directly specifies the variance as σ2R = µR(1 + β µR), where β is an overdispersion parameter. When β = 0, NBR reduces to Poisson regression, also widely used in the community. NBR was implemented via the discrete module in the Python statsmodels package (Seabold & Perktold, 2010) with the Broyden-Fletcher-Goldfarb-Shanno optimization algorithm and 1k maximum iterations. Epigenetic predictors were log-transformed and reduced to 20 principal components in both train and test sets, following the field standard (Martincorena et al., 2017). When used to compare against the GM, we also included the expected number of mutations based on the sequence context model (see main paper section 3.3) as an exposure term in the model, as in previous work (Nik-Zainal et al., 2016; Martincorena et al., 2017).
3. Binomial regression (BR): Following a previous study (Bertl et al., 2018) that suggested multinomial regression to model multiple types of mutations, we also considered binomial regression (as the binary version of multinomial regression applicable to our simple count data) as a method to model mutation rates at high resolution. BR was implemented via the generalized linear module in the Python statsmodels package (Seabold & Perktold, 2010). As in previous work (Nik-Zainal et al., 2016; Martincorena et al., 2017), we included the expected number of mutations based on the sequence context model (see main paper section 3.3) as an exposure term in the model. As with NBR, the epigenetic predictors were log-transformed and reduced to 20 principal components for both train and test sets, following state-of-the-art recommendations (Martincorena et al., 2017)." }, { "heading": "C.4 EMPIRICAL VARIANCE ESTIMATION", "text": "For real data, the true variance in mutation counts of a region is unknown. Thus, to estimate the variance empirically for a given model, we used the following approach:
1. For a region in the test set, perform k-nearest-neighbors clustering with Euclidean distance to identify the 500 regions in the train set that are most similar to the region of interest based on the model's feature embedding. For all models, a feature embedding of 16 dimensions was used.
2. Calculate the empirical variance as the variance of the KNN cluster.
Since feature embeddings are model-specific, we calculated an empirical variance estimate per model. The feature-vector embeddings for models specified in section C.3.1 were the feature vectors used as input to the GP. Models specified in section C.3.3 do not create or require comparable feature vectors and were therefore not considered in the main paper results.
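The two-step procedure above is straightforward to implement. A minimal sketch for one model's 16-dimensional embeddings (hypothetical names; scikit-learn's NearestNeighbors uses Euclidean distance by default):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def empirical_variance(train_emb, train_counts, test_emb, k=500):
    """Appendix C.4: for each test region, find its k most similar training
    regions in the model's feature embedding and return the variance of
    their observed mutation counts as the empirical variance estimate."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_emb)   # (n_train, 16)
    _, idx = nn.kneighbors(test_emb)                      # (n_test, k)
    return np.asarray(train_counts)[idx].var(axis=1)
```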
However, to measure the ability of these methods to estimate empirical variance (Fig. 7), we computed their feature vectors by 1) taking the dot product of the model parameters and the input data mean vectors and 2) reducing these scaled vectors to 16 dimensions via PCA (explaining 80%-95% of the variance across the different region scales). For RF, we took the model parameters to be the feature importance weights derived from the trained forest, and for NBR, we used the model coefficients as the parameters." }, { "heading": "C.5 PERFORMING A GENOME-WIDE SEARCH FOR CANCER DRIVER MUTATIONS", "text": "For each cancer, the background mutation rate parameters were estimated across the genome using 5-fold cross-validation in 10kb, 25kb, and 50kb regions. While the model is robust to the choice of 10kb, 25kb, or 50kb region size (fig. 5), the 25kb and 50kb models include some additional regions of the genome due to the mappability threshold (see section B.3). To analyze the largest possible subset of the genome, we performed our analysis iteratively: we first searched for drivers using regions accessible via the 10kb model; we then searched additional regions not accessible by the 10kb model in the 25kb model and then in the 50kb model. To search for drivers, we applied our probabilistic model to estimate the mutation count distributions in 50bp regions across the genome, and we then searched for 50bp regions with significantly more observed mutations than expected under the null distribution of our model. We controlled the family-wise error rate at the 0.05 level using a Bonferroni-corrected p-value threshold of P < 1e-9.
To compare our hits with known cancer drivers, we tabulated the recurrent driver mutations reported by PCAWG that were present in our dataset, including in the TERT promoter, a well-known non-coding driver. While most recurrent driver mutations are activating mutations (e.g. causing a gain of cellular function), we also found recurrent mutations in the tumor suppressor genes TP53 and SMAD4. Recurrent mutations in a single position are far less likely in tumor suppressor genes because any deleterious mutation can act as a potential cancer-causing mutation. For example, TP53 had 6 genome-wide significant 50bp regions, consistent with its status as a crucial tumor suppressor that can be knocked out by many different mutations (see table 2). Methods specialized to discover driver genes are necessary to find tumor suppressor genes in general (Lawrence et al., 2013; Mularoni et al., 2016; Martincorena et al., 2017)." }, { "heading": "C.6 ENVIRONMENT AND COMPUTE TIME", "text": "A benchmark run at 10kb scale with 10 GP reruns takes 2-3 hours on a single 24 GB Nvidia RTX GPU with 8 CPU cores and 756 GB RAM. Thus, a full 5-fold run over the entire genome takes 10-15 hours. Due to the model's robustness to scale, this time may be significantly reduced without drastic loss of accuracy by using larger region scales (e.g. only 30-40 minutes for 50kb regions, fig. 5). Importantly, after completing the CNN+GP training, projections to lower or higher scales via the GM require no additional training." }, { "heading": "D APPENDIX RESULTS", "text": "" }, { "heading": "D.1 NEGATIVE BINOMIAL REGRESSION DOES NOT DETECT WELL-KNOWN DRIVERS GENOME-WIDE", "text": "Negative binomial regression is the only other method that has been used to perform an unbiased genome-wide search for driver mutations (Nik-Zainal et al., 2016; Rheinbay et al., 2020).
We thus evaluated how the sensitivity of NBR to detect driver mutations genome-wide compares with the sensitivity of our method. While all known melanoma drivers present in >3 samples were found by the GM by projecting down to only the 1kb scale, NBR at 1kb fails to detect TERT, the only known common non-coding driver mutation, yielding a p-value that was an order of magnitude less significant than the genome-wide significance threshold for this scale. Similarly, while the GM detects all known esophageal adenocarcinoma drivers by projecting down to 100bp, NBR over 100bp fails to detect KRAS, an important genic driver of esophageal cancers, again yielding a p-value that was an order of magnitude less significant than the genome-wide significance threshold for 100bp. Note: we presented results at 50bp in the text to highlight our model's ability to search in arbitrarily small regions, but all known drivers for esophageal adenocarcinoma are also detected in a search over regions of 100bp." }, { "heading": "D.2 CONVOLUTIONAL NEURAL NETWORK OUTPERFORMS OTHER DIMENSIONALITY REDUCTION ALTERNATIVES FOR A GAUSSIAN PROCESS", "text": "We first evaluated the methods for inference of the regional rate's first and second moments, µR and σ2R, using our simulated datasets. We calculated accuracy as the Pearson R2 of the estimated mean and variance to the simulated ground-truth mean and variance. CNN+GP, FCNN+GP, NBR, and RF accurately inferred µR, with R2 over µR exceeding 0.95 for all three datasets (fig. 6a). However, PCA+GP, UMAP+GP, and AE+GP consistently under-performed (fig. 6a left), suggesting that supervision when creating feature vectors is critical for downstream GP performance.
The CNN+GP and FCNN+GP outperformed the other models when estimating the simulated variance (fig. 6a, right), suggesting that the ability to represent arbitrary functions is important for learning uncertainty in a complex dataset. This conclusion is strengthened by the observation that UMAP and AE enabled relatively accurate variance estimation despite mediocre performance over the mean. Importantly, the clusters used for the simulated data were computed from mean epigenetic vectors; thus our CNN architecture (receiving an input in matrix form) was at a disadvantage. Nonetheless, the CNN+GP most accurately learned both µR and σ2R across all three simulated datasets (Fig. 6a), with slight improvement over the FCNN+GP.
To further compare the approaches, we applied the GP-coupled models to estimate real mutation counts from the three cancers on multiple scales. Models were compared by their R2 to the observed mutations over the test set and to an empirical variance based on the model's own feature vectors (fig. 6b; see appendix C.4). The CNN+GP outperformed the FCNN+GP model over observed mutation counts and empirical variance estimation for all three cancer types. Additionally, the performance advantage of the CNN appeared to grow as window size and observed mutation counts increased. This suggests that local epigenetic patterns play an appreciable role in setting mutational processes and indicates that our model is well-designed to leverage the recent growth in genomics corpus sizes." }, { "heading": "D.3 EXISTING WHOLE-GENOME REGRESSION MODELS ARE TIME INEFFICIENT AT MULTI-RESOLUTION SEARCH", "text": "All existing regression models (RF, NBR, BR) require retraining for each desired scale, a requirement that becomes computationally challenging at finer resolutions (e.g. >1.5h for NBR at 100bp).
To provide an estimate of the differences between existing methods and our SPG, we performed a multi-scale time analysis presented in . However, it does not include scales <100bp, such as the 50bp used in this work to detect driver hot-spots. A log-log transform of the scale against the run-time () exposes a polynomial relation between the window size and time (for small enough scales, where the compute time is not governed by the machine’s memory and system operations). Extending this relation to a scale as small as 50bp, the run-time is as high as 1.5h for BR and 2.5h for NBR. This makes the overall run-time for a typical multi-resolution scan of 50bp, 100bp, 500bp, 1kb and 10kb over 2h for BR and over 4h for NBR, while the SPG run-time remains under 1h." }, { "heading": "Table 2: Genome-wide significant 50bp regions in TP53", "text": "Chr | Start | End | Observed mutations | Expected | P-value
17 | 7578200 | 7578249 | 6 | 0.0146 | 1.28×10^−13
17 | 7578500 | 7578549 | 6 | 0.0129 | 6.16×10^−14
17 | 7578400 | 7578449 | 8 | 0.0147 | 2.76×10^−18
17 | 7577550 | 7577599 | 8 | 0.00856 | 3.72×10^−20
17 | 7577100 | 7577149 | 10 | 0.0153 | 7.43×10^−23
17 | 7577500 | 7577549 | 13 | 0.0141 | 1.75×10^−30" } ]
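The search in section C.5 reduces to a per-region tail test once the model supplies a null distribution for each 50bp window. Below is a minimal sketch, assuming a negative-binomial null parameterized by a model-estimated mean and dispersion; the paper's actual null comes from its trained CNN+GP model, and all function and variable names here are illustrative.

```python
# Minimal sketch of the 50bp driver scan of section C.5 (names illustrative).
# Assumes the model yields, for each region, the mean and dispersion of a
# negative-binomial null over mutation counts (an assumption of this sketch).
import numpy as np
from scipy import stats

def scan_regions(observed, null_mean, null_dispersion, alpha=0.05, n_tests=None):
    """Flag regions with significantly more mutations than the null predicts.

    observed:        (n_regions,) observed mutation counts
    null_mean:       (n_regions,) predicted mean counts under the null
    null_dispersion: (n_regions,) NB size r, so that var = mu + mu^2 / r
    """
    n_tests = n_tests or len(observed)
    threshold = alpha / n_tests                 # Bonferroni-corrected level
    r = null_dispersion
    p = r / (r + null_mean)                     # scipy's (r, p) parameterization
    pvals = stats.nbinom.sf(observed - 1, r, p)  # P(X >= observed) under the null
    return np.flatnonzero(pvals < threshold), pvals

# toy usage: three regions; the last carries an excess of mutations
hits, pvals = scan_regions(np.array([1, 0, 13]),
                           np.array([0.8, 0.5, 0.014]),
                           np.array([5.0, 5.0, 5.0]))
print(hits, pvals)
```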
2021
MULTI-RESOLUTION MODELING OF A DISCRETE STOCHASTIC PROCESS IDENTIFIES CAUSES OF CANCER
SP:9deac038d6aedcb20ea92ca2d40863e859515d9a
[ "The paper proposes a robust training algorithm for graph neural networks against label noise. The authors assume the labeled nodes are divided into two parts, clean part without noise and train part with some noise. The proposed method contains two parts. Firstly, it leverages label propagation (LP) trained on the clean nodes to assign pseudo labels on train nodes with noisy labels. Secondly, the authors design a learnable weight \\lambda to learn the label for those noisy nodes where LP does not agree with the original labels. The final graph neural network is trained with clean nodes, high confidence train nodes, and uncertain train nodes with learned labels. The authors conduct experiments on four graph datasets with manual injected noise and one real-world noisy dataset to validate the proposed method." ]
Massive labeled data have been used in training deep neural networks, so label noise has become an important issue therein. Although learning with noisy labels has made great progress on image datasets in recent years, it has not yet been studied in connection with utilizing GNNs to classify graph nodes. In this paper, we propose a method, named LPM, to address the problem using Label Propagation (LP) and Meta learning. Different from previous methods designed for image datasets, our method is based on a special attribute (label smoothness) of graph-structured data, i.e., neighboring nodes in a graph tend to have the same label. A pseudo label is computed from the neighboring labels for each node in the training set using LP; meta learning is utilized to learn a proper aggregation of the original and pseudo label as the final label. Experimental results demonstrate that LPM outperforms state-of-the-art methods in the graph node classification task with both synthetic and real-world label noise. Source code to reproduce all results will be released.
[]
[ { "authors": [ "Marcin Andrychowicz", "Misha Denil", "Sergio Gomez", "Matthew W Hoffman", "David Pfau", "Tom Schaul", "Brendan Shillingford", "Nando De Freitas" ], "title": "Learning to learn by gradient descent by gradient descent", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "arXiv preprint arXiv:1703.03400,", "year": 2017 }, { "authors": [ "Luca Franceschi", "Mathias Niepert", "Massimiliano Pontil", "Xiao He" ], "title": "Learning discrete structures for graph neural networks", "venue": "arXiv preprint arXiv:1903.11960,", "year": 2019 }, { "authors": [ "Chen Gong", "Dacheng Tao", "Wei Liu", "Liu Liu", "Jie Yang" ], "title": "Label propagation via teaching-tolearn and learning-to-teach", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2016 }, { "authors": [ "Chen Gong", "Hengmin Zhang", "Jian Yang", "Dacheng Tao" ], "title": "Learning with inadequate and incorrect supervision", "venue": "IEEE International Conference on Data Mining (ICDM),", "year": 2017 }, { "authors": [ "Edward Grefenstette", "Brandon Amos", "Denis Yarats", "Phu Mon Htut", "Artem Molchanov", "Franziska Meier", "Douwe Kiela", "Kyunghyun Cho", "Soumith Chintala" ], "title": "Generalized inner loop metalearning", "venue": null, "year": 1910 }, { "authors": [ "Bo Han", "Quanming Yao", "Xingrui Yu", "Gang Niu", "Miao Xu", "Weihua Hu", "Ivor Tsang", "Masashi Sugiyama" ], "title": "Co-teaching: Robust training of deep neural networks with extremely noisy labels", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Yifan Hou", "Jian Zhang", "James Cheng", "Kaili Ma", "Richard TB Ma", "Hongzhi Chen", "Ming-Chang Yang" ], "title": "Measuring and improving the use of graph information in graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ahmet Iscen", "Giorgos Tolias", "Yannis Avrithis", "Ondrej Chum" ], "title": "Label propagation for deep semisupervised learning", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Simon Jenni", "Paolo Favaro" ], "title": "Deep bilevel learning", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Masayuki Karasuyama", "Hiroshi Mamitsuka" ], "title": "Manifold-based similarity adaptation for label propagation", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "M Pawan Kumar", "Benjamin Packer", "Daphne Koller" ], "title": "Self-paced learning for latent variable models", "venue": "In Advances in neural information processing systems,", "year": 2010 }, { "authors": [ "Junnan Li", "Yongkang Wong", "Qi Zhao", "Mohan S Kankanhalli" ], "title": "Learning to learn from noisy labeled data", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Wen Li", "Limin Wang", 
"Wei Li", "Eirikur Agustsson", "Luc Van Gool" ], "title": "Webvision database: Visual learning and understanding from web data", "venue": "arXiv preprint arXiv:1708.02862,", "year": 2017 }, { "authors": [ "Yuncheng Li", "Jianchao Yang", "Yale Song", "Liangliang Cao", "Jiebo Luo", "Li-Jia Li" ], "title": "Learning from noisy labels with distillation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Yanbin Liu", "Juho Lee", "Minseop Park", "Saehoon Kim", "Eunho Yang", "Sung Ju Hwang", "Yi Yang" ], "title": "Learning to propagate labels: Transductive propagation network for few-shot learning", "venue": "arXiv preprint arXiv:1805.10002,", "year": 2018 }, { "authors": [ "Xingjun Ma", "Hanxun Huang", "Yisen Wang", "Simone Romano", "Sarah Erfani", "James Bailey" ], "title": "Normalized loss functions for deep learning with noisy labels", "venue": "arXiv preprint arXiv:2006.13554,", "year": 2020 }, { "authors": [ "Hyoungseob Park", "Minki Jeong", "Youngeun Kim", "Changick Kim" ], "title": "Self-training of graph neural networks using similarity reference for robust training with noisy labels", "venue": "IEEE International Conference on Image Processing (ICIP),", "year": 2020 }, { "authors": [ "Mengye Ren", "Wenyuan Zeng", "Bin Yang", "Raquel Urtasun" ], "title": "Learning to reweight examples for robust deep learning", "venue": "arXiv preprint arXiv:1803.09050,", "year": 2018 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Oleksandr Shchur", "Maximilian Mumme", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Pitfalls of graph neural network evaluation", "venue": "arXiv preprint arXiv:1811.05868,", "year": 2018 }, { "authors": [ "Jun Shu", "Qi Xie", "Lixuan Yi", "Qian Zhao", "Sanping Zhou", "Zongben Xu", "Deyu Meng" ], "title": "Meta-weightnet: Learning an explicit mapping for sample weighting", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Fei Wang", "Changshui Zhang" ], "title": "Label propagation through linear neighborhoods", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2007 }, { "authors": [ "Hongwei Wang", "Jure Leskovec" ], "title": "Unifying graph convolutional neural networks and label propagation", "venue": "arXiv preprint arXiv:2002.06755,", "year": 2020 }, { "authors": [ "Hongxin Wei", "Lei Feng", "Xiangyu Chen", "Bo An" ], "title": "Combating noisy labels by agreement: A joint training method with co-regularization", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Tong Xiao", "Tian Xia", "Yi Yang", "Chang Huang", "Xiaogang Wang" ], "title": "Learning from massive noisy labeled data for image classification", "venue": null, "year": 2015 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "arXiv preprint arXiv:1810.00826,", "year": 2018 }, { "authors": [ "Xingrui Yu", "Bo Han", "Jiangchao Yao", "Gang Niu", "Ivor W Tsang", "Masashi Sugiyama" ], "title": "How does disagreement help generalization against label corruption", "venue": null, "year": 1901 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": 
"Understanding deep learning requires rethinking generalization", "venue": "arXiv preprint arXiv:1611.03530,", "year": 2016 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "arXiv preprint arXiv:1611.03530,", "year": 2016 }, { "authors": [ "Huan Zhang", "Zhao Zhang", "Mingbo Zhao", "Qiaolin Ye", "Min Zhang", "Meng Wang" ], "title": "Robust triplematrix-recovery-based auto-weighted label propagation for classification", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Zhilu Zhang", "Mert Sabuncu" ], "title": "Generalized cross entropy loss for training deep neural networks with noisy labels", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Dengyong Zhou", "Olivier Bousquet", "Thomas N Lal", "Jason Weston", "Bernhard Schölkopf" ], "title": "Learning with local and global consistency", "venue": "In Advances in neural information processing systems,", "year": 2004 }, { "authors": [ "Xiaojin Zhu", "John Lafferty", "Ronald Rosenfeld" ], "title": "Semi-supervised learning with graphs. PhD thesis, Carnegie Mellon University, language technologies institute, school", "venue": null, "year": 2005 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep Neural Networks (DNNs) have achieved great success in various domains, but the necessity of collecting large amount of samples with high-quality labels is both expensive and time-consuming. To address this problem, cheaper alternatives have emerged. For example, the onerous labeling process can be completed on some crowdsourced system like Amazon Mechanical Turk 1. Besides, we can collect labeled samples from web with search engines and social media. However, all these methods are prone to produce noisy labels of low quality. As is shown in recent research (Zhang et al., 2016b), an intractable problem is that DNNs can easily overfit to noisy labels, which dramatically degrades the generalization performance. Therefore, it is necessary and urgent to design some valid methods for solving this problem.\nGraph Neural Networks (GNNs) have aroused keen research interest in recent years, which resulted in rapid progress in graph-structured data analysis (Kipf & Welling, 2016; Velickovic et al., 2017; Xu et al., 2018; Hou et al., 2019; Wang & Leskovec, 2020). Graph node classification is the mostcommon issue in GNNs. However, almost all the previous works about label noise focus on image classification problem and handling noisy labels in the task of graph node classification with GNNs has not been studied yet. Fortunately, most edges in the graph-structured datasets are intra-class edges (Wang & Leskovec, 2020), indicating that a node’s label can be estimated by its neighbor nodes’ labels. In this paper, we utilize this special attribute of graph data to alleviate the damages caused by noisy labels. Moreover, meta learning paradigm serves as a useful tool for us to learn a proper aggregation between origin labels and pseudo labels as the final labels.\nThe key contributions of this paper are as follows:\n• To the best of our knowledge, we are the first to focus on the label noise existing in utilizing GNNs to classify graph nodes, which may serve as a beginning for future research towards robust GNNs against label noise.\n• We utilize meta-learning to learn how to aggregate origin labels and pseudo labels properly to get more credible supervision instead of learning to re-weight different samples.\n1https://www.mturk.com/\nWe experimentally show that our LPM outperforms state-of-the-art algorithms in utilizing GNNs to classify graph nodes with both synthetic and real-world label noise." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 GRAPH NEURAL NETWORKS", "text": "To start, we use G = (V, E ,X ) to denote a graph whose nodes set is V and edges set is E , and X ∈ Rn×d is the input feature matrix, where n denotes the number of nodes in the graph and d is the dimension of the input feature vector of each node. We use eu,v ∈ E to denote the edge that connects node u and v. For each node v ∈ V , its neighbor nodes set can be donated as Nv = {u : eu,v ∈ E}. For node classification task, the goal of GNNs is to learn optimal mapping function f(·) to predict the class label yv for node v. Generally speaking, GNNs follows a framework including aggregation and combination in each layer. Different GNNs have proposed different ways of aggregation and combination. In general, the k-th layer of a GNN reads\na(k)v = Aggregate (k)({h(k−1)u : u ∈ N (v)}), h(k)v = Combine(k)(h(k−1)v , a(k)v ), (1)\nwhere h(k)v is the output for k-th layer of node v, h (0) v is the input vector of node v." 
}, { "heading": "2.2 LABEL PROPAGATION", "text": "In Label Propagation (LP), node labels are propagated and aggregated along the edges in the graph (Zhou et al., 2004; Zhu et al., 2005; Wang & Zhang, 2007; Karasuyama & Mamitsuka, 2013). There are some works which were designed to improve the performance of label propagation. For example, Gong et al. (2016) proposed a novel iterative label propagation algorithm which explicitly optimizes the propagation quality by manipulating the propagation sequence to move from simple to difficult examples; Zhang et al. (2020) introduces a triple matrix recovery mechanism to remove noise from the estimated soft labels during propagation. Label propagation has been applied in semi-supervised image classification task. For example, Gong et al. (2017) used a weighted Knearest neighborhood graph to bridge the datapoints so that the label information can be propagated from the scarce labeled examples to unlabeled examples along the graph edges. Park et al. (2020) proposed a novel framwork to propagate the label information of the sampled data (reliable) to adjacent data along a similarity based graph. Compared to these methods, we utilize the intrinsic graph structure instead of handcrafted graph to propagate clean labels information, which is more reliable for graph-structured data. Besides, GNNs are utilized by us to extract features and classify nodes for graph-structured data." }, { "heading": "2.3 META-LEARNING BASED METHODS AGAINST NOISY LABELS", "text": "Meta-learning aims to learn not only neural networks’ weights, but also itself, such as hand-designed parameters, optimizer and so on (Andrychowicz et al., 2016; Finn et al., 2017). Several works have utilized meta-learning paradigm to deal with label noise. For example, Li et al. (2019) has proposed to find noise-tolerant model parameters by keeping the consistency between the output of teacher and student networks, and Li et al. (2017b) trains the teacher networks with samples with clean labels and then transfer the knowledge to student networks so that the student can learn correctly even if the existence of mislabeled data. Besides, Ren et al. (2018); Jenni & Favaro (2018); Shu et al. (2019) utilize meta-learning paradigm to re-weight samples, i.e., weight samples with clean labels more and weight mislabeled samples less. The weighting factors are optimized by gradient decent or generated by a network to minimizes the loss on a small amount of samples with correct labels. In contrast, meta-learning paradigm is utilized in this paper to learn how to aggregate origin labels and pseudo labels properly. We can get more credible supervision by combining the original label information with the label information provided by LP properly." }, { "heading": "3 METHODS", "text": "" }, { "heading": "3.1 PRELIMINARIES", "text": "Given a graph data with n nodes and their labels D = {(x0, y0), (x1, y1), ..., (xn−1, yn−1)}, where xj is the j-th node and yj ∈ {0, 1}c is the label over c classes. Dtrain = {(x0, y0), (x1, y1), ..., (xs−1, ys−1)} are training nodes with noisy labels. Our goal is to enable the GNNs f(xj ;w) trained with noisy sets Dtrain can also generalize well on test nodes. w is the learnable parameters of GNNs. In our method, m nodes with true labels Dclean = {(xs, ys), (xs+1, ys+1), ..., (xs+m−1, ys+m−1)} in the graph are provided as the initial clean sets (m s). GCN (Kipf & Welling, 2016) and GAT (Velickovic et al., 2017) are utilized in our experiments to extract features and classify nodes. 
Our method includes two main parts: label propagation and label aggregation. We will go into the details of these two parts in the following sections 3.2 and 3.3." }, { "heading": "3.2 LABEL PROPAGATION", "text": "Label Propagation is based on label smoothness, i.e., two connected nodes tend to have the same label. Therefore, the weighted average of the neighbor nodes’ labels of a node is similar to this node’s true label. An illustration of the LP part in our method can be found in Figure 1. The first step of LP is to construct an appropriate neighborhood graph. A common choice is a k-nearest-neighbor graph (Iscen et al., 2019; Liu et al., 2018), but there is an intrinsic graph structure (adjacency matrix A) in graph data, so our similarity matrix W with zero diagonal can be constructed from A, whose elements W_{i,j} are pairwise similarities between node i and node j:
W_{i,j} = A_{i,j} / (d(h_i, h_j) + ε), (2)
where h_i, h_j are the feature vectors extracted by the GNN for nodes i and j, d(·, ·) is a distance measure (e.g., Euclidean distance), and ε is an infinitesimal. Note that we can get W with time complexity O(|E|) instead of O(n²) because A is a sparse matrix whose edge lists are given. Then we can normalize the similarity matrix W:
S = D^{−1/2} W D^{−1/2}, (3)
where D is a diagonal matrix whose (i,i)-th value is the sum of the i-th row of W. Let Y^{(k)} = [y_1^{(k)}, ..., y_n^{(k)}]^T ∈ R^{n×c} be the soft label matrix in LP iteration k, whose i-th row y_i^{(k)} is the predicted label distribution for node i. When k = 0, the initial label matrix Y^{(0)} = [y_1^{(0)}, ..., y_n^{(0)}]^T consists of one-hot label vectors for i = s, s+1, ..., s+m−1 (i.e., the initial clean set) and zero vectors otherwise. LP (Zhu et al., 2005) in iteration k can be formulated as:
Y^{(k+1)} = S Y^{(k)}, (4)
y_i^{(k+1)} = y_i^{(0)}, ∀ i ∈ [s, s+m−1]. (5)
In Eq. (4), every node’s label in the (k+1)-th iteration equals the weighted average of its neighbor nodes’ labels in the k-th iteration. In this way, the clean set propagates labels to the noisy training nodes according to the normalized edge weights. Then, in Eq. (5), the labels of the clean-set nodes are reset to their initial values. The reason is that we can take full advantage of the tiny minority of clean nodes and prevent the effect of the clean set from fading away.
Co-teaching (Han et al., 2018) and Co-teaching plus (Yu et al., 2019) have been proposed to train DNNs robustly against label noise; there, two DNNs select samples with small loss from noisy training sets to train each other. Our method is similar to theirs to some extent because we utilize LP to select true-labeled samples from Dtrain for training. However, instead of taking the nodes with small loss as true-labeled nodes, we select for training the nodes Dselect whose original labels are the same as their pseudo labels. The original labels of Dselect are credible, and we also inject them into the initial clean set Dclean for better LP in the next epoch. This is why our method can achieve better performance even if few true-labeled nodes are provided; a minimal sketch of this propagation-and-selection step is given below.
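A minimal NumPy sketch of this step, assuming dense inputs for clarity (the paper exploits the sparse edge list to stay in O(|E|)); all names and the toy graph are illustrative.

```python
# Sketch of Eqs. (2)-(5) plus the agreement-based split of section 3.2.
import numpy as np

def label_propagation(A, H, Y0, clean_idx, n_iter=50, eps=1e-8):
    # Eq. (2): inverse-distance similarity, only along existing edges of A
    dist = np.linalg.norm(H[:, None, :] - H[None, :, :], axis=-1)
    W = A / (dist + eps)
    # Eq. (3): symmetric normalization S = D^{-1/2} W D^{-1/2}
    d = np.maximum(W.sum(axis=1), eps)
    S = W / np.sqrt(d)[:, None] / np.sqrt(d)[None, :]
    Y = Y0.copy()
    for _ in range(n_iter):
        Y = S @ Y                      # Eq. (4): propagate along edges
        Y[clean_idx] = Y0[clean_idx]   # Eq. (5): clamp the clean nodes
    return Y

def split_by_agreement(Y_pseudo, noisy_labels, train_idx):
    # Keep training nodes whose noisy label matches the LP pseudo label
    agree = Y_pseudo[train_idx].argmax(axis=1) == noisy_labels[train_idx]
    return train_idx[agree], train_idx[~agree]   # D_select, D_left

# toy usage: 6 nodes, 2 classes, nodes 0 and 1 form the clean set
A = np.array([[0,1,1,0,0,0],[1,0,0,1,0,0],[1,0,0,0,1,0],
              [0,1,0,0,0,1],[0,0,1,0,0,1],[0,0,0,1,1,0]], dtype=float)
H = np.random.default_rng(0).normal(size=(6, 4))
Y0 = np.zeros((6, 2)); Y0[0, 0] = Y0[1, 1] = 1.0
Y = label_propagation(A, H, Y0, clean_idx=np.array([0, 1]))
sel, left = split_by_agreement(Y, np.array([0,1,0,1,1,0]), np.array([2,3,4,5]))
print(sel, left)
```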
3.3 META-LEARNING BASED LABEL AGGREGATION
[Figure 2: the computation flow of label aggregation; steps: 1. forward noisy; 2. aggregation net forward; 3. backward noisy (training loss); 4. forward clean; 5. backward clean (clean loss); 6. backward on backward (gradient descent); 7. aggregation net backward.]
In section 3.2, the selected training nodes (nodes 5 and 7 in Figure 1) have been utilized for training and LP, but the remaining training nodes Dleft (nodes 6, 8, 9, 10 in Figure 1), with their abundant information, have not been fully exploited. In this section, we mine the abundant and precious information from Dleft via meta learning. The computation process of label aggregation is shown in Figure 2.
For every (x_j, y_j) ∈ Dleft, we can get two loss values:
l1 = loss(ŷ_j, y_j), (6)
l2 = loss(ŷ_j, ỹ_j), (7)
where ŷ_j is the label predicted by the GNN for training node j and ỹ_j is the pseudo label predicted by LP for node j. We can also get the final label ȳ_j for node j by aggregating the original label y_j and the pseudo label ỹ_j:
ȳ_j = λ_j y_j + (1 − λ_j) ỹ_j, λ_j ∈ [0, 1], (8)
where λ_j is the aggregation coefficient. Some previous methods designed a weighting function mapping training loss to sample weights for noisy-label problems (Kumar et al., 2010; Ren et al., 2018; Shu et al., 2019). Instead, we utilize a 3-layer multi-layer perceptron (MLP) as the aggregation network g(·; ·) to map loss values to the aggregation coefficient λ_j:
λ_j = g(l1 ‖ l2; θ) = λ_j(θ; w), (9)
where l1 ‖ l2 is the 2-dimensional concatenation of l1 and l2, and θ denotes the weights of the aggregation network g. The rationale lies in a consensus that a sample’s loss values reflect the credibility of its original label (Kumar et al., 2010; Shu et al., 2019; Yu et al., 2019). The aggregation network’s input layer has 2 neurons and its output layer has one neuron; such an MLP can approximate almost any continuous function. The activation function of the last layer is the sigmoid, which ensures the output λ_j ∈ [0, 1]. We can get the training loss L^tr_j for node j:
L^tr_j(w, θ) = loss(ŷ_j(w), ȳ_j(θ)). (10)
Then we can perform a virtual backward step on the GNN:
ŵ_t(θ_t) = w_t − (α / |Dleft|) Σ_{(x_j, y_j) ∈ Dleft} ∇_w L^tr_j(w, θ_t)|_{w_t}, (11)
where α is the learning rate of the GNN. Then we can get the loss L^c on the clean set Dclean:
L^c(ŵ_t(θ_t)) = (1 / |Dclean|) Σ_{(x_i, y_i) ∈ Dclean} loss(f(x_i; ŵ_t(θ_t)), y_i), (12)
where f(x_i; ŵ_t(θ_t)) is the output of the GNN. We can then utilize L^c to update the weights of the aggregation network:
θ_{t+1} = θ_t − β ∇_θ L^c(ŵ(θ))|_{θ_t}, (13)
where β is the learning rate of the aggregation network. Finally, the GNN’s weights can be updated:
w_{t+1} = w_t − (α / |Dleft|) Σ_{(x_j, y_j) ∈ Dleft} ∇_w L^tr_j(w, θ_{t+1})|_{w_t}. (14)
To some extent, this part is similar to re-weight based methods (Ren et al., 2018; Shu et al., 2019). However, LPM has two significant advantages. Firstly, re-weight based methods cannot remove the damage caused by incorrect labels, because they assign every noisy training sample a positive weight, while LPM potentially has the ability to exploit noisy samples in a positive way. Secondly, LPM can generate comparatively credible labels for other usages, while re-weight and some other methods cannot. Algorithm 1 shows all the steps of our algorithm." },
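A sketch of one such meta-iteration follows, written with torch.func (assuming PyTorch ≥ 2.0) so that the virtual update of Eq. (11) stays differentiable with respect to θ. A plain linear layer stands in for the GNN, all shapes and names are illustrative, and the authors themselves report using the Higher library for this inner loop (Appendix B).

```python
# One meta-step of Eqs. (9)-(14): virtual GNN update, aggregation-net update,
# then the real GNN update. Not the paper's implementation, just a sketch.
import torch
import torch.nn.functional as F
from torch.func import functional_call

torch.manual_seed(0)
gnn = torch.nn.Linear(8, 3)                       # stand-in for the GNN f(.; w)
agg = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.ReLU(),
                          torch.nn.Linear(16, 1), torch.nn.Sigmoid())  # g(.; theta)
alpha, beta = 0.1, 0.01
x_left = torch.randn(6, 8); y_noisy = torch.randint(0, 3, (6,))
y_pseudo = torch.randint(0, 3, (6,))              # LP pseudo labels on D_left
x_clean = torch.randn(4, 8); y_clean = torch.randint(0, 3, (4,))

params = dict(gnn.named_parameters())
logits = functional_call(gnn, params, (x_left,))
l1 = F.cross_entropy(logits, y_noisy, reduction="none")    # Eq. (6)
l2 = F.cross_entropy(logits, y_pseudo, reduction="none")   # Eq. (7)
lam = agg(torch.stack([l1, l2], dim=1).detach()).squeeze(1)  # Eq. (9)
# Eqs. (8)+(10): by linearity of cross entropy in the target, the loss on the
# aggregated label equals the lambda-weighted mix of the two losses
loss_tr = (lam * l1 + (1.0 - lam) * l2).mean()

# Eq. (11): virtual one-step GNN update, keeping the graph alive
names, values = zip(*params.items())
grads = torch.autograd.grad(loss_tr, values, create_graph=True)
virtual = {k: v - alpha * g for k, v, g in zip(names, values, grads)}

# Eqs. (12)+(13): clean loss through the virtual weights updates the agg net
loss_clean = F.cross_entropy(functional_call(gnn, virtual, (x_clean,)), y_clean)
theta = list(agg.parameters())
for p, g in zip(theta, torch.autograd.grad(loss_clean, theta)):
    p.data -= beta * g

# Eq. (14): real GNN update with the refreshed aggregation coefficients
logits = gnn(x_left)
l1 = F.cross_entropy(logits, y_noisy, reduction="none")
l2 = F.cross_entropy(logits, y_pseudo, reduction="none")
lam = agg(torch.stack([l1, l2], dim=1)).squeeze(1).detach()
loss_final = (lam * l1 + (1.0 - lam) * l2).mean()
gnn.zero_grad(); loss_final.backward()
with torch.no_grad():
    for p in gnn.parameters():
        p -= alpha * p.grad
print(float(loss_final))
```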
{ "heading": "3.4 CONVERGENCE OF LPM", "text": "Here we show theoretically that the loss functions will converge to critical points under some mild conditions. The detailed proofs of the following theorems are provided in Appendix C.
Theorem 1. Suppose the loss function loss is L-Lipschitz smooth, and λ(·) is differentiable with a δ-bounded gradient and twice differentiable with its Hessian bounded by B with respect to θ. Let the learning rate α_t = min{1, k/T} for some k > 0 such that k/T < 1, and let the learning rate β_t be a monotonically decreasing sequence, β_t = min{1/L, c/√T} for some c > 0, such that L ≤ c/√T, Σ_{t=1}^∞ β_t ≤ ∞ and Σ_{t=1}^∞ β_t² ≤ ∞. Then the clean loss of the Aggregation Net can achieve ‖∇_θ L^c(ŵ(θ_t))‖₂² ≤ ε in O(1/ε²) steps. More specifically,
min_{0≤t≤T} ‖∇_θ L^c(ŵ(θ_t))‖₂² ≤ O(C/√T). (15)
Theorem 2. Under the conditions of Theorem 1, with the gradient of loss bounded by ρ,
lim_{t→∞} ‖∇_{w_t} L^tr(w_t, θ_{t+1})‖₂² = 0. (16)
Algorithm 1: LPM. Lines 2-12: label propagation; lines 13-22: label aggregation.
Data: D, Dtrain, Dclean, max epochs T, LP iterations K per epoch, adjacency matrix A, feature matrix X, GNN feature extractor f, Aggregation Network g, expanding clean set for LP Dc.
Result: robust GNN parameters w_T.
1: Dc = Dclean
2: for t = 0, 1, 2, ..., T−1 do
3:   for all v ∈ D do h_v = f(x_v; w_t)
4:   for all (i, j) ∈ {1, 2, ..., n}² do W_{i,j} = A_{i,j} / (d(h_i, h_j) + ε)
5:   for k = 0, 1, 2, ..., K−1 do
6:     Y^{(k+1)} = D^{−1/2} W D^{−1/2} Y^{(k)}; y_j^{(k+1)} = y_j^{(0)} for all nodes j ∈ Dc
7:   end for
8:   Dselect = Dleft = ∅
9:   for all nodes i ∈ Dtrain do
10:     if onehot(y_i^{(K)}) = y_i then Dselect = {i} ∪ Dselect
11:     else Dleft = {i} ∪ Dleft
12:   end for
13:   Dc = Dc ∪ Dselect
14:   w_t ← one-step optimization of w_t with the selected nodes Dselect
15:   for all nodes j ∈ Dleft do
16:     ŷ_j = f(x_j; w_t)
17:     l1 = loss(ŷ_j, y_j); l2 = loss(ŷ_j, ỹ_j)
18:     λ_j = g(l1 ‖ l2; θ_t)
19:     ȳ_j = λ_j y_j + (1 − λ_j) ỹ_j, λ_j ∈ [0, 1]; L^tr_j(w, θ) = loss(ŷ_j(w), ȳ_j(θ))
20:   end for
21:   ŵ_t(θ_t) = w_t − (α / |Dleft|) Σ_{(x_j, y_j) ∈ Dleft} ∇_w L^tr_j(w, θ_t)|_{w_t}
22:   L^c(ŵ_t(θ_t)) = (1 / |Dclean|) Σ_{(x_i, y_i) ∈ Dclean} loss(f(x_i; ŵ_t(θ_t)), y_i)
23:   θ_{t+1} = θ_t − β ∇_θ L^c(ŵ(θ))|_{θ_t}
24:   w_{t+1} = w_t − (α / |Dleft|) Σ_{(x_j, y_j) ∈ Dleft} ∇_w L^tr_j(w; θ_{t+1})|_{w_t}
25: end for" }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASETS AND IMPLEMENTATION DETAILS", "text": "We validate our method on six benchmark datasets, namely the citation networks (Sen et al., 2008) including Cora, Citeseer and Pubmed. The Coauthor-Phy dataset (Shchur et al., 2018) is also utilized in our experiments, but the results are shown in Appendix A due to the limited space. A summary of the graph datasets mentioned above is shown in Table 1. The Clothing1M (Xiao et al., 2015) and Webvision (Li et al., 2017a) datasets are utilized to validate the effectiveness of our method in real-world label noise settings. We take a kNN graph (k = 5) as the graph structure so that GNNs can be applied to these two datasets, following previous work (Franceschi et al., 2019). More details about our preprocessing of the Clothing1M and Webvision datasets can be found in Appendix B.
The experiments are conducted with two types of label noise, uniform noise and flip noise, following previous works (Zhang et al., 2016a; Shu et al., 2019). The former means that the label of each sample is independently changed to a random class with probability p, and the latter means that the label is independently flipped to a similar class with total probability p (a small sketch of both corruptions is given below). The ratio of training, validation, and test nodes is set to 4:4:2. Only about 25 nodes with clean labels from the validation set are provided as the clean set for each dataset, and we ensure that each class has the same number of samples; for example, we use 8 clean samples per label class for Pubmed. GCN (Kipf & Welling, 2016) serves as the base classification network model in our experiments, and it is trained using Adam (Kingma & Ba, 2014) with an initial learning rate of 0.01 and a weight decay of 5 × 10−4, except that the weight decay equals 0 for the Clothing1M and Coauthor-Phy datasets.
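A small sketch of these two corruptions, assuming NumPy and an illustrative "similar class" pairing (the actual pairing is dataset-specific and not given here):

```python
# Uniform and flip label noise as described in section 4.1 (names illustrative).
import numpy as np

def uniform_noise(labels, p, n_classes, rng):
    labels = labels.copy()
    flip = rng.random(len(labels)) < p
    labels[flip] = rng.integers(0, n_classes, flip.sum())  # any random class
    return labels

def flip_noise(labels, p, pair_of, rng):
    labels = labels.copy()
    flip = rng.random(len(labels)) < p
    labels[flip] = pair_of[labels[flip]]  # flip to the designated similar class
    return labels

rng = np.random.default_rng(0)
y = rng.integers(0, 4, 20)
pair_of = np.array([1, 0, 3, 2])          # illustrative class pairing
print(uniform_noise(y, 0.4, 4, rng))
print(flip_noise(y, 0.4, pair_of, rng))
```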
We compare LPM with multiple baselines using the same network architecture. These baselines are typical, and some of them achieve state-of-the-art performance on image datasets. They include: the Base model, referring to the GCN directly trained on noisy training nodes; the meta-learning based methods L2RW (Ren et al., 2018) and MW-Nets (Shu et al., 2019); the typical and effective method Co-teaching plus (Yu et al., 2019); the robust loss functions against label noise GCE loss (Zhang & Sabuncu, 2018) and APL (Ma et al., 2020); and the most recent co-training based method JoCoR (Wei et al., 2020). For those baselines that do not need clean sets (Base model, Co-teaching plus, GCE loss, JoCoR and APL), we finetune (denoted by FT in this paper) them on the initial clean set after the model was trained on the training set, for a fair comparison. More experimental details about LPM and all baselines are available in Appendix B." }, { "heading": "4.2 RESULTS", "text": "Table 2 shows the results on Cora and Citeseer with different levels of uniform noise ranging from 0% to 80%. Each experiment is repeated 5 times with different random seeds. Finally, we report the best test accuracy across all epochs averaged over 5 repetitions for each experiment. As can be seen in Table 2, our method gets the best performance across all the datasets and all noise rates, except for ranking second at the 0% uniform noise rate. Our method performs even better when the labels are corrupted at a high rate. Table 3 shows the performance on Cora, Citeseer and Pubmed with different levels of flip noise ranging from 0% to 40%. It can be seen that our method also outperforms state-of-the-art methods under flip noise across different noise rates, except for ranking second at the 0% flip noise rate. Our method outperforms the corresponding second-best method by a large margin when the noise rate is 0.4. As can be seen in Table 4, our method also performs better than the other baselines on datasets with real-world label noise. We also experiment with Graph Attention Networks (Velickovic et al., 2017) as the feature extractor and classifier; the results shown in Appendix A demonstrate that our method can also perform well with other GNNs.
Table 4: Comparison with baselines in test accuracy (%) on Clothing1M and Webvision. Mean accuracy (± std) over 5 repetitions is reported. The best is highlighted in bold.
Methods | Basemodel | GCN+FT | L2RW | MW-Nets | GCEloss+FT | JoCoR+FT | Ours
Clothing1M | 35.83±0.03 | 38.05±0.13 | 53.5±0.08 | 54.15±0.23 | 56.9±0.08 | 56.3±0.12 | 57.35±0.11
Webvision | 32.43±0.05 | 34.58±0.08 | 50.12±0.16 | 52.42±0.25 | 53.45±0.13 | 54.12±0.22 | 55.43±0.17
Figure 3: Comparison of the true-labeled sample rate in Dtrain and Dselect on various datasets." }, { "heading": "4.3 ANALYSIS OF THE NECESSITY AND EFFECTIVENESS OF DIFFERENT PARTS", "text": "We design five experiments to validate the necessity and effectiveness of the different components of our algorithm. Firstly, we compare the ratio of true-labeled nodes in Dselect with that in Dtrain in the last epoch to validate the effectiveness of LP. Figure 3 shows the ratio of true-labeled nodes in Dselect in the last epoch and in Dtrain under uniform noise on various datasets. It can be found that nearly all the nodes selected by LP are true-labeled even if most training nodes are mislabeled, which demonstrates the great ability of LP to select true-labeled nodes from noisy training nodes. Secondly, we remove the label aggregation in LPM to validate its necessity, and the result shows that the performance of our method becomes much worse without label aggregation. 
It is necessary to mine the potential information from the remaining noisy training nodes after LP selection. Besides, we validate the effectiveness by replacing the learned aggregation coefficients λ with random numbers between 0 and 1. It is obvious that the aggregation coefficients λ optimized by meta learning outperform random λ. Also, we assign the percentage of clean nodes of each label class as λ (tuned) for comparison. These results validate the effectiveness of the meta-learning based label aggregation. The results of the above two experiments are shown in Table 5. We denote the average λ of clean nodes and noisy nodes in Dleft as λclean and λnoise respectively, with ∆λ = λclean − λnoise. We plot the variation of ∆λ during the training stage in Figure 4. It can be observed that λclean > λnoise throughout the training stage and that the margin between λclean and λnoise grows larger as training proceeds, which suggests that the λ optimized by our method is valid." }, { "heading": "4.4 IMPACT OF FINETUNING AND NOISE RATE", "text": "We would like to investigate how our baselines perform without finetuning. As can be seen in Figure 5, the performance of the baselines degenerates relatively significantly without finetuning across different noise rates. This illustrates that some baselines (without finetuning) that were designed for image datasets may perform relatively poorly on graph-structured data, and this motivates our work, which trains GNNs robustly by utilizing the structure information of graph data. Besides, we can also observe that our method only drops nearly 9% when the flip noise rate increases from 0% to 40%, whereas the baselines drop nearly 20%-30%, which illustrates that our method is more robust, especially at high noise rates. At 0% noise, our method only slightly underperforms re-weight based methods. This is reasonable because the original labels are all correct, and our method will inevitably perturb a few clean labels while the re-weight based methods will not." }, { "heading": "4.5 SIZE OF THE CLEAN SET", "text": "We try to strike a balance and understand when finetuning will be effective. As can be seen in Figure 6, our method performs better even if the size of the clean set is extremely small. The overall test accuracy does not grow much once the size of the clean set is large enough. Besides, the test accuracy of baselines with finetuning increases significantly when the size of the clean set grows larger. This suggests that finetuning becomes valid when the size of the clean set grows larger, because GNNs can achieve good performance with relatively few samples (Kipf & Welling, 2016; Veličković et al., 2017). From this perspective, our method can also serve as a complement to finetuning-based methods when the size of the clean set is large enough." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "In this work, we proposed a robust framework for GNNs against label noise. This is the first method specially designed for the label noise problem in utilizing GNNs to classify graph nodes, and it outperforms state-of-the-art methods on graph-structured data, which may serve as a beginning for future research towards robust GNNs against label noise. As future work, we may design an inductive robust method. Besides, better methods that do not need clean sets are also among our goals." }, { "heading": "A APPENDIX : ADDITIONAL EXPERIMENT RESULTS", "text": "We also take Graph Attention Networks (GAT) as the feature extractor and classifier, and the results shown in Table
A.6 validate that our method can also perform well with various GNNs. Besides, LPM also performs better than the other baselines on the larger graph dataset Coauthor-Phy; the results can be seen in Table A.7. We also show confusion matrices of the Base model and LPM in Figure A.4, which visually demonstrate that our method can improve the robustness of GNNs against label noise by a large margin." }, { "heading": "B APPENDIX : ADDITIONAL DETAILS OF OUR EXPERIMENTS", "text": "The original Clothing1M and Webvision datasets are both large-scale datasets with real-world label noise. We randomly choose 5000 images in 10 classes from the original datasets; every image serves as a node in the graph, and a kNN graph (k=5) is treated as the graph structure so that GNNs can be applied to these datasets. This setting is similar to some previous works which also aim to apply GNNs to datasets without a graph structure. ResNet-50 with ImageNet-pretrained weights is utilized to extract feature vectors for all the images. Table A.8 shows the different hyper-parameters of the LPM experiments for the different datasets. In all the experiments, 25 true-labeled nodes are utilized as the initial clean set or as the samples for finetuning, and the total number of epochs for all experiments is 300. In the Co-teaching plus experiment, the initial epoch is 270, the forget rate is 0.1 with 5 epochs of linear drop rate, and the exponent of the forget rate is 1. For MW-Nets, the dimension of the meta net’s middle layer is 100 and the learning rate is 5 × 10−3. The q for GCE loss is 0.1. The combination of Normalized Focal Loss and Mean Absolute Error is utilized in the APL experiments; the weight of Normalized Focal Loss is 0.1 and the weight of Mean Absolute Error is 10. For the JoCoR experiments, the number of epochs for the linear drop rate is 5 and the exponent of the forget rate is 2. The balance coefficient between the conventional supervised learning loss and the contrastive loss is 0.01. The learning rate and weight decay of the Graph Attention Networks are 0.01 and 5 × 10−4. The dimension of the hidden layer of GAT is 16 and the number of attention heads is 8. The alpha of the leaky ReLU is 0.2 and the dropout rate is 0.5. Throughout this work we implemented gradient-based meta-learning algorithms in PyTorch using the Higher library (Grefenstette et al., 2019)." }, { "heading": "C APPENDIX : CONVERGENCE OF LPM", "text": "Our proof of the convergence of LPM mainly follows some previous works (Ren et al., 2018; Shu et al., 2019) that utilize meta-learning to reweight noisy training samples. As is illustrated in some previous works (Zhou et al., 2004; Zhu et al., 2005), LP will converge to a fixed point; namely, Dselect and Dleft will converge to fixed sets. In our proof, the final |Dleft| and the final |Dclean| are denoted by n and m for easier illustration. The loss function loss is denoted by l in this proof. 
Here we first rewrite the forward and backward equations as follows:
ŷ_j = f(x_j; w_t) = y_j(w)|_{w_t}, (17)
λ_j = g(l(y_j, ŷ_j) ‖ l(ỹ_j, ŷ_j); θ_t) = λ_j(θ; w_t)|_{θ_t}, (18)
L^tr(w_t; θ_t) = (1/n) Σ_{j=1}^n l(λ_j y_j + (1 − λ_j) ỹ_j, ŷ_j), (19)
ŵ_t(θ_t) = w_t − α ∇_w L^tr(w; θ_t)|_{w_t}, (20)
ŷ_i = f(x_i; ŵ_t) = y_i(ŵ; x_i)|_{ŵ_t}, (21)
L^c(ŵ)|_{ŵ_t} = (1/m) Σ_{i=1}^m L^c_i(ŵ)|_{ŵ_t} = (1/m) Σ_{i=1}^m L^c_i(ŵ_t(θ))|_{θ_t} = (1/m) Σ_{i=1}^m l(y_i, ŷ_i), (22)
θ_{t+1} = θ_t − β ∇_θ L^c(ŵ(θ))|_{θ_t}, (23)
w_{t+1} = w_t − α ∇_w L^tr(w; θ_{t+1})|_{w_t}. (24)
Here (x_j, y_j) is a node from the final left training set Dleft; (x_i, y_i) is a node from the final clean set Dclean; f is the GCN for classification with weights w; g is the Aggregation Net with weights θ, whose input is the pair of loss values; L^c is the loss on the clean set; L^tr is the final training loss; and l(y, ŷ) is a loss (such as cross entropy) which is linear in the target:
l(λ y_1 + (1 − λ) y_2, ŷ) = λ l(y_1, ŷ) + (1 − λ) l(y_2, ŷ).
Derivation of the equation updating the weights of the Aggregation Net:
(1/m) Σ_{i=1}^m ∇_θ L^c_i(ŵ(θ))|_{θ_t} = (1/m) Σ_{i=1}^m (∂L^c_i(ŵ)/∂ŵ)|_{ŵ_t} Σ_{j=1}^n (∂ŵ_t(θ)/∂λ_j)|_{θ_t} (∂λ_j(θ; w_t)/∂θ)|_{θ_t}. (25)
According to Equation (20),
ŵ_t(θ)|_{θ_t} = w_t − α ∇_{w_t} (1/n) Σ_{j=1}^n l(λ_j y_j + (1 − λ_j) ỹ_j, ŷ_j),
so, by the linearity of l,
(∂ŵ_t(θ)/∂λ_j)|_{θ_t} = −(α/n) ∇_{w_t} ∂[λ_j l(y_j, ŷ_j) + (1 − λ_j) l(ỹ_j, ŷ_j)]/∂λ_j = −(α/n) (∂(l(y_j, ŷ_j) − l(ỹ_j, ŷ_j))/∂w_t)|_{w_t}.
Therefore, Equation (25) can be written as
(1/m) Σ_{i=1}^m ∇_θ L^c_i(ŵ(θ))|_{θ_t} = −(α/n) Σ_{j=1}^n ((1/m) Σ_{i=1}^m G_{ij}) (∂λ_j(θ; w_t)/∂θ)|_{θ_t},
where G_{ij} = (∂L^c_i(ŵ)/∂ŵ)^T|_{ŵ_t} (∂(l(y_j, ŷ_j) − l(ỹ_j, ŷ_j))/∂w_t)|_{w_t}.
Lemma 1. Suppose the loss function l is L-Lipschitz smooth, λ(·) is differentiable with a δ-bounded gradient and twice differentiable with its Hessian bounded by B with respect to θ, and l(·, ·) has ρ-bounded gradients with respect to the parameters w. Then the gradient of L^c_i(ŵ) with respect to θ is Lipschitz continuous.
Proof. The supposition is equivalent to the following inequalities:
‖∇_ŵ L^c(ŵ)|_{w_1} − ∇_ŵ L^c(ŵ)|_{w_2}‖ ≤ L ‖w_1 − w_2‖ for any w_1, w_2; (26)
‖∇_θ λ(θ; w_t)‖ ≤ δ; (27)
‖∇²_θ λ(θ; w_t)‖ ≤ B; (28)
‖∇_w l(y_i, ŷ_i(ŵ_t(w); x_i))‖ ≤ ρ. (29)
The gradient of the clean loss with respect to θ reads
∇_θ L^c_i(ŵ(θ))|_{θ_t} = −(α/n) Σ_{j=1}^n G_{ij} (∂λ_j(θ; w_t)/∂θ)|_{θ_t}.
Taking the gradient with respect to θ on both sides, we have
∇²_θ L^c_i(ŵ(θ))|_{θ_t} = −(α/n) Σ_{j=1}^n [(∂G_{ij}/∂θ)|_{θ_t} (∂λ_j(θ; w_t)/∂θ)|_{θ_t} + G_{ij} (∂²λ_j(θ; w_t)/∂θ²)|_{θ_t}].
For the first term in the summation, expanding ∂G_{ij}/∂θ through ŵ_t and bounding each factor via (26)-(29) gives
‖(∂G_{ij}/∂θ)|_{θ_t} (∂λ_j(θ; w_t)/∂θ)|_{θ_t}‖ ≤ δ α ‖∂²L^c_i(ŵ)/∂ŵ²‖ ‖∂(l(y_k, ŷ_k) − l(ỹ_k, ỹ_k))/∂w_t‖ ‖∂λ_k(θ; w_t)/∂θ‖ ‖∂(l(y_j, ŷ_j) − l(ỹ_j, ŷ_j))/∂w_t‖ ≤ 4αLρ²δ².
And for the second term,
‖G_{ij} (∂²λ_j(θ; w_t)/∂θ²)|_{θ_t}‖ = ‖(∂L^c_i(ŵ)/∂ŵ)^T|_{ŵ_t} (∂(l(y_j, ŷ_j) − l(ỹ_j, ŷ_j))/∂w_t)|_{w_t} (∂²λ_j(θ; w_t)/∂θ²)|_{θ_t}‖ ≤ 2Bρ².
Therefore,
‖∇²_θ L^c_i(ŵ(θ))|_{θ_t}‖ ≤ 4α²Lρ²δ² + 2αρ²B. 
(30)\nLet Lv = 4α2Lρ2δ2 + 2αρ2B ,Based on Lagrange mean value theorem, we have\n‖∇θLc(ŵt(θ1))−∇θLc(ŵt(θ2))‖ ≤ Lv‖θ1 − θ2‖,\nfor all θ1, θ2.\nTheorem 1. Suppose the loss function l is L-Lipschitz smooth, and λ(·) is differential with a δbounded gradient, twice differential with its Hessian bounded byB with respect to θ. Let the learning rate αt = min{1, kT }, for some k > 0, such that k T < 1 and learning rate βt a monotone descent sequence, βt = min{ 1L , c√ T } for some c > 0, such that L ≤ c√ T and ∑∞ t=1 βt ≤ ∞, ∑∞ t=1 β 2 t ≤ ∞. Then the loss of Aggregation Net can achieve ‖∇θLc(ŵ(θt))‖22 ≤ in O(1/ 2)steps. More specifically,\nmin 0≤t≤T ‖∇θLc(ŵ(θt))‖22 ≤ O( C√ T ). (31)\nProof. The iteration for updating the parameter θ reads\nθt+1 = θt − β∇θLc(ŵt(θ))|θt .\nIn two successive iteration, observe that\nLc(ŵt+1(θt+1))− Lc(ŵt(θt)) =[Lc(ŵt+1(θt+1))− Lc(ŵt(θt+1))] + [Lc(ŵt(θt+1))− Lc(ŵt(θt))]. (32)\nFor the first term, given that loss function on clean set is Lipschitz smooth, we have\nLc(ŵt+1(θt+1))− Lc(ŵt(θt+1))\n≤ < ∇Lc(ŵt(θt+1)), ŵt+1(θt+1)− ŵt(θt+1) > + L\n2 ‖ŵt+1(θt+1)− ŵt(θt+1)‖22.\nAccording to Equation (20) and (23),\nŵt+1(θt+1)− ŵt(θt+1) = − αt n n∑ j=1 [λj∇wl(yj , ŷj) + (1− λj)∇wl(ỹj , ŷj)]|wt+1 ,\nand thus,\n‖Lc(ŵt+1(θt+1))− Lc(ŵt(θt+1))‖ ≤ αtρ2 + L\n2 α2tρ 2,\nsince the first gradient of loss function is bounded by ρ. By the Lipschitz continuity of Lc(ŵt(θ)) according to Lemma 1., it can be obtained that\nLc(ŵt(θt+1))− Lc(ŵt(θt))\n≤〈∇θtLc(ŵt(θt)), θt+1 − θt〉+ L\n2 ‖θt+1 − θt‖22\n= 〈∇θtLc(ŵt(θt)),−βt∇θtLc(ŵt(θt))〉+ Lβ2t\n2 ‖∇θtLc(ŵt(θt))‖22\n=− (βt − Lβ2t\n2 )‖∇θtLc(ŵt(θt))‖22.\nTherefore, the Equation (32) satisfies\nLc(ŵt+1(θt+1))− Lc(ŵt(θt)) ≤ αtρ2 + L\n2 α2tρ\n2 − (βt − Lβ2t\n2 )‖∇θtLc(ŵt(θt))‖22\n(βt − Lβ2t\n2 )‖22∇θtLc(ŵt(θt))‖22 ≤ αtρ2 +\nL 2 α2tρ 2 − Lc(ŵt+1(θt+1)) + Lc(ŵt(θt)).\nSumming up above inequalities from 1 to T , we have\nT∑ t=1 (βt − Lβ2t 2 )‖∇θtLc(ŵt(θt))‖22 ≤Lc(ŵ1(θ1)) + T∑ t=1 (αtρ 2 + L 2 α2tρ 2)\nT∑ t=1 (βt − Lβ2t 2 ) min t ‖∇θtLc(ŵt(θt))‖22 ≤Lc(ŵ1(θ1)) + T∑ t=1 (αtρ 2 + L 2 α2tρ 2).\nFurthermore,\nmin t ‖∇θtLc(ŵt(θt))‖22 ≤\nLc(ŵ1(θ1)) + ∑T t=1(αtρ 2 + L2 α 2 tρ\n2)∑T t=1(βt − Lβ2t 2 )\n≤ 2Lc(ŵ1(θ1)) +\n∑T t=1(2αtρ\n2 + Lα2tρ 2)∑T\nt=1(2βt − Lβ2t )\n≤ 2Lc(ŵ1(θ1)) +\n∑T t=1(2αtρ\n2 + Lα2tρ 2)∑T\nt=1(βt)\n≤2L c(ŵ1(θ1)) + α1ρ 2T (2 + L)\nTβt\n= 2Lc(ŵ1(θ1)\nT\n1 βt + α1ρ\n2(2 + L)\nβt\n≤2L c(ŵ1(θ1)\nT max{L,\n√ T\nk }+ min{1, k T }max{L,\n√ T\nk }ρ2(2 + L)\n≤2L c(ŵ1(θ1)\nc √ T\n+ kρ2(2 + L)\nc √ T\n= O( 1 T ).\nIt holds for ∑T t=1(βt) ≤ ∑T t=1(2βt − Lβ2t ). In conclusion, it proves that the algorithm can always achieve min0≤t≤T ‖∇θLc(ŵ(θt))‖22 ≤ O( 1√T ) in T steps.\nLemma 2. Let (an)1≤n, (bn)1≤n be two non-negative real sequences such that the series ∑∞ ii an\ndiverges, the series ∑∞ ii anbn converges, and there exists K > 0 such that ‖bn+1 − bn‖ ≤ Kan. Then the seqences (bn)1≤n converges to 0. Proof. See the proof of Lemma A.5 in [Stochastic majorization-minimization algorithms for ].\nTheorem 2. Suppose the loss function l is L-Lipschitz smooth and have ρ-bounded gradients with respect to training data and clean set, and λ(·) is differential with a δ-bounded gradient twice differential with its Hessian bounded by B with respect to θ. Let the learning rate αt = min{1, kT }, for some k > 0, such that kT < 1 and learning rate βt a monotone descent sequence, βt = min{ 1 L , c√ T } for some c > 0, such that L ≤ c√ T and ∑∞ t=1 βt ≤ ∞, ∑∞ t=1 β 2 t ≤ ∞. Then\nlim t→∞\n‖∇wtLtr(wt; θt+1)‖22 = 0.\nProof. It is obvious that at satisfy ∑∞ t=0 at =∞, ∑∞ t=0 at ≤ ∞. 
In Eq. 18, 19, 20, and the linearity of L, we rewrite the update of w as\nwt+1 = wt − αt∇Ltr(wt; θt+1)\n= wt − αt n n∑ j=1 λj(θt+1;wt)∇wt l(yj , ŷj(wt)) + (1− λj(θt+1;wt))∇wt l(ỹj , ŷj(wt)).\nFirst, we have the difference of the loss function on training set between two iterations,\nLtr(wt+1; θt+2)− Ltr(wt; θt+1) =[Ltr(wt+1; θt+2)− Ltr(wt+1; θt+1)] + [Ltr(wt+1; θt+1)− Ltr(wt; θt+1)]. (33)\nFor the first term in Eq.33, by the L-Lipschitz-smooth and ρ−bounded gradients of λ with respect to training and clean set,\nLtr(wt+1; θt+2)− Ltr(wt+1; θt+1)\n= 1\nn n∑ j=1 (λj(θt+2;wt+1)− λj(θt+1;wt+1))l(yj , ŷ(wt+1)) + (λj(θt+1;wt+1)− λj(θt+2;wt+1))l(ỹj , ŷj(wt+1))\n≤ 1 n n∑ j=1 ( 〈 ∂λj(θ;wt+1) ∂θ |θt+1 , θt+2 − θt+1 〉 + δ 2 ‖θt+2 − θt+1‖22)(l(yj , ŷ(wt+1)) + l(yj , ŷ(wt+1)))\n= 1\nn n∑ j=1 ( 〈 ∂λj(θ;wt+1) ∂θ |θt+1 ,−βt∇θtL(ŵt(θt)) 〉 + δβ2t 2 ‖∇θtL(ŵt(θt))‖22)(l(yj , ŷ(wt+1)) + l(yj , ŷ(wt+1))).\nFor the second term in Eq. 33,\nLtr(wt+1; θt+1)− Ltr(wt; θt+1) ≤ 〈 ∇wtLtr(wt; θt+1), wt+1 − wt 〉 + L\n2 ‖wt+1 − wt‖22\n=− (αt − La2t\n2 )‖∇wtLtr(wt; θt+1)‖22.\nTherefore, we have\nLtr(wt+1; θt+2)− Ltr(wt; θt+1)\n≤ 1 n n∑ j=1 ( 〈 ∂λj(θ;wt+1) ∂θ |θt+1 ,−βt∇θtLc(ŵt(θt)) 〉\n+ δβ2t 2 ‖∇θtL(ŵt(θt))‖22)(l(yj , ŷ(wt+1)) + l(yj , ŷ(wt+1))) −(αt − Lα2t\n2 )‖∇wtLtr(wt; θt+1)‖22.\nSumming up the inequalities in both sides from t = 1 to∞, we have\nlim t→∞\n‖Ltr(wt+1; θt+2)− Ltr(w1; θ2)‖\n≤ ∞∑ t=1 −βt n n∑ j=1 [‖∂λj(θ;wt+1) ∂θ |θt+1‖2‖∇θtLc(ŵt(θt))‖2(‖l(yj , ŷ(wt+1))‖2 + ‖l(yj , ŷ(wt+1))‖2)\n+ ∞∑ t=1 δβ2t 2 n∑ j=1 ‖∇θtLc(ŵt(θt))‖22](‖l(yj , ŷ(wt+1))‖2 + ‖l(yj , ŷ(wt+1))‖2)\n− ∞∑ t=1 (αt − Lα2t 2 )‖∇wtLtr(wt; θt+1)‖22.\nRearrange the terms of the inequality, we obtain ∞∑ t=1 αt‖∇wtLtr(wt; θt+1)‖22\n+ ∞∑ t=1 βt n n∑ j=1 ‖∂λj(θ;wt+1) ∂θ |θt+1‖2‖∇θtLc(ŵt(θt))‖2(‖l(yj , ŷ(wt+1))‖2 + ‖l(yj , ŷ(wt+1))‖2)\n≤Lα 2 t\n2 ‖∇wtLtr(wt; θt+1)‖22\n+ ∞∑ t=1 δβ2t 2 n∑ j=1 ‖∇θtLc(ŵt(θt))‖22](‖l(yj , ŷ(wt+1))‖2 + ‖l(yj , ŷ(wt+1))‖2)\n− lim t→∞ ‖Ltr(wt+1; θt+2)‖2 + ‖Ltr(w1; θ2)‖2\n≤ ∞∑ t=1 Lαt 2 ρ2 + ‖Ltr(w1; θ2)‖2 + ∞∑ t=1 δβ2t 2 (2Mρ2)− lim t→∞ ‖Ltr(wt+1; θt+2)‖2\n≤∞. The inequality next to last holds since our loss function is bounded by M , and the last one holds for∑∞ t=1 α 2 t and ∑∞ t=1 β 2 t are finite.\nIn addition, since ∞∑ t=1 βt n n∑ j=1 ‖∂λj(θ;wt+1) ∂θ |θt+1‖2‖∇θtLc(ŵt(θt))‖2(‖l(yj , ŷ(wt+1))‖2 + ‖l(yj , ŷ(wt+1))‖2)\n≤2Mρδ ∞∑ t=1 βt ≤ ∞,\nwe can obtain that ∞∑ t=1 αt‖∇wtLtr(wt; θt+1)‖22 ≤ ∞. (34)\nIn the other hand, based on the inequality: (‖a‖+ ‖b‖)(‖a‖ − ‖b‖) ≤ ‖a+ b‖‖a− b‖,\nwe have |‖∇Ltr(wt+1; θt+2)‖22 − ‖∇Ltr(wt; θt+1)‖22|\n=(‖∇Ltr(wt+1; θt+2)‖2 + ‖∇Ltr(wt; θt+1)‖2)(‖∇Ltr(wt+1; θt+2)‖2 − ‖∇Ltr(wt; θt+1)‖2) ≤‖∇Ltr(wt+1; θt+2) +∇Ltr(wt; θt+1)‖2‖‖2‖∇Ltr(wt+1; θt+2)−∇Ltr(wt; θt+1)‖2 ≤(‖∇Ltr(wt+1; θt+2)‖2 + ‖∇Ltr(wt; θt+1)‖2)‖∇Ltr(wt+1; θt+2)−∇Ltr(wt; θt+1)‖2) ≤2Lρ‖(wt+1, θt+2)− (wt, θt+1)‖2 ≤2Lραtβt‖(∇Ltr(wt, θt+1),∇Lc(wt, θt+1))‖2 ≤2 √ 2Lρ2β1αt\n=Cαt.\nFor Eq. 34 which reads ∞∑ t=1 αt‖∇wtLtr(wt; θt+1)‖22 ≤ ∞,\nsince ∑∞ t=0 αt = ∞, and there exists K = C > 0, such that |‖∇Ltr(wt+1; θt+2)‖22 − ‖∇Ltr(wt; θt+1)‖22| ≤ Cαt, by Lemma 2., we can conclude that lim t→∞ ‖∇wtLtr(wt; θt+1)‖22 = 0,\nwhich indicates that the gradient of loss on training set of our algorithm will finally achieve to zero, and thus the iteration of w enables training loss to converge." } ]
2020
null
SP:280c877eeaeb18c931ef41182155ce29a95adb06
[ "This paper is proposing to build ensembles of deep models, components of which have different hyperparameter (HP) configurations. This is done by first running Hyperband to create a large pool, and then run a greedy algorithm to construct an ensemble. This algorithm is termed Dykstra's algorithm on a certain graph, but it is of course simply just the default greedy algorithm, which is almost by default used to create an ensemble from a pool. The correct reference for this is [1], and this is just what people do when they create ensembles. The paper also misses a number of relevant recent work to build ensembles of deep models, at least [2, 3]. There is nothing new here, except maybe that Caruana's algorithm can now also be called Dykstra's." ]
Ensemble Deep Learning improves accuracy over a single model by combining predictions from multiple models. It has established itself as the core strategy for tackling the most difficult problems, like winning Kaggle challenges. Due to the lack of consensus on how to design a successful deep learning ensemble, we introduce Hyperband-Dijkstra, a new workflow that automatically explores neural network designs with Hyperband and efficiently combines them with Dijkstra’s algorithm. This workflow has the same training cost as a standard Hyperband run, except that sub-optimal solutions are stored and become candidates for the ensemble selection step (recycling). Next, to predict on new data, the user gives Dijkstra the maximum number of models wanted in the ensemble, to control the tradeoff between accuracy and inference time. Hyperband is a very efficient algorithm, allocating exponentially more resources to the most promising configurations. It is also capable of proposing diverse models due to its pure-exploration nature, which allows the Dijkstra algorithm, through a smart combination of diverse models, to achieve a strong variance and bias reduction. The exploding number of possible combinations generated by Hyperband increases the probability that Dijkstra finds an accurate combination which fits the dataset and generalizes to new data. The two experiments, on CIFAR100 and on our unbalanced microfossil dataset, show that our new workflow generates an ensemble far more accurate than any other ensemble of any ResNet models from ResNet18 to ResNet152.
[ { "affiliations": [], "name": "HYPERPARAMETER OPTI" }, { "affiliations": [], "name": "DEEP LEARNING" } ]
[ { "authors": [ "Vincent Vanhoucke", "Vijay Vasudevan", "Fernanda Viégas", "Oriol Vinyals", "Pete Warden", "Martin Wattenberg", "Martin Wicke", "Yuan Yu", "Xiaoqiang Zheng" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "venue": null, "year": 2015 }, { "authors": [ "James Bergstra", "Yoshua Bengio" ], "title": "Random search for hyper-parameter optimization", "venue": "J. Mach. Learn. Res.,", "year": 2012 }, { "authors": [ "James S. Bergstra", "Rémi Bardenet", "Yoshua Bengio", "Balázs Kégl" ], "title": "Algorithms for hyperparameter optimization", "venue": "Advances in Neural Information Processing Systems", "year": 2011 }, { "authors": [ "Shagnik Das" ], "title": "A brief note on estimates of binomial coefficients", "venue": "http://page.mi. fu-berlin.de/shagnik/notes/binomials.pdf,", "year": 2020 }, { "authors": [ "E.W. Dijkstra" ], "title": "A note on two problems in connexion with graphs", "venue": "Numer. Math.,", "year": 1959 }, { "authors": [ "Stefan Falkner", "Aaron Klein", "Frank Hutter" ], "title": "BOHB: robust and efficient hyperparameter optimization at scale", "venue": "CoRR, abs/1807.01774,", "year": 2018 }, { "authors": [ "Michael Gashler", "Christophe Giraud-Carrier", "Tony Martinez" ], "title": "Decision tree ensemble: Small heterogeneous is better than large homogeneous", "venue": "Seventh International Conference on Machine Learning and Applications, pp. 900–905,", "year": 2008 }, { "authors": [ "Felipe O. Giuste", "Juan C. Vizcarra" ], "title": "Cifar-10 image classification using feature ensembles, 2020", "venue": null, "year": 2020 }, { "authors": [ "Antonio Gulli", "Sujit Pal" ], "title": "Deep learning with Keras", "venue": "Packt Publishing Ltd,", "year": 2017 }, { "authors": [ "Peter E. Hart", "Nils J. Nilsson", "Bertram Raphael" ], "title": "Correction to ”a formal basis for the heuristic determination of minimum cost paths", "venue": "ISSN 01635719", "year": 1972 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "CoRR, abs/1512.03385,", "year": 2015 }, { "authors": [ "Matthew Hoffman", "Eric Brochu", "Nando de Freitas" ], "title": "Portfolio allocation for bayesian optimization", "venue": "In Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence,", "year": 2011 }, { "authors": [ "Gao Huang", "Yixuan Li", "Geoff Pleiss", "Zhuang Liu", "John E. Hopcroft", "Kilian Q. Weinberger" ], "title": "Snapshot ensembles: Train 1, get M for free", "venue": "CoRR, abs/1704.00109,", "year": 2017 }, { "authors": [ "Max Jaderberg", "Valentin Dalibard", "Simon Osindero", "Wojciech M. Czarnecki", "Jeff Donahue", "Ali Razavi", "Oriol Vinyals", "Tim Green", "Iain Dunning", "Karen Simonyan", "Chrisantha Fernando", "Koray Kavukcuoglu" ], "title": "Population based training of neural networks", "venue": "CoRR, abs/1711.09846,", "year": 2017 }, { "authors": [ "Travis Johnston", "Steven Young", "David Hughes", "Robert Patton", "Devin White" ], "title": "Optimizing convolutional neural networks for cloud detection", "venue": "pp. 
1–9,", "year": 2017 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Levente Kocsis", "Csaba Szepesvári" ], "title": "Bandit based monte-carlo planning", "venue": null, "year": 2006 }, { "authors": [ "Alexander Kolesnikov", "Lucas Beyer", "Xiaohua Zhai", "Joan Puigcerver", "Jessica Yung", "Sylvain Gelly", "Neil Houlsby" ], "title": "Big transfer (bit): General visual representation learning, 2019", "venue": null, "year": 2019 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Lisha Li", "Kevin Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Hyperband: A novel bandit-based approach to hyperparameter optimization", "venue": null, "year": 2017 }, { "authors": [ "Yuansong Liao", "John Moody" ], "title": "Constructing heterogeneous committees using input feature grouping: Application to economic forecasting", "venue": null, "year": 1999 }, { "authors": [ "Richard Liaw", "Eric Liang", "Robert Nishihara", "Philipp Moritz", "Joseph E Gonzalez", "Ion Stoica" ], "title": "Tune: A research platform for distributed model selection and training", "venue": "arXiv preprint arXiv:1807.05118,", "year": 2018 }, { "authors": [ "Y. Liu", "X. Yao" ], "title": "Ensemble learning via negative correlation", "venue": "Neural Networks,", "year": 1999 }, { "authors": [ "Matthew Rocklin" ], "title": "Dask: Parallel Computation with Blocked algorithms and Task Scheduling", "venue": null, "year": 2015 }, { "authors": [ "Philipp Moritz", "Robert Nishihara", "Stephanie Wang", "Alexey Tumanov", "Richard Liaw", "Eric Liang", "Melih Elibol", "Zongheng Yang", "William Paul", "Michael I. Jordan", "Ion Stoica" ], "title": "Ray: A distributed framework for emerging ai applications", "venue": "In Proceedings of the 13th USENIX Conference on Operating Systems Design and Implementation,", "year": 2018 }, { "authors": [ "Robert M. Patton", "J. Travis Johnston", "Steven R. Young", "Catherine D. Schuman", "Thomas E. Potok", "Derek C. Rose", "Seung-Hwan Lim", "Junghoon Chae", "Le Hou", "Shahira Abousamra", "Dimitris Samaras", "Joel Saltz" ], "title": "Exascale Deep Learning to Accelerate Cancer Research", "venue": "arXiv e-prints, art. arXiv:1909.12291,", "year": 1909 }, { "authors": [ "Hieu Pham", "Melody Y. Guan", "Barret Zoph", "Quoc V. Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "CoRR, abs/1802.03268,", "year": 2018 }, { "authors": [ "Lutz Prechelt" ], "title": "Early stopping-but when? In Neural Networks: Tricks of the Trade, This Book is an Outgrowth", "venue": "NIPS Workshop,", "year": 1996 }, { "authors": [ "Maarten Schadd", "Mark Winands", "H. Herik", "Guillaume Chaslot", "Jos Uiterwijk" ], "title": "Single-player monte-carlo tree search", "venue": "pp. 1–12,", "year": 2008 }, { "authors": [ "Holger Schwenk", "Y. Bengio" ], "title": "Boosting neural networks", "venue": "Neural computation, 12:1869–87,", "year": 2000 }, { "authors": [ "Peter Sollich", "Anders Krogh" ], "title": "Learning with ensembles: How overfitting can be useful", "venue": null, "year": 1995 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": "J. Mach. Learn. 
Res.,", "year": 2014 }, { "authors": [ "Nadav Voloch" ], "title": "Optimal paths of knapsack-set vertices on a weight-independent graph", "venue": "WSEAS Transactions on Computers, 16:163–171,", "year": 2017 }, { "authors": [ "D.H. Wolpert", "W.G. Macready" ], "title": "No free lunch theorems for optimization", "venue": "Trans. Evol. Comp,", "year": 1997 }, { "authors": [ "David H. Wolpert" ], "title": "Stacked generalization", "venue": "Neural Networks,", "year": 1992 }, { "authors": [ "Saining Xie", "Ross B. Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks. CoRR, abs/1611.05431, 2016", "venue": "URL http: //arxiv.org/abs/1611.05431", "year": 2016 }, { "authors": [ "Tune Liaw" ], "title": "witch spread experiments to run on GPUs. Deep Learning training and data augmentation was coded with the framework", "venue": "It runs above the Ray client-server Moritz et al", "year": 2018 }, { "authors": [ "He" ], "title": "Regarding the optimization method, we use adam optimizer Kingma & Ba (2014) due to its well known performance and its low learning rate tuning requirement. Hyperparameters labeled as ”mutable” can be updated during the training, for example the learning rate can change but the architecture cannot. PBT algorithm is the only one algorithm tested to discover a schedule", "venue": null, "year": 2016 } ]
[ { "heading": null, "text": "Ensemble Deep Learning improves accuracy over a single model by combining predictions from multiple models. It has established itself to be the core strategy for tackling the most difficult problems, like winning Kaggle challenges. Due to the lack of consensus to design a successful deep learning ensemble, we introduce Hyperband-Dijkstra, a new workflow that automatically explores neural network designs with Hyperband and efficiently combines them with Dijkstra’s algorithm. This workflow has the same training cost than standard Hyperband running except sub-optimal solutions are stored and are candidates to be selected in the ensemble selection step (recycling). Next, to predict on new data, the user gives to Dijkstra the maximum number of models wanted in the ensemble to control the tradeoff between accuracy and inference time. Hyperband is a very efficient algorithm allocating exponentially more resources to the most promising configurations. It is also capable to propose diverse models due to its pure-exploration nature, which allows Dijkstra algorithm with a smart combination of diverse models to achieve a strong variance and bias reduction. The exploding number of possible combinations generated by Hyperband increases the probability that Dijkstra finds an accurate combination which fits the dataset and generalizes on new data. The two experimentation on CIFAR100 and on our unbalanced microfossils dataset show that our new workflow generates an ensemble far more accurate than any other ensemble of any ResNet models from ResNet18 to ResNet152." }, { "heading": "1 INTRODUCTION", "text": "Ensemble machine learning is a popular method to use predictions and combine them for a successful and optimal classification.\nIn the light of its success in Kaggle competition, all top-5 solutions published in the last seven image recognition challenges use at least one ensemble method. The average and median number of individual models used by ensemble is between 7 and 8. Appendix A summarized these 17 solutions.\nDespite its recent popularity among practitioners, there is no consensus on how to apply ensemble in the context of deep neural network. The overall work on ensemble Machine Learning (non-deep) was carried out in the 1990s and 2000s. The implementation of Deep Learning on GPU appeared less than 10 years ago. The outbreak of multi-GPU servers allows to effectively train and evaluate many neural networks simultaneously but also deploy ensemble deep architectures.\nAnother recent trend to improve accuracy is the transfer learning or use external similar data source Kolesnikov et al. (2019). Instead we search a new model-oriented method which can be applied on new kind of problems where no similar dataset exists.\nHyperband-Dijkstra is an innovative way to benefit from this increasing computing power. It consists in unifying the two already proven efficient but contradictory approaches: hyperparameter optimization (HPO) and ensemble. First, one explores and trains models until finding the optimal solution and wasting sub-optimal ones while the other one uses a population of trained models to predict more accurately.\nHyperband-Dijkstra creates an ensemble based on hyperband which is able to generate a huge number of trained deep models. Then, Dijkstra yields efficient combinations between them. 
As far as we know, it was never proposed to use Dijkstra’s algorithm to find a subset of k previously trained models in a greater population.\nAfter that, we describe and discuss interesting properties and experimental results on two datasets:\n• Hyperband-Dijkstra is able to generate better ensemble than any ensemble of ResNet models. • We show that Dijkstra algorithm is better to aggregate k trained models than a naive strategy\nconsisting in taking the top k models based on their validation accuracy. • We show that our workflow (with ensemble of size ≥ 2) keeps benefiting of hyperband\nrunning after many days while a standard use of hyperband (consisting in taking only the best model) stops improving much earlier." }, { "heading": "2 RELATED WORKS", "text": "In this section we briefly review the main ideas from prior work that are relevant to our method.\nEnsemble. Authors Sollich & Krogh (1995) laid the foundation stone about the idea that over-fitted machine learning algorithms can be averaged out to get more accurate results. This phenomenon is explained by the Law of Large Numbers which claims that the average of the results obtained from a large number of trials should be close to the expected value. These results are especially interesting for deep learning models because they are machine learning models which are the most affected to random effects (over-fitting) due to their huge amount of parameters.\nMany ensemble algorithms have been invented such as Wolpert (1992), Breiman (1996) or boosting Schwenk & Bengio (2000). Some other methods are neural networks specific like negative correlation learning Liu & Yao (1999), dropout Srivastava et al. (2014) or snapshot learning Huang et al. (2017). There is today no consensus on the way to do ensembles like shown in the appendix A.\nIn case the architecture of models in the ensemble is biased - for example all models contained are not deep enough or not wide enough to capture relevant features in the data - exploiting parametric diversity will not efficiently improve the results. That is why authors Liao & Moody (1999) Gashler et al. (2008) promote more and more diversity, not only based on the random weights initialisation but based on different machine learning algorithms such as neural network and decision tree in the same ensemble to maximize diversity and therefore the accuracy.\nKnapsack problem. A Combinatorial Optimization problem consists in searching for a solution in a discrete set so that a function is optimized. In many such problems, exhaustive search is not tractable, that is why approximate methods are used. Dijkstra’s algorithm Dijkstra (1959) is a path finding algorithm which locally selects the next best node until it reaches the final node. A* Hart et al. (1972) is an informed algorithm which first expands the most promising node to converge faster than Dijkstra. This knowledge is used only if an appropriate heuristic function is available. Otherwise, in absence of this knowledge, Dijkstra and A* are equivalent. More recently, SP-MCTS Schadd et al. (2008) is a probabilistic approach which runs many tree explorations based on the Upper Confident bound applied to Tree (UCT) Kocsis & Szepesvári (2006) formula to guide exploration/exploitation to catch a maximum of information on one node before selecting it.\nHyperparameter Optimization. The empirical nature of research in Deep Learning leads us to try many models, optimization settings and pre-processing settings to find the best suited one for data. 
No Free Lunch theorem Wolpert & Macready (1997) proves that no hyperparameter optimization can show a superior performance in all cases. Nevertheless, methods have been developed and have shown a stable performance on supervised deep learning dataset.\nDiscrete-space search enables to search the best model description to a given neural network. Under this umbrella, we can find : the number of units per layer, regularization parameters, batch size, type of initialization, optimizer strategy, learning rate. Plenty of approaches exist with a different theoretical background, a pure-exploration approach Bergstra & Bengio (2012), Li et al. (2017), smart computing resources allocation strategies Li et al. (2017) Falkner et al. (2018), a priori based Hoffman et al. (2011), a posteriori based Bergstra et al. (2011) or genetic inspired Jaderberg et al.\n(2017). Those methods are not exclusive, for example BOHB Falkner et al. (2018) mixes Bayesian Optimization strategy to Hyperband.\nAnother automatic approach exists like graph-space search Pham et al. (2018). It consists in finding the best architecture (graph) of neural networks. It provides a maximum degree of freedom in the construction of the neural network architecture. Due to the infinity of combinations, scientists implement several constraints to limit the possibilities of graph generation, save computation cost and preserve the correctness of generated graphs. All hyper-parameters, like optimization settings and data pre-preprocessing are given by user to drive into this graph-space. Due to this complexity and because only models architectures are explored, we decide to not follow this path.\nParallel hyperparameter optimization. All HPO strategies presented in this paper are asynchronous so their deployment is ideal on multi-GPU or multi-node GPU HPC. Distributed clientserver softwares Matthew Rocklin (2015) , Moritz et al. (2018) allow to simultaneously spread those training candidate models and evaluate them. Those frameworks allow also serve them in parallel.\nMulti-objective goal. Authors Johnston et al. (2017) discovered that many neural networks have a comparable accuracy. Literature lets us imagine that the hyper-parameter function topology has two plateaus : where the optimizer algorithm converges and where it does not. This flatness can be used to optimize a secondary goal such as model size, time-to-prediction, power consumption and so on. Authors Patton et al. (2019) propose a multi-objective optimization to not only search an accurate model but also faster ones.\nEarly Stopping. A common practice exists to speed up HPO running like Early Stopping. They consists in resources reallocation strategies by considering learning dynamic of DNN. Prechelt (1998) Li et al. (2017). Early stopping is also known to be a regularization method that stops the training when the validation accuracy plateaus is symptomatic and that it will not generalize well (overfitting)." }, { "heading": "3 PROPOSED WORKFLOW", "text": "In this section we will first see the workflow proposed before going into a more detailed explanation step by step." }, { "heading": "3.1 DETAIL OF THE WORKFLOW", "text": "As shown in figure 1, the proposed workflow consists in using hyperband and not only saving the best one on the disk but the sub-optimal one too. Second, a combinatorial optimization algorithm (Dijkstra’s) finds the best one regarding the maximum number of models desired by the user (noted K). 
Dijkstra’s algorithm computes the validation loss of candidates ensemble to evaluate how well a solution will generalize on the test database.\nThe final accuracy depends on the running time of hyperband and the number of models chosen in the ensemble. Experiments results are shown in section 4.\nThe workflow we introduce is simple. We use Hyperband algorithm and the distributed framework Ray Moritz et al. (2018) and then our combinatorial optimization Disjkstra’s algorithm is a natural choice to ensemble models. The simplicity of the chosen algorithm and re-using existing frameworks reinforce our claims that this work is easy to test on a new dataset." }, { "heading": "3.2 STEP 1 - HYPERBAND TO GENERATE MANY MODELS", "text": "Hyperband relies on an iterative selection of the most promising models to allocate resources, allowing it to exponentially evaluate more configurations than strategies which do not use progressive results during training. Hyperband is a technique that makes minimal assumptions unlike prior configuration evaluation approaches. Its pure-exploration nature combined with conservative resource allocation strategies can sweep better the hyperparameter space than other strategies like blackbox bayesian optimization. This diversity of models sampled are ideal to combine them for Dijkstra’s algorithm and make better ensemble Liao & Moody (1999) Gashler et al. (2008).\nWe only store models trained at least half maximum epochs. This allows to reduce the number of models saved and thus the number of possible combinations by focusing on the most promising models explored by Hyperband." }, { "heading": "3.3 STEP 2 - DIJKSTRA’S ALGORITHM TO COMBINE MODELS", "text": "We discuss that finding the best combination of K among a larger population is first modeled as a graph. We then prove that no exact solution can be applied because of the computing complexity of the problem. That is why we propose Dijsktra’s algorithm, a simple and popular approach." }, { "heading": "3.3.1 PROBLEM MODELING AND INTUITION", "text": "The solution space can be modeled as a tree with the empty ensembles as the root. Every node represents a deep learning model added and any path an ensemble. All nodes can be a terminal and not only leaves. To evaluate and compare their score, we use the formula 1. It calculates the cross entropy between validation labels, averaging predictions and the current ensemble I of size k.\nscoreI = CE(y, 1\nk ∑ i∈I ỹi) (1)\nFigure 2 is a simplified example of three models and their combinations on error distribution. The modeled tree associated is shown in figure 3. The problem is to find the best combination to be as close as possible to the center (0; 0). We observe that the best individual model c (d{c} = 0.22) is not always the best one to combine with the others (d{a,b} = 0.07). That is why smart algorithms are needed. We also eliminate the simple idea that combining all models systematically leads to the best solution (d{a,b,c} = 0.12)." }, { "heading": "3.3.2 PROBLEM COMPLEXITY", "text": "Finding the best ensemble of maximum size K among n models with 1 ≤ K ≤ n is a case of the ’knapsack problem’ belonging to the NP (non-deterministic polynomial) problem’s family. There is no known exact method except brut-forcing all possibilities.\nThe number of possible subsets of size k among n items is computed with the binomial coefficient. This binomial formula is known Das (2020) to be asymptotically polynomial when n is increased and k is fixed. 
When the user puts the maximum size of ensembles K to explore, all combinations k such as 1 ≤ k ≤ K are also candidates. Therefore the quantity of candidates ensembles is given by ∑K\nk=1 ( n k ) . This formula also has a polynomial behavior when k is fixed and n increases. For\nexample, for K = 4 and a population of n = 100, adding only one new model in the population increases the number of combinations from 4.09 million to 4.25 million.\nThis polynomial behavior has two exceptions : when K = 1 (linear) and when K = n (exponential). K = n allows the research of big ensembles for a maximum precision despite inference time. The number of ways to construct a non-empty model from a catalogue of N models is formally described by the equation ∑N k=1 ( N k ) = 2N − 1. We have two options for each model : using it or not (2N\npossibilities). We exclude the empty ensemble (-1). It means that for each new model found by hyperband, the quantity of combinations is multiplied by 2. The combination of ensembles with a catalogue of 100 models is ≈ 1.27e30. Due to this combinatorial explosion, we understand the need of an approximation search algorithm." }, { "heading": "3.3.3 APPROXIMATE SOLUTION WITH DIJKSTRA’S ALGORITHM", "text": "This huge number of ensembles combined to the fact that relationships between model predictions and labels are complex (figure 2 and formula 1) and that no heuristic is known makes Dijkstra’s algorithm a natural choice for this class of problems Voloch (2017). Dijkstra’s algorithm is a Dynamic Programming procedure, meaning it makes and memorizes successive approximation choices.\nWhile the number of possibilities requires approximate solutions, this huge number of candidate ensembles has the advantage of ensuring that better combinations should be found compared to a naive aggregation approach. This is confirmed in the results in section 4.\nOnce a model is found based on the running by Disjkstra’s algorithm, we can combine predictions of models on new data or evaluate it on the test dataset. As training and evaluating, models predicting can be distributed on different GPUs but the averaging require all models finished their prediction." }, { "heading": "4 EXPERIMENTS AND RESULTS", "text": "We experiment our workflow on CIFAR100 Krizhevsky (2009) and microfossils datasets both presented in appendix B.3. On these two datasets there are 16 hyper-parameters to explore. Experimental settings are explained in appendices B for reproducibility purpose but this level of detail is not required to understand our works or results. It is possible that larger hyperparameters value ranges may positively influence again the results obtained.\nIn this section, we evaluate different workflows by evaluating various HPO strategies, different combinatorial optimizations. We also different settings like the number of models in produced ensemble and effect of HPO running time on results." }, { "heading": "4.1 VISUALIZATION OF ARCHITECTURE SAMPLING RESULTS", "text": "As others have highlighted, no correlation exists between the accuracy and computing cost of a model on image recognition. We display it in the figure 4 results of random sampling of hyperparameter space on the CIFAR100 dataset. This is the reason why we propose a target function which measure efficiency of one model based on its accuracy and its inference time. The implemented formula is: CE(y, ỹ)+WI with I the inference time on 2,000 images expressed in seconds and W\na scaling factor arbitrary choosen such as W = 0.001. 
Hyperband minimizing this target function increases the concentration of computing resources on more efficient models. The natural efficiency of Hyperband combined to the fact that we use this multi-criteria target function allows to increase the number of models explored in 6 days by factor 3.2 compared to Random Sampling + Early Stopping (plateau detection).\nAnother Early Stopping method used consist in detecting after one epoch if the models perform better than random predictions on the validation dataset. It shows very effective to detect early which models are unable to learn and free GPUs for other models. Experiments in figure 5 show that about 14% of models diverge so we can save quasi-entirely their running." }, { "heading": "4.2 COMPARISON OF VARIOUS HPO STRATEGIES AND DIJKSTRA’S ALGORITHM", "text": "We evaluate the accuracy of our workflow on CIFAR100 by replacing Hyperband with various HPO strategies in table 1. Retraining the same deep learning architecture from scratch can yield significant distance in different run time, that is why we compare different HPO strategies and different popular ResNet architectures as well in table 2. We observe that Hyperband generally performs well to take the best one and also to aggregate ensembles compared to all other methods. It confirms our claim in the previous section on Hyperband computing efficiency and the ability to generate good ensembles.\nWe also observe that most of HPO strategies discovered better models than the best ResNet models found. For both benchmarks, we observe that ResNet18 compared to other ResNet architectures, lead to better models but when we combine them, ResNet34 is a better choice. We conjecture that ResNet18 leads to a lower parametric diversity compared to ResNet34 models because of the lower number of layers.\nAnother remark is that a proportion of 14% of randomly chosen models diverge while 100% of ResNet models converge. It shows that ResNet are robust handcrafted models but a random process can find more accurate ones on a new dataset.\nThe same results and conclusion on the microfossils dataset are reached in tables 3 and 4." }, { "heading": "4.3 HYPERBAND AND VARIOUS COMBINATORIAL STRATEGIES", "text": "Different combinations algorithms are tested in figures 6 and 7 by varying the number of models from 1 to 16. The total population of models only contains models trained during at least 50 epochs by Hyperband.\nWe tested two naive strategies. The first one consists in drawing randomly ensembles of K models and the second one in taking the top-K. Dijkstra’s algorithm generally finds better solutions than naive strategies.\nWe also evaluate SP-MCTS, a tree search algorithm based on Monte-Carlo. To test SP-MCTS, the solution space was modeled as an unfolded tree representation leading to nodes redundancy, so equivalent nodes were implemented to index the same score. Based on preliminary experiments, SP-MCTS is set to run 1000×K with K the maximum desired number of models to favor accuracy over SP-MCTS computing cost. With a single-threading implementation, Dijkstra’s algorithm takes only 25 seconds to find an ensemble of K = 16 among 160 models while SP-MCTS is x580 slower.\nIn the microfossils dataset, Dijkstra’s algorithm falls to a local minimum and uses the same 10 models when K > 10. 
SP-MCTS do not falls into this trap and keeps benefiting of an increasing power.\nFigure 8: Different combinatorial optimization algorithm tested" }, { "heading": "4.4 EFFECT OF COMPUTING INTENSITY ON THE FINAL ACCURACY", "text": "Our workflow benefits more of computing intensity than standard Hyperband like shown in figures 9 and 10.\nAfter 24 hours, standard Hyperband (consisting in taking only the best model) converges while our worflow with K > 2 keeps benefiting of the models generated. On the CIFAR100 dataset, we identify that ensembles of 12 and 16 models benefit linearly of the computing time. Their accuracy begins to 77% and increases of +0.4% every 24h00.\nMoreove, we observe that adding more models systematically leads to an increasing of accuracy but this trend declines. We show that the benefit is obvious from 1 to 2 models (+3.9%) but the improvement is small from 6 to 16 (+1%).\nFigure 11: Varying the max number of models in ensembles in function of Hyperband running time\nCONCLUSION\nDue to the experimental nature of deep learning and the increasing of available computing power like multi-GPUs servers, it allows to sweep deep learning architectures. The standard usage of hyper parameter optimization consists in training hundreds of models and keeping only the best one, which leads to an important waste of energy.\nWe show that Hyperband efficiently generates diverse architectures coupled by a significant number of combinations between them. That is the reason why a smart selection of models with Dijkstra allows to build accurate ensembles. This workflow benefits of the increasing computational power and proposes a general approach to unify hyper-parameter search and resembling models. On two datasets, it has been also showed that our ensembles are more accurate than a naively build ensemble. Our workflow also yields an ensemble more accurate than any other ensemble of ResNet models." }, { "heading": "A ENSEMBLE AWARDS IN PAST PUBLIC IMAGE RECOGNITION CHALLENGES", "text": "URL : https://ndres.me/kaggle-past-solutions\n• Name: Understanding Clouds from Satellite Images • Description: Can you classify cloud structures from satellites? • Participation: 1538 teams • Prize: $10K • Dead line: 2019-11-19 • 1st place – Use averaging predictions of 3 segmentations models; averaging 9 segmenta-\ntions models • 3rd place – Use majority vote of 4 segmentations models • 4th place – Averaging of 10 models • 5th place – Weighted averaging of 3 segmentations models\n• Name: RSNA Intracranial Hemorrhage Detection • Description: Identify acute intracranial hemorrhage and its subtypes • Participation: 1345 teams • Prize: $25K • Dead line: 2019-10-28 • 2st place – 15 bagging LSTMs (3 bootstraps) • 3rd place - Use weighted averaging predictions of 17 models • 5th place – Stacking of 9 segmentation models\n• Name: Lyft 3D Object Detection for Autonomous Vehicles • Description: Can you advance the state of the art in 3D object detection? • Participation: 547 teams • Prize: $25K • Dead line: 2019-11-3 • 3rd place – 3 faster-rcnn models with Soft NMS\n• Name: Severstal Steel Defect Detection • Description: Can you detect and classify defects in steel? • Participation: 2431 teams • Prize: $120K • Dead line: 2019-10-18 • 1st place – Ensemble of 4 classifications (which one ?); Ensemble of 9 segmenta-\ntions(which one ?) 
• 4th place – Ensemble of 9 segmentations (which one ?)\n• Name: Kuzushiji Recognition • Description: Opening the door to a thousand years of Japanese culture • Participation: 293 teams • Prize: $15K • Dead line: 2019-10-15 • 1st place – Ensemble of 2 R-CNN • 2nd place – 1 Faster-RCNN, Stacking with an ensemble of XGBoost and LightGBM aver-\naging • 3rd place – Hard voting of 5 models; NMS with 2 models\n• Name: The 3rd YouTube-8M Video Understanding Challenge • Description: Temporal localization of topics within video • Participation: 283 teams • Prize: $25K • Dead line: 2019-10-04 • 1st place – 17 averaging models + smooth out predictions • 2nd place – 7 models weighted averaging, weights fixed manually • 3rd place – Stacking of 12 models\n• Name: APTOS 2019 Blindness Detection • Description: Detect diabetic retinopathy to stop blindness before it’s too late • Participation: 2931 teams • Prize: $50K • 1st place – Ensemble of 8 models with stacking • 4th place – Averaging of 3 models" }, { "heading": "B REPRODUCIBILITY", "text": "B.1 HARDWARE\nComputing nodes used are the same than the Oak Ridge Summit nodes. 6 Tesla-V100 GPUs was used for running each hyperparameter optimization algorithm and generate required populations.\nB.2 SOFTWARE STACK\nHyperparameter optimization framework Tune Liaw et al. (2018) was used. It runs above the Ray client-server Moritz et al. (2018) witch spread experiments to run on GPUs. Deep Learning training and data augmentation was coded with the framework Keras Gulli & Pal (2017) with backend Tensorflow 1.14.0 Abadi et al. (2015).\nB.3 THE TWO DATASET USED\nThe CIFAR100 dataset. CIFAR100 Giuste & Vizcarra (2020) consists to 60,000 32x32 RGB images in 100 classes. For each class, there are 580 training images, 20 validating images and 100 testing images.\nThe Microfossils dataset. Microfossils are extremely useful in age dating, correlation and paleoenvironmental reconstruction to refine our knowledge of geology. Micro-fossil species are identified and counted on large microscope images and thanks to their frequencies we can compute the date of sedimentary rocks.\nTo do reliable statistics, a big amount of objects needs to be identified. That is why we need deep learning to automate this work. Today, between 400 and 800 fields of view (microscopy imagery) need to be shot for 1 rock sample. In each field of view, there are between 300 to 400 objects to identify. Among these objects, there are non-fossils (crystals, rock grains etc...) and others are fossils that we are looking for to study rocks.\nOur dataset contains 91 classes of 224x224 RGB images (after homemade preprocessing). Microfossils are calcareous objects took with polarized light microscopy. The classes are unbalanced. We have from 50 images to 2500 images by class, with a total of 32K images in all the dataset. The F1 score was used and labeled as ’accuracy’ on all benchmarks.\nB.4 HYPERPARAMETER CONFIGURATION SPACE\nThe table 5 shows all hyperparameters properties in this workflow. We use a ResNet Zagoruyko & Komodakis (2016) based architectures due to its simplicity to yield promising and robust models on many datasets . We explore different residual block versions: ”V1”, ”V2” He et al. (2015) and ”next” Xie et al. (2016). Regarding the optimization method, we use adam optimizer Kingma & Ba (2014) due to its well known performance and its low learning rate tuning requirement. 
Hyperparameters labeled as ”mutable” can be updated during the training, for example the learning rate can change but the architecture cannot. PBT algorithm is the only one algorithm tested to discover a schedule of mutable hyperparameters.\nWe aware that our research may have a limitations. The range of hyper-parameter can be to short compared to good results found in the literature Zagoruyko & Komodakis (2016) like the batch size, width and depth of convolutionnal neural network. Moreover we could explore other optimization strategies like SGD with momentum and also the learning rate decay. Next, dropout is also a promising method we could explore. To finish, on CIFAR100 our maximum number of epochs is 100 and scientists before us usually use 160 epochs.\nB.5 ADAPTATION TO APPLY ON CIFAR100\nThe CIFAR100 dataset contains 32x32 images while usually ResNet are adapted to be used on imagenet (224x244 images). Those different resolution need some adaptation. On the CIFAR100, the first convolutionnal network is replaced from the 7x7 kernel size with a stride of 2, to a 3x3 kernel size with a stride of 1." } ]
2,020
null
SP:4a55b108b8ae5fe388f54028d939a84dcd677c49
[ "This paper is more like a review of singular learning theory and its implication on deep learning. The authors point out that deep neural networks are singular models and ways to characterize generalization error for regular models cannot produce satisfactory results in this setting. Then the authors introduce the singular learning theory, which has been developed for decades. Then, a series of topics for deep learning, such as flatness and generalization, are studied within the framework singular learning theory, with a combination of theoretical analysis and numerical experiments. The paper is clearly written and well organized. " ]
In singular models, the optimal set of parameters forms an analytic set with singularities and classical statistical inference cannot be applied to such models. This is significant for deep learning as neural networks are singular and thus “dividing” by the determinant of the Hessian or employing the Laplace approximation are not appropriate. Despite its potential for addressing fundamental issues in deep learning, singular learning theory appears to have made little inroads into the developing canon of deep learning theory. Via a mix of theory and experiment, we present an invitation to singular learning theory as a vehicle for understanding deep learning and suggest important future work to make singular learning theory directly applicable to how deep learning is performed in practice.
[]
[ { "authors": [ "Shun-ichi Amari", "Tomoko Ozeki", "Hyeyoung Park" ], "title": "Learning and inference in hierarchical models with singularities", "venue": "Systems and Computers in Japan,", "year": 2003 }, { "authors": [ "Miki Aoyagi", "Sumio Watanabe" ], "title": "Resolution of Singularities and the Generalization Error with Bayesian Estimation for Layered Neural Network", "venue": "In IEICE Trans.,", "year": 2005 }, { "authors": [ "Miki Aoyagi", "Sumio Watanabe" ], "title": "Stochastic complexities of reduced rank regression in Bayesian estimation", "venue": "Neural Networks,", "year": 2005 }, { "authors": [ "Sanjeev Arora", "R Ge", "B Neyshabur", "Y Zhang" ], "title": "Stronger generalization bounds for deep nets via a compression approach", "venue": "In 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Sanjeev Arora", "Simon Du", "Wei Hu", "Zhiyuan Li", "Ruosong Wang" ], "title": "Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Vijay Balasubramanian" ], "title": "Statistical inference, Occam’s razor and statistical mechanics on the space of probability distributions", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Peter L Bartlett", "Dylan J Foster", "Matus J Telgarsky" ], "title": "Spectrally-normalized margin bounds for neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "William M Boothby" ], "title": "An introduction to differentiable manifolds and Riemannian geometry", "venue": "Academic press,", "year": 1986 }, { "authors": [ "Olivier Bousquet", "Stéphane Boucheron", "Gábor Lugosi" ], "title": "Introduction to statistical learning theory", "venue": "In Summer School on Machine Learning,", "year": 2003 }, { "authors": [ "Alon Brutzkus", "Amir Globerson", "Eran Malach", "Shai Shalev-Shwartz" ], "title": "Sgd learns overparameterized networks that provably generalize on linearly separable data", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yuan Cao", "Quanquan Gu" ], "title": "Generalization bounds of stochastic gradient descent for wide and deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Pratik Chaudhari", "Anna Choromanska", "Stefano Soatto", "Yann LeCun", "Carlo Baldassi", "Christian Borgs", "Jennifer Chayes", "Levent Sagun", "Riccardo Zecchina" ], "title": "Entropy-SGD: Biasing gradient descent into wide valleys", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Amit Daniely" ], "title": "SGD learns the conjugate kernel class of the network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Simon S Du", "Xiyu Zhai", "Barnabas Poczos", "Aarti Singh" ], "title": "Gradient descent provably optimizes over-parameterized neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "Gaussian error linear units (GELUs)", "venue": "arXiv preprint arXiv:1606.08415,", "year": 2016 }, { "authors": [ "Joel Hestness", "Sharan Narang", "Newsha Ardalani", "Gregory F. Diamos", "Heewoo Jun", "Hassan Kianinejad", "Md. 
Mostofa Ali Patwary", "Yang Yang", "Yanqi Zhou" ], "title": "Deep learning scaling is predictable", "venue": "empirically. CoRR,", "year": 2017 }, { "authors": [ "Geoffrey E Hinton", "Drew Van Camp" ], "title": "Keeping the neural networks simple by minimizing the description length of the weights", "venue": "In Proceedings of the sixth annual conference on Computational learning theory,", "year": 1993 }, { "authors": [ "Matthew D Hoffman", "Andrew Gelman" ], "title": "The No-U-Turn sampler: adaptively setting path lengths in hamiltonian monte carlo", "venue": "J. Mach. Learn. Res.,", "year": 2014 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Stanislaw Jastrzebski", "Zachary Kenton", "Devansh Arpit", "Nicolas Ballas", "Asja Fischer", "Yoshua Bengio", "Amos Storkey" ], "title": "Three factors influencing minima in SGD", "venue": "arXiv preprint arXiv:1711.04623,", "year": 2017 }, { "authors": [ "Jared Kaplan", "Sam McCandlish", "Tom Henighan", "Tom B Brown", "Benjamin Chess", "Rewon Child", "Scott Gray", "Alec Radford", "Jeffrey Wu", "Dario Amodei" ], "title": "Scaling laws for neural language models", "venue": null, "year": 2001 }, { "authors": [ "Agustinus Kristiadi", "Matthias Hein", "Philipp Hennig" ], "title": "Being Bayesian, even just a bit, fixes overconfidence in ReLU networks", "venue": "arXiv preprint arXiv:2002.10118,", "year": 2020 }, { "authors": [ "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning overparameterized neural networks via stochastic gradient descent on structured data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Wesley J Maddox", "Gregory Benton", "Andrew Gordon Wilson" ], "title": "Rethinking parameter counting in deep models: Effective dimensionality revisited", "venue": "arXiv preprint arXiv:2003.02139,", "year": 2020 }, { "authors": [ "Stephan Mandt", "Matthew D Hoffman", "David M Blei" ], "title": "Stochastic gradient descent as approximate Bayesian inference", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Shinichi Nakajima", "Sumio Watanabe" ], "title": "Variational Bayes Solution of Linear Neural Networks and Its Generalization Performance", "venue": "Neural Computation,", "year": 2007 }, { "authors": [ "Behnam Neyshabur", "Zhiyuan Li" ], "title": "Towards understanding the role of over-parametrization in generalization of neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Behnam Neyshabur", "Ryota Tomioka", "Nathan Srebro" ], "title": "Norm-based capacity control in neural networks", "venue": "In Conference on Learning Theory, pp", "year": 2015 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "David McAllester", "Nati Srebro" ], "title": "Exploring generalization in deep learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Jeffrey Pennington", "Pratik Worah" ], "title": "The spectrum of the Fisher information matrix of a singlehidden-layer neural network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Mary Phuong", "Christoph H. Lampert" ], "title": "Functional vs. 
parametric equivalence of ReLU networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Tomaso A. Poggio", "Kenji Kawaguchi", "Qianli Liao", "Brando Miranda", "Lorenzo Rosasco", "Xavier Boix", "Jack Hidary", "Hrushikesh Mhaskar" ], "title": "Theory of deep learning III: explaining the non-overfitting", "venue": "puzzle. CoRR,", "year": 2018 }, { "authors": [ "Prajit Ramachandran", "Barret Zoph", "Quoc V Le" ], "title": "Swish: a self-gated activation function", "venue": "arXiv preprint arXiv:1710.05941,", "year": 2017 }, { "authors": [ "Umut ŞimŠekli" ], "title": "Fractional Langevin Monte Carlo: exploring Levy driven stochastic differential equations for Markov Chain Monte Carlo", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Samuel L Smith", "Quoc V Le" ], "title": "A Bayesian perspective on generalization and stochastic gradient descent", "venue": "arXiv preprint arXiv:1710.06451,", "year": 2017 }, { "authors": [ "Samuel L Smith", "Daniel Duckworth", "Semon Rezchikov", "Quoc V Le", "Jascha Sohl-Dickstein" ], "title": "Stochastic natural gradient descent draws posterior samples in function space", "venue": "arXiv preprint arXiv:1806.09597,", "year": 2018 }, { "authors": [ "Valentin Thomas", "Fabian Pedregosa", "Bart van Merrinboer", "Pierre-Antoine Mangazol", "Yoshua Bengio", "Nicolas Le Roux" ], "title": "Information matrices and generalization", "venue": "[cs, stat],", "year": 2019 }, { "authors": [ "Sumio Watanabe" ], "title": "Almost All Learning Machines are Singular", "venue": "IEEE Symposium on Foundations of Computational Intelligence,", "year": 2007 }, { "authors": [ "Sumio Watanabe" ], "title": "Algebraic Geometry and Statistical Learning Theory", "venue": null, "year": 2009 }, { "authors": [ "Sumio Watanabe" ], "title": "A Widely Applicable Bayesian Information Criterion", "venue": "Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Sumio Watanabe" ], "title": "Mathematical Theory of Bayesian Statistics", "venue": null, "year": 2018 }, { "authors": [ "Gilad Yehudai", "Ohad Shamir" ], "title": "On the power and limitations of random features for understanding neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In Proceedings of the 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Yao Zhang", "Andrew M. Saxe", "Madhu S. Advani", "Alpha A. Lee" ], "title": "Energy-entropy competition and the effectiveness of stochastic gradient descent in machine learning", "venue": "Molecular Physics,", "year": 2018 } ]
[ { "heading": null, "text": "In singular models, the optimal set of parameters forms an analytic set with singularities and classical statistical inference cannot be applied to such models. This is significant for deep learning as neural networks are singular and thus “dividing” by the determinant of the Hessian or employing the Laplace approximation are not appropriate. Despite its potential for addressing fundamental issues in deep learning, singular learning theory appears to have made little inroads into the developing canon of deep learning theory. Via a mix of theory and experiment, we present an invitation to singular learning theory as a vehicle for understanding deep learning and suggest important future work to make singular learning theory directly applicable to how deep learning is performed in practice." }, { "heading": "1 INTRODUCTION", "text": "It has been understood for close to twenty years that neural networks are singular statistical models (Amari et al., 2003; Watanabe, 2007). This means, in particular, that the set of network weights equivalent to the true model under the Kullback-Leibler divergence forms a real analytic variety which fails to be an analytic manifold due to the presence of singularities. It has been shown by Sumio Watanabe that the geometry of these singularities controls quantities of interest in statistical learning theory, e.g., the generalisation error. Singular learning theory (Watanabe, 2009) is the study of singular models and requires very different tools from the study of regular statistical models. The breadth of knowledge demanded by singular learning theory – Bayesian statistics, empirical processes and algebraic geometry – is rewarded with profound and surprising results which reveal that singular models are different from regular models in practically important ways. To illustrate the relevance of singular learning theory to deep learning, each section of this paper illustrates a key takeaway idea1.\nThe real log canonical threshold (RLCT) is the correct way to count the effective number of parameters in a deep neural network (DNN) (Section 4). To every (model, truth, prior) triplet is associated a birational invariant known as the real log canonical threshold. The RLCT can be understood in simple cases as half the number of normal directions to the set of true parameters. We will explain why this matters more than the curvature of those directions (as measured for example by eigenvalues of the Hessian) laying bare some of the confusion over “flat” minima.\nFor singular models, the Bayes predictive distribution is superior to MAP and MLE (Section 5). In regular statistical models, the 1) Bayes predictive distribution, 2) maximum a posteriori (MAP) estimator, and 3) maximum likelihood estimator (MLE) have asymptotically equivalent generalisation error (as measured by the Kullback-Leibler divergence). This is not so in singular models. We illustrate in our experiments that even “being Bayesian” in just the final layers improves generalisation over MAP. Our experiments further confirm that the Laplace approximation of the predictive distribution Smith & Le (2017); Zhang et al. (2018) is not only theoretically inappropriate but performs poorly.\nSimpler true distribution means lower RLCT (Section 6). In singular models the RLCT depends on the (model, truth, prior) triplet whereas in regular models it depends only on the (model, prior) pair. 
The RLCT increases as the complexity of the true distribution relative to the supposed model increases. We verify this experimentally with a simple family of ReLU and SiLU networks.\n1The code to reproduce all experiments in the paper will be released on Github. For now, see the zip file." }, { "heading": "2 RELATED WORK", "text": "In classical learning theory, generalisation is explained by measures of capacity such as the l2 norm, Radamacher complexity, and VC dimension (Bousquet et al., 2003). It has become clear however that these measures cannot capture the empirical success of DNNs (Zhang et al., 2017). For instance, over-parameterised neural networks can easily fit random labels (Zhang et al., 2017; Du et al., 2018; Allen-Zhu et al., 2019b) indicating that complexity measures such as Rademacher complexity are very large. There is also a slate of work on generalisation bounds in deep learning. Uniform convergence bounds (Neyshabur et al., 2015; Bartlett et al., 2017; Neyshabur & Li, 2019; Arora et al., 2018) usually cannot provide non-vacuous bounds. Data-dependent bounds (Brutzkus et al., 2018; Li & Liang, 2018; Allen-Zhu et al., 2019a) consider the “classifiability” of the data distribution in generalisation analysis of neural networks. Algorithm-dependent bounds (Daniely, 2017; Arora et al., 2019; Yehudai & Shamir, 2019; Cao & Gu, 2019) consider the relation of Gaussian initialisation and the training dynamics of (stochastic) gradient descent to kernel methods (Jacot et al., 2018).\nIn contrast to many of the aforementioned works, we are interested in estimating the conditional distribution q(y|x). Specifically, we measure the generalisation error of some estimate q̂n(y|x) in terms of the Kullback-Leibler divergence between q and q̂n, see (8). The next section gives a crash course on singular learning theory. The rest of the paper illustrates the key ideas listed in the introduction. Since we cover much ground in this short note, we will review other relevant work along the way, in particular literature on “flatness”, the Laplace approximation in deep learning, etc." }, { "heading": "3 SINGULAR LEARNING THEORY", "text": "To understand why classical measures of capacity fail to say anything meaningful about DNNs, it is important to distinguish between two different types of statistical models. Recall we are interested in estimating the true (and unknown) conditional distribution q(y|x) with a class of models {p(y|x,w) : w ∈ W} where W ⊂ Rd is the parameter space. We say the model is identifiable if the mapping w 7→ p(y|x,w) is one-to-one. Let q(x) be the distribution of x. The Fisher information matrix associated with the model {p(y|x,w) : w ∈W} is the matrix-valued function on W defined by\nI(w)ij =\n∫ ∫ ∂\n∂wi [log p(y|x,w)] ∂ ∂wj [log p(y|x,w)]q(y|x)q(x)dxdy,\nif this integral is finite. Following the conventions in Watanabe (2009), we have the following bifurcation of statistical models. A statistical model p(y|x,w) is called regular if it is 1) identifiable and 2) has positive-definite Fisher information matrix. A statistical model is called strictly singular if it is not regular.\nLet ϕ(w) be a prior on the model parameters w. To every (model, truth, prior) triplet, we can associate the zeta function, ζ(z) = ∫ K(w)zϕ(w) dw, z ∈ C, where K(w) is the Kullback-Leibler (KL) divergence between the model p(y|x,w) and the true distribution q(y|x):\nK(w) := ∫ ∫ q(y|x) log q(y|x)\np(y|x,w) q(x) dx dy. 
(1)\nFor a (model, truth, prior) triplet (p(y|x,w), q(y|x), ϕ), let −λ be the maximum pole of the corresponding zeta function. We call λ the real log canonical threshold (RLCT) (Watanabe, 2009) of the (model, truth, prior) triplet. The RLCT is the central quantity of singular learning theory.\nBy Watanabe (2009, Theorem 6.4) the RLCT is equal to d/2 in regular statistical models and bounded above by d/2 in strictly singular models if realisability holds: let\nW0 = {w ∈W : p(y|x,w) = q(y|x)} be the set of true parameters, we say q(y|x) is realisable by the model class if W0 is non-empty. The condition of realisability is critical to standard results in singular learning theory. Modifications to the theory are needed in the case that q(y|x) is not realisable, see the condition called relatively finite variance in Watanabe (2018).\nNeural networks in singular learning theory. Let W ⊆ Rd be the space of weights of a neural network of some fixed architecture, and let f(x,w) : RN ×W −→ RM be the associated function.\nWe shall focus on the regression task and study the model\np(y|x,w) = 1 (2π)M/2 exp ( − 12‖y − f(x,w)‖ 2 )\n(2)\nbut singular learning theory can also apply to classification, for instance. It is routine to check (see Appendix A.1) that for feedforward ReLU networks not only is the model strictly singular but the matrix I(w) is degenerate for all nontrivial weight vectors and the Hessian of K(w) is degenerate at every point of W0.\nRLCT plays an important role in model selection. One of the most accessible results in singular learning theory is the work related to the widely-applicable Bayesian information criterion (WBIC) Watanabe (2013), which we briefly review here for completeness. Let Dn = {(xi, yi)}ni=1 be a dataset of input-output pairs. Let Ln(w) be the negative log likelihood\nLn(w) = − 1\nn n∑ i=1 log p(yi|xi, w) (3)\nand p(Dn|w) = exp(−nLn(w)). The marginal likelihood of a model {p(y|x,w) : w ∈ W} is given by p(Dn) = ∫ W p(Dn|w)ϕ(w) dw and can be loosely interpreted as the evidence for the model. Between two models, we should prefer the one with higher model evidence. However, since the marginal likelihood is an intractable integral over the parameter space of the model, one needs to consider some approximation.\nThe well-known Bayesian Information Criterion (BIC) derives from an asymptotic approximation of − log p(Dn) using the Laplace approximation, leading to BIC = nLn(wMLE) + d2 log n. Since we want the marginal likelihood of the data for some given model to be high one should almost never adopt a DNN according to the BIC, since in such models d may be very large. However, this argument contains a serious mathematical error: the Laplace approximation used to derive BIC only applies to regular statistical models, and DNNs are not regular. The correct criterion for both regular and strictly singular models was shown in Watanabe (2013) to be nLn(w0)+λ log nwherew0 ∈W0 and λ is the RLCT. Since DNNs are highly singular λ may be much smaller than d/2 (Section 6) it is possible for DNNs to have high marginal likelihood – consistent with their empirical success." }, { "heading": "4 VOLUME DIMENSION, EFFECTIVE DEGREES OF FREEDOM, AND FLATNESS", "text": "Volume codimension. The easiest way to understand the RLCT is as a volume codimension (Watanabe, 2009, Theorem 7.1). Suppose that W ⊆ Rd and W0 is nonempty, i.e., the true distribution is realisable. 
We consider a special case in which the KL divergence in a neighborhood of every point v0 ∈W0 has an expression in local coordinates of the form\nK(w) = d′∑ i=1 ciw 2 i , (4)\nwhere the coefficients c1, . . . , cd′ > 0 may depend on v0 and d′ may be strictly less than d. If the model is regular then this is true with d = d′ and if it holds for d′ < d then we say that the pair (p(y|x,w), q(y|x)) is minimally singular. It follows that the set W0 ⊆ W of true parameters is a regular submanifold of codimension d′ (that is, W0 is a manifold of dimension d− d′ where W has dimension d). Under this hypothesis there are, near each true parameter v0 ∈ W0, exactly d − d′ directions in which v0 can be varied without changing the model p(y|x,w) and d′ directions in which varying the parameters does change the model. In this sense, there are d′ effective parameters near v0.\nThis number of effective parameters can be computed by an integral. Consider the volume of the set of almost true parameters V (t, v0) = ∫ K(w)<t\nϕ(w)dw where the integral is restricted to a small closed ball around v0. As long as the prior ϕ(w) is non-zero on W0 it does not affect the relevant features of the volume, so we may assume ϕ is constant on the region of integration in the first d′ directions and normal in the remaining directions, so up to a constant depending only on d′ we have\nV (t, v0) ∝ td ′/2\n√ c1 · · · cd′\n(5)\nand we can extract the exponent of t in this volume in the limit\nd′ = 2 lim t→0\nlog { V (at, v0)/V (t, v0) } log(a)\n(6)\nfor any a > 0, a 6= 1. We refer to the right hand side of (6) as the volume codimension at v0. The function K(w) has the special form (4) locally with d′ = d if the statistical model is regular (and realisable) and with d′ < d in some singular models such as reduced rank regression (Appendix A.2). While such a local form does not exist for a singular model generally (in particular for neural networks) nonetheless under natural conditions (Watanabe, 2009, Theorem 7.1) we have V (t, v0) = ctλ + o(tλ) where c is a constant. We assume that in a sufficiently small neighborhood of v0 the point RLCT λ at v0 (Watanabe, 2009, Definition 2.7) is less than or equal to the RLCT at every point in the neighborhood so that the multiplicity m = 1, see Section 7.6 of (Watanabe, 2009) for relevant discussion. It follows that the limit on the right hand side of (6) exists and is equal to λ. In particular λ = d′/2 in the minimally singular case.\nNote that for strictly singular models such as DNNs 2λ may not be an integer. This may be disconcerting but the connection between the RLCT, generalisation error and volume dimension strongly suggests that 2λ is nonetheless the only geometrically meaningful “count” of the effective number of parameters near v0.\nRLCT and likelihood vs temperature. Again working with the model in (2), consider the expectation over the posterior at temperature T as defined in (17) of the negative log likelihood (3)\nE(T ) = E1/Tw [ nLn(w) ] = E1/Tw [ 1 2 n∑ i=1 ‖yi − f(xi, w)‖2 ] + nM 2 log(2π) .\nNote that when n is large Ln(v0) ≈ M2 log(2π) for any v0 ∈ W0 so for T ≈ 0 the posterior concentrates around the set W0 of true parameters and E(T ) ≈ nM2 log(2π). Consider the increase ∆E = E(T + ∆T )− E(T ) corresponding to an increase in temperature ∆T . It can be shown that ∆E ≈ λ∆T where the reader should see (Watanabe, 2013, Corollary 3) for a precise statement. 
As the temperature increases, samples taken from the tempered posterior are more distant from W0 and the error E will increase. If λ is smaller then for a given increase in temperature the quantity E increases less: this is one way to understand intuitively why a model with smaller RLCT generalises better from the dataset Dn to the true distribution.\nFlatness. It is folklore in the deep learning community that flatness of minima is related to generalisation (Hinton & Van Camp, 1993; Hochreiter & Schmidhuber, 1997) and this claim has been revisited in recent years (Chaudhari et al., 2017; Smith & Le, 2017; Jastrzebski et al., 2017; Zhang et al., 2018). In regular models this can be justified using the lower order terms of the asymptotic expansion of the Bayes free energy (Balasubramanian, 1997, §3.1) but the argument breaks down in strictly singular models, since for example the Laplace approximation of Zhang et al. (2018) is invalid. The point can be understood via an analysis of the version of the idea in (Hochreiter & Schmidhuber, 1997). Their measure of entropy compares the volume of the set of parameters with tolerable error t0 (our almost true parameters) to a standard volume\n− log [V (t0, v0)\nt d/2 0\n] = d− d′\n2 log(t0) +\n1 2 d∑ i=1 log ci . (7)\nHence in the case d = d′ the quantity − 12 ∑ i log(ci) is a measure of the entropy of the set of true parameters near w0, a point made for example in Zhang et al. (2018). However when d′ < d this conception of entropy is inappropriate because of the d − d′ directions in which K(w) is flat near v0, which introduce the t0 dependence in (7)." }, { "heading": "5 GENERALISATION", "text": "The generalisation puzzle (Poggio et al., 2018) is one of the central mysteries of deep learning. Theoretical investigations into the matter is an active area of research Neyshabur et al. (2017). Many of the recent proposals of capacity measures for neural networks are based on the eigenspectrum of the (degenerate) Hessian, e.g., Thomas et al. (2019); Maddox et al. (2020). But this is not appropriate for singular models, and hence for DNNs.\nSince we are interested in learning the distribution, our notion of generalisation is slightly different, being measured by the KL divergence. Precise statements regarding the generalisation behavior in singular models can be made using singular learning theory. Let the network weights be denoted θ rather than w for reasons that will become clear. Recall in the Bayesian paradigm, prediction proceeds via the so-called Bayes predictive distribution, p(y|x,Dn) = ∫ p(y|x, θ)p(θ|Dn) dθ.More commonly encountered in deep learning practice are the MAP and MLE point estimators. While in a regular statistical model, the three estimators 1) Bayes predictive distribution, 2) MAP, and 3) MLE have the same leading term in their asymptotic generalisation behavior, the same is not true in singular models. More precisely, let q̂n(y|x) be some estimate of the true unknown conditional density q(y|x) based on the dataset Dn. The generalisation error of the predictor q̂n(y|x) is\nG(n) := KL(q(y|x)||q̂n(y|x)) = ∫ ∫\nq(y|x) log q(y|x) q̂n(y|x) q(x) dy dx. (8)\nTo account for sampling variability, we will work with the average generalisation error, EnG(n), where En denotes expectation over the dataset Dn. By Watanabe (2009, Theorem 1.2 and Theorem 7.2), we have\nEnG(n) = λ/n+ o(1/n) if q̂n is the Bayes predictive distribution, (9) where λ is the RLCT corresponding to the triplet (p(y|x, θ), q(y|x), ϕ(θ)). 
In contrast, we should note that Zhang et al. (2018) and Smith & Le (2017) rely on the Laplace approximation to explain the generalisation of the Bayes predictive distribution, though both works acknowledge the Laplace approximation is inappropriate. For completeness, a quick sketch of the derivation of (9) is provided in Appendix A.4. Now by (Watanabe, 2009, Theorem 6.4) we have\n$$\mathbb{E}_n G(n) = C/n + o(1/n) \quad \text{if } \hat{q}_n \text{ is the MAP or MLE}, \qquad (10)$$\nwhere $C$ (different for MAP and MLE) is the maximum of some Gaussian process. For regular models, the MAP, MLE, and the Bayes predictive distribution have the same leading term for $\mathbb{E}_n G(n)$ since $\lambda = C = d/2$. However in singular models, $C$ is generally greater than $\lambda$, meaning we should prefer the Bayes predictive distribution for singular models.\nThat the RLCT has such a simple relationship to the Bayesian generalisation error is remarkable. On the other hand, the practical implications of (19) are limited since the Bayes predictive distribution is intractable. While approximations to the Bayesian predictive distribution, say via variational inference, might inherit a similar relationship between generalisation and the (variational) RLCT, serious theoretical developments will be required to rigorously establish this. The challenge comes from the fact that for approximate Bayesian predictive distributions, the free energy and generalisation error may have different learning coefficients $\lambda$. This was well documented in the case of a neural network with one hidden layer (Nakajima & Watanabe, 2007).\nWe set out to investigate whether certain very simple approximations of the Bayes predictive distribution can already demonstrate superiority over point estimators. Suppose the input-target relationship is modeled as in (2) but we write $\theta$ instead of $w$. We set $q(x) = N(0, I_3)$. For now consider the realisable case, $q(y|x) = p(y|x, \theta_0)$, where $\theta_0$ is drawn randomly according to the default initialisation in PyTorch when model (2) is instantiated. We calculate $\mathbb{E}_n G(n)$ using multiple datasets $D_n$ and a large testing set; see Appendix A.5 for more details.\nSince $f$ is a hierarchical model, let's write it as $f_\theta(\cdot) = h(g(\cdot; v); w)$ with the dimension of $w$ being relatively small. Let $\theta_{MAP} = (v_{MAP}, w_{MAP})$ be the MAP estimate for $\theta$ using batch gradient descent. The idea of our simple approximate Bayesian scheme is to freeze the network weights at the MAP estimate for early layers and perform approximate Bayesian inference for the final layers², e.g., freeze the parameters of $g$ at $v_{MAP}$ and perform MCMC over $w$. Throughout the experiments, $g : \mathbb{R}^3 \to \mathbb{R}^3$ is a feedforward ReLU block with each hidden layer having 5 hidden units and $h : \mathbb{R}^3 \to \mathbb{R}^3$ is either $BAx$ or $B\,\mathrm{ReLU}(Ax)$, where $A \in \mathbb{R}^{3\times r}$, $B \in \mathbb{R}^{r\times 3}$. We set $r = 3$. We shall consider 1 or 5 hidden layers for $g$.\nTo approximate the Bayes predictive distribution, we perform either the Laplace approximation or the NUTS variant of HMC (Hoffman & Gelman, 2014) in the last two layers, i.e., performing inference over $A, B$ in $h(g(\cdot; v_{MAP}); A, B)$. Note that MCMC is operating in a space of 18 dimensions in this case, which is small enough for us to expect MCMC to perform well.\n²This is similar in spirit to Kristiadi et al. (2020) who claim that even “being Bayesian a little bit” fixes overconfidence. They approach this via the Laplace approximation for the final layer of a ReLU network. It is also worth noting that Kristiadi et al. (2020) do not attempt to formalise what it means to “fix overconfidence”; the precise statement should be in terms of $G(n)$.
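A hedged sketch of the last-two-layers NUTS scheme just described (our reconstruction, not the paper's code): the feature matrix `x_tilde` below is a random stand-in for the frozen features $g(x; v_{MAP})$, the targets `y` are likewise synthetic, and we take the ReLU variant of $h$ with $r = 3$, a standard Gaussian prior, and unit observation noise as in (21):

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import MCMC, NUTS

def model(x_tilde, y):
    # standard Gaussian prior over the last-two-layer weights A, B (both 3x3)
    A = pyro.sample("A", dist.Normal(torch.zeros(3, 3), 1.0).to_event(2))
    B = pyro.sample("B", dist.Normal(torch.zeros(3, 3), 1.0).to_event(2))
    mean = torch.relu(x_tilde @ A.T) @ B.T  # h(x~; A, B) = B ReLU(A x~)
    with pyro.plate("data", x_tilde.shape[0]):
        pyro.sample("y", dist.Normal(mean, 1.0).to_event(1), obs=y)

x_tilde = torch.randn(500, 3)  # stand-in for g(x; v_MAP)
y = torch.randn(500, 3)        # stand-in targets
mcmc = MCMC(NUTS(model), num_samples=1000, warmup_steps=1000)
mcmc.run(x_tilde, y)
samples = mcmc.get_samples()   # posterior draws w_1, ..., w_R over (A, B)
```

The returned draws $w_1, \ldots, w_R$ are then averaged as $\frac{1}{R}\sum_r p(y|x, (v_{MAP}, w_r))$ to approximate the predictive distribution, as detailed in Appendix A.5.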
We also implemented the Laplace approximation and NUTS in the last layer only, i.e. performing inference over $B$ in $h_2(h_1(g(\cdot; v_{MAP}); A_{MAP}); B)$. Further implementation details of these approximate Bayesian schemes are found in Appendix A.5.\nFrom the outset, we expect the Laplace approximation over $w = (A, B)$ to be invalid since the model is singular. We do however expect the last-layer-only Laplace approximation over $B$ to be sound. Next, we expect the MCMC approximation in either the last layer or last two layers to be superior to the Laplace approximations and to the MAP. We further expect the last-two-layers MCMC to have better generalisation than the last-layer-only MCMC since the former is closer to the Bayes predictive distribution. In summary, we anticipate the following performance order for these five approximate Bayesian schemes (from worst to best): last-two-layers Laplace, last-layer-only Laplace, MAP, last-layer-only MCMC, last-two-layers MCMC.\nThe results displayed in Figure 1 are in line with our stated expectations above, except for the surprise that the last-layer-only MCMC approximation is often superior to the last-two-layers MCMC approximation. This may arise from the fact that MCMC finds the singular setting in the last two layers more challenging. In Figure 1, we clarify the effect of the network architecture by varying the following factors: 1) either 1 or 5 layers in $g$, and 2) ReLU or identity activation in $h$. Table 1 is a companion to Figure 1 and tabulates for each approximation scheme the slope of $1/n$ versus $\mathbb{E}_n G(n)$, also known as the learning coefficient. The $R^2$ corresponding to the linear fit is also provided. In Appendix A.5, we also show the corresponding results when 1) the data-generating mechanism and the assumed model do not satisfy the condition of realisability and/or 2) the MAP estimate is obtained via minibatch stochastic gradient descent instead of batch gradient descent." }, { "heading": "6 SIMPLE FUNCTIONS AND COMPLEX SINGULARITIES", "text": "In singular models the RLCT may vary with the true distribution (in contrast to regular models) and in this section we examine this phenomenon in a simple example. As the true distribution becomes more complicated relative to the supposed model, the singularities of the analytic variety of true parameters should become simpler and hence the RLCT should increase (Watanabe, 2009, §7.6). Our experiments are inspired by (Watanabe, 2009, §7.2) where $\tanh(x)$ networks are considered and the true distribution (associated to the zero network) is held fixed while the number of hidden nodes is increased.\nConsider the model $p(y|x,w)$ in (2) where $f(x, w) = c + \sum_{i=1}^{H} q_i\,\mathrm{ReLU}(\langle w_i, x\rangle + b_i)$ is a two-layer ReLU network with weight vector $w = (\{w_i\}_{i=1}^H, \{b_i\}_{i=1}^H, \{q_i\}_{i=1}^H, c) \in \mathbb{R}^{4H+1}$ and $w_i \in \mathbb{R}^2$, $b_i \in \mathbb{R}$, $q_i \in \mathbb{R}$ for $1 \le i \le H$. We let $W$ be some compact neighborhood of the origin.\nGiven an integer $3 \le m \le H$ we define a network $s_m \in W$ and $q_m(y|x) := p(y|x, s_m)$ as follows. Let $g \in SO(2)$ stand for rotation by $2\pi/m$ and set $w_1 = \sqrt{g}\,(1, 0)^T$. The components of $s_m$ are the vectors $w_i = g^{i-1}w_1$ for $1 \le i \le m$ and $w_i = 0$ for $i > m$, $b_i = -\frac{1}{3}$ and $q_i = 1$ for $1 \le i \le m$ and $b_i = q_i = 0$ for $i > m$, and finally $c = 0$. The factor of $\frac{1}{3}$ ensures the relevant parts of the decision boundaries lie within $X = [-1, 1]^2$. We let $q(x)$ be the uniform distribution on $X$ and define $q_m(x, y) = q_m(y|x)q(x)$. The functions $f(x, s_m)$ are graphed in Figure 2.
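The construction of $s_m$ can be written down directly; the following numpy sketch is our reconstruction, under the assumption that $\sqrt{g}$ denotes the rotation by $\pi/m$ (half of the rotation $g$):

```python
# Evaluate the true network f(x, s_m) = sum_{i=1}^m ReLU(<w_i, x> - 1/3) with
# w_1 = sqrt(g)(1,0)^T and w_i = g^{i-1} w_1, where g rotates by 2*pi/m.
import numpy as np

def f_true(x, m):
    # x: (N, 2) inputs in X = [-1, 1]^2
    theta = 2 * np.pi / m
    g = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    half = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])  # assumed sqrt(g)
    w = half @ np.array([1.0, 0.0])
    out = np.zeros(len(x))
    for _ in range(m):
        out += np.maximum(x @ w - 1.0 / 3.0, 0.0)  # q_i = 1, b_i = -1/3, c = 0
        w = g @ w
    return out

x = np.random.uniform(-1, 1, size=(5, 2))
print(f_true(x, m=3))
```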
It is intuitively clear that the complexity of these true distributions increases with $m$.\nWe let $\varphi$ be a normal distribution $N(0, 50^2)$ and estimate the RLCTs of the triples $(p, q_m, \varphi)$. We conducted the experiments with $H = 5$, $n = 1000$. For each $m \in \{3, 4, 5\}$, Table 2 shows the estimated RLCT. Algorithm 1 in Appendix A.3 details the estimation procedure, which we base on (Watanabe, 2013, Theorem 4). As predicted, the RLCT increases with $m$, verifying that in this case the simpler true distributions give rise to more complex singularities.\nNote that the dimension of $W$ is $d = 21$ and so if the model were regular the RLCT would be $10.5$. It can be shown that when $m = H$ the set of true parameters $W_0 \subseteq W$ is a regular submanifold of dimension $m$. If such a model were minimally singular its RLCT would be $\frac{1}{2}((4m + 1) - m) = \frac{1}{2}(3m + 1)$. In the case $m = 5$ we observe an RLCT more than an order of magnitude less than the value $8$ predicted by this formula. So the function $K$ does not behave like a quadratic form near $W_0$.\nStrictly speaking it is incorrect to speak of the RLCT of a ReLU network because the function $K(w)$ is not necessarily analytic (Example A.4). However we observe empirically that the predicted linear relationship between $\mathbb{E}^{\beta}_w[nL_n(w)]$ and $1/\beta$ holds in our small ReLU networks (see the $R^2$ values in Table 2) and that the RLCT estimates are close to those for the two-layer SiLU network (Hendrycks & Gimpel, 2016), which is analytic (the SiLU or sigmoid weighted linear unit is $\sigma(x) = x(1 + e^{-\tau x})^{-1}$, which approaches the ReLU as $\tau \to \infty$; we use $\tau = 100.0$ in our experiments). The competitive performance of SiLU on standard benchmarks (Ramachandran et al., 2017) shows that the non-analyticity of ReLU is probably not fundamental." }, { "heading": "7 FUTURE DIRECTIONS", "text": "Deep neural networks are singular models, and that's good: the presence of singularities is necessary for neural networks with large numbers of parameters to have low generalisation error. Singular learning theory clarifies how classical tools such as the Laplace approximation are not just inappropriate in deep learning on narrow technical grounds: the failure of this approximation and the existence of interesting phenomena like the generalisation puzzle have a common cause, namely the existence of degenerate critical points of the KL function $K(w)$. Singular learning theory is a promising foundation for a mathematical theory of deep learning. However, much remains to be done. The important open problems include:\nSGD vs the posterior. A number of works (Şimşekli, 2017; Mandt et al., 2017; Smith et al., 2018) suggest that mini-batch SGD may be governed by SDEs that have the posterior distribution as its stationary distribution, and this may go towards understanding why SGD works so well for DNNs.\nRLCT estimation for large networks. Theoretical RLCTs have been cataloged for small neural networks, albeit at significant effort³ (Aoyagi & Watanabe, 2005b;a). We believe RLCT estimation in these small networks should be a standard benchmark for any method that purports to approximate the Bayesian posterior of a neural network. No theoretical RLCTs or estimation procedure are known for modern DNNs. Although MCMC provides the gold standard, it does not scale to large networks. The intractability of RLCT estimation for DNNs is not necessarily an obstacle to reaping the insights offered by singular learning theory. For instance, used in the context of model selection, the exact value of the RLCT is not as important as model selection consistency.
We also demonstrated the utility of singular learning results such as (9) and (10) which can be exploited even without knowledge of the exact value of the RLCT.\nReal-world distributions are unrealisable. The existence of power laws in neural language model training (Hestness et al., 2017; Kaplan et al., 2020) is one of the most remarkable experimental results in deep learning. These power laws may be a sign of interesting new phenomena in singular learning theory when the true distribution is unrealisable." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 NEURAL NETWORKS ARE STRICTLY SINGULAR", "text": "Many-layered neural networks are strictly singular (Watanabe, 2009, §7.2). The degeneracy of the Hessian in deep learning has certainly been acknowledged in e.g., Sagun et al. (2016), which recognises the eigenspectrum is concentrated around zero, and in Pennington & Worah (2018), which deliberately studies the Fisher information matrix of a single-hidden-layer, rather than multilayer, neural network.\nWe first explain how to think about a neural network in the context of singular learning theory. A feedforward network of depth $c$ parametrises a function $f : \mathbb{R}^N \to \mathbb{R}^M$ of the form\n$$f = A_c \circ \sigma_{c-1} \circ A_{c-1} \circ \cdots \circ \sigma_1 \circ A_1$$\nwhere the $A_l : \mathbb{R}^{d_{l-1}} \to \mathbb{R}^{d_l}$ are affine functions and $\sigma_l : \mathbb{R}^{d_l} \to \mathbb{R}^{d_l}$ is coordinate-wise some fixed nonlinearity $\sigma : \mathbb{R} \to \mathbb{R}$. Let $W$ be a compact subspace of $\mathbb{R}^d$ containing the origin, where $\mathbb{R}^d$ is the space of sequences of affine functions $(A_l)_{l=1}^c$ with coordinates denoted $w_1, \ldots, w_d$, so that $f$ may be viewed as a function $f : \mathbb{R}^N \times W \to \mathbb{R}^M$. We define $p(y|x,w)$ as in (2). We assume the true distribution is realisable, $q(y|x) = p(y|x, w_0)$, and that a distribution $q(x)$ on $\mathbb{R}^N$ is fixed with respect to which $p(x, y) = p(y|x)q(x)$ and $q(x, y) = q(y|x)q(x)$. Given some prior $\varphi(w)$ on $W$ we may apply singular learning theory to the triplet $(p, q, \varphi)$.\nBy straightforward calculations we obtain\n$$K(w) = \tfrac{1}{2}\int \|f(x,w) - f(x,w_0)\|^2 q(x)\,dx \qquad (11)$$\n$$\frac{\partial^2}{\partial w_i \partial w_j}K(w) = \int \Big\langle \frac{\partial}{\partial w_i}f(x,w), \frac{\partial}{\partial w_j}f(x,w)\Big\rangle q(x)\,dx + \int \Big\langle f(x,w) - f(x,w_0), \frac{\partial^2}{\partial w_i \partial w_j}f(x,w)\Big\rangle q(x)\,dx \qquad (12)$$\n$$I(w)_{ij} = \frac{1}{2^{(M-3)/2}\pi^{(M-2)/2}}\int \Big\langle \frac{\partial}{\partial w_i}f(x,w), \frac{\partial}{\partial w_j}f(x,w)\Big\rangle q(x)\,dx \qquad (13)$$\nwhere $\langle -, -\rangle$ is the dot product. We assume $q(x)$ is such that these integrals exist. It will be convenient below to introduce another set of coordinates for $W$. Let $w^l_{jk}$ denote the weight from the $k$th neuron in the $(l-1)$th layer to the $j$th neuron in the $l$th layer and let $b^l_j$ denote the bias of the $j$th neuron in the $l$th layer. Here $1 \le l \le c$ and the input is layer zero. Let $u^l_j$ and $a^l_j$ denote the value of the $j$th neuron in the $l$th layer before and after activation, respectively. Let $u^l$ and $a^l$ denote the vectors with values $u^l_j$ and $a^l_j$, respectively. Let $d_l$ denote the number of neurons in the $l$th layer. Then\n$$u^l_j = \sum_{k=1}^{d_{l-1}} w^l_{jk} a^{l-1}_k + b^l_j, \quad 1 \le l \le c,\ 1 \le j \le d_l$$\n$$a^l_j = \sigma(u^l_j), \quad 1 \le l < c,\ 1 \le j \le d_l$$\nwith the convention that $a^0 = x$ is the input and $u^c = y$ is the output.\nIn the case where $\sigma = \mathrm{ReLU}$ the partial derivatives $\frac{\partial}{\partial w_j}f$ do not exist on all of $\mathbb{R}^N$. However, given $w \in W$ we let $\mathcal{D}(w)$ denote the complement in $\mathbb{R}^N$ of the union over all hidden nodes of the associated decision boundary, that is\n$$\mathbb{R}^N \setminus \mathcal{D}(w) = \bigcup_{1 \le l < c}\ \bigcup_{1 \le j \le d_l} \{x \in \mathbb{R}^N : u^l_j(x) = 0\}.$$\nThe partial derivative $\frac{\partial}{\partial w_j}f$ exists on the open subset $\{(x, w) : x \in \mathcal{D}(w)\}$ of $\mathbb{R}^N \times W$.\nLemma A.1. Suppose $\sigma = \mathrm{ReLU}$ and there are $c > 1$ layers.
For any hidden neuron $1 \le j \le d_l$ in layer $l$ with $1 \le l < c$ there is a differential equation\n$$\Big\{\sum_{k=1}^{d_{l-1}} w^l_{jk}\frac{\partial}{\partial w^l_{jk}} + b^l_j\frac{\partial}{\partial b^l_j} - \sum_{i=1}^{d_{l+1}} w^{l+1}_{ij}\frac{\partial}{\partial w^{l+1}_{ij}}\Big\} f = 0$$\nwhich holds on $\mathcal{D}(w)$ for any fixed $w \in W$.\nProof. Without loss of generality assume $M = 1$, to simplify the notation. Let $e_i \in \mathbb{R}^{d_{l+1}}$ denote a unit vector and let $H(x) = \frac{d}{dx}\mathrm{ReLU}(x)$. Writing $\frac{\partial f}{\partial u^{l+1}}$ for a gradient vector,\n$$\frac{\partial f}{\partial w^{l+1}_{ij}} = \Big\langle \frac{\partial f}{\partial u^{l+1}}, \frac{\partial u^{l+1}}{\partial w^{l+1}_{ij}}\Big\rangle = \Big\langle \frac{\partial f}{\partial u^{l+1}}, a^l_j e_i\Big\rangle = \frac{\partial f}{\partial u^{l+1}_i} u^l_j H(u^l_j)$$\n$$\frac{\partial f}{\partial w^l_{jk}} = \Big\langle \frac{\partial f}{\partial u^{l+1}}, \frac{\partial u^{l+1}}{\partial w^l_{jk}}\Big\rangle = \Big\langle \frac{\partial f}{\partial u^{l+1}}, \sum_{i=1}^{d_{l+1}} w^{l+1}_{ij} a^{l-1}_k H(u^l_j) e_i\Big\rangle = \sum_{i=1}^{d_{l+1}} \frac{\partial f}{\partial u^{l+1}_i} w^{l+1}_{ij} a^{l-1}_k H(u^l_j)$$\n$$\frac{\partial f}{\partial b^l_j} = \Big\langle \frac{\partial f}{\partial u^{l+1}}, \frac{\partial u^{l+1}}{\partial b^l_j}\Big\rangle = \Big\langle \frac{\partial f}{\partial u^{l+1}}, \sum_{i=1}^{d_{l+1}} w^{l+1}_{ij} H(u^l_j) e_i\Big\rangle = \sum_{i=1}^{d_{l+1}} \frac{\partial f}{\partial u^{l+1}_i} w^{l+1}_{ij} H(u^l_j).$$\nThe claim immediately follows.\nLemma A.2. Suppose $\sigma = \mathrm{ReLU}$, $c > 1$ and that $w \in W$ has at least one weight or bias at a hidden node nonzero. Then the matrix $I(w)$ is degenerate, and if $w \in W_0$ then the Hessian of $K$ at $w$ is also degenerate.\nProof. Let $w \in W$ be given, and choose a hidden node where at least one of the incident weights (or bias) is nonzero. Then Lemma A.1 gives a nontrivial linear dependence relation $\sum_i \lambda_i \frac{\partial}{\partial w_i} f = 0$ as functions on $\mathcal{D}(w)$. The rows of $I(w)$ satisfy the same linear dependence relation. At a true parameter the second summand in (12) vanishes, so by the same argument the Hessian is degenerate.\nRemark A.3. Lemma A.2 implies that every true parameter for a nontrivial ReLU network is a degenerate critical point of $K$. Hence in the study of nontrivial ReLU networks it is never appropriate to divide by the determinant of the Hessian of $K$ at a true parameter, and in particular Laplace or saddle-point approximations at a true parameter are invalid.\nThe well-known positive scale invariance of ReLU networks (Phuong & Lampert, 2020) is responsible for the linear dependence of Lemma A.1, in the precise sense that the given differential operator is the infinitesimal generator (Boothby, 1986, §IV.3) of the scaling symmetry. However, this is only one source of degeneracy or singularity in ReLU networks. The degeneracy, as measured by the RLCT, is much lower than one would expect on the basis of this symmetry alone (see Section 6).\nExample A.4. In general the KL function $K(w)$ for ReLU networks is not analytic. For the minimal counterexample, let $q(x)$ be uniform on $[-N, N]$ and zero outside and consider\n$$K(b) = \int q(x)(\mathrm{ReLU}(x - b) - \mathrm{ReLU}(x))^2\,dx.$$\nIt is easy to check that, up to a scalar factor,\n$$K(b) = \begin{cases} -\frac{2}{3}b^3 + b^2 N & 0 \le b \le N \\ -\frac{1}{3}b^3 + b^2 N & -N \le b \le 0 \end{cases}$$\nso that $K$ is $C^2$ but not $C^3$, let alone analytic." }, { "heading": "A.2 REDUCED RANK REGRESSION", "text": "For reduced rank regression, the model is\n$$p(y|x,w) = \frac{1}{(2\pi\sigma^2)^{N/2}}\exp\Big(-\frac{1}{2\sigma^2}|y - BAx|^2\Big),$$\nwhere $x \in \mathbb{R}^M$, $y \in \mathbb{R}^N$, $A$ an $M \times H$ matrix and $B$ an $H \times N$ matrix; the parameter $w$ denotes the entries of $A$ and $B$, i.e. $w = (A, B)$, and $\sigma > 0$ is a parameter which for the moment is irrelevant.\nIf the true distribution is realisable then there is $w_0 = (A_0, B_0)$ such that $q(y|x) = p(y|x, w_0)$. Without loss of generality assume $q(x)$ is the uniform density. In this case the KL divergence from $p(y|x,w)$ to $q(y|x)$ is\n$$K(w) = \int q(y|x)\log\frac{q(y|x)}{p(y|x,w)}\,dxdy = \|BA - B_0A_0\|^2(1 + E(w))$$\nwhere the error $E$ is smooth and $E(w) = O(\|BA - B_0A_0\|^2)$ in any region where $\|BA - B_0A_0\| < C$, so $K(w)$ is equivalent to $\|BA - B_0A_0\|^2$. We write $K(w) = \|BA - B_0A_0\|^2$ for simplicity below.\nNow assume that $B_0A_0$ is symmetric and that $B_0$ is square, i.e. $N = H$. Then the zero locus of $K(w)$ is explicitly given as follows: $W_0 = \{(A, B) : \det B \neq 0 \text{ and } A = B^{-1}B_0A_0\}$.
It follows that $W_0$ is globally a graph over $GL(H; \mathbb{R})$. Indeed, the set $(B^{-1}B_0A_0, B)$ with $B \in GL(H; \mathbb{R})$ is exactly $W_0$. Thus $W_0$ is a smooth $H^2$-dimensional submanifold of $\mathbb{R}^{H^2} \times \mathbb{R}^{H \times M}$. To prove that $W_0$ is minimally singular in the sense of Section 4 it suffices to show that $\mathrm{rank}(D^2_{A,B}K) \ge HM$, where $D^2_{A,B}K$ denotes the Hessian, but as it is no more difficult to do so, we find explicit local coordinates $(u, v)$ near an arbitrary point $(A, B) \in W_0$ for which $\{v = 0\} = W_0$ and $K(u, v) = a(u, v)|u|^2$ in this neighborhood, where $a$ is a $C^\infty$ function with $a \ge c > 0$ for some $c$. Write\n$$A(v) = (B + v)^{-1}B_0A_0.$$\nThen $u, v \mapsto (A(v) + u, B + v)$ gives local coordinates on $\mathbb{R}^{H^2} \times \mathbb{R}^{H \times M}$ near $(A, B)$, and\n$$K(u, v) = |(B + v)((B + v)^{-1}B_0A_0 + u) - B_0A_0|^2 = |B_0A_0 + (B + v)u - B_0A_0|^2 = |(B + v)u|^2,$$\nso for $v$ sufficiently small (and hence $B + v$ invertible) we can take $a(u, v) = |(B + v)u|^2/|u|^2$." }, { "heading": "A.3 RLCT ESTIMATION", "text": "In this section we detail the estimation procedure for the RLCT used in Section 6. Let $L_n(w)$ be the negative log likelihood as in (3). Define the data likelihood at inverse temperature $\beta > 0$ to be\n$$p^\beta(D_n|w) = \prod_{i=1}^{n} p(y_i|x_i, w)^{\beta}, \qquad (14)$$\nwhich can also be written $p^\beta(D_n|w) = \exp(-\beta n L_n(w))$. The posterior distribution, at inverse temperature $\beta$, is defined as\n$$p^\beta(w|D_n) = \frac{\prod_{i=1}^{n} p(y_i|x_i, w)^{\beta}\varphi(w)}{\int_W \prod_{i=1}^{n} p(y_i|x_i, w)^{\beta}\varphi(w)\,dw} = \frac{p^\beta(D_n|w)\varphi(w)}{p^\beta(D_n)} \qquad (15)$$\nwhere $\varphi$ is the prior distribution on the network weights $w$ and\n$$p^\beta(D_n) = \int_W p^\beta(D_n|w)\varphi(w)\,dw \qquad (16)$$\nis the marginal likelihood of the data at inverse temperature $\beta$. Finally, denote the expectation of a random variable $R(w)$ with respect to the tempered posterior $p^\beta(w|D_n)$ as\n$$\mathbb{E}^\beta_w[R(w)] = \int_W R(w)p^\beta(w|D_n)\,dw. \qquad (17)$$\nIn the main text, we drop the superscript in the quantities (14), (15), (16), (17) when $\beta = 1$, e.g., $p(D_n)$ rather than $p^1(D_n)$. Assuming the conditions of Theorem 4 in Watanabe (2013) hold, we have\n$$\mathbb{E}^\beta_w[nL_n(w)] = nL_n(w_0) + \frac{\lambda}{\beta} + U_n\sqrt{\frac{\lambda}{2\beta}} + O_p(1) \qquad (18)$$\nwhere $\beta_0$ is a positive constant and $U_n$ is a sequence of random variables satisfying $\mathbb{E}_n U_n = 0$. In Algorithm 1, we describe an estimation procedure for the RLCT based on the asymptotic result in (18).\nAlgorithm 1 RLCT via Theorem 4 in Watanabe (2013)\nInput: range of $\beta$'s, set of training sets $T$ each of size $n$, approximate samples $\{w_1, \ldots, w_R\}$ from $p^\beta(w|D_n)$ for each training set $D_n$ and each $\beta$\nfor training set $D_n \in T$ do\n  for $\beta$ in range of $\beta$'s do\n    Approximate $\mathbb{E}^\beta_w[nL_n(w)]$ with $\frac{1}{R}\sum_{r=1}^{R} nL_n(w_r)$ where $w_1, \ldots, w_R$ are approximate samples from $p^\beta(w|D_n)$\n  end for\n  Perform generalised least squares to fit $\lambda$ in (18), call the result $\hat{\lambda}(D_n)$\nend for\nOutput: $\frac{1}{|T|}\sum_{D_n \in T}\hat{\lambda}(D_n)$\nFor the estimates in Table 2 the a posteriori distribution was approximated using the NUTS variant of Hamiltonian Monte Carlo (Hoffman & Gelman, 2014), where the first 1000 steps were omitted and 20,000 samples were collected. Each $\hat{\lambda}(D_n)$ estimate in Algorithm 1 was performed by linear regression on the pairs $\{(1/\beta_i, \mathbb{E}^{\beta_i}_w[nL_n(w)])\}_{i=1}^{5}$, where the five inverse temperatures $\beta_i$ are centered on the inverse temperature $1/\log(20000)$.
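A compact sketch of the regression step of Algorithm 1 for a single training set (ours, not the paper's implementation; ordinary least squares stands in for the generalised least squares step, and the sample arrays below are synthetic numbers consistent with $\lambda = 2$ and $nL_n(w_0) = 100$):

```python
# Given samples of n*L_n(w) at several inverse temperatures beta_i, fit lambda
# as the slope of E^beta_w[n L_n(w)] against 1/beta, per the expansion (18).
import numpy as np

def rlct_estimate(betas, nLn_samples_per_beta):
    # nLn_samples_per_beta[i]: array of n*L_n(w_r) for samples from p^{beta_i}(w|D_n)
    e_beta = np.array([np.mean(s) for s in nLn_samples_per_beta])
    slope, _ = np.polyfit(1.0 / np.asarray(betas), e_beta, deg=1)
    return slope

betas = np.array([0.8, 0.9, 1.0, 1.1, 1.2]) / np.log(20000)
fake_samples = [100.0 + 2.0 / b + 0.01 * np.random.randn(500) for b in betas]
print(f"lambda-hat: {rlct_estimate(betas, fake_samples):.3f}")
```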
" }, { "heading": "A.4 CONNECTION BETWEEN RLCT AND GENERALISATION", "text": "For completeness, we sketch the derivation of (9), which gives the asymptotic expansion of the average generalisation error $\mathbb{E}_n G(n)$ of the Bayes predictive distribution in singular models. The exposition is an amalgamation of various works published by Sumio Watanabe, but is mostly based on the textbook (Watanabe, 2009).\nTo understand the connection between the RLCT and $G(n)$, we first define the so-called Bayes free energy as\n$$F(n) = -\log p(D_n)$$\nwhose expectation admits the following asymptotic expansion (Watanabe, 2009):\n$$\mathbb{E}_n F(n) = \mathbb{E}_n nS_n + \lambda\log n + o(\log n)$$\nwhere $S_n = -\frac{1}{n}\sum_{i=1}^{n}\log q(y_i|x_i)$ is the entropy. The expected Bayesian generalisation error is related to the Bayes free energy as $\mathbb{E}_n G(n) = \mathbb{E}F(n+1) - \mathbb{E}F(n)$. Then for the average generalisation error, we have\n$$\mathbb{E}_n G(n) = \lambda/n + o(1/n). \qquad (19)$$\nSince models with more complex singularities have smaller RLCTs, this would suggest that the more singular a model is, the better its generalisation (assuming one uses the Bayesian predictive distribution for prediction). In this connection it is interesting to note that simpler (relative to the model) true distributions lead to more singular models (Section 6)." }, { "heading": "A.5 DETAILS FOR GENERALISATION ERROR EXPERIMENTS", "text": "Simulated data. The distribution of $x \in \mathbb{R}^3$ is set to $q(x) = N(0, I_3)$. In the realisable case, $y \in \mathbb{R}^3$ is drawn according to $q(y|x) = p(y|x, \theta_0)$. In the nonrealisable setting, we set $q(y|x) \propto \exp\{-\|y - h_{w_0}(x)\|^2/2\}$, where $w_0 = (A_0, B_0)$ is drawn according to the PyTorch model initialisation of $h$.\nMAP training. The MAP estimator is found via gradient descent using the mean-squared-error loss with either the full data set or minibatches of size 32. Training was set to 5000 epochs. No form of early stopping was employed.\nCalculating the generalisation error. Using a held-out test set $T_{n'} = \{(x'_i, y'_i)\}_{i=1}^{n'}$, we calculate the average generalisation error as\n$$\frac{1}{n'}\sum_{i=1}^{n'}\log q(y'_i|x'_i) - \mathbb{E}_n\frac{1}{n'}\sum_{i=1}^{n'}\log\hat{q}_n(y'_i|x'_i). \qquad (20)$$\nAssume the held-out test set is large enough so that the difference between $\mathbb{E}_n G(n)$ and (20) is negligible. We will refer to them interchangeably as the average generalisation error. In our experiments we use $n' = 10{,}000$ and 30 draws of the dataset $D_n$ to estimate $\mathbb{E}_n$.\nLast layer(s) inference. Without loss of generality, we discuss performing inference in the $w$ parameters of $h$ while freezing the parameters of $g$ at the MAP estimate. The steps easily extend to performing inference over the final layer only of $f = h \circ g$. Let $\tilde{x}_i = g_{v_{MAP}}(x_i)$. Define a new transformed dataset $\tilde{D}_n = \{(\tilde{x}_i, y_i)\}_{i=1}^{n}$. We take the prior on $w$ to be standard Gaussian. Define the posterior over $w$ given $\tilde{D}_n$ as:\n$$p(w|\tilde{D}_n) \propto p(\tilde{D}_n|w)\varphi(w) = \prod_{i=1}^{n}\exp\{-\|y_i - h_w(\tilde{x}_i)\|^2/2\}\varphi(w). \qquad (21)$$\nDefine the following approximation to the Bayesian predictive distribution:\n$$\tilde{p}(y|x, D_n) = \int p(y|x, (v_{MAP}, w))\,p(w|\tilde{D}_n)\,dw.$$\nLet $w_1, \ldots, w_R$ be some approximate samples from $p(w|\tilde{D}_n)$. Then we approximate $\tilde{p}(y|x, D_n)$ with\n$$\frac{1}{R}\sum_{r=1}^{R} p(y|x, (v_{MAP}, w_r))$$\nwhere $R$ is a large number, set to 1000 in our experiments. We consider the Laplace approximation and the NUTS variant of HMC for drawing samples from $p(w|\tilde{D}_n)$:\n• Laplace in the last layer(s). Recall $\theta_{MAP} = (v_{MAP}, w_{MAP})$ is the MAP estimate for $f_\theta$ trained with the data $D_n$. With the Laplace approximation, we draw $w_1, \ldots, w_R$ from the Gaussian\n$$N(w_{MAP}, \Sigma)$$\nwhere $\Sigma = (-\nabla^2\log p(w|\tilde{D}_n)|_{w_{MAP}})^{-1}$ is the inverse Hessian⁴ of the negative log posterior evaluated at the MAP estimate of the mode.\n• MCMC in the last layer(s). We used the NUTS variant of HMC to draw samples from (21) with the first 1000 samples discarded. Our implementation used the pyro package in PyTorch.
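A hedged torch sketch of the Laplace step just described (ours, not the paper's exact code, which uses the exact Hessian package cited in footnote 4): we assume the identity-activation variant $h(\tilde{x}) = B\tilde{x}$ with $B$ a $3\times 3$ matrix, synthetic stand-ins for the frozen features and targets, and zeros as a stand-in for $w_{MAP}$:

```python
import torch
from torch.autograd.functional import hessian

x_tilde = torch.randn(500, 3)  # stand-in for the frozen features
y = torch.randn(500, 3)
w_map = torch.zeros(9)         # stand-in for the flattened MAP estimate of B

def neg_log_posterior(w):
    B = w.view(3, 3)
    resid = y - x_tilde @ B.T
    # Gaussian likelihood as in (21) plus a standard Gaussian prior on w
    return 0.5 * (resid ** 2).sum() + 0.5 * (w ** 2).sum()

H = hessian(neg_log_posterior, w_map)            # Hessian of the neg log posterior
cov = torch.linalg.inv(H)                        # Laplace covariance Sigma
mvn = torch.distributions.MultivariateNormal(w_map, covariance_matrix=cov)
samples = mvn.sample((1000,))                    # w_1, ..., w_R for the predictive average
```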
⁴Following Kristiadi et al. (2020), the code for the exact Hessian calculation is borrowed from https://github.com/f-dangel/hbp" } ]
2020
null
SP:f478e45dfb8dcd578090da3010b2b1df73595b66
[ "This work studies the decision boundaries of neural networks (NN) with piecewise linear (ReLU) activation functions from a tropical geometry perspective. Leveraging the work of [1], the authors show that NN decision boundaries form subsets of tropical hypersurfaces. This geometric characterization of NN decision boundaries is then leveraged to better understand the lottery ticket hypothesis, and prune deep NNs. The authors also allude to the use of tropical geometric perspectives on NN decision boundaries for the generation of adversarial samples, but do not explicitly discuss it in any detail within the main text of the paper." ]
This work tackles the problem of characterizing and understanding the decision boundaries of neural networks with piecewise linear non-linearity activations. We use tropical geometry, a new development in the area of algebraic geometry, to characterize the decision boundaries of a simple network of the form (Affine, ReLU, Affine). Our main finding is that the decision boundaries are a subset of a tropical hypersurface, which is intimately related to a polytope formed by the convex hull of two zonotopes. The generators of these zonotopes are functions of the network parameters. This geometric characterization provides new perspectives to three tasks. (i) We propose a new tropical perspective to the lottery ticket hypothesis, where we view the effect of different initializations on the tropical geometric representation of a network's decision boundaries. (ii) Moreover, we propose new tropical based optimization reformulations that directly influence the decision boundaries of the network for the task of network pruning. (iii) Finally, we briefly discuss the reformulation of the generation of adversarial attacks in a tropical sense; we elaborate on this in detail in the supplementary material.¹
[]
[ { "authors": [ "Marianne Akian", "Stphane Gaubert", "Alexander Guterman" ], "title": "Tropical polyhedra are equivalent to mean payoff games", "venue": "International Journal of Algebra and Computation,", "year": 2009 }, { "authors": [ "Xavier Allamigeon", "Pascal Benchimol", "Stphane Gaubert", "Michael Joswig" ], "title": "Tropicalizing the simplex algorithm", "venue": "SIAM J. Discrete Math. 29:2,", "year": 2015 }, { "authors": [ "Diego Ardila", "Atilla P. Kiraly", "Sujeeth Bharadwaj", "Bokyung Choi", "Joshua J. Reicher", "Lily Peng", "Daniel Tse", "Mozziyar Etemadi", "Wenxing Ye", "Greg Corrado", "David P. Naidich", "Shravya Shetty" ], "title": "End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography", "venue": "Nature Medicine,", "year": 2019 }, { "authors": [ "Raman Arora", "Amitabh Basu", "Poorya Mianjy", "Anirbit Mukherjee" ], "title": "Understanding deep neural networks with rectified linear units", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Hans-Peter Beise", "Steve Dias Da Cruz", "Udo Schröder" ], "title": "On decision regions of narrow deep neural networks", "venue": "arXiv preprint arXiv:1807.01194,", "year": 2018 }, { "authors": [ "Leonard Berrada", "Andrew Zisserman", "M Pawan Kumar" ], "title": "Trusting svm for piecewise linear cnns", "venue": "arXiv preprint arXiv:1611.02185,", "year": 2016 }, { "authors": [ "Adel Bibi", "Modar Alfadly", "Bernard Ghanem" ], "title": "Analytic expressions for probabilistic moments of pl-dnn with gaussian input", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Erwan Brugallé", "Kristin Shaw" ], "title": "A bit of tropical geometry", "venue": "The American Mathematical Monthly,", "year": 2014 }, { "authors": [ "Nicholas Carlini", "David A. Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": null, "year": 2016 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In Computer Vision and Patter Recognition Conference (CVPR),", "year": 2009 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In ICLR. OpenReview.net,", "year": 2019 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of 13th International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Peter Gritzmann", "Bernd Sturmfels" ], "title": "Minkowski addition of polytopes: computational complexity and applications to gröbner bases", "venue": "SIAM Journal on Discrete Mathematics,", "year": 1993 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William J. 
Dally" ], "title": "Learning both weights and connections for efficient neural networks", "venue": null, "year": 2015 }, { "authors": [ "Babak Hassibi", "David G Stork" ], "title": "Second order derivatives for network pruning: Optimal brain surgeon", "venue": "In Advances in neural information processing systems,", "year": 1993 }, { "authors": [ "Warren He", "Bo Li", "Dawn Song" ], "title": "Decision boundary analysis of adversarial examples", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "G. Hinton", "L. Deng", "D. Yu", "G.E. Dahl", "A. Mohamed", "N. Jaitly", "A. Senior", "V. Vanhoucke", "P. Nguyen", "T.N. Sainath", "B. Kingsbury" ], "title": "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups", "venue": "IEEE Signal Processing Magazine,", "year": 2012 }, { "authors": [ "Ilia Itenberg", "Grigory Mikhalkin", "Eugenii I Shustin" ], "title": "Tropical algebraic geometry", "venue": "Springer Science & Business Media,", "year": 2009 }, { "authors": [ "Michael Joswig", "Georg Loho" ], "title": "Monomial tropical cones for multicriteria optimization", "venue": "AIP Conference Proceedings,", "year": 2019 }, { "authors": [ "Michael Joswig", "Benjamin Schröter" ], "title": "The tropical geometry of shortest paths", "venue": "arXiv preprint arXiv:1904.01082,", "year": 2019 }, { "authors": [ "Marc Khoury", "Dylan Hadfield-Menell" ], "title": "On the geometry of adversarial examples", "venue": "arXiv preprint arXiv:1811.00525,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "ImageNet classification with deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2012 }, { "authors": [ "Yann LeCun", "John S Denker", "Sara A Solla" ], "title": "Optimal brain damage", "venue": "In Advances in neural information processing systems,", "year": 1990 }, { "authors": [ "Yu Li", "Peter Richtarik", "Lizhong Ding", "Xin Gao" ], "title": "On the decision boundary of deep neural networks", "venue": "arXiv preprint arXiv:1808.05385,", "year": 2018 }, { "authors": [ "Qinghua Liu", "Xinyue Shen", "Yuantao Gu" ], "title": "Linearized admm for nonconvex nonsmooth optimization with convergence analysis", "venue": "IEEE Access,", "year": 2019 }, { "authors": [ "D. Maclagan", "B. Sturmfels" ], "title": "Introduction to Tropical Geometry", "venue": "Graduate Studies in Mathematics. American Mathematical Society,", "year": 2015 }, { "authors": [ "Ngoc Mai Tran", "Josephine Yu" ], "title": "Product-mix auctions and tropical geometry", "venue": "Mathematics of Operations Research,", "year": 2015 }, { "authors": [ "D Melzer" ], "title": "On the expressibility of piecewise-linear continuous functions as the difference of two piecewise-linear convex functions", "venue": "In Quasidifferential Calculus. 
Springer,", "year": 1986 }, { "authors": [ "Grigory Mikhalkin" ], "title": "Enumerative tropical algebraic geometry in r2", "venue": "Journal of the American Mathematical Society,", "year": 2004 }, { "authors": [ "Guido Montufar", "Razvan Pascanu", "Kyunghyun Cho", "Y Bengio" ], "title": "On the number of linear regions of deep neural networks", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2014 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Jonathan Uesato", "Pascal Frossard" ], "title": "Robustness via curvature regularization, and vice versa", "venue": null, "year": 2019 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2011 }, { "authors": [ "Kristof Schütt", "Farhad Arbabzadah", "Stefan Chmiela", "Klaus-Robert Müller", "Alexandre Tkatchenko" ], "title": "Quantum-chemical insights from deep tensor neural networks", "venue": "Nature Communications,", "year": 2017 }, { "authors": [ "Abigail See", "Minh-Thang Luong", "Christopher D. Manning" ], "title": "Compression of neural machine translation models via pruning", "venue": null, "year": 2016 }, { "authors": [ "Shai Shalev-Shwartz", "Shai Ben-David" ], "title": "Understanding machine learning: From theory to algorithms", "venue": "Cambridge university press,", "year": 2014 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Georgios Smyrnis", "Petros Maragos" ], "title": "Tropical polynomial division and neural networks", "venue": "arXiv preprint arXiv:1911.12922,", "year": 2019 }, { "authors": [ "Georgios Smyrnis", "Petros Maragos" ], "title": "Multiclass neural network minimization via tropical newton polytope approximation", "venue": "International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Kerrek Stinson", "David F Gleich", "Paul G Constantine" ], "title": "A randomized algorithm for enumerating zonotope vertices", "venue": "arXiv preprint arXiv:1602.06620,", "year": 2016 }, { "authors": [ "Kaidi Xu", "Sijia Liu", "Pu Zhao", "Pin-Yu Chen", "Huan Zhang", "Deniz Erdogmus", "Yanzhi Wang", "Xue Lin" ], "title": "Structured adversarial attack: Towards general implementation and better interpretability", "venue": "arXiv preprint arXiv:1808.01664,", "year": 2018 }, { "authors": [ "Liwen Zhang", "Gregory Naitzat", "Lek-Heng Lim" ], "title": "Tropical geometry of deep neural networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jian Zhou", "Christopher Y. Park", "Chandra L. Theesfeld", "Aaron Wong", "Yuan Yuan", "Claudia Scheckel", "John Fak", "Julien Funk", "Kevin Yao", "Yoko Tajima", "Alan Packer", "Robert Darnell", "Olga G. Troyanskaya" ], "title": "Whole-genome deep-learning analysis identifies contribution of noncoding mutations to autism risk", "venue": "Nature Genetics,", "year": 2019 }, { "authors": [ "Xu" ], "title": "Problem (9) with a penalty method on the linear equality constraints, where each penalty step is solved with ADMM Boyd et al. (2011) in a similar fashion to the work", "venue": null, "year": 2018 }, { "authors": [ "Liu" ], "title": "2019) showed that the linearized ADMM converges for some non-convex problems. 
Therefore, by linearizing L and adding Bergman divergence term ηk/2‖z− z‖22", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep Neural Networks (DNNs) have demonstrated outstanding performance across a variety of research domains, including computer vision (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012), natural language processing (Bahdanau et al., 2015; Devlin et al., 2018), quantum chemistry Schütt et al. (2017), and healthcare (Ardila et al., 2019; Zhou et al., 2019) to name a few (LeCun et al., 2015). Nevertheless, a rigorous interpretation of their success remains elusive (ShalevShwartz & Ben-David, 2014). For instance, in an attempt to uncover the expressive power of DNNs, the work of Montufar et al. (2014) studied the complexity of functions computable by DNNs that have piecewise linear activations. They derived a lower bound on the maximum number of linear regions. Several other works have followed to improve such estimates under certain assumptions (Arora et al., 2018). In addition, and in attempt to understand some of the subtle behaviours DNNs exhibit, e.g. the sensitive reaction of DNNs to small input perturbations, several works directly investigated the decision boundaries induced by a DNN for classification. The work of Moosavi-Dezfooli et al. (2019) showed that the smoothness of these decision boundaries and their curvature can play a vital role in network robustness. Moreover, the expressiveness of these decision boundaries at perturbed inputs was studied in He et al. (2018), where it was shown that these boundaries do not resemble the boundaries around benign inputs. The work of Li et al. (2018) showed that under certain assumptions, the decision boundaries of the last fully connected layer of DNNs will converge to a linear SVM. Also, Beise et al. (2018) showed that the decision regions of DNNs with width smaller than the input dimension are unbounded.\nMore recently, and due to the popularity of the piecewise linear ReLU as an activation function, there has been a surge in the number of works that study this class of DNNs in particular. As a result, this has incited significant interest in new mathematical tools that help analyze piecewise linear functions, such as tropical geometry. While tropical geometry has shown its potential in many applications such as dynamic programming (Joswig & Schröter, 2019), linear programming (Allamigeon et al., 2015), multi-objective discrete optimization (Joswig & Loho, 2019), enumerative geometry (Mikhalkin, 2004), and economics (Akian et al., 2009; Mai Tran & Yu, 2015), it has only been recently used\n1Code regenerating all our experiments is attached in the supplementary material.\nto analyze DNNs. For instance, the work of Zhang et al. (2018) showed an equivalency between the family of DNNs with piecewise linear activations and integer weight matrices and the family of tropical rational maps, i.e. ratio between two multi-variate polynomials in tropical algebra. This study was mostly concerned about characterizing the complexity of a DNN by counting the number of linear regions, into which the function represented by the DNN can divide the input space. This was done by counting the number of vertices of a polytope representation recovering the results of Montufar et al. (2014) with a simpler analysis. More recently, Smyrnis & Maragos (2019) leveraged this equivalency to propose a heuristic for neural network minimization through approximating the tropical rational map.\nContributions. In this paper, we take the results of Zhang et al. 
(2018) several steps further and present a novel perspective on the decision boundaries of DNNs using tropical geometry. To that end, our contributions are three-fold. (i) We derive a geometric representation (convex hull between two zonotopes) for a super set to the decision boundaries of a DNN in the form (Affine, ReLU, Affine). (ii) We demonstrate a support for the lottery ticket hypothesis (Frankle & Carbin, 2019) from a geometric perspective. (iii) We leverage the geometric representation of the decision boundaries, referred to as the decision boundaries polytope, in two interesting applications: network pruning and adversarial attacks. For tropical pruning, we design a geometrically inspired optimization to prune the parameters of a given network such that the decision boundaries polytope of the pruned network does not deviate too much from its original network counterpart. We conduct extensive experiments with AlexNet (Krizhevsky et al., 2012) and VGG16 (Simonyan & Zisserman, 2014) on SVHN (Netzer et al., 2011), CIFAR10, and CIFAR100 (Krizhevsky & Hinton, 2009) datasets, in which a 90% pruning rate is achieved with a marginal drop in testing accuracy. For tropical adversarial attacks, we show that one can construct input adversaries that can change network predictions by perturbing the decision boundaries polytope." }, { "heading": "2 PRELIMINARIES TO TROPICAL GEOMETRY", "text": "For completeness, we first provide preliminaries to tropical geometry (Itenberg et al., 2009; Maclagan & Sturmfels, 2015).\nDefinition 1. (Tropical Semiring²) The tropical semiring $\mathbb{T}$ is the triplet $\{\mathbb{R} \cup \{-\infty\}, \oplus, \odot\}$, where $\oplus$ and $\odot$ define tropical addition and tropical multiplication, respectively. They are denoted as:\n$$x \oplus y = \max\{x, y\}, \quad x \odot y = x + y, \quad \forall x, y \in \mathbb{T}.$$\nIt can be readily shown that $-\infty$ is the additive identity and $0$ is the multiplicative identity.\nGiven the previous definition, a tropical power can be formulated as $x^{\odot a} = x \odot x \odot \cdots \odot x = a.x$, for $x \in \mathbb{T}$, $a \in \mathbb{N}$, where $a.x$ is standard multiplication. Moreover, a tropical quotient can be defined as $x \oslash y = x - y$, where $x - y$ is standard subtraction. For ease of notation, we write $x^{\odot a}$ as $x^a$.\nDefinition 2. (Tropical Polynomials) For $x \in \mathbb{T}^d$, $c_i \in \mathbb{R}$ and $a_i \in \mathbb{N}^d$, a $d$-variable tropical polynomial with $n$ monomials $f : \mathbb{T}^d \to \mathbb{T}^d$ can be expressed as:\n$$f(x) = (c_1 \odot x^{a_1}) \oplus (c_2 \odot x^{a_2}) \oplus \cdots \oplus (c_n \odot x^{a_n}), \quad \forall\, a_i \neq a_j \text{ when } i \neq j.$$\nWe use the more compact vector notation $x^a = x_1^{a_1} \odot x_2^{a_2} \odot \cdots \odot x_d^{a_d}$. Moreover and for ease of notation, we will denote $c_i \odot x^{a_i}$ as $c_i x^{a_i}$ throughout the paper.\nDefinition 3. (Tropical Rational Functions) A tropical rational is a standard difference or a tropical quotient of two tropical polynomials: $f(x) - g(x) = f(x) \oslash g(x)$.\nAlgebraic curves or hypersurfaces in algebraic geometry, which are the solution sets to polynomials, can be analogously extended to tropical polynomials too.\nDefinition 4. (Tropical Hypersurfaces) A tropical hypersurface of a tropical polynomial $f(x) = c_1 x^{a_1} \oplus \cdots \oplus c_n x^{a_n}$ is the set of points $x$ where $f$ is attained by two or more monomials in $f$, i.e.\n$$\mathcal{T}(f) := \{x \in \mathbb{R}^d : c_i x^{a_i} = c_j x^{a_j} = f(x),\ \text{for some } a_i \neq a_j\}.$$\nTropical hypersurfaces divide the domain of $f$ into convex regions, where $f$ is linear in each region. Also, every tropical polynomial can be associated with a Newton polytope.\n²A semiring is a ring that lacks an additive inverse.
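The two definitions above are easy to operationalise. A short numpy sketch (illustrative, not from the paper; the function names are ours): a tropical polynomial evaluates as $f(x) = \max_i (c_i + \langle a_i, x\rangle)$, and $x$ lies on $\mathcal{T}(f)$ exactly when the maximum is attained by two or more monomials.

```python
import numpy as np

def tropical_poly(x, A, c):
    # A: (n_monomials, d) exponents a_i; c: (n_monomials,) coefficients c_i
    return c + A @ x  # monomial values c_i + <a_i, x>; f(x) is their max

def on_hypersurface(x, A, c, tol=1e-9):
    vals = tropical_poly(x, A, c)
    return np.sum(vals >= np.max(vals) - tol) >= 2  # max attained at least twice

# f(x, y) = x (+) y (+) 0: exponents (1,0), (0,1), (0,0) with zero coefficients
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
c = np.zeros(3)
print(on_hypersurface(np.array([0.3, 0.3]), A, c))   # True: x = y > 0
print(on_hypersurface(np.array([0.3, -0.2]), A, c))  # False: unique maximiser
```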
Definition 5. (Newton Polytopes) The Newton polytope of a tropical polynomial $f(x) = c_1 x^{a_1} \oplus \cdots \oplus c_n x^{a_n}$ is the convex hull of the exponents $a_i \in \mathbb{N}^d$ regarded as points in $\mathbb{R}^d$, i.e.\n$$\Delta(f) := \mathrm{ConvHull}\{a_i \in \mathbb{R}^d : i = 1, \ldots, n \text{ and } c_i \neq -\infty\}.$$\nA tropical polynomial determines a dual subdivision, which can be constructed by projecting the collection of upper faces (UF) in $\mathcal{P}(f) := \mathrm{ConvHull}\{(a_i, c_i) \in \mathbb{R}^d \times \mathbb{R} : i = 1, \ldots, n\}$ onto $\mathbb{R}^d$. That is to say, the dual subdivision determined by $f$ is given as $\delta(f) := \{\pi(p) \subset \mathbb{R}^d : p \in \mathrm{UF}(\mathcal{P}(f))\}$, where $\pi : \mathbb{R}^d \times \mathbb{R} \to \mathbb{R}^d$ is the projection that drops the last coordinate. It has been shown by Maclagan & Sturmfels (2015) that the tropical hypersurface $\mathcal{T}(f)$ is the $(d-1)$-skeleton of the polyhedral complex dual to $\delta(f)$. This implies that each node of the dual subdivision $\delta(f)$ corresponds to one region in $\mathbb{R}^d$ where $f$ is linear. This is exemplified in Figure 1 with three tropical polynomials, and to see this clearly, we will elaborate on the first tropical polynomial example $f(x, y) = x \oplus y \oplus 0$. Note that as per Definition 4, the tropical hypersurface is the set of points $(x, y)$ where $x = y$, $y = 0$, and $x = 0$. This indeed gives rise to the three solid red lines indicating the tropical hypersurfaces. As for the dual subdivision $\delta(f)$, we observe that $x \oplus y \oplus 0$ can be written as $(x^1 \odot y^0) \oplus (x^0 \odot y^1) \oplus (x^0 \odot y^0)$. Thus, and since the monomials are bias free ($c_i = 0$), then $\mathcal{P}(f) = \mathrm{ConvHull}\{(1, 0, 0), (0, 1, 0), (0, 0, 0)\}$. It is then easy to see that $\delta(f) = \mathrm{ConvHull}\{(1, 0), (0, 1), (0, 0)\}$, since $\mathrm{UF}(\mathcal{P}(f)) = \mathcal{P}(f)$, which is the black triangle in solid lines in Figure 1. One key observation in all three examples in Figure 1 is that the number of regions where $f$ is linear (that is 3, 6 and 10, respectively) is equal to the number of nodes in the corresponding dual subdivisions. Second, the tropical hypersurfaces are parallel to the normals to the edges of the dual subdivision polytope. This observation will be essential for the remaining part of the paper. Several other observations are summarized by Brugallé & Shaw (2014). Moreover, Zhang et al. (2018) showed an equivalency between tropical rational maps and a family of neural networks $f : \mathbb{R}^n \to \mathbb{R}^k$ with piecewise linear activations through the following theorem.\nTheorem 1. (Tropical Characterization of Neural Networks, (Zhang et al., 2018)). A feedforward neural network with integer weights and real biases with piecewise linear activation functions is a function $f : \mathbb{R}^n \to \mathbb{R}^k$, whose coordinates are tropical rational functions of the input, i.e., $f(x) = H(x) \oslash Q(x) = H(x) - Q(x)$, where $H$ and $Q$ are tropical polynomials.\nWhile this is new in the context of tropical geometry, it is not surprising, since any piecewise linear function can be written as a difference of two max functions over a set of hyperplanes (Melzer, 1986).\nBefore any further discussion, we first recap the definition of zonotopes.\nDefinition 6. Let $u^1, \ldots, u^L \in \mathbb{R}^n$. The zonotope formed by $u^1, \ldots, u^L$ is defined as $\mathcal{Z}(u^1, \ldots, u^L) := \{\sum_{i=1}^{L} x_i u^i : 0 \le x_i \le 1\}$. Equivalently, $\mathcal{Z}$ can be expressed with respect to the generator matrix $U \in \mathbb{R}^{L \times n}$, where $U(i, :) = u^{i\top}$, as $\mathcal{Z}_U := \{U^\top x : \forall x \in [0, 1]^L\}$.\nAnother common definition for a zonotope is the Minkowski sum of the set of line segments $\{u^1, \ldots, u^L\}$ (refer to appendix), where a line segment of the vector $u^i$ in $\mathbb{R}^n$ is defined as $\{\alpha u^i : \forall \alpha \in [0, 1]\}$. It is well known that the number of vertices of a zonotope is polynomial in the number of line segments, i.e. $|\mathrm{vert}(\mathcal{Z}_U)| \le 2\sum_{i=0}^{n-1}\binom{L-1}{i} = O(L^{n-1})$ (Gritzmann & Sturmfels, 1993).
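In the planar case $n = 2$, which covers all the visualisations in this paper, zonotope vertices can be enumerated directly by the classical angle-sorting walk. The following sketch is our own illustration of Definition 6 (it assumes generators in general position; parallel generators would produce repeated points):

```python
# Enumerate the vertices of the 2D zonotope Z_U = {U^T x : x in [0,1]^L}
# in counter-clockwise order, by flipping all generators into the upper
# half-plane, sorting by angle, and using the central symmetry of zonotopes.
import numpy as np

def zonotope_vertices_2d(U):
    base = np.zeros(2)
    gens = []
    for u in np.asarray(U, dtype=float):
        if u[1] < 0 or (u[1] == 0 and u[0] < 0):
            base, u = base + u, -u  # flipping x_i -> 1 - x_i shifts by u
        gens.append(u)
    gens.sort(key=lambda g: np.arctan2(g[1], g[0]))
    chain = [base.copy()]
    for g in gens:                     # one boundary chain, bottom to top
        chain.append(chain[-1] + g)
    centre = base + sum(gens) / 2.0    # zonotopes are centrally symmetric
    mirror = [2 * centre - v for v in chain[1:-1]]
    return np.array(chain + mirror)

U = np.array([[1.0, 0.0], [0.5, 1.0], [-0.5, 1.0]])
print(zonotope_vertices_2d(U))  # the 6 vertices of a hexagon (2L in general)
```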
}, { "heading": "3 DECISION BOUNDARIES OF NEURAL NETWORKS AS POLYTOPES3", "text": "In this section, we analyze the decision boundaries of a network in the form (Affine, ReLU, Affine) using tropical geometry. For ease, we use ReLUs as the non-linear activation, but any other piecewise linear function can also be used. The functional form of this network is: f(x) = Bmax (Ax + c1,0) + c2, where max(.) is an element-wise operator. The outputs of the network f are the logit scores. Throughout this section, we assume4 that A ∈ Zp×n, B ∈ Z2×p, c1 ∈ Rp and c2 ∈ R2. For ease of notation, we only consider networks with two outputs, i.e. B2×p, where the extension to a multi-class output follows naturally and is discussed in the appendix. Now, since f is a piecewise linear function, each output can be expressed as a tropical rational as per Theorem 1. If f1 and f2 refer to the first and second outputs respectively, we have f1(x) = H1(x) Q1(x) and f2(x) = H2(x) Q2(x), where H1, H2, Q1 and Q2 are tropical polynomials. In what follows and for ease of presentation, we present our main results where the network f has no biases, i.e. c1 = 0 and c2 = 0, and we leave the generalization to the appendix. Theorem 2. For a bias-free neural network in the form f(x) : Rn → R2, where A ∈ Zp×n and B ∈ Z2×p, let R(x) = H1(x) Q2(x)⊕H2(x) Q1(x) be a tropical polynomial. Then: • Let B = {x ∈ Rn : f1(x) = f2(x)} define the decision boundaries of f , then B ⊆ T (R(x)). • δ (R(x)) = ConvHull (ZG1 ,ZG2). ZG1 is a zonotope in Rn with line segments {(B+(1, j) + B−(2, j))[A+(j, :),A−(j, :)]}pj=1 and shift (B−(1, :) + B+(2, :))A−, where A+ = max(A, 0) and A− = max(−A, 0). ZG2 is a zonotope in Rn with line segments {(B−(1, j) + B+(2, j))[A+(j, :),A−(j, :)]}pj=1 and shift (B+(1, :) + B−(2, :))A−.\nDigesting Theorem 2. This theorem aims at characterizing the decision boundaries (where f1(x) = f2(x)) of a bias-free neural network of the form (Affine, ReLU, Affine) through the lens of tropical geometry. In particular, the first result of Theorem 2 states that the tropical hypersurface T (R(x)) of the tropical polynomial R(x) is a superset to the set of points forming the decision boundaries, i.e. B. Just as discussed earlier and exemplified in Figure 1, tropical hypersurfaces are associated with a corresponding dual subdivision polytope δ(R(x)). Based on this, the second result of Theorem 2 states that this dual subdivision is precisely the convex hull of two zonotopes denoted as ZG1 and ZG2 , where each zonotope is only a function of the network parameters A and B. Theorem 2 bridges the gap between the behaviour of the decision boundaries B, through the superset T (R(x)), and the polytope δ (R(x)), which is the convex hull of two zonotopes. It is worthwhile to mention that Zhang et al. (2018) discussed a special case of the first part of Theorem 2 for a neural network with a single output and a score function s(x) to classify the output. To the best of our knowledge, this work is the first to propose a tropical geometric formulation of a superset containing the decision boundaries of a multi-class classification neural network. In particular, the first result of Theorem 2 states that one can perhaps study the decision boundaries, B, directly by studying their superset T (R(x)). While studying T (R(x)) can be equally difficult, the second result of Theorem 2 comes in handy. 
First, note that, since the network is bias-free, $\pi$ becomes an identity mapping with $\delta(R(x)) = \Delta(R(x))$, and thus the dual subdivision $\delta(R(x))$, which is the Newton polytope $\Delta(R(x))$ in this case, becomes a well-structured geometric object that can be exploited to preserve decision boundaries as per the second part of Theorem 2. Now, based on the results of Maclagan & Sturmfels (2015) (Proposition 3.1.6) and as discussed in Figure 1, the normals to the edges of the polytope $\delta(R(x))$ (convex hull of two zonotopes) are in one-to-one correspondence with the tropical hypersurface $\mathcal{T}(R(x))$. Therefore, one can study the decision boundaries, or at least their superset $\mathcal{T}(R(x))$, by studying the orientation of the dual subdivision $\delta(R(x))$. While Theorem 2 presents a strong relation between a polytope (convex hull of two zonotopes) and the decision boundaries, it remains unclear how such a polytope can be efficiently constructed. Although the number of vertices of a zonotope is polynomial in the number of its generating line segments, fast algorithms for enumerating these vertices are still restricted to zonotopes with line segments starting at the origin (Stinson et al., 2016). Since the line segments generating the zonotopes in Theorem 2 have arbitrary end points, we present the next result that transforms these line segments into a generator matrix of line segments starting from the origin as in Definition 6. This result is essential for an efficient computation of the zonotopes in Theorem 2.\n³All proofs are left for the appendix.\n⁴Without loss of generality, as one can very well approximate real weights as fractions and multiply by the least common multiple of the denominators as discussed in Zhang et al. (2018).\nProposition 1. The zonotope formed by $p$ line segments in $\mathbb{R}^n$ with arbitrary end points $\{[u^i_1, u^i_2]\}_{i=1}^{p}$ is equivalent to the zonotope formed by the line segments $\{[u^i_1 - u^i_2, 0]\}_{i=1}^{p}$ with a shift of $\sum_{i=1}^{p} u^i_2$.\nWe can now represent with the following corollary the arbitrary end point line segments forming the zonotopes in Theorem 2 with generator matrices, which allow us to leverage existing algorithms that enumerate zonotope vertices (Stinson et al., 2016).\nCorollary 1. The generators of $\mathcal{Z}_{G_1}, \mathcal{Z}_{G_2}$ in Theorem 2 can be defined as $G_1 = \mathrm{Diag}[(B^+(1, :)) + (B^-(2, :))]A$ and $G_2 = \mathrm{Diag}[(B^+(2, :)) + (B^-(1, :))]A$, both with shift $(B^-(1, :) + B^+(2, :) + B^+(1, :) + B^-(2, :))A^-$, where $\mathrm{Diag}(v)$ arranges $v$ in a diagonal matrix. A minimal sketch computing these generators is given below.
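A hedged numpy sketch of Corollary 1 (our reading of the statement, for the bias-free block $f(x) = B\max(Ax, 0)$ with two output rows):

```python
# Compute the generator matrices G1, G2 and the common shift of the two
# zonotopes in Theorem 2 / Corollary 1 from the network parameters (A, B).
import numpy as np

def zonotope_generators(A, B):
    Bp, Bm = np.maximum(B, 0), np.maximum(-B, 0)   # B+ and B-
    Am = np.maximum(-A, 0)                          # A-
    G1 = np.diag(Bp[0] + Bm[1]) @ A                 # Diag[B+(1,:) + B-(2,:)] A
    G2 = np.diag(Bp[1] + Bm[0]) @ A                 # Diag[B+(2,:) + B-(1,:)] A
    shift = (Bm[0] + Bp[1] + Bp[0] + Bm[1]) @ Am    # shared shift of both zonotopes
    return G1, G2, shift

A = np.random.randn(40, 2)  # p = 40 hidden units, n = 2 inputs
B = np.random.randn(2, 40)
G1, G2, shift = zonotope_generators(A, B)
print(G1.shape, G2.shape, shift.shape)  # (40, 2) (40, 2) (2,)
```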
Next, we show several applications for Theorem 2 by leveraging the tropical geometric structure." }, { "heading": "4 TROPICAL PERSPECTIVE TO THE LOTTERY TICKET HYPOTHESIS", "text": "The lottery ticket hypothesis was recently proposed by Frankle & Carbin (2019), in which the authors surmise the existence of sparse trainable sub-networks of dense, randomly-initialized, feedforward networks that, when trained in isolation, perform as well as the original network in a similar number of iterations. To find such sub-networks, Frankle & Carbin (2019) propose the following simple algorithm: perform standard network pruning, initialize the pruned network with the same initialization that was used in the original training setting, and train with the same number of epochs. They hypothesize that this results in a smaller network with a similar accuracy. In other words, a sub-network can have decision boundaries similar to those of the original larger network. While we do not provide a theoretical reason why this pruning algorithm performs favorably, we utilize the geometric structure that arises from Theorem 2 to reaffirm such behaviour. In particular, we show that the orientation of the dual subdivision $\delta(R(x))$ (referred to as the decision boundaries polytope), where the normals to its edges are parallel to $\mathcal{T}(R(x))$, which is a superset to the decision boundaries, is preserved after pruning with the proposed initialization algorithm of Frankle & Carbin (2019). Conversely, pruning with a different initialization at each iteration results in a significant variation in the orientation of the decision boundaries polytope and ultimately in reduced accuracy.\nTo this end, we train a neural network with 2 inputs ($n = 2$), 2 outputs, and a single hidden layer with 40 nodes ($p = 40$). We then prune the network by removing the smallest $x\%$ of the weights. The pruned network is then trained using different initializations: (i) the same initialization as the original network (Frankle & Carbin, 2019), (ii) Xavier (Glorot & Bengio, 2010), (iii) standard Gaussian, and (iv) zero mean Gaussian with variance 0.1. Figure 3 shows the decision boundaries polytope, i.e. $\delta(R(x))$, as we perform more pruning (increasing the $x\%$) with different initializations. First, we show the decision boundaries by sampling and classifying points in a grid with the trained network (first subfigure). We then plot the decision boundaries polytope $\delta(R(x))$ as per the second part of Theorem 2, denoted as the original polytope (second subfigure). While there are many overlapping vertices in the original polytope, the normals to some of the edges (the major visible edges) are parallel to the decision boundaries shown in the first subfigure of Figure 3. We later show the decision boundaries polytope for the same network under different levels of pruning. One can observe that the orientation of $\delta(R(x))$ for all different initialization schemes deviates much more from the original polytope as compared to the lottery ticket initialization. This gives an indication that lottery ticket initialization indeed preserves the decision boundaries, since it preserves the orientation of the decision boundaries polytope throughout the evolution of pruning. An alternative means to study the lottery ticket could be to directly observe the polytopes representing the functional form of the network, i.e. $\delta(H_{1,2}(x))$ and $\delta(Q_{1,2}(x))$, in lieu of the decision boundaries polytopes. However, this strategy may fail to provide a conclusive analysis of the lottery ticket, since there can exist multiple polytopes $\delta(H_{1,2}(x))$ and $\delta(Q_{1,2}(x))$ for networks with the same decision boundaries. This highlights the importance of studying the decision boundaries directly. Additional discussions and experiments are left for the appendix." }, { "heading": "5 TROPICAL NETWORK PRUNING", "text": "Network pruning has been identified as an effective approach to reduce the computational cost and memory usage during network inference. While it dates back to the work of LeCun et al. (1990) and Hassibi & Stork (1993), network pruning has recently gained more attention. This is due to the fact that most neural networks over-parameterize commonly used datasets. In network pruning, the task is to find a smaller subset of the network parameters such that the resulting smaller network has similar decision boundaries (and thus supposedly similar accuracy) to the original over-parameterized network.
In this section, we show a new geometric approach towards network pruning. In particular, and as indicated by Theorem 2, preserving the polytope δ(R(x)) preserves a superset to the decision boundaries, T(R(x)), and thus the decision boundaries themselves.

Motivational Insight. For a single hidden layer neural network, the dual subdivision to the decision boundaries is the polytope that is the convex hull of two zonotopes, where each is formed by taking the Minkowski sum of line segments (Theorem 2). Figure 4 shows an example, where pruning a neuron in the network has no effect on the dual subdivision polytope and hence no effect on performance. This occurs since the tropical hypersurface T(R(x)) before and after pruning is preserved, thus keeping the decision boundaries the same.

Problem Formulation. In light of the motivational insight, a natural question arises: given an over-parameterized binary output neural network f(x) = B max(Ax, 0), can one construct a new neural network, parameterized by sparser weight matrices Ã and B̃, such that this smaller network has a dual subdivision δ(R̃(x)) that preserves the decision boundaries of the original network?

To address this question, we propose the following optimization problem to compute Ã and B̃:

$$\min_{\tilde{\mathbf{A}}, \tilde{\mathbf{B}}} d\big(\delta(\tilde{R}(\mathbf{x})), \delta(R(\mathbf{x}))\big) = \min_{\tilde{\mathbf{A}}, \tilde{\mathbf{B}}} d\Big(\text{ConvHull}\big(\mathcal{Z}_{\tilde{\mathbf{G}}_1}, \mathcal{Z}_{\tilde{\mathbf{G}}_2}\big), \text{ConvHull}\big(\mathcal{Z}_{\mathbf{G}_1}, \mathcal{Z}_{\mathbf{G}_2}\big)\Big). \quad (1)$$

The function d(·) defines a distance between two geometric objects. Since the generators G̃1 and G̃2 are functions of Ã and B̃ (as per Theorem 2), this optimization problem can be challenging to solve. However, for pruning purposes, one can observe from Theorem 2 that if the generators G̃1 and G̃2 had fewer line segments (rows), this would correspond to fewer rows in the weight matrix Ã (sparser weights). So, we observe that if G̃1 ≈ G1 and G̃2 ≈ G2, then δ(R̃(x)) ≈ δ(R(x)), and thus the decision boundaries tend to be preserved as a consequence. Therefore, we propose the following optimization problem as a surrogate to the one in Problem (1):

$$\min_{\tilde{\mathbf{A}}, \tilde{\mathbf{B}}} \frac{1}{2}\Big(\big\|\tilde{\mathbf{G}}_1 - \mathbf{G}_1\big\|_F^2 + \big\|\tilde{\mathbf{G}}_2 - \mathbf{G}_2\big\|_F^2\Big) + \lambda_1 \big\|\tilde{\mathbf{G}}_1\big\|_{2,1} + \lambda_2 \big\|\tilde{\mathbf{G}}_2\big\|_{2,1}. \quad (2)$$

The matrix mixed norm for $\mathbf{C} \in \mathbb{R}^{n \times k}$ is defined as $\|\mathbf{C}\|_{2,1} = \sum_{i=1}^{n} \|\mathbf{C}(i,:)\|_2$, which encourages the matrix C to be row sparse, i.e. complete rows of C are zero. The first two terms in Problem (2) aim at approximating the original dual subdivision δ(R(x)) by approximating the underlying generator matrices, G1 and G2. This aims to preserve the orientation of the decision boundaries of the newly constructed network. On the other hand, the second two terms in Problem (2) act as regularizers to control the sparsity of the constructed network by controlling the sparsity in the number of line segments.
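To make the surrogate concrete, here is a minimal NumPy sketch that evaluates the objective in Problem (2) for candidate generators; function and variable names are ours:

```python
import numpy as np

def pruning_objective(G1t, G2t, G1, G2, lam1, lam2):
    # Frobenius approximation of the original generators (orientation
    # preservation) plus l_{2,1} row-sparsity regularization (pruning).
    fro = 0.5 * (np.sum((G1t - G1) ** 2) + np.sum((G2t - G2) ** 2))
    l21 = lam1 * np.linalg.norm(G1t, axis=1).sum() \
        + lam2 * np.linalg.norm(G2t, axis=1).sum()
    return fro + l21
```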
We observe that Problem (2) is not quadratic in its variables, since as per Corollary 1, G̃1 = Diag[ReLU(B̃(1,:)) + ReLU(−B̃(2,:))]Ã and G̃2 = Diag[ReLU(B̃(2,:)) + ReLU(−B̃(1,:))]Ã. However, since Problem (2) is separable in the rows of Ã and B̃, we solve Problem (2) via alternating optimization over these rows, where each sub-problem can be shown to be convex and exhibits a closed-form solution, leading to a very efficient solver. For ease of notation, we refer to ReLU(B̃(i,:)) and ReLU(−B̃(i,:)) as B̃+(i,:) and B̃−(i,:), respectively. As such, the per-row update for Ã (first linear layer) is given as follows:

$$\tilde{\mathbf{A}}(i,:) = \max\left(1 - \frac{\frac{1}{2}\big(\lambda_1\sqrt{c_1^i} + \lambda_2\sqrt{c_2^i}\big)}{\frac{1}{2}(c_1^i + c_2^i)\,\Big\|\frac{c_1^i \mathbf{G}_1(i,:) + c_2^i \mathbf{G}_2(i,:)}{\frac{1}{2}(c_1^i + c_2^i)}\Big\|_2},\; 0\right)\left(\frac{c_1^i \mathbf{G}_1(i,:) + c_2^i \mathbf{G}_2(i,:)}{\frac{1}{2}(c_1^i + c_2^i)}\right),$$

where $c_1^i$ is the $i^{\text{th}}$ element of $\mathbf{c}_1 = \text{ReLU}(\mathbf{B}(1,:)) + \text{ReLU}(-\mathbf{B}(2,:))$ and $\mathbf{c}_2 = \text{ReLU}(\mathbf{B}(2,:)) + \text{ReLU}(-\mathbf{B}(1,:))$. Similarly, the closed-form update for the $j^{\text{th}}$ element of the second linear layer is as follows:

$$\tilde{\mathbf{B}}^+(1,j) = \max\left(0,\; \frac{\tilde{\mathbf{A}}(j,:)^\top \tilde{\mathbf{G}}_{1^+}(j,:) - \lambda\|\tilde{\mathbf{A}}(j,:)\|_2}{\|\tilde{\mathbf{A}}(j,:)\|_2^2}\right),$$

where G1+ = Diag(B+(1,:))A. A similar argument can be used to update the variables B̃+(2,:), B̃−(1,:), and B̃−(2,:). The details of deriving the aforementioned update steps and the extension to the multi-class case are left to the appendix. Note that all updates are cheap, as each is expressed in a closed-form single step. In all subsequent experiments, we find that running the alternating optimization for a single iteration is sufficient to converge to a reasonable solution, thus leading to a very efficient overall solver.
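The Ã row update above is a least-squares target followed by the shrinkage (proximal) operator of the ℓ2 norm; a minimal sketch with our variable names:

```python
import numpy as np

def update_A_row(G1_row, G2_row, c1, c2, lam1, lam2):
    # c1, c2 are the i-th entries of the vectors c_1 and c_2 defined above.
    target = (c1 * G1_row + c2 * G2_row) / (0.5 * (c1 + c2))   # least-squares target
    thresh = 0.5 * (lam1 * np.sqrt(c1) + lam2 * np.sqrt(c2)) / (0.5 * (c1 + c2))
    norm = np.linalg.norm(target)
    scale = max(1.0 - thresh / norm, 0.0) if norm > 0 else 0.0  # group soft-threshold
    return scale * target   # a zeroed row corresponds to a pruned line segment
```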
Extension to Deeper Networks. While the theoretical results in Theorem 2 and Corollary 1 only hold for a shallow network in the form of (Affine, ReLU, Affine), we propose a greedy heuristic to prune much deeper networks by applying the aforementioned optimization to consecutive blocks of (Affine, ReLU, Affine), starting from the input and ending at the output of the network. This extension from a theoretical study of a two-layer network has been adopted in several works, e.g. Bibi et al. (2018).

Experiments on Tropical Pruning. Here, we evaluate the performance of the proposed pruning approach as compared to several classical approaches on several architectures and datasets. In particular, we compare our tropical pruning approach against Class Blind (CB), Class Uniform (CU) and Class Distribution (CD) (Han et al., 2015; See et al., 2016). In Class Blind, all the parameters of a layer are sorted by magnitude, and the x% with smallest magnitude are pruned. In contrast, Class Uniform prunes the parameters with the smallest x% magnitudes per node in a layer. Lastly, Class Distribution performs pruning of all parameters for each node in the layer, just as in Class Uniform, but the parameters are pruned based on the standard deviation σc of the magnitude of the parameters per node. Since fully connected layers in deep neural networks tend to have much higher memory complexity than convolutional layers, we restrict our focus to pruning fully connected layers. We train AlexNet and VGG16 on the SVHN, CIFAR10, and CIFAR100 datasets. We observe that we can prune more than 90% of the classifier parameters for both networks without affecting the accuracy. Since pruning is often a single block within a larger compression scheme that in many cases involves inexpensive fast fine-tuning, we demonstrate experimentally that our approach is competitive and sometimes outperforms other methods even when all parameters or when only the biases are fine-tuned after pruning. These experiments, in addition to many others, are left for the appendix.

Setup. To account for the discrepancy in input resolution, we adapt the architectures of AlexNet and VGG16, since they were originally trained on ImageNet (Deng et al., 2009). The fully connected layers of AlexNet and VGG16 have sizes (256,512,10) and (512,512,10) for SVHN and CIFAR10, respectively, with the last dimension increased to 100 for CIFAR100. All networks were trained to baseline test accuracies of (92%,74%,43%) for AlexNet on SVHN, CIFAR10, and CIFAR100, respectively, and (92%,92%,70%) for VGG16. To evaluate the performance of pruning, and following previous work (Han et al., 2015), we report the area under the curve (AUC) of the pruning-accuracy plot. The higher the AUC, the better the trade-off between pruning rate and accuracy. For efficiency purposes, we run the optimization in Problem (2) for a single alternating iteration to identify the rows in Ã and the elements of B̃ that will be pruned.

Results. Figure 5 shows the comparison between our tropical approach and the three popular pruning schemes on both AlexNet and VGG16 over the different datasets. Our proposed approach can indeed prune out as much as 90% of the parameters of the classifier without sacrificing much of the accuracy. For AlexNet, we achieve much better performance in pruning as compared to other methods. In particular, we are better in AUC by 3%, 3%, and 2% over other pruning methods on SVHN, CIFAR10 and CIFAR100, respectively. This indicates that the decision boundaries can indeed be preserved by preserving the dual subdivision polytope. For VGG16, we perform similarly well on both SVHN and CIFAR10 and slightly worse on CIFAR100. While the performance achieved here is comparable to the other pruning schemes, if not better, we emphasize that our contribution does not lie in outperforming state-of-the-art pruning methods, but in giving a new geometry-based perspective to network pruning. More experiments were conducted where only network biases or only the classifier are fine-tuned after pruning. Retraining only biases can be sufficient, as they do not contribute to the orientation of the decision boundaries polytope (and effectively the decision boundaries), but only to its translation. Discussions on biases and more results are left for the appendix.

Comparison Against Tropical Geometry Approaches. A recent tropical geometry inspired approach was proposed to address the problem of network pruning. In particular, Smyrnis & Maragos (2019; 2020) (SM) proposed an interesting yet heuristic algorithm to directly approximate the tropical rational by approximating the Newton polytope. For fair comparison, and following the setup of SM, we train LeNet on MNIST and monitor the test accuracy as we prune its neurons. We report (neurons kept, SM, ours) triplets in (%) as follows: (100, 98.60, 98.84), (90, 95.71, 98.82), (75, 95.05, 98.8), (50, 95.52, 98.71), (25, 91.04, 98.36), (10, 92.79, 97.99), and (5, 92.93, 94.91). It is clear that tropical pruning outperforms SM by a margin that reaches 7%. This demonstrates that our theoretically motivated approach is still superior to more recent pruning approaches." }, { "heading": "6 TROPICAL ADVERSARIAL ATTACKS", "text": "DNNs are notorious for being sensitive to imperceptible noise at their inputs, referred to as adversarial attacks. Several works investigated DNNs' decision boundaries in the presence of such adversaries. For instance, Khoury & Hadfield-Menell (2018) analyzed the high dimensional geometry of adversarial examples by means of manifold reconstruction, while He et al. (2018) crafted adversarial attacks by estimating the distance to the decision boundaries using random search directions. In this work, we show how Theorem 2 can be leveraged to construct a tropical geometric adversarial attack. Due to the space limitation, we leave the extensive formulation, the algorithm to find the adversary, and the experimental results on synthetic and real datasets to the appendix.
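As a pointer to that appendix formulation, the attack rests on a dual view in which a first-layer perturbation ξA1 and an input perturbation η are tied by the linear system A1η = ξA1x0; the sketch below (our naming, not the paper's algorithm) recovers one equivalent input perturbation by least squares:

```python
import numpy as np

def equivalent_input_perturbation(A1, xi_A1, x0):
    # A1: (p, n) first-layer weights; xi_A1: (p, n) layer perturbation;
    # x0: (n,) input. Solves A1 @ eta = xi_A1 @ x0 in the least-squares
    # sense, so that g((A1 + xi_A1) x0) matches f(x0 + eta).
    rhs = xi_A1 @ x0
    eta, *_ = np.linalg.lstsq(A1, rhs, rcond=None)
    return eta
```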
" }, { "heading": "7 CONCLUSION", "text": "We leverage tropical geometry to characterize the decision boundaries of neural networks in the form (Affine, ReLU, Affine) and relate them to geometric objects such as zonotopes. We then provide a tropical perspective to support the lottery ticket hypothesis, prune networks, and design adversarial attacks. A natural extension is a compact derivation for the characterization of the decision boundaries of convolutional neural networks and graph convolutional networks." }, { "heading": "8 PRELIMINARIES AND DEFINITIONS.", "text": "Fact 1. P +̃ Q = {p + q : p ∈ P, q ∈ Q} is the Minkowski sum between two sets P and Q.

Fact 2. Let f be a tropical polynomial and let a ∈ ℕ. Then P(f^a) = aP(f).

Let both f and g be tropical polynomials. Then

Fact 3. P(f ⊙ g) = P(f) +̃ P(g). (3)

Fact 4. P(f ⊕ g) = ConvexHull(V(P(f)) ∪ V(P(g))). (4)

Note that V(P(f)) is the set of vertices of the polytope P(f).

Definition 7. Upper Face of a Polytope P: UF(P) is an upper face of polytope P in ℝⁿ if x + t·e_n ∉ P for any x ∈ UF(P) and t > 0, where e_n is a canonical vector. Formally, UF(P) = {x : x ∈ P, x + t·e_n ∉ P ∀t > 0}." }, { "heading": "9 EXAMPLES", "text": "We revise the second example in Figure 1. Note that the two-dimensional tropical polynomial f(x, y) can be written as follows:

f(x, y) = (x ⊕ y ⊕ 0) ⊙ ((x ⊙ 1) ⊕ (y ⊙ 1) ⊕ 0)
= (x ⊙ x ⊙ 1) ⊕ (x ⊙ y ⊙ 1) ⊕ (x ⊙ 0) ⊕ (y ⊙ x ⊙ 1) ⊕ (y ⊙ y ⊙ 1) ⊕ (y ⊙ 0) ⊕ (0 ⊙ x ⊙ 1) ⊕ (0 ⊙ y ⊙ 1) ⊕ (0 ⊙ 0)
= (x² ⊙ 1) ⊕ (x ⊙ y ⊙ 1) ⊕ (x) ⊕ (y ⊙ x ⊙ 1) ⊕ (y² ⊙ 1) ⊕ (y) ⊕ (x ⊙ 1) ⊕ (y ⊙ 1) ⊕ (0)
= (x² ⊙ 1) ⊕ (x ⊙ y ⊙ 1) ⊕ (x) ⊕ (y² ⊙ 1) ⊕ (y ⊙ 1) ⊕ (0)
= (x² ⊙ y⁰ ⊙ 1) ⊕ (x ⊙ y ⊙ 1) ⊕ (x ⊙ y⁰ ⊙ 0) ⊕ (x⁰ ⊙ y² ⊙ 1) ⊕ (x⁰ ⊙ y ⊙ 1) ⊕ (x⁰ ⊙ y⁰ ⊙ 0)

The first equality follows since multiplication is distributive in rings and semirings. The second equality follows since 0 is the multiplication identity. The penultimate equality follows since y ⊙ 1 ≥ y, x ⊙ y ⊙ 1 = y ⊙ x ⊙ 1, and the dominated monomial x ⊙ 1 is absorbed, for all x, y. Therefore, the tropical hypersurface T(f) is defined as the set of (x, y) where f achieves its maximum at least twice in its monomials. That is to say,

T(f) = {f(x, y) = x² ⊙ 1 = x ⊙ y ⊙ 1} ∪ {f(x, y) = x² ⊙ 1 = x} ∪ {f(x, y) = x = 0} ∪ {f(x, y) = x = x ⊙ y ⊙ 1} ∪ {f(x, y) = y ⊙ 1 = 0} ∪ {f(x, y) = y ⊙ 1 = x ⊙ y ⊙ 1} ∪ {f(x, y) = y ⊙ 1 = y² ⊙ 1} ∪ {f(x, y) = y² ⊙ 1 = x ⊙ y ⊙ 1}.

This set T(f) is shown by the red lines in the second example in Figure 1. As for constructing the dual subdivision δ(f), we project the upper faces of the Newton polygon P(f) to ℝ². Note that P(f) with biases, as per the definition in Section 2, is given as P(f) = ConvHull{(a_i, c_i) ∈ ℝ² × ℝ, ∀i = 1, ..., 6}, where (a_i, c_i) are the exponents and biases in the monomials of f, respectively. Therefore, P(f) = ConvHull{(2, 0, −1), (1, 1, 1), (1, 0, 0), (0, 2, 1), (0, 1, 1), (0, 0, 0)} as shown in Figure 6(a). As per Definition 7, the set of upper faces of P(f) is:

UF(P(f)) = ConvHull{(0, 2, 1), (1, 1, 1), (0, 1, 1)} ∪ ConvHull{(0, 1, 1), (1, 1, 1), (1, 0, 0)} ∪ ConvHull{(0, 1, 1), (1, 0, 0), (0, 0, 0)} ∪ ConvHull{(1, 1, 1), (2, 0, −1), (1, 0, 0)}.

This set UF(P(f)) is then projected, through π, to ℝ², shown in the yellow dashed lines in Figure 6(a), to construct the dual subdivision δ(f) in Figure 6(b). For example, note that the point (0, 2, 1) ∈ UF(P(f)) and thereafter π(0, 2, 1) = (0, 2, 0) ∈ δ(f)." }, { "heading": "10 PROOF OF THEOREM 2", "text": "Theorem 2.
For a bias-free neural network in the form of f(x) : Rn → R2 where A ∈ Zp×n and B ∈ Z2×p, let R(x) = H1(x) Q2(x)⊕H2(x) Q1(x) be a tropical polynomial. Then:\n• Let B = {x ∈ Rn : f1(x) = f2(x)} define the decision boundaries of f , then B ⊆ T (R(x)).\n• δ (R(x)) = ConvHull (ZG1 ,ZG2). ZG1 is a zonotope in Rn with line segments {(B+(1, j) + B−(2, j))[A+(j, :),A−(j, :)]}pj=1 and shift (B−(1, :)+B+(2, :))A−. ZG2 is a zonotope in Rn with line segments {(B−(1, j) + B+(2, j))[A+(j, :),A−(j, :)]}pj=1 and shift (B+(1, :) + B−(2, :))A−. The line segment (B+(1, j) + B−(2, j))[A+(j, : ),A−(j, :)] has end points A+(j, :) and A−(j, :) in Rn and scaled by (B+(1, j) + B−(2, j)).\nNote that A+ = max(A, 0) and A− = max(−A, 0) where the max(.) is element-wise. The line segment (B(1, j)+ + B(2, j)−)[A(j, :)+,A(j, :)−] is one that has the end points A(j, :)+ and A(j, :)− in Rn and scaled by the constant B(1, j)+ + B(2, j)−.\nProof. For the first part, recall from Theorem1 that both f1 and f2 are tropical rationals and hence,\nf1(x) = H1(x)−Q1(x) f2(x) = H2(x)−Q2(x)\nThus\nB = {x ∈ Rn : f1(x) = f2(x)} = {x ∈ Rn : H1(x)−Q1(x) = H2(x)−Q2(x)} = {x ∈ Rn : H1(x) +Q2(x) = H2(x) +Q1(x)} = {x ∈ Rn : H1(x) Q2(x) = H2(x) Q1(x)}\nRecall that the tropical hypersurface is defined as the set of x where the maximum is attained by two or more monomials. Therefore, the tropical hypersurface of R(x) is the set of x where the maximum is attained by two or more monomials in (H1(x) Q2(x)), or attained by two or more monomials in\n(H2(x) Q1(x)), or attained by monomials in both of them in the same time, which is the decision boundaries. Hence, we can rewrite that as\nT (R(x)) = T (H1(x) Q2(x)) ∪ T (H2(x) Q1(x)) ∪ B.\nTherefore B ⊆ T (R(x)). For the second part of the Theorem, we first use the decomposition proposed by Zhang et al. (2018); Berrada et al. (2016) to show that for a network f(x) = Bmax (Ax,0), it can be decomposed as tropical rational as follows\nf(x) = ( B+ −B− ) ( max(A+x,A−x)−A−x ) = [ B+ max(A+x,A−x) + B−A−x\n] − [ B−max(A+x,A−x) + B+A−x ] .\nTherefore, we have that H1(x) +Q2(x) = ( B+(1, :) + B−(2, :) ) max(A+x,A−x)\n+ ( B−(1, :) + B+(2, :) ) A−x\nH2(x) +Q1(x) = ( B−(1, :) + B+(2, :) ) max(A+x,A−x)\n+ ( B+(1, :) + B−(2, :) ) A−x.\nTherefore, note that:\nδ(R(x)) = δ (( H1(x) Q2(x) ) ⊕ ( H2(x) Q1(x) )) 4 = ConvexHull ( δ ( H1(x) Q2(x) ) , δ ( H2(x) Q1(x)\n)) 3 = ConvexHull ( δ ( H1(x) ) +̃δ ( Q2(x) ) , δ ( H2(x) ) +̃δ ( Q1(x) )) .\nNow observe that H1(x) = ∑p j=1 ( B+(1, j) + B−(2, j) ) max ( A+(j, :),A−(j, :)x ) tropically is\ngiven as follows H1(x) = pj=1 [ xA +(j,:) ⊕ xA−(j,:) ]B+(1,j) B−(2,j) , thus we have that :\nδ(H1(x)) = ( B+(1, 1) + B−(2, 1) ) δ ( xA +(1,:) ⊕ xA −(1,:) ) +̃ . . .\n+̃ ( B+(1, p) + B−(2, p) )( δ(xA +(p,:) ⊕ xA −(p,:)) ) = ( B+(1, 1) + B−(2, 1) ) ConvexHull ( A+(1, :),A−(1, :) ) +̃ . . .\n+̃ ( B+(1, p) + B−(2, p) ) ConvexHull ( A+(p, :),A−(p, :) ) .\nThe operator +̃ indicates a Minkowski sum between sets. Note that ConvexHull ( A+(i, :),A−(i, :\n) )\nis the convexhull between two points which is a line segment in Zn with end points that are {A+(i, :),A−(i, :)} scaled with B+(1, i) + B−(2, i). Observe that δ(F1(x)) is a Minkowski sum of line segments which is is a zonotope. Moreover, note that Q2(x) = (B−(1, :) + B+(2, :))A−x tropically is given as follows Q2(x) = pj=1xA −(j,:)(B +(1,j) B−(2,j)) . One can see that δ(Q2(x)) is the Minkowski sum of the points {(B−(1, j) −B+(2, j))A−(j, :)}∀j in Rn (which is a standard sum) resulting in a point. 
Lastly, δ(H1(x))+̃δ(Q2(x)) is a Minkowski sum between a zonotope and a single point which corresponds to a shifted zonotope. A similar symmetric argument can be applied for the second part δ(H2(x))+̃δ(Q1(x)).\nIt is also worthy to mention that the extension to network with multi class output is trivial. In that case all of the analysis can be exactly applied studying the decision boundary between any two classes (i, j) where B = {x ∈ Rn : fi(x) = fj(x)} and the rest of the proof will be exactly the same." }, { "heading": "11 PROOF OF PROPOSITION 1", "text": "Proposition 1. The zonotope formed by p line segments in Rn with arbitrary end points {[ui1,ui2]} p i=1 is equivalent to the zonotope formed by the line segments {[ui1 − ui2,0]} p i=1 with a shift of ∑p i=1 u i 2.\nProof. Let Uj be a matrix with Uj(:, i) = uij , i = 1, . . . , p, w be a column-vector with w(i) = wi, i = 1, . . . , p and 1p is a column-vector of ones of length p. Then, the zonotope Z formed by the Minkowski sum of line segments with arbitrary end points can be defined as:\nZ = { p∑ i=1 wiu i 1 + (1− wi)ui2;wi ∈ [0, 1], ∀ i } = { U1w −U2w + U21p, w ∈ [0, 1]p\n} = { (U1 −U2)w + U21p, w ∈ [0, 1]p }\n= { (U1 −U2)w, w ∈ [0, 1]p } +̃ { U21p } .\nSince the Minkowski sum of between a polytope and a point is a translation; thereafter, the proposition follows directly from Definition 6.\nCorollary 2. The generators of ZG1 ,ZG2 in Theorem 2 can be defined as G1 = Diag[(B+(1, :)) + (B−(2, :))]A and G2 = Diag[(B+(2, :)) + (B−(1, :))]A, both with shift (B−(1, :) + B+(2, :) + B+(1, :) + B−(2, :))A−, where Diag(v) arranges v in a diagonal matrix.\nProof. This follows directly by applying Proposition 1 to the second bullet point of Theorem 2." }, { "heading": "11.1 OPTIMIZATION OF OBJECTIVE 2 OF THE BINARY CLASSIFIER", "text": "min Ã,B̃\n1\n2 ∥∥∥G̃1 −G1∥∥∥2 F + ∥∥∥∥12G̃2 −G2 ∥∥∥∥2 F + λ1 ∥∥∥G̃1∥∥∥ 2,1 + λ2 ∥∥∥G̃2∥∥∥ 2,1 . (5)\nNote that G̃1 = Diag [ ReLU(B̃(1, :)) + ReLU(−B̃(2, :)) ] Ã, G̃2 = Diag [ ReLU(B̃(2, :)) +\nReLU(−B̃(1, :)) ] Ã. Note that G1 = Diag [ ReLU(B(1, :)) + ReLU(−B(2, :)) ] A and G2 =\nDiag [ ReLU(B(2, :)) + ReLU(−B(1, :)) ] A. For ease of notation, we refer to ReLU(B̃(i, :)) and\nReLU(−B̃(i, :)) as B̃+(i, :) and B̃−(i, :), respectively. We solve the problem with co-ordinate descent an alternate over variables.\nUpdate Ã.\nÃ← arg min Ã\n1\n2 ∥∥∥Diag (c1) Ã−G1∥∥∥2 F + 1 2 ∥∥∥Diag(c2)Ã−G2∥∥∥2 F + λ1 ∥∥∥Diag(c1)Ã∥∥∥ 2,1 + λ2 ∥∥∥Diag(c2)Ã∥∥∥ 2,1 ,\nwhere c1 = ReLU(B(1, :)) + ReLU(−B(2, :)) and c2 = ReLU(B(2, :)) + ReLU(−B(1, :)). Note that the problem is separable per-row of Ã. Therefore, the problem reduces to updating rows of à independently and the problem exhibits a closed form solution.\nÃ(i, :) = arg min Ã(i,:)\n1\n2 ∥∥∥ci1Ã(i, :)−G1(i, :)∥∥∥2 2 + 1 2 ∥∥∥ci2Ã(i, :)−G2(i, :)∥∥∥2 2 + (λ1 √ ci1 + λ2 √ ci2) ∥∥∥Ã(i, :)∥∥∥ 2\n= arg min Ã(i,:)\n1\n2 ∥∥∥∥Ã(i, :)− ci1G1(i, :) + ci2G2(i, :)1 2 (c i 1 + c i 2) ∥∥∥∥2 2 + 1 2 λ1 √ ci1 + λ2 √ ci2 1 2 (c i 1 + c i 2) ∥∥∥Ã(i, :)∥∥∥ 2\n= max 1− 1 2 λ1 √ ci1 + λ2 √ ci2\n1 2 (c i 1 + c i 2)\n1∥∥∥ci1G1(i,:)+ci2G2(i,:)1 2 (c i 1+c i 2) ∥∥∥ 2 , 0 (ci1G1(i, :) + ci2G2(i, :) 1 2 (c i 1 + c i 2) ) .\nUpdate B̃+(1, :).\nB̃+(1, :) = arg min B̃+(1,:)\n1\n2 ∥∥∥Diag(B̃+(1, :)) Ã−C1∥∥∥2 F + λ1 ∥∥∥Diag(B̃+(1, :)) à + C2∥∥∥ 2,1 , s.t. B̃+(1, :) ≥ 0.\nNote that C1 = G1 − Diag ( B̃−(2, :) ) à and where Diag ( B̃−(2, :) ) Ã. 
Note the problem is\nseparable in the coordinates of B̃+(1, :) and a projected gradient descent can be used to solve the problem in such a way as:\nB̃+(1, j) = arg min B̃+(1,j)\n1\n2 ∥∥∥B̃+(1, j)Ã(j, :)−C1(j, :)∥∥∥2 2 + λ1 ∥∥∥B̃+(1, j)Ã(j, :) + C2(j, :)∥∥∥ 2 , s.t. B̃+(1, j) ≥ 0.\nA similar symmetric argument can be used to update the variables B̃+(2, :), B̃+(1, :) and B̃−(2, :)." }, { "heading": "12 ADAPTING OPTIMIZATION 2 FOR MULTI-CLASS CLASSIFIER", "text": "Note that Theorem 2 describes a superset to the decision boundaries of a binary classifier through the dual subdivision R(x), i.e. δ(R(x)). For a neural network f with k classes, a natural extension for it is to analyze the pair-wise decision boundaries of of all k-classes. Thus, let T (Rij(x)) be the superset to the decision boundaries separating classes i and j. Therefore, a natural extension to the geometric loss in Equation 1 is to preserve the polytopes among all pairwise follows:\nmin Ã,B̃ ∑ ∀[i,j]∈S d ( ConvexHull ( ZG̃(i+,j−) ,ZG̃(j+,i−) ) ,ConvexHull ( ZG(i+,j−) ,ZG(j+,i−) )) . (6)\nThe set S is all possible pairwise combinations of the k classes such that S = {{i, j},∀i 6= j, i = 1, . . . , k, j = 1, . . . , k}. The generator Z(G̃(i,j)) is the zonotope with the generator matrix G̃(i+,j−) = Diag [ ReLU(B̃(i, :)) + ReLU(−B̃(j, :)) ] Ã. However, such an approach is generally\ncomputationally expensive, particularly, when k is very large. To this end, we make the following observation that G̃(i+,j−) can be equivalently written as a Minkowski sum between two sets zonotopes\nwith the generators Gi+ = Diag [ ReLU(B̃(i, :) ] à and Gj− = Diag [ ReLU(B̃j−) ] Ã. That is to\nsay, ZG̃(i+,j−) = ZG̃i++̃ZG̃j− . This follows from the associative property of Minkowski sums given as follows:\nFact 5. Let {Si}ni=1 be the set of n line segments. Then we have that\nS = S1+̃ . . . +̃Sn = P +̃V\nwhere the sets P = +̃j∈C1Sj and V = +̃j∈C2Sj where C1 and C2 are any complementary partitions of the set {Si}ni=1.\nHence, G̃(i+,j−) can be seen a concatenation between G̃i+ and G̃j− . Thus, the objective in 6 can be expanded as follows:\nmin Ã,B̃ ∑ ∀{i,j}∈S d ( ConvexHull ( ZG̃(i+,j−) ,ZG̃(j+,i−) ) ,ConvexHull ( ZG(i+,j−) ,ZG(j+,i−) )) = min\nÃ,B̃ ∑ ∀{i,j}∈S d ( ConvexHull ( ZG̃i+ +̃ZG̃j− ,ZG̃+j +̃ZG̃i− ) ,ConvexHull ( ZGi+ +̃ZGj− ,ZG+j +̃ZGi− )) ≈ min\nÃ,B̃ ∑ ∀[i,j]∈S ∥∥∥(G̃i+ G̃j− ) − ( Gi+ Gj− )∥∥∥2 F + ∥∥∥(G̃i− G̃j+ ) − ( Gi− Gj+ )∥∥∥2 F\n= min Ã,B̃ ∑ ∀{i,j}∈S 1 2 ∥∥∥G̃i+ −Gi+∥∥∥2 F + 1 2 ∥∥∥G̃i− −Gi−∥∥∥2 F + 1 2 ∥∥∥G̃j+ −Gj+∥∥∥2 F + 1 2 ∥∥∥G̃j− −Gj−∥∥∥2 F\n= min Ã,B̃ k − 1 2 k∑ i=1 ∥∥∥G̃i+ −Gi+∥∥∥2 F + ∥∥∥G̃i− −Gi−∥∥∥2 F .\nThe approximation follows in a similar argument to the binary classifier case. The last equality follows from a counting argument. We solve the objective for all multi-class networks in the experiments with alternating optimization in a similar fashion to the binary classifier case. Similarly to the binary classification approach, we introduce the ‖.‖2,1 to enforce sparsity constraints for pruning purposes. Therefore the overall objective has the form:\nmin Ã,B̃\n1\n2 k∑ i=1 ∥∥∥G̃i+ −Gi+∥∥∥2 F + ∥∥∥G̃i− −Gi−∥∥∥2 F + λ (∥∥∥G̃i+∥∥∥ 2,1 + ∥∥∥G̃i−∥∥∥ 2,1 ) .\nFor completion, we derive the updates for à and B̃.\nUpdate Ã.\nà = arg min à k∑ i=1 1 2 (∥∥∥Diag(B̃+(i, :)) Ã−Gi+∥∥∥2 F + ∥∥∥Diag(B̃−(i, :)) Ã−Gi−∥∥∥2 F ) + λ (∥∥∥Diag(B̃+(i, :)) Ã∥∥∥ 2,1 + ∥∥∥Diag(B̃−(i, :)) Ã∥∥∥ 2,1 ) .\nSimilar to the binary classification, the problem is separable in the rows of Ã. 
and a closed form solution in terms of the proximal operator of `2 norm follows naturally for each Ã(i, :).\nUpdate B̃+(i, :).\nB̃+(i, :) = arg min B̃+(i,:)\n1\n2 ∥∥∥Diag(B̃+(i, :)) Ã− G̃i+∥∥∥2 F + λ ∥∥∥Diag(B̃+(i, :)) Ã∥∥∥ 2,1 , s.t. B̃+(i, :) ≥ 0.\nNote that the problem is separable per coordinates of B+(i, :) and each subproblem is updated as:\nB̃+(i, j) = arg min B̃+(i,j)\n1\n2 ∥∥∥B̃+(i, j)Ã(j, :)− G̃i+(j, :)∥∥∥2 2 + λ ∥∥∥B̃+(i, j)Ã(j, :)∥∥∥ 2 , s.t. B̃+(i, j) ≥ 0\n= arg min B̃+(i,j)\n1\n2 ∥∥∥B̃+(i, j)Ã(j, :)− G̃i+(j, :)∥∥∥2 2 + λ ∣∣∣B̃(i, j)∣∣∣ ∥∥∥Ã(j, :)∥∥∥ 2 , s.t. B̃+(i, j) ≥ 0\n= max ( 0,\nÃ(j, :)>G̃i+(j, :)− λ‖Ã(j, :)‖2 ‖Ã(j, :)‖22\n) .\nA similar argument can be used to update B̃−(i, :) ∀i. Finally, the parameters of the pruned network will be constructed A← à and B← B̃+ − B̃−." }, { "heading": "13 TROPICAL ADVERSARIAL ATTACKS.", "text": "Dual View to Adversarial Attacks. For a classifier f : Rn → Rk and input x0 classified as c, a standard formulation for targeted adversarial attacks to a different class t is defined as:\nmin η D(η) s.t. arg max i fi(x0 + η) = t 6= c (7)\nThis objective aims to compute the lowest energy input noise η (measured by D) such that the the new sample (x0 + η) crosses the decision boundaries of f to a new classification region. Here, we present a dual view to adversarial attacks. Instead of designing a sample noise η such that (x0 + η) belongs to a new decision region, one can instead fix x0 and perturb the network parameters to move the decision boundaries in a way that x0 appears in a new classification region. In particular, let A1 be the first linear layer of f , such that f(x0) = g(A1x0). One can now perturb A1 to alter the decision boundaries and relate this parameter perturbation to the input perturbation as follows:\ng((A1 + ξA1)x0) = g (A1x0 + ξA1x0) = g(A1x0 + A1η) = f(x0 + η). (8)\nFrom this dual view, we observe that traditional adversarial attacks are intimately related to perturbing the parameters of the first linear layer through the linear system: A1η = ξA1x0. The two views and formulations are identical under such condition. With this analysis, Theorem 2 provides explicit means to geometrically construct adversarial attacks by perturbing the decision boundaries. In particular, since the normals to the dual subdivision polytope δ(R(x)) of a given DNN represent the tropical hypersurface T (R(x)), which is a superset to the decision boundaries set B, ξA1 can be designed to sufficiently perturb the dual subdivision resulting in a change in the network prediction of x0 to the targeted class t. Based on this observation, we design an optimization problem that generates two sets of perturbations, an input perturbation and parameter perturbation, that are equivalent to each other.\nFormulation. Based on this observation, we formulate the problem as follows:\nmin η,ξA1\nD1(η) +D2(ξA1) s.t. − loss(g(A1(x0 + η)), t) ≤ −1; ‖η‖∞ ≤ 1;\n− loss(g(A1 + ξA1)x0, t) ≤ −1; (x0 + η) ∈ [0, 1]n, ‖ξA1‖∞,∞ ≤ 2, A1η = ξA1x0. (9)\nThe loss is the standard cross-entropy loss. The first row of constraints ensures that the network prediction is the desired target class t when the input x0 is perturbed by η, and equivalently by perturbing the first linear layer A1 by ξA1 . This is identical to f1 as proposed by Carlini & Wagner (2016). Moreover, the third and fourth constraints guarantee that the perturbed input is feasible and that the perturbation is bounded, respectively. 
The fifth constraint is to limit the maximum perturbation on the first linear layer, while the last constraint enforces the dual equivalence between input perturbation and parameter perturbation. The function D2 captures the perturbation of the dual subdivision polytope upon perturbing the first linear layer by ξA1 . For a single hidden layer neural network parameterized as (A1 + ξA1) ∈ Rp×n and B ∈ R2×p for the first and second layers respectively, D2 can capture the perturbations in each of the two zonotopes discussed in Theorem 2 and we define it as:\nD2(ξA1) = 1\n2 2∑ j=1 ∥∥Diag(B+(j, :))ξA1∥∥2F + ∥∥Diag(B−(j, :))ξA1∥∥2F . (10) We solve Problem (9) with a penalty method on the linear equality constraints, where each penalty step is solved with ADMM Boyd et al. (2011) in a similar fashion to the work of Xu et al. (2018).\nThe function D2(ξA) captures the perturbation in the dual subdivision polytope such that the dual subdivision of the network with the first linear layer A1 is similar to the dual subdivision of the network with the first linear layer A1 + ξA1 . This can be generally formulated as an approximation to the following distance function d ( ConvHull ( ZG̃1 ,ZG̃2 ) ,ConvHull (ZG1 ,ZG2) ) ,\nwhere G̃1 = Diag [ ReLU(B̃(1, :)) + ReLU(−B̃(2, :)) ] ( à + ξA1 ) , G̃2 = Diag [ ReLU(B̃(2, :\n)) + ReLU(−B̃(1, :)) ] (\nà + ξA1 ) , G1 = Diag [ ReLU(B̃(1, :)) + ReLU(−B̃(2, :)) ] à and G2 =\nDiag [ ReLU(B̃(2, :)) + ReLU(−B̃(1, :)) ] Ã. In particular, to approximate the function d, one can\nuse a similar argument as in used in network pruning 5 such that D2 approximates the generators of the zonotopes directly as follows:\nD2(ξA1) = 1\n2 ∥∥∥G̃1 −G1∥∥∥2 F + 1 2 ∥∥∥G̃2 −G2∥∥∥2 F\n= 1\n2 ∥∥∥Diag(B+(1, :))ξA1∥∥∥2 F + 1 2 ∥∥∥Diag(B−(1, :))ξA1∥∥∥2 F\n+ 1\n2 ∥∥∥Diag(B+(2, :))ξA1∥∥∥2 F + 1 2 ∥∥∥Diag(B−(2, :))ξA1∥∥∥2 F .\nThis can thereafter be extended to multi-class network with k classes as follows D2(ξA1) = 1 2 ∑k j=1 ∥∥∥Diag(B+(j, :))ξA1∥∥∥2 F + ∥∥∥Diag(B−(j, :))ξA1∥∥∥2 F . Following Xu et al. (2018), we take D1(η) = 12 ‖η‖ 2 2. Therefore, we can write 9 as follows:\nmin η,ξA\nD1(η) + k∑ j=1 ∥∥∥Diag(B+(j, :))ξA∥∥∥2 F + ∥∥∥Diag(B−(j, :))ξA∥∥∥2 F .\ns.t. − loss(g(A1(x0 + η)), t) ≤ −1, −loss(g((A1 + ξA1)x0), t) ≤ −1, (x0 + η) ∈ [0, 1]n, ‖η‖∞ ≤ 1, ‖ξA1‖∞,∞ ≤ 2, A1η − ξA1x0 = 0.\nTo enforce the linear equality constraints A1η − ξA1x0 = 0, we use a penalty method, where each iteration of the penalty method we solve the sub-problem with ADMM updates. That is, we solve the following optimization problem with ADMM with increasing λ such that λ → ∞. For ease of notation, lets denote L(x0 + η) = −loss(g(A1(x0 + η)), t), and L̄(A1) = −loss(g((A1 + ξA1)x0), t).\nmin η,z,w,ξA1\n‖η‖22 + k∑ j=1 ∥∥∥Diag(ReLU(B(j, :))ξA1∥∥∥2 F + ∥∥∥Diag(ReLU(−B(j, :)))ξA1∥∥∥2 F\n+ L(x0 + z) + h1(w) + h2(ξA1) + λ‖A1η − ξA1 x0‖22 + L̄(A1). s.t. 
η = z z = w.\nwhere\nh1(η) = { 0, if (x0 + η) ∈ [0, 1]n, ‖η‖∞ ≤ 1 ∞, else h2(ξA1) = { 0, if ‖ξA1‖∞,∞ ≤ 2 ∞, else .\nThe augmented Lagrangian is given as follows: L(η,w, z, ξA1 ,u,v) := ‖η‖22 + L(x0 + z) + h1(w) + k∑ j=1 ∥∥Diag(B+(j, :))ξA1∥∥2F + ∥∥Diag(B−(j, :))ξA1∥∥2F + L̄(A1) + h2(ξA1) + λ‖A1η − ξA1 x0‖22 + u>(η − z) + v>(w − z)\n+ ρ\n2 (‖η − z‖22 + ‖w − z‖22).\nThereafter, ADMM updates are given as follows:\n{ηk+1,wk+1} = arg min η,w L(η,w, zk, ξkA1 ,u k,vk),\nzk+1 = arg min z L(ηk+1,wk+1, z, ξkA1 ,u k,vk),\nξk+1A1 = arg min ξA1 L(ηk+1,wk+1, zk+1, ξA1 ,uk,vk).\nuk+1 = uk + ρ(ηk+1 − zk+1), vk+1 = vk + ρ(wk+1 − zk+1).\nUpdating η:\nηk+1 = arg min η\n‖η‖22 + λ‖A1η − ξA1 x0‖22 + u>η + ρ\n2 ‖η − z‖22\n= ( 2λA>1 A1 + (2 + ρ)I )−1( 2λA>1 ξ k A1x0 + ρz k − uk ) .\nUpdating w:\nwk+1 = arg min w\nvk > w + h1(w) +\nρ 2 ‖w − zk‖22\n= arg min w\n1\n2 ∥∥∥∥w − (zk − vkρ )∥∥∥∥2\n2\n+ 1\nρ h1(w).\nThe update w is separable in coordinates as follows:\nwk+1 = min(1− x0, 1) : zk − 1/ρvk > min(1− x0, 1) max(−x0,− 1) : zk − 1/ρvk < max(−x0,− 1) zk − 1/ρvk : otherwise\nUpdating z:\nzk+1 = arg min z L(x0 + z)− uk\n> z− vk>z + ρ\n2\n( ‖ηk+1 − z‖22 + ‖wk+1 − z‖22 ) .\nLiu et al. (2019) showed that the linearized ADMM converges for some non-convex problems. Therefore, by linearizing L and adding Bergman divergence term ηk/2‖z− zk‖22, we can then update z as follows:\nzk+1 = 1\nηk + 2ρ\n( ηkzk + ρ ( ηk+1 + 1\nρ uk + wk+1 +\n1 ρ vk ) −∇L(zk + x0) ) .\nIt is worthy to mention that the analysis until this step is inspired by Xu et al. (2018) with modifications to adapt our new formulation.\nUpdating ξA:\nξk+1A = arg min ξA ‖ξA1‖2F + λ‖ξA1x0 −A1η‖22 + L̄(A1) s.t. ‖ξA1‖∞,∞ ≤ 2.\nThe previous problem can be solved with proximal gradient methods.\nExperimental Setup. For the tropical adversarial attacks experiments, there are five different hyper parameters which are\n1 : The upper bound for the infinite norm of δ. 2 : The upper bound for the‖.‖∞,∞of the perturbation on the first linear layer. λ : Regularizer to enforce the equality between input perturbation and first layer perturbation η : Bergman divergence constant. ρ : ADMM constant.\nAlgorithm 1: Solving Problem (9) Input: A1 ∈ Rp×n,B ∈ Rk×p,x0 ∈ Rn, t, λ > 0, γ > 1,K > 0, ξA1 = 0p×n, η1 = z1 = w1 = z1 = u1 = w1 = 0n. Output: η, ξA1 Initialize: ρ = ρ0 while not converged do\nFor all of the experiments, we set the values of 2, λ, η and ρ to 1, 10−3, 2.5 and 1, respectively. As for 1 it is set to 0.1 upon attacking MNIST images of digit 4 set to 0.2 for all other MNIST images.\nMotivational Insight to the Dual View. We train a network with 2 inputs, 50 hidden nodes and 2 outputs on a synthetic dataset where we then then solve Equation 9 for a given x0 shown in black in Figure 7. We show the decision boundaries with and without the perturbation ξA1 at the first linear layer. As show in Figure 7, perturbing an edge of the dual subdivision polytope, by perturbing the first linear layer, corresponds to perturbing the decision boundaries and results in the misclassification of x0. As expected, perturbing different decision boundaries corresponds to perturbing different edges of the dual subdivision. Note that the generated input perturbation η is sufficient as well into fooling the network in classifying x0 +η, and by construction is equivalent to perturb the decision boundaries of the network. We show later another example where we alternate the position of x0 and construct successful adversaries in both the input space, and the parameter space. 
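Looking back at the ADMM steps above: the w-update is an element-wise projection onto a box (the bound symbols printed as bare "1" and "2" in the constraints appear to be ε1 and ε2, whose ε was lost in extraction); a minimal sketch with our naming:

```python
import numpy as np

def update_w(z, v, x0, rho, eps1):
    # Euclidean projection of z - v/rho onto the feasible box
    # {w : x0 + w in [0, 1]^n and |w|_inf <= eps1}, matching the
    # piecewise w-update listed above.
    target = z - v / rho
    lo = np.maximum(-x0, -eps1)
    hi = np.minimum(1.0 - x0, eps1)
    return np.clip(target, lo, hi)
```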
Furthermore, we conduct experiments on MNIST images in a later section, which show that successful adversarial attacks η can be designed by solving Problem (9). Figure 7 shows another example where the sample to be attacked is closer to a different decision boundary. Observe how the edge of the decision boundary polytope corresponding to that decision boundary has been altered accordingly.

MNIST Experiments. Here, we design perturbations to misclassify MNIST images. Figure 8 shows several adversarial examples that change the network prediction for digits 8 and 9 to digits 7, 5, and 4, respectively. In some cases, the perturbation η is as small as ε1 = 0.1, where x0 ∈ [0, 1]^n. Several other adversarial results are reported in Figure 9. We again emphasize that our approach is not meant to be compared with (or to beat) state-of-the-art adversarial attacks, but rather to provide a novel geometrically inspired perspective that can shed new light on this field." }, { "heading": "14 EXPERIMENTAL DETAILS AND SUPPLEMENTAL RESULTS", "text": "In this section, we describe the settings and the values of the hyper-parameters used in the experiments. Moreover, we show some further supplemental results in addition to those in the main manuscript." }, { "heading": "14.1 TROPICAL VIEW TO THE LOTTERY TICKET HYPOTHESIS.", "text": "We first conduct some supplemental experiments in addition to those of Section 4. In particular, we conduct further experiments re-affirming the lottery ticket hypothesis on three more synthetic datasets, in a similar experimental setup to the one shown in Figure 3. The new supplemental experiments are shown in Figure 10. A similar conclusion is reached: the lottery ticket initialization consistently preserves the decision boundaries polytope better than other initialization schemes over different percentages of pruning.

A natural question is whether it is necessary to visualize the dual subdivision polytope of the decision boundaries, i.e. δ(R(x)), where R(x) = H1(x) ⊙ Q2(x) ⊕ H2(x) ⊙ Q1(x), as opposed to directly visualizing the dual subdivisions of the tropical polynomials δ(H{1,2}(x)) and δ(Q{1,2}(x)) for the tropical re-affirmation of the lottery ticket hypothesis. That is similar to asking whether it is necessary to visualize and study the decision boundaries polytope δ(R(x)) as compared to the dual subdivision polytopes of the functional form of the network, since for the 2-output neural network described in Theorem 2 we have that f1(x) = H1(x) − Q1(x) and f2(x) = H2(x) − Q2(x). We demonstrate this with an experiment that highlights the differences between these two views. For this purpose, we train a single hidden layer neural network on the same dataset shown in Figure 3. We perform several iterations of pruning in a similar fashion to Section 5 and visualize at each iteration both the decision boundaries polytope and all the dual subdivisions of the aforementioned tropical polynomials representing the functional form of the network, i.e. δ(H{1,2}(x)) and δ(Q{1,2}(x)). It is to be observed from Figure 11 that although the decision boundaries were barely affected by the lottery ticket pruning, the zonotopes representing the functional form of the network endure large variations. That is to say, investigating the dual subdivisions describing the functional form of the networks through the four zonotopes δ(H{1,2}(x)) and δ(Q{1,2}(x)) is not indicative enough of the behaviour of the decision boundaries." }, { "heading": "14.2 TROPICAL PRUNING", "text": "Toy Setup.
To verify our theoretical work, we first start by pruning small networks that are in the form of an Affine layer followed by ReLU, followed by another Affine layer. We train the aforementioned network on two 2D datasets with a varying number of hidden nodes (100, 200, 300). In this setup, we observe from the figure that when the assumptions of Theorem 2 hold, our proposed tropical pruning is indeed competitive and, in many cases, outperforms the other pruning schemes that are not aware of the decision boundaries.

Experimental Setup. In all experiments of the tropical pruning section, all algorithms are run for only a single iteration, where λ increases linearly from 0.02 with a factor of 0.01. Increasing λ corresponds to increasing weight sparsity, and we keep increasing it until the sparsification reaches 100%.

Supplemental Experiments. We conduct more experiments on AlexNet and VGG16 on the SVHN, CIFAR10 and CIFAR100 datasets. We examine the performance when only the biases of the classifier are fine-tuned after pruning, as shown in Figure 13. Moreover, a similar experiment is reported for the same networks when the biases of the complete network are fine-tuned, as in Figure 14." } ]
2020
null
SP:6a900a782e440dc5225d8ecb39155f594fa2cfb5
[ "The paper proposes a constellation model that performs feature clustering and encoding dense part representations. The constellation module is placed after convolutional blocks. The module clusters cell features and calculates distance map between each cluster centroids and cell feature. The self-attention mechanism is applied on the distance map and concatenated to the original feature map to complement the feature representation. The resulting feature representation contains part representations. The few-shot experiments on the mini-Imagenet, CIFAR-FS, and FC100 datasets show the effectiveness of the proposed method." ]
The success of deep convolutional neural networks builds on top of the learning of effective convolution operations, capturing a hierarchy of structured features via filtering, activation, and pooling. However, the explicit structured features, e.g. object parts, are not expressive in the existing CNN frameworks. In this paper, we tackle the few-shot learning problem and make an effort to enhance structured features by expanding CNNs with a constellation model, which performs cell feature clustering and encoding with a dense part representation; the relationships among the cell features are further modeled by an attention mechanism. With the additional constellation branch to increase the awareness of object parts, our method is able to attain the advantages of the CNNs while making the overall internal representations more robust in the few-shot learning setting. Our approach attains a significant improvement over the existing methods in few-shot learning on the CIFAR-FS, FC100, and mini-ImageNet benchmarks.
[ { "affiliations": [], "name": "Weijian Xu" }, { "affiliations": [], "name": "Yifan Xu" }, { "affiliations": [], "name": "Huaijin Wang" }, { "affiliations": [], "name": "Zhuowen Tu" } ]
[ { "authors": [ "Luca Bertinetto", "Joao F Henriques", "Philip HS Torr", "Andrea Vedaldi" ], "title": "Meta-learning with differentiable closed-form solvers", "venue": "arXiv preprint arXiv:1805.08136,", "year": 2018 }, { "authors": [ "Nicolas Carion", "Francisco Massa", "Gabriel Synnaeve", "Nicolas Usunier", "Alexander Kirillov", "Sergey Zagoruyko" ], "title": "End-to-end object detection with transformers", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Yinbo Chen", "Xiaolong Wang", "Zhuang Liu", "Huijuan Xu", "Trevor Darrell" ], "title": "A new meta-baseline for few-shot learning", "venue": "arXiv preprint arXiv:2003.04390,", "year": 2020 }, { "authors": [ "Adam Coates", "Andrew Y Ng" ], "title": "Learning feature representations with k-means", "venue": "In Neural networks: Tricks of the trade,", "year": 2012 }, { "authors": [ "Navneet Dalal", "Bill Triggs" ], "title": "Histograms of oriented gradients for human detection", "venue": "In CVPR,", "year": 2005 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Li Fei-Fei", "Rob Fergus", "Pietro Perona" ], "title": "One-shot learning of object categories", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2006 }, { "authors": [ "Pedro F Felzenszwalb", "Daniel P Huttenlocher" ], "title": "Pictorial structures for object recognition", "venue": "International journal of computer vision,", "year": 2005 }, { "authors": [ "Pedro F Felzenszwalb", "Ross B Girshick", "David McAllester", "Deva Ramanan" ], "title": "Object detection with discriminatively trained part-based models", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2009 }, { "authors": [ "Robert Fergus", "Pietro Perona", "Andrew Zisserman" ], "title": "Object class recognition by unsupervised scale-invariant learning", "venue": "In CVPR,", "year": 2003 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "arXiv preprint arXiv:1703.03400,", "year": 2017 }, { "authors": [ "Weifeng Ge", "Xiangru Lin", "Yizhou Yu" ], "title": "Weakly supervised complementary parts models for fine-grained image classification from the bottom up", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Geoffrey E Hinton", "Sara Sabour", "Nicholas Frosst" ], "title": "Matrix capsules with em routing", "venue": null, "year": 2018 }, { "authors": [ "Ruibing Hou", "Hong Chang", "MA Bingpeng", "Shiguang Shan", "Xilin Chen" ], "title": "Cross attention network for few-shot classification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Shell Xu Hu", "Pablo G Moreno", "Yang Xiao", "Xi Shen", "Guillaume Obozinski", "Neil D Lawrence", "Andreas Damianou" ], "title": "Empirical bayes transductive meta-learning with synthetic gradients", "venue": null, "year": 2004 }, { "authors": [ "Adam Kosiorek", "Sara Sabour", "Yee Whye Teh", "Geoffrey E Hinton" ], "title": "Stacked capsule autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 
2019 }, { "authors": [ "Philipp Krähenbühl", "Carl Doersch", "Jeff Donahue", "Trevor Darrell" ], "title": "Data-dependent initializations of convolutional neural networks", "venue": "arXiv preprint arXiv:1511.06856,", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Svetlana Lazebnik", "Cordelia Schmid", "Jean Ponce" ], "title": "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories", "venue": "In CVPR,", "year": 2006 }, { "authors": [ "Hankook Lee", "Sung Ju Hwang", "Jinwoo Shin" ], "title": "Self-supervised label augmentation via input transformations", "venue": "th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Kwonjoon Lee", "Subhransu Maji", "Avinash Ravichandran", "Stefano Soatto" ], "title": "Meta-learning with differentiable convex optimization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Aoxue Li", "Weiran Huang", "Xu Lan", "Jiashi Feng", "Zhenguo Li", "Liwei Wang" ], "title": "Boosting few-shot learning with adaptive margin loss", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Bin Liu", "Yue Cao", "Yutong Lin", "Qi Li", "Zheng Zhang", "Mingsheng Long", "Han Hu" ], "title": "Negative margin matters: Understanding margin in few-shot classification", "venue": "arXiv preprint arXiv:2003.12060,", "year": 2020 }, { "authors": [ "Yanbin Liu", "Juho Lee", "Minseop Park", "Saehoon Kim", "Eunho Yang", "Sung Ju Hwang", "Yi Yang" ], "title": "Learning to propagate labels: Transductive propagation network for few-shot learning", "venue": "arXiv preprint arXiv:1805.10002,", "year": 2018 }, { "authors": [ "David G Lowe" ], "title": "Distinctive image features from scale-invariant keypoints", "venue": "International journal of computer vision,", "year": 2004 }, { "authors": [ "Nikhil Mishra", "Mostafa Rohaninejad", "Xi Chen", "Pieter Abbeel" ], "title": "A simple neural attentive metalearner", "venue": "arXiv preprint arXiv:1707.03141,", "year": 2017 }, { "authors": [ "Nikhil Mishra", "Mostafa Rohaninejad", "Xi Chen", "Pieter Abbeel" ], "title": "A simple neural attentive metalearner", "venue": null, "year": 2018 }, { "authors": [ "Tsendsuren Munkhdalai", "Xingdi Yuan", "Soroush Mehri", "Adam Trischler" ], "title": "Rapid adaptation with conditionally shifted neurons", "venue": "arXiv preprint arXiv:1712.09926,", "year": 2017 }, { "authors": [ "Boris N Oreshkin", "Alexandre Lacoste", "Pau Rodriguez" ], "title": "Tadam: Task dependent adaptive metric for improved few-shot learning", "venue": "arXiv preprint arXiv:1805.10123,", "year": 2018 }, { "authors": [ "Yuxin Peng", "Xiangteng He", "Junjie Zhao" ], "title": "Object-part attention model for fine-grained image classification", "venue": "IEEE Transactions on Image Processing,", "year": 2017 }, { "authors": [ "Lei Qi", "Xiaoqiang Lu", "Xuelong Li" ], "title": "Exploiting spatial relation for fine-grained image classification", "venue": "Pattern Recognition,", "year": 2019 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a 
model for few-shot learning", "venue": null, "year": 2016 }, { "authors": [ "Sara Sabour", "Nicholas Frosst", "Geoffrey E Hinton" ], "title": "Dynamic routing between capsules", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ruslan Salakhutdinov", "Joshua B Tenenbaum", "Antonio Torralba" ], "title": "Learning with hierarchical-deep models", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 1958 }, { "authors": [ "David Sculley" ], "title": "Web-scale k-means clustering", "venue": "In Proceedings of the 19th international conference on World wide web,", "year": 2010 }, { "authors": [ "Marcel Simon", "Erik Rodner" ], "title": "Neural activation constellations: Unsupervised part model discovery with convolutional networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Erik B Sudderth", "Antonio Torralba", "William T Freeman", "Alan S Willsky" ], "title": "Learning hierarchical models of scenes, objects, and parts", "venue": "In ICCV,", "year": 2005 }, { "authors": [ "Flood Sung", "Yongxin Yang", "Li Zhang", "Tao Xiang", "Philip HS Torr", "Timothy M Hospedales" ], "title": "Learning to compare: Relation network for few-shot learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Pavel Tokmakov", "Yu-Xiong Wang", "Martial Hebert" ], "title": "Learning compositional representations for few-shot recognition", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Yao-Hung Hubert Tsai", "Nitish Srivastava", "Hanlin Goh", "Ruslan Salakhutdinov" ], "title": "Capsules with inverted dot-product attention routing", "venue": "arXiv preprint arXiv:2002.04764,", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Markus Weber", "Max Welling", "Pietro Perona" ], "title": "Unsupervised learning of models for recognition", "venue": "In ECCV,", "year": 2000 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for 
deep neural networks", "venue": null, "year": 2017 }, { "authors": [ "Chen Xing", "Negar Rostamzadeh", "Boris Oreshkin", "Pedro O O Pinheiro" ], "title": "Adaptive cross-modal few-shot learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sung Whan Yoon", "Jun Seo", "Jaekyun Moon" ], "title": "Tapnet: Neural network augmented with task-adaptive projection for few-shot learning", "venue": null, "year": 1905 }, { "authors": [ "Alan L Yuille", "Peter W Hallinan", "David S Cohen" ], "title": "Feature extraction from faces using deformable templates", "venue": "International journal of computer vision,", "year": 1992 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Matthew D Zeiler", "Rob Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "Jian Zhang", "Chenglong Zhao", "Bingbing Ni", "Minghao Xu", "Xiaokang Yang" ], "title": "Variational few-shot learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Song-Chun Zhu", "David Mumford" ], "title": "A stochastic grammar of images", "venue": "Now Publishers Inc,", "year": 2007 }, { "authors": [ "Yousong Zhu", "Chaoyang Zhao", "Jinqiao Wang", "Xu Zhao", "Yi Wu", "Hanqing Lu" ], "title": "Couplenet: Coupling global structure with local parts for object detection", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Tremendous progress has been made in both the development and the applications of the deep convolutional neural networks (CNNs) (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; Szegedy et al., 2015; He et al., 2016; Xie et al., 2017). Visualization of the internal CNN structure trained on e.g. ImageNet (Deng et al., 2009) has revealed the increasing level of semantic relevance for the learned convolution kernels/filters to the semantics of the object classes, displaying bar/edge like patterns in the early layers, object parts in the middle layers, and face/object like patterns in the higher layers (Zeiler & Fergus, 2014). In general, we consider the learned convolution kernels being somewhat implicit about the underlying objects since they represent projections/mappings for the input but without the explicit knowledge about the parts in terms of their numbers, distributions, and spatial configurations.\nOn the other hand, there has been a rich history about explicit object representations starting from deformable templates (Yuille et al., 1992), pictorial structure (Felzenszwalb & Huttenlocher, 2005), constellation models (Weber et al., 2000; Fergus et al., 2003; Sudderth et al., 2005; Fei-Fei et al., 2006), and grammar-based model (Zhu & Mumford, 2007). These part-based models (Weber et al., 2000; Felzenszwalb & Huttenlocher, 2005; Fergus et al., 2003; Sudderth et al., 2005; Zhu & Mumford, 2007) share three common properties in the algorithm design: (1) unsupervised learning, (2) explicit clustering to obtain the parts, and (3) modeling to characterize the spatial configuration of the parts. Compared to the CNN architectures, these methods are expressive with explicit part-based representation. They have pointed to a promising direction for object recognition, albeit a lack of strong practice performance on the modern datasets. Another line of object recognition system with the part concept but trained discriminatively includes the discriminative trained part-based model (DPM) (Felzenszwalb et al., 2009) and the spatial pyramid matching method (SPM) (Lazebnik et al., 2006). In the context of deep learning, efforts exist to bring the explicit part representation into deep hierarchical structures (Salakhutdinov et al., 2012).\nThe implicit and explicit feature representations could share mutual benefits, especially in fewshot learning where training data is scarce: CNNs may face difficulty in learning a generalized representation due to lack of sufficient training data, whereas clustering and dictionary learning ∗indicates equal contribution\nprovide a direct means for data abstraction. In general, end-to-end learning of both the implicit and explicit part-based representations is a viable and valuable means in machine learning. We view convolutional features as an implicit part-based representation since they are learned through back-propagation via filtering processes. On the other hand, an explicit representation can be attained by introducing feature clustering that captures the data abstraction/distribution under a mixture model.\nIn this paper, we develop an end-to-end framework to combine the implicit and explicit part-based representations for the few-shot classification task by seamlessly integrating constellation models with convolution operations. In addition to keeping a standard CNN architecture, we also employ a cell feature clustering module to encode the potential object parts. 
This clustering procedure is similar to the clustering/codebook learning for appearance in the constellation model (Weber et al., 2000). The cell feature clustering process generates a dense distance map. We further model the relations among the cells using a self-attention mechanism, resembling the spatial configuration design in the constellation model (Weber et al., 2000). Thus, we name our method constellation networks (ConstellationNet). We demonstrate the effectiveness of our approach on standard few-shot benchmarks, including FC100 (Oreshkin et al., 2018), CIFAR-FS (Bertinetto et al., 2018) and mini-ImageNet (Vinyals et al., 2016), by showing a significant improvement over the existing methods. An ablation study also demonstrates that the effectiveness of ConstellationNet is not achieved by simply increasing the model complexity using e.g. more convolution channels or deeper and wider convolution layers (WRN-28-10 (Zagoruyko & Komodakis, 2016)) (see the ablation study in Table 3 and Figure 2 (e))." }, { "heading": "2 RELATED WORK", "text": "Few-Shot Learning. Recently, few-shot learning has attracted much attention in the deep learning community (Snell et al., 2017; Lee et al., 2019). Current few-shot learning is typically formulated as a meta-learning problem (Finn et al., 2017), in which an effective feature embedding is learned for generalization across novel tasks. We broadly divide the existing few-shot learning approaches into three categories: (1) Gradient-based methods optimize the feature embedding with gradient descent during the meta-test stage (Finn et al., 2017; Bertinetto et al., 2018; Lee et al., 2019). (2) Metric-based methods learn a fixed optimal embedding with a distance-based prediction rule (Vinyals et al., 2016; Snell et al., 2017). (3) Model-based methods obtain a conditional feature embedding via a weight predictor (Mishra et al., 2017; Munkhdalai et al., 2017). Here we adopt ProtoNet (Snell et al., 2017), a popular metric-based framework, in our approach and boost the generalization ability of the feature embeddings with explicit structured representations from the constellation model. Recently, Tokmakov et al. (2019) propose a compositional regularization on the image with its attribute annotations, which is different from our unsupervised part-discovery strategy.

Part-Based Constellation/Discriminative Models. The constellation model family (Weber et al., 2000; Felzenszwalb & Huttenlocher, 2005; Fergus et al., 2003; Sudderth et al., 2005; Fei-Fei et al., 2006; Zhu & Mumford, 2007) is mostly generative/expressive and shares two commonalities in the representation: (1) clustering/codebook learning for the appearance and (2) modeling of the spatial configurations. The key difference among these approaches lies in how the spatial configuration is modeled: Gaussian distributions (Weber et al., 2000); pictorial structure (Felzenszwalb & Huttenlocher, 2005); joint shape model (Fergus et al., 2003); hierarchical graphical model (Sudderth et al., 2005); grammar-based (Zhu & Mumford, 2007). These constellation models represent a promising direction for object recognition but are not practically competitive compared with deep learning based approaches. There are also discriminative models: the discriminatively trained part-based model (DPM) (Felzenszwalb et al., 2009) is a typical method in this vein, where object parts (as HOG features (Dalal & Triggs, 2005)) and their configurations (a star model) are learned jointly in a discriminative way.
The spatial pyramid matching method (SPM) (Lazebnik et al., 2006) has no explicit parts but instead builds on top of different levels of grids with a codebook learned on top of SIFT features (Lowe, 2004). DPM and SPM are of practical significance for object detection and recognition. In our approach, we implement the constellation model with cell feature clustering and attention-based cell relation modeling to realize the appearance learning and the spatial configuration modeling, respectively.

Part models are extensively studied in fine-grained image classification and object detection to provide spatial guidance for filtering uninformative object proposals (Simon & Rodner, 2015; Peng et al., 2017; Zhu et al., 2017; Ge et al., 2019; Qi et al., 2019). Related to our work, Neural Activation Constellations (NAC) (Simon & Rodner, 2015) introduces the constellation model to perform unsupervised part model discovery with convolutional networks. Our work is different from NAC in three aspects: (1) The algorithmic mechanisms behind Simon & Rodner (2015) and ours are different. Simon & Rodner (2015) implements a traditional Gaussian-based constellation module to model the spatial configuration and part selection on top of a fixed pre-trained CNN. In our ConstellationNet, however, the part representation and spatial configuration are modeled by the cell feature clustering and the self-attention based cell relation module, which is general-purpose, modularized and recursive. (2) In Simon & Rodner (2015), the constellation module is optimized with an EM-like algorithm, which is separate from the CNN optimization. Our constellation modules are seamlessly integrated into current CNNs and jointly optimized with them. (3) Our ConstellationNet uses the dense cell features from the CNN feature maps, which considers all positions in the images as potential parts and models their relations. In contrast, Simon & Rodner (2015) extracts sparse part representations (i.e. it uses at most one part proposal per channel and selects even fewer parts later), which may not fully utilize the rich information in the CNN feature maps." }, { "heading": "3 FEW-SHOT LEARNING", "text": "In a standard classification problem, we aim to learn a model trained on a dataset Dbase that can generalize its classification ability to an unseen test set Dnovel belonging to the same categories. In the few-shot classification problem, we instead require Dbase and Dnovel to be formed from different categories to emphasize the model's generalization ability on novel categories, where we denote the training categories as Cbase, the test categories as Cnovel, and Cbase ∩ Cnovel = ∅ to ensure fairness. In the training stage (a.k.a. meta-train stage), metric-based few-shot learning approaches (Snell et al., 2017; Vinyals et al., 2016; Oreshkin et al., 2018) usually learn a feature extractor φ(x) on the dataset Dbase to obtain a generic feature embedding by optimizing the loss L(φ):

L(φ) = E_{(x,y)}∼Dbase ℓ({(φ(x), y)})   (1)

where {(x, y)} is a sampled mini-batch of data points and ℓ(·) is usually an episodic few-shot loss (Vinyals et al., 2016) or a standard cross-entropy loss (Chen et al., 2020).

In the inference stage (a.k.a. meta-test stage), a typical few-shot benchmark evaluates the model on K-way, N-shot classification tasks T drawn from Dnovel, where each task has a support set and a query set, i.e. T = (T^supp, T^query). The support set T^supp contains K classes and each class has N images (e.g. K = 5, N ∈ {1, 5}). Following Snell et al.
(2017), the prediction ŷ′ of a query image x′ ∈ T^query is given by the label of the nearest prototype c_k from T^supp under a cosine similarity d(·, ·):

ŷ′ = argmax_k d(φ(x′), c_k),   c_k = (1/N) Σ_{(x,y)∈T^supp, y=k} φ(x).   (2)

An extended description of the few-shot learning framework can be found in Appendix A.1. The generalization ability of the feature extractor φ(x) can be improved in terms of the training scheme (e.g. episodic learning (Vinyals et al., 2016)), the network design (e.g. task conditioning (Oreshkin et al., 2018)) or the objective function (e.g. learnable distance (Sung et al., 2018)). In our method, we propose a novel network design by inserting constellation models into CNNs and strengthening the intermediate features." }, { "heading": "4 CONSTELLATION MODEL", "text": "The concept of constellation was introduced to the few-shot learning scenario in early years (Fei-Fei et al., 2006), in which the appearance and the shape are independently learned in a mixture model. In our work, we revisit the constellation model in an end-to-end learning framework: First, we define a cell feature as the individual local feature at a position in the feature map (see Figure 1). We then employ cell feature clustering to model the underlying distribution of input cell features, implying a part discovery procedure. We further obtain the distance map of the cell features from the clustering and then perform cell relation modeling to build spatial relationships." }, { "heading": "4.1 CELL FEATURE CLUSTERING", "text": "In convolutional neural networks (CNNs), the convolutional filters are learned to detect discriminative patterns from low-level to high-level through back-propagation (Zeiler & Fergus, 2014). In fact, the backward signal in the back-propagation is not strictly needed to obtain a pattern detector. With the feature map in the forward step of the CNN, we are able to cluster the individual features at each location of the feature map (a.k.a. cell features) into multiple centers and employ the cluster centers as filters (Coates & Ng, 2012; Krähenbühl et al., 2015). Assume we obtain a convolutional feature map U with batch size B, spatial size H×W and C channels. We disassemble the feature map U ∈ R^{B×H×W×C} into a cell feature set U = {u_1, u_2, ..., u_n}, where n = BHW and u_i ∈ R^C is a cell feature. Naively, we can run a k-means algorithm on the input cell features U to solve the clustering objective:

min Σ_i Σ_k m_ik ||u_i − v_k||_2^2   s.t. m_ik ∈ {0, 1}, Σ_k m_ik = 1   (3)

where V = {v_1, v_2, ..., v_K} is a set of cluster centers and m_ik indicates whether the input cell feature u_i is assigned to cluster center v_k. The clustering-based filters V can model the underlying cell feature distributions and capture the most frequent features, which can be explicitly interpreted as meaningful part patterns/part types. The hard assignment map m_i = (m_i1, m_i2, ..., m_iK) of the input cell feature u_i onto the cluster centers can be used as a part-based representation, providing alternative information to the next layer in the CNN.

However, two issues remain unsolved in this naive design: Firstly, CNNs are typically optimized in a stochastic gradient descent (SGD) manner. Thus, in each forward step, only a mini-batch of images is processed to provide cell features, which implies that the cluster centers cannot capture the global feature distribution across the whole dataset. Secondly, the hard assignment map carries limited information due to its discrete representation.
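To make the naive formulation concrete, the sketch below implements the hard-assignment clustering of Eq. 3 with a plain per-batch k-means. It is illustrative only: the function name, the default of 64 clusters and the random initialization are our choices, not prescribed by the paper, and the mini-batch soft variant described next is what the model actually uses.

```python
import torch

def naive_cell_kmeans(feature_map, num_clusters=64, iters=10):
    """Hard k-means over cell features (Eq. 3) -- illustrative sketch only.

    feature_map: (B, H, W, C) tensor; each spatial position is a cell feature.
    Returns cluster centers (K, C) and one-hot assignments (B*H*W, K).
    """
    B, H, W, C = feature_map.shape
    cells = feature_map.reshape(-1, C)                 # U = {u_i}, n = B*H*W
    # Initialize centers from randomly chosen cells (an assumption).
    idx = torch.randperm(cells.shape[0])[:num_clusters]
    centers = cells[idx].clone()                       # V = {v_k}
    for _ in range(iters):
        # Squared Euclidean distances d_ik = ||u_i - v_k||^2.
        dist = torch.cdist(cells, centers) ** 2        # (n, K)
        assign = dist.argmin(dim=1)                    # hard m_ik
        for k in range(num_clusters):
            members = cells[assign == k]
            if members.numel() > 0:                    # skip empty clusters
                centers[k] = members.mean(dim=0)
    one_hot = torch.nn.functional.one_hot(assign, num_clusters).float()
    return centers, one_hot
```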
Therefore, inspired by Sculley (2010), we design a mini-batch soft k-means algorithm to cluster the cell features approximately:

• Initialization. Randomly initialize the global cluster centers V = {v_1, v_2, ..., v_K} and a counter s = (s_1, s_2, ..., s_K) = 0.
• Cluster Assignment. In the forward step, given input cell features U = {u_1, u_2, ..., u_n}, we compute the distance vector d_i = (d_i1, d_i2, ..., d_iK) between the input cell feature u_i and all cluster centers V. We then compute the soft assignment m_ik ∈ R and generate the current mini-batch centers v′_k:

d_ik = ||u_i − v_k||_2^2,   m_ik = exp(−β d_ik) / Σ_j exp(−β d_ij),   v′_k = (Σ_i m_ik u_i) / (Σ_i m_ik)   (4)

where β > 0 is an inverse temperature.
• Centroid Movement. We form a count update ∆s = Σ_i m_i by summing all assignment maps m_i = (m_i1, m_i2, ..., m_iK). The current mini-batch centers v′_k are then merged into the global centers v_k with a momentum coefficient η:

v_k ← (1 − η) v_k + η v′_k,   η = λ / (s_k + ∆s_k)   (5)

• Counter Update. The counter s is updated and the distance vectors {d_i} are reshaped and returned:

s ← s + ∆s   (6)

By gradually updating the global cluster centers, the above algorithm is able to address the issue of limited data in a mini-batch. In addition, we reshape the distance vectors {d_i} of all input cell features into a distance map D ∈ R^{B×H×W×K}. Each distance vector d_i can be seen as a learned cell code in codebook (dictionary) learning, which encodes a soft assignment of the visual word (i.e. cell feature) onto the codewords (i.e. cluster centers) and implies a part representation. The distance map D can then be viewed as a cell code map that represents the spatial distribution of identified parts, which is passed to the following layers. Empirically, we observe that when u_i and v_k are L2 normalized, the training procedure is more stable and the Euclidean distance d_ik is equivalent to a cosine similarity up to an affine transformation. Details of the cell feature clustering can be found in Appendix A.9." },
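A minimal PyTorch sketch of the four steps above (Eqs. 4-6) might look as follows, assuming the β = 100 and λ = 1 defaults from Appendix A.6. The module name and buffer handling are our own choices; the L2 normalization (including re-normalizing the centers after the momentum update) follows the empirical note above rather than an exact published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CellFeatureClustering(nn.Module):
    """Mini-batch soft k-means over cell features (Eqs. 4-6) -- a sketch.

    Global centers V and the counter s are registered as buffers so they
    persist across mini-batches but receive no gradient.
    """
    def __init__(self, channels, num_clusters=64, beta=100.0, lam=1.0):
        super().__init__()
        self.beta, self.lam = beta, lam
        self.register_buffer(
            "centers", F.normalize(torch.randn(num_clusters, channels), dim=1))
        self.register_buffer("counter", torch.zeros(num_clusters))

    def forward(self, feature_map):                          # (B, H, W, C)
        B, H, W, C = feature_map.shape
        u = F.normalize(feature_map.reshape(-1, C), dim=1)   # L2-normalized cells
        d = torch.cdist(u, self.centers) ** 2                # d_ik = ||u_i - v_k||^2
        m = torch.softmax(-self.beta * d, dim=1)             # soft assignment m_ik
        if self.training:
            with torch.no_grad():
                delta_s = m.sum(dim=0)                       # count update (Eq. 6)
                v_batch = (m.t() @ u) / delta_s.clamp(min=1e-8).unsqueeze(1)
                eta = self.lam / (self.counter + delta_s).clamp(min=1e-8)
                # Centroid movement (Eq. 5): v_k <- (1 - eta) v_k + eta v'_k.
                self.centers.mul_(1 - eta.unsqueeze(1)).add_(eta.unsqueeze(1) * v_batch)
                # Keep centers on the unit sphere (assumption, matching the note above).
                self.centers.copy_(F.normalize(self.centers, dim=1))
                self.counter.add_(delta_s)
        return d.reshape(B, H, W, -1)                        # distance map D
```

At evaluation time the `self.training` branch is skipped, so the global centers stay fixed in the forward step, in line with the clarification in Appendix A.9.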
{ "heading": "4.2 CELL RELATION AND SPATIAL CONFIGURATION MODELING", "text": "Before the deep learning era, traditional constellation models (Fei-Fei et al., 2006) decomposed visual information into appearance and shape representations. The appearance of different parts in the image is treated independently, while the shape of the parts is assumed to have spatial connections. In our constellation model, we likewise establish the spatial relationships among the individual part-based representations at different locations based on the distance map. Specifically, we apply the self-attention mechanism (Vaswani et al., 2017) to build the spatial relationships and enhance the representation, instead of using probabilistic graphical models as in prior work (Fei-Fei et al., 2006).

In cell relation modeling, we add a positional encoding P ∈ R^{B×H×W×K}, following Carion et al. (2020), for the spatial locations to the distance map D and obtain the input feature map F_I for the query and key layers. For the value layer, we directly flatten the distance map D into another input feature map F′_I:

F_I = SpatialFlatten(D + P) ∈ R^{B×HW×K},   F′_I = SpatialFlatten(D) ∈ R^{B×HW×K}   (7)

The input feature maps F_I, F′_I are transformed into the query, key and value {F^q, F^k, F^v} ⊂ R^{B×HW×K} by three linear layers {W^q, W^k, W^v} ⊂ R^{K×K}, which further compute the output feature F_A:

[F^q, F^k, F^v] = [F_I W^q, F_I W^k, F′_I W^v]   (8)

F_A = Att(F^q, F^k, F^v) = softmax(F^q (F^k)^⊤ / √K) F^v   (9)

The softmax of the dot product between the query and key matrices, F^q (F^k)^⊤ ∈ R^{B×HW×HW}, calculates the similarity scores in the embedding space among features across the spatial dimension. This encodes the spatial relationships of the input features and leads to an enhanced output feature representation F_A. Besides, the √K in the denominator stabilizes the gradient. In practice, we adopt multi-head attention to model the feature relations in embedding subspaces:

F_MHA = MultiHeadAtt(F^q, F^k, F^v) = [F_1, ..., F_J] W,   F_j = Att(F^q_j, F^k_j, F^v_j)   (10)

In J-head attention, the aforementioned similarity scores in the K′ = K/J dimensional embedding subspaces are calculated using the query, key and value from the j-th head, i.e. {F^q_j, F^k_j, F^v_j} ⊂ R^{B×HW×K′}. The output features F_j of each head are computed following Eq. 9. All the output features {F_1, ..., F_J} are concatenated back into a K-dimensional embedding and further processed with a linear layer W ∈ R^{K×K} to generate the multi-head output features F_MHA. Such a multi-head attention setting can provide more diverse feature relations without introducing extra parameters." },
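As one possible realization of Eqs. 7-10, the sketch below feeds the distance map through PyTorch's built-in multi-head attention, which bundles the W^q, W^k, W^v projections and the output linear layer W of Eq. 10. A learned positional embedding is used here as a stand-in for the sine encoding (following Carion et al. (2020)) used in the paper, and the class name is our own.

```python
import torch
import torch.nn as nn

class CellRelationModule(nn.Module):
    """Self-attention over the distance map (Eqs. 7-10) -- a sketch.

    Query/key take D + positional encoding, value takes D alone (Eq. 7).
    """
    def __init__(self, num_clusters, height, width, heads=8):
        super().__init__()
        # Learned positional embedding (assumption; the paper uses sine encoding).
        self.pos = nn.Parameter(torch.randn(height * width, num_clusters) * 0.02)
        self.attn = nn.MultiheadAttention(num_clusters, heads, batch_first=True)

    def forward(self, dist_map):                       # (B, H, W, K)
        B, H, W, K = dist_map.shape
        d = dist_map.reshape(B, H * W, K)              # SpatialFlatten(D)
        qk = d + self.pos                              # F_I = D + P
        out, _ = self.attn(query=qk, key=qk, value=d)  # multi-head attention
        return out.reshape(B, H, W, K)                 # enhanced features F_MHA
```

With K = 64 clusters and J = 8 heads, each head operates in a K′ = K/J = 8 dimensional subspace, matching Eq. 10.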
{ "heading": "4.3 INTEGRATE CONSTELLATION MODEL WITH CNNS", "text": "Our constellation model has the capability to capture explicit structured features and encode spatial relations among the cell features. The output features yield informative visual cues which are able to strengthen the convolutional features. Thus, as shown in Figure 1, we place the constellation model after the convolution operation to extract its unique explicit features and concatenate them with the original convolutional feature map. A following 1 × 1 convolutional layer is used on the concatenated features to restore the channels of the convolutional feature map. In Table 3, we provide evidence that merging features from the constellation model into the CNN backbone can significantly improve the representation ability. In contrast, increasing the channels in the CNN alone to double the parameters (second row in Table 3) only improves the performance marginally. Optionally, we found it useful to adopt an auxiliary loss when training the constellation model in deeper networks (e.g. ResNet-12). On top of each constellation model, we conduct a standard classification to acquire additional regularization." }, { "heading": "4.4 WHY CLUSTERING AND SELF-ATTENTION (CLUSTERING MAP + POSITIONAL ENCODING)?", "text": "As described in Sections 1 and 2, classical constellation models (Fergus et al., 2003; Felzenszwalb & Huttenlocher, 2005) extract parts with their spatial relationships; they are expressive but do not produce competitive results on modern image benchmarks. CNN models (Krizhevsky et al., 2012; He et al., 2016) attain remarkable results on large-scale image benchmarks (Deng et al., 2009) but are limited when training data is scarce. We take inspiration from the traditional constellation models, but with a realization that overcomes their previous modeling limitations.

The main contribution of our work is a constellation module/block that performs cell-wise clustering, followed by self-attention on the clustering distance map plus positional encoding. This separates our work from previous attempts, e.g. the non-local block (Wang et al., 2018), in which long-range non-linear averaging is performed on the convolution features (neither clustering nor positional encoding for the spatial configuration). The main properties of our constellation block include: (1) A cell-based dense representation, as opposed to the sparse part representation in (Weber et al., 2000), which allows the cells to be recursively modeled in the self-attention unit in a modularized and general-purpose way. (2) Clustering (codebook learning) to generate a cell code that attains abstraction and is not dependent on the CNN feature dimensions. (3) Positional encoding (as in Carion et al. (2020)) for cells to encode the spatial locations. (4) A tokenized representation as expressive parts (code/clustering distance map + positional encoding) for the cells. (5) Self-attention to jointly model the cell code and positional encoding to capture the relationships between the parts together with their spatial configurations." }, { "heading": "5 EXPERIMENT", "text": "" }, { "heading": "5.1 DATASETS", "text": "We adopt three standard benchmark datasets that are widely used in few-shot learning: the CIFAR-FS dataset (Bertinetto et al., 2018), the FC100 dataset (Oreshkin et al., 2018), and the mini-ImageNet dataset (Vinyals et al., 2016). Details about the dataset settings in few-shot learning are in Appendix A.2." }, { "heading": "5.2 NETWORK WITH MULTI-BRANCH", "text": "We build ConstellationNet on two ProtoNet variants, namely Conv-4 and ResNet-12, which are commonly used in few-shot learning. Details of the networks and the optimization are in the Appendix.

We develop a new technique, Multi-Branch, to optimize the standard classification loss and the prototypical loss simultaneously. We find that the two training schemes, the standard classification scheme and the prototypical scheme, can be complementary rather than conflicting. Details of these two schemes can be found in Appendix A.1. Different from the standard network backbones used in prior works, our embedding φ(x) is separated into two branches after a shared stem (Y-shape). Details of our multi-branch design are elaborated in Appendix A.10. The detailed ablation study is described in Table 3.

Feature Augmentation. During the meta-testing stage, we discover that concatenating the features before average pooling to the final output can improve the classification accuracy. The advantage of this technique is that no additional training and model parameters are introduced." }, { "heading": "5.3 RESULTS ON STANDARD BENCHMARKS", "text": "Tables 1 and 2 summarize the results of the few-shot classification tasks on CIFAR-FS, FC100, and mini-ImageNet, respectively. Our method shows a notable improvement over several strong baselines in various settings. ConstellationNet significantly improves the performance of shallow networks (Conv-4). In Table 2, our model outperforms SIB (Hu et al., 2020) by 0.6% on 1-shot and by 5.6% on 5-shot. In Table 1, our model outperforms MetaOptNet (Lee et al., 2019) by 5.95% on 1-shot and 6.24% on 5-shot. For deep networks with rich features, the constellation module still contributes to the performance, showing its complementary advantage to convolution. Our ResNet-12 model beats the 1-shot result of Lee et al. (2019) by 2.7% on FC100, 3.4% on CIFAR-FS, and 1.72% on mini-ImageNet. The consistent improvement over both shallow and deep networks across all three datasets shows the generality of our method. Our ConstellationNet is orthogonal to the margin loss based methods (Liu et al., 2020; Li et al., 2020), and we also do not use extra cross-modal information (Xing et al., 2019; Li et al., 2020). On the contrary, our model enhances the embedding generalization ability by incorporating its own part-based representation. Additionally, to verify the orthogonality of our method, we adapt the negative margin loss following Liu et al. (2020) to our Conv-4 models in Appendix A.8.
We observe that ConstellationNet with the negative margin brings a 0.52% improvement over ConstellationNet alone, and obtains a 6.93% gain over the baseline on mini-ImageNet." }, { "heading": "6 MODEL ANALYSIS", "text": "" }, { "heading": "6.1 ARCHITECTURE ALTERNATIVES", "text": "In Table 3, we first study the role of each module in ConstellationNet, where the number of parameters is controlled to be approximately equivalent to the baseline's size. Our constellation model brings 6.41% and 2.59% improvements over the baseline on the 1-shot Conv-4 and ResNet-12 results. Combined with our multi-branch training procedure, the model improves by an additional 1.34% and 1.26% on 1-shot Conv-4 and ResNet-12, respectively. Finally, feature augmentation from the penultimate layer to the final output embedding brings additional 0.45% and 0.27% improvements on the two variants.

We also test the baseline model with extra channels in Table 3. The new model only shows slight improvements over the original baseline, and is outperformed by our ConstellationNet by a large margin. We also obtain WRN-28-10 baseline results to validate our improvement. Even when the ResNet baseline is made deeper and wider, our ConstellationNet still outperforms this strong baseline. In Figure 2 (e), we further study whether the performance gap between ConstellationNet and the baseline can be reduced by simply altering the baseline's model complexity using e.g. more convolution channels. Although the baseline accuracy trends upward as the number of model parameters gradually increases, the performance gap remains significant. This validates our concept that modeling hierarchical part structures can greatly benefit the features learned from the convolution operation and yield a more robust feature representation. In addition, applying self-attention on the distance map (6th row: 57.03% on Conv-4, 1-shot) achieves better performance than directly applying it to the original cell features (i.e. the convolutional feature map) (4th row: 55.92% on Conv-4, 1-shot). We also tried to replace the cell feature clustering module with a 1x1 convolution layer (with output dimension equal to the number of clusters) (5th row: 55.46% on Conv-4, 1-shot). It is worse than our result (6th row) as well. We observe that the 1x1 convolution layer is less expressive than the cell feature clustering module, making it difficult to extract enough context information during cell relation modeling." }, { "heading": "6.2 MODULES ANALYSIS", "text": "In Figure 2 (a), we vary the number of clusters adopted in all layers to observe the performance change. We find that increasing the number of clusters improves the accuracy in general, and setting the number of clusters to 64 is optimal in terms of both model size and classification performance. Figure 2 (b) shows that the number of attention heads does not affect performance as much as the number of clusters, and 8-head attention obtains a 1.80% performance gain in the 1-shot setting compared to 1-head attention. In Figure 2 (c, d), we also study the effectiveness of the clustering algorithm applied to different layers. The results show that both early features and high-level features benefit from introducing the clustering algorithm into the original CNN architecture." }, { "heading": "6.3 VISUALIZATION", "text": "Figure 3 demonstrates the visualization of cluster centers in each layer of the Conv-4 model on mini-ImageNet. In the upper part of the figure, each image shows patches corresponding to the nearest cell features to a cluster center (i.e. with the lowest Euclidean distance).
It is observed that clusters in the early layers (e.g. layers 1, 2) represent simple low-level patterns, while clusters in the higher layers (e.g. layers 3, 4) indicate more complex structures and parts. In the lower part of the figure, we choose two cluster centers from layer 4 for further interpretation: The left one with the green box could possibly represent legs, since it consists of various types of legs from humans, dogs and other animals. The right one with the red box shows that most of the nearest cell features to this cluster center are parts with birds' heads or beetles, which share a dotted structure (i.e. black dots on beetles / eyes on birds' heads).

The left side of Figure 4 shows the visualization of cell features that are assigned to different clusters. For each image, we extract the assignment maps corresponding to three cluster centers generated in the last constellation module of Conv-4 and find multiple cell features with the highest assignments within each assignment map. The locations of the cell features are projected back into the original image space, marked by three different colors of \"·\" in the raw image to show three different feature clusters. For a given class of images, the same cluster centers are selected for comparison across 6 samples. As shown in Figure 4, we observe that part information of each class is explicitly discovered. For the bird category, we can see different parts in each image, including the head (cyan \"·\"), body (purple \"·\") and tail (yellow \"·\"). For the dog category, we see parts including heads (red \"·\"), legs (green \"·\") and body (blue \"·\"). For the tank category, we see parts like the track (light blue \"·\") and turret (pink \"·\"). The right side of Figure 4 visualizes the attention maps in the cell relation model. We use the last constellation module in the ResNet-12 model for visualization since it captures high-level features that better represent parts. We choose one query feature at the center of the object and show its attention map to all key features. The middle part of the figure shows the attention maps corresponding to the 8 heads in the multi-head attention. It is observed that some parts are identified, such as the head (second map in the first row), legs (first two maps in the second row), buttock (first map in the first row) and body (second map in the second row). A merged attention map obtained by overlaying all 8 attention maps is presented in the right part of the figure. It indicates that all the attention heads together can extract the features of the whole object, which is useful for the final classification." }, { "heading": "7 CONCLUSION", "text": "In this paper, we present ConstellationNet by introducing an explicit feature clustering procedure with relation learning via self-attention. We implement a mini-batch soft k-means algorithm to capture the cell feature distribution. With integrated implicit (standard CNN modules) and explicit (cell feature clustering + cell relation modeling) representations, our proposed ConstellationNet achieves significant improvement over the competing methods on few-shot classification benchmarks." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work is funded by NSF IIS-1618477 and NSF IIS-1717431. We thank Qualcomm Inc. for award support. We thank Kwonjoon Lee, Tiange Luo and Hao Su for valuable feedback."
}, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 FEW-SHOT LEARNING FRAMEWORK", "text": "In this section, we introduce background concepts of meta-learning and elaborate the few-shot learning framework used in our ConstellationNet.\nMeta-Learning in Few-Shot Classification. Current few-shot learning is typically formulated as a meta-learning task (Finn et al., 2017), in which an dataset Dbase is used to provide commonsense knowledge and a dataset Dnovel for the few-shot classification. Dbase has the classes Cbase which are disjoint from the Cnovel in Dnovel to ensure fairness. There are two stages, meta-training and meta-test, in the meta-learning framework: In meta-training stage, we attempt to train a model to learn generic features from Dbase. In meta-test stage, we adapt the model on the limited training split from Dnovel and evaluate the performance of the model on the test split.\nProtoNet-Based Framework. In our ConstellationNet, we adopt ProtoNet (Snell et al., 2017) as the base few-shot learning framework. In ProtoNet, the dataset Dnovel is represented by a series of K-way N -shot tasks {T } where each task consists of a support set and a query set, i.e. T = (T supp, T query). The support set T supp contains K classes and each class has N examples from the training split of Dnovel, which are used to adapt the model in meta-test stage. The query set T query from the test split of Dnovel is then used to evaluate the model. The ProtoNet attempts to learn a generic feature extractor φ(x) on image x, and represent a class k by the prototype ck, which is the average feature of examples from support set T supp with this class:\nck = 1 |N | ∑\n(x,y)∈T supp,y=k\nφ(x) (11)\nDuring the meta-test stage, we use the prototypes to compute the probability pk of a query example x′ ∈ T query on class k and predict its label y′:\npk = p(y = k|x′, T supp) = exp(d(x′, ck))∑ k′ exp(d(x ′, ck′)) , y′ = arg max k pk. (12)\nwhere d(·, ·) is a cosine similarity function (different from the Euclidean distance in Snell et al. (2017)).\nDuring the meta-training stage, there are two different training schemes: The prototypical scheme from ProtoNet uses an episodic learning strategy that also formulates the dataset Dbase as a series of tasks {T }. The negative log-likelihood loss L(φ) is optimized:\n`(T supp, T query) = E(x′,y′)∈T query − log p(y = y′|x′, T supp), (13) L(φ) = ET =(T supp,T query)∼Dbase`(T supp, T query). (14)\nAnother way is the standard classification scheme (Chen et al., 2020): It simply uses Dbase as a standard classification dataset {(x, y)} consisting of Q classes in total. Thus, a cross-entropy loss L(φ) is optimized:\nL(φ) = E(x,y)∼Dbase − log exp(wy · φ(x))∑ q exp(wq · φ(x))\n(15)\nwhere wq is the linear weight for class q. In our ConstellationNet, we use the standard classification scheme at default. For the experiment with multi-branch network, we use the prototypical scheme and standard classification scheme for separate branches." }, { "heading": "A.2 DATASETS", "text": "The CIFAR-FS dataset (Bertinetto et al., 2018) is a few-shot classification benchmark containing 100 classes from CIFAR-100 (Krizhevsky et al., 2009). The classes are randomly split into 64, 16 and 20 classes as meta-training, meta-validation and meta-testing set respectively. For each class, it\ncontains 600 images of size 32× 32. We adopt the split from Lee et al. (2019). 
{ "heading": "A.2 DATASETS", "text": "The CIFAR-FS dataset (Bertinetto et al., 2018) is a few-shot classification benchmark containing 100 classes from CIFAR-100 (Krizhevsky et al., 2009). The classes are randomly split into 64, 16 and 20 classes as the meta-training, meta-validation and meta-testing sets, respectively. Each class contains 600 images of size 32×32. We adopt the split from Lee et al. (2019). The FC100 dataset (Oreshkin et al., 2018) is another benchmark based on CIFAR-100, where classes are grouped into 20 superclasses to avoid overlap between the splits. The mini-ImageNet dataset (Vinyals et al., 2016) is a common benchmark for few-shot classification containing 100 classes from ILSVRC2012 (Deng et al., 2009). The classes are randomly split into 64, 16 and 20 classes as the meta-training, meta-validation and meta-testing sets, respectively. Each class contains 600 images of size 84×84. We follow the commonly-used split in Ravi & Larochelle (2016), Lee et al. (2019) and Chen et al. (2020). In all experiments, we conduct data augmentation on the meta-training set of all datasets to match the implementation of Lee et al. (2019)." }, { "heading": "A.3 NETWORK BACKBONE", "text": "Conv-4. Following Lee et al. (2019), we adopt the same network with 4 convolutional blocks. Each of the 4 blocks has a 3×3 convolutional layer, a batch normalization layer, a ReLU activation and a 2×2 max-pooling layer, sequentially. The number of filters is 64 for all 4 convolutional layers.

ResNet-12. Following Chen et al. (2020), we construct each residual block from 3 consecutive convolutional blocks followed by an additional average pooling layer, where each convolutional block has a 3×3 convolutional layer, a batch normalization layer, a leaky ReLU activation, and a max-pooling layer. The ResNet-12 network has 4 residual blocks with the filter sizes set to 64, 128, 256 and 512, respectively.

WRN-28-10. WideResNet expands the residual blocks by increasing the convolutional channels and layers (Zagoruyko & Komodakis, 2016). WRN-28-10 uses 28 convolutional layers with a widening factor of 10." },
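For reference, the Conv-4 backbone described above can be sketched in a few lines of PyTorch; the constellation modules of Section 4.3 are omitted here for brevity, and the helper name is our own.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """One Conv-4 block: 3x3 conv, BN, ReLU, 2x2 max-pooling (Appendix A.3)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

# Conv-4: four blocks with 64 filters each; constellation modules would be
# interleaved after the convolutions as described in Section 4.3.
conv4 = nn.Sequential(
    conv_block(3, 64), conv_block(64, 64),
    conv_block(64, 64), conv_block(64, 64),
)
```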
{ "heading": "A.4 CONSTELLATION MODULE CONFIGURATION", "text": "To achieve the best performance with constellation modules, we do not always fully enable them after all the convolutional layers. For Conv-4, we use constellation modules after all four convolutional layers, but the cell relation modeling module is disabled in the first two constellation modules due to its high memory consumption. For ResNet-12, we enable the constellation modules after convolutional layers 1, 7, 8 and 9, and disable the relation modeling module in the first constellation module. We use deep supervision in ResNet-12 to stabilize the training of the constellation modules." }, { "heading": "A.5 SELF-ATTENTION SETTINGS", "text": "We follow the common practice in Vaswani et al. (2017) and equip the attention layer with residual connections, dropout and layer normalization. The sine positional encoding follows the settings in Carion et al. (2020)." }, { "heading": "A.6 TRAINING DETAILS", "text": "Optimization Settings. We follow the implementation in Lee et al. (2019) and use the SGD optimizer with an initial learning rate of 1, a momentum of 0.9 and a weight decay of 5×10^−4. The learning rate is reduced to 0.06, 0.012, and 0.0024 at epochs 20, 40 and 50. The inverse temperature β is set to 100.0 in the cluster assignment step, and λ is set to 1.0 in the centroid movement step." }, { "heading": "A.7 ABLATION STUDY ON THE NUMBER OF CLUSTERS", "text": "Table 4 studies the number of clusters needed for random and similar classes. The results show that the optimal number of clusters is less affected by the number of classes but more affected by the similarity between the classes. Fewer clusters are needed for a dataset with classes of high similarity, which aligns with our intuition: a limited number of patterns exists in such a dataset, so a small number of clusters is enough to represent its part-based information.

The FC100 training dataset consists of 60 classes that are grouped evenly into 12 superclasses. In the similar classes group, the training dataset includes 6 randomly selected superclasses (i.e., 30 classes) and models are trained with 8, 16, 32, 64 and 128 clusters. The highest accuracy occurs at 16 clusters (1-shot: 39.12% with ResNet-12). In the random classes group, 30 classes are randomly sampled from the original training dataset and we repeat the same experiments as above. The highest accuracy occurs at 64 clusters (1-shot: 41.22% with ResNet-12), which is much more than the 16 clusters used for images from similar classes." }, { "heading": "A.8 ADDITIONAL EXPERIMENTS WITH NEGATIVE MARGIN", "text": "Table 5 studies the use of the negative margin loss (Liu et al., 2020) on our Conv-4 models. In the negative margin loss, we use the inner-product similarity, a temperature coefficient β = 1.0 and a negative margin m = −0.5, which attains the best performance improvement on our models. Besides, we do not include the fine-tuning step during meta-test. Our baseline with the negative margin loss obtains a 0.80% improvement on 1-shot and a 0.44% improvement on 5-shot compared with the baseline. Similarly, our ConstellationNet with the negative margin loss achieves a 0.52% improvement on 1-shot and a 0.40% improvement on 5-shot. The consistent improvement of the negative margin loss on both the baseline and our ConstellationNet indicates that our constellation module is orthogonal to the negative margin loss, and both modules can boost the performance on few-shot classification." }, { "heading": "A.9 CLARIFICATION ON CLUSTERING PROCEDURE", "text": "In this section, we add more clarification on our cell feature clustering procedure in Sec. 4.1: During the training stage, the global cluster centers V = {v_k} are updated by the clustering centers {v′_k} computed in the current mini-batch. Each update to a cluster center v_k is weighted by a momentum coefficient η determined by the value of an associated counter s_k, since we would like to avoid large adjustments from the current mini-batch in order to stabilize the global cluster centers. Besides, the mini-batches of examples are randomly drawn from the dataset following Sculley (2010), without any specialized design to optimize the clustering. During the evaluation stage, we fix the global cluster centers V in the forward step of our model, avoiding potential information leak or transduction from the test mini-batches." }, { "heading": "A.10 MULTI-BRANCH DETAILS", "text": "Our embedding φ(x) is separated into two branches after a shared stem (Y-shape), which is defined as φ(x) = {φ_cls(x), φ_proto(x)} with φ_cls(x) = g_cls(f_stem(x)) and φ_proto(x) = g_proto(f_stem(x)). The two branches φ_cls(x), φ_proto(x) are trained by the standard classification and prototypical schemes separately, in a multi-task learning fashion. At test time, φ_cls(x) and φ_proto(x) are concatenated together to compute the distance between the support prototypes and the query images.
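A minimal sketch of this Y-shaped design is given below; `stem`, `g_cls` and `g_proto` are placeholder sub-networks standing in for the actual backbone split described in the surrounding text.

```python
import torch
import torch.nn as nn

class MultiBranchEmbedding(nn.Module):
    """Y-shaped embedding (Appendix A.10) -- a sketch with placeholder parts."""
    def __init__(self, stem, g_cls, g_proto):
        super().__init__()
        self.stem, self.g_cls, self.g_proto = stem, g_cls, g_proto

    def forward(self, x):
        h = self.stem(x)                        # shared stem f_stem(x)
        return self.g_cls(h), self.g_proto(h)   # phi_cls(x), phi_proto(x)

# At meta-test time, the two branch features are concatenated before the
# prototype distance computation:
#   phi = torch.cat([phi_cls, phi_proto], dim=-1)
```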
For our ConstellationNet, we split the network into two branches after the second convolutional block (Conv-4) or the second residual block (ResNet-12). We keep the shared stem identical to the network backbone and reduce the channels of the two separate branches to match the parameter size of the model without multi-branch." }, { "heading": "A.11 CONNECTION WITH CAPSULE NETWORKS", "text": "A notable development in learning explicit structured representations in an end-to-end framework is the capsule networks (CapsNets) (Sabour et al., 2017). This line of work on CapsNets (Sabour et al., 2017; Hinton et al., 2018; Kosiorek et al., 2019; Tsai et al., 2020) intends to parse a visual scene in an interpretable and hierarchical way. Sabour et al. (2017) represents parts and objects in vector-based capsules with a dynamic routing mechanism. Tsai et al. (2020) uses a stacked autoencoder architecture to model the hierarchical relations among parts, objects and scenes. Here our ConstellationNet maintains part modeling by enabling the joint learning of the convolution and constellation modules to simultaneously attain implicit and explicit representations." } ]
2021
ATTENTIONAL CONSTELLATION NETS FOR FEW-SHOT LEARNING
SP:e6866231757407d20d8fbd8059cf1d0414efe018
[ "The work proposes a simple enough idea to speed up the training of BERT by progressively stacking new layers while fixing older layers. Empirically, with the same number of training steps (and less time), the proposed method can achieve a comparable performance to the original BERT. When the same amount of running time (more steps) is used, the proposed strategy can further improve the performance. " ]
Pre-trained language models, such as BERT, have achieved significant accuracy gains in many natural language processing tasks. Despite its effectiveness, the huge number of parameters makes training a BERT model computationally very challenging. In this paper, we propose an efficient multi-stage layerwise training (MSLT) approach to reduce the training time of BERT. We decompose the whole training process into several stages. The training starts from a small model with only a few encoder layers, and we gradually increase the depth of the model by adding new encoder layers. At each stage, we only train the top few encoder layers (near the output layer) which are newly added. The parameters of the other layers, which have been trained in the previous stages, are not updated in the current stage. In BERT training, the backward computation is much more time-consuming than the forward computation, especially in the distributed training setting, in which the backward computation time further includes the communication time for gradient synchronization. In the proposed training strategy, only the top few layers participate in backward computation, while most layers only participate in forward computation. Hence both the computation and communication efficiencies are greatly improved. Experimental results show that the proposed method can achieve more than 110% training speedup without significant performance degradation.
[ { "affiliations": [], "name": "ING SPEEDUP" } ]
[ { "authors": [ "James C Bezdek", "Richard J Hathaway" ], "title": "Some notes on alternating optimization", "venue": "In AFSS International Conference on Fuzzy Systems,", "year": 2002 }, { "authors": [ "James C Bezdek", "Richard J Hathaway" ], "title": "Convergence of alternating optimization. Neural", "venue": "Parallel & Scientific Computations,", "year": 2003 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Quoc V Le", "Christopher D Manning" ], "title": "Electra: Pre-training text encoders as discriminators rather than generators", "venue": "arXiv preprint arXiv:2003.10555,", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Linyuan Gong", "Di He", "Zhuohan Li", "Tao Qin", "Liwei Wang", "Tieyan Liu" ], "title": "Efficient training of bert by progressively stacking", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Ashish Khetan", "Zohar Karnin" ], "title": "schubert: Optimizing elements of bert", "venue": "arXiv preprint arXiv:2005.06628,", "year": 2020 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "Albert: A lite bert for self-supervised learning of language representations", "venue": null, "year": 1909 }, { "authors": [ "Qiuwei Li", "Zhihui Zhu", "Gongguo Tang" ], "title": "Alternating minimizations converge to second-order optimal solutions", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Weijie Liu", "Peng Zhou", "Zhe Zhao", "Zhiruo Wang", "Qi Ju", "Haotang Deng", "Ping Wang" ], "title": "K-bert: Enabling language representation with knowledge graph", "venue": "arXiv preprint arXiv:1909.07606,", "year": 2019 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": "arXiv preprint arXiv:1907.11692,", "year": 2019 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training", "venue": "URL https://s3-us-west-2. amazonaws. 
com/openaiassets/researchcovers/languageunsupervised/language understanding paper", "year": 2018 }, { "authors": [ "Victor Sanh", "Lysandre Debut", "Julien Chaumond", "Thomas Wolf" ], "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "venue": null, "year": 1910 }, { "authors": [ "Douglas Steinley" ], "title": "K-means clustering: a half-century synthesis", "venue": "British Journal of Mathematical and Statistical Psychology,", "year": 2006 }, { "authors": [ "Emma Strubell", "Ananya Ganesh", "Andrew McCallum" ], "title": "Energy and policy considerations for deep learning in nlp", "venue": "arXiv preprint arXiv:1906.02243,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "arXiv preprint arXiv:1804.07461,", "year": 2018 }, { "authors": [ "Wei Wang", "Bin Bi", "Ming Yan", "Chen Wu", "Zuyi Bao", "Liwei Peng", "Luo Si" ], "title": "Structbert: Incorporating language structures into pre-training for deep language understanding", "venue": null, "year": 1908 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Russ R Salakhutdinov", "Quoc V Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Junho Yim", "Donggyu Joo", "Jihoon Bae", "Junmo Kim" ], "title": "A gift from knowledge distillation: Fast optimization, network minimization and transfer learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Yang You", "Jing Li", "Sashank Reddi", "Jonathan Hseu", "Sanjiv Kumar", "Srinadh Bhojanapalli", "Xiaodan Song", "James Demmel", "Cho-Jui Hsieh" ], "title": "Large batch optimization for deep learning: Training bert in 76 minutes", "venue": "arXiv preprint arXiv:1904.00962,", "year": 1904 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years, the pre-trained language models, such as BERT (Devlin et al., 2018), XLNet (Yang et al., 2019), GPT (Radford et al., 2018), have shown their powerful performance in various areas, especially in the field of natural language processing (NLP). By pre-trained on unlabeled datasets and fine-tuned on small downstream labeled datasets for specific tasks, BERT achieved significant breakthroughs in eleven NLP tasks (Devlin et al., 2018). Due to its success, a lot of variants of BERT were proposed, such as RoBERTa (Liu et al., 2019b), ALBERT (Lan et al., 2019), Structbert (Wang et al., 2019) etc., most of which yielded new state-of-the-art results.\nDespite the accuracy gains, these models usually involve a large number of parameters (e.g. BERTBase has more than 110M parameters and BERT-Large has more than 340M parameters), and they are generally trained on large-scale datasets. Hence, training these models is quite time-consuming and requires a lot of computing and storage resources. Even training a BERT-Base model costs at least $7k (Strubell et al., 2019), let alone the other larger models, such as BERT-Large. Such a high cost is not affordable for many researchers and institutions. Therefore, improving the training efficiency should be a critical issue to make BERT more practical.\nSome pioneering attempts have been made to accelerate the training of BERT. You et al. (2019) proposed a layerwise adaptive large batch optimization method (LAMB), which is able to train a BERT model in 76 minutes. However, the tens of times speedup is based on the huge amount of computing and storage resources, which is unavailable for common users. Lan et al. (2019) proposed an ALBERT model, which shares parameters across all the hidden layers, so the memory consumption is greatly reduced and training speed is also improved due to less communication overhead. Gong et al. (2019) proposed a progressively stacking method, which trains a deep BERT\nnetwork by progressively stacking from a shallow one. Utilizing the similarity of the attention distributions across different layers, such a strategy achieves about 25% speedup without significant performance loss.\nProgressively stacking provides a novel training strategy, namely training a BERT model from shallow to deep. However, progressively stacking only has a high training efficiency at the initial stage in which the model depth is small. As the training goes on, the model depth increases and the training speed decreases. The low efficiency of the later stages makes the overall speedup of progressively stacking limited. Note that in the progressively stacking method, the bottom layers are trained with longer time than the top layers. However, we observe that though the bottom layers are updated all the time, they do not have significant changes in the later stages, in terms of the attention distribution which can reflect the functionality of the encoder layers to some extent (Gong et al., 2019). In other words, most optimization of the bottom layers has been finished in the early stage when the model is shallow. Motivated by this observation, in this work, we propose a novel multi-stage layerwise training (MSLT) approach, which can greatly improve the training efficiency of BERT. We decompose the training process of BERT into several stages, as shown in Fig. 1. We start the training from a small BERT model with only a few encoder layers and gradually add new encoder layers. 
At each stage (except the first stage), only the output layer and the newly added top encoder layers are updated, while the other layers, which have been trained in the previous stages, are fixed in the current stage. After all the encoder layers are trained, to make the network better behaved, we further retrain the model by updating all the layers together. Since the whole model has already been well trained, this stage only requires a few steps (accounting for about 20% of the total steps). Compared with the progressively stacking method, which requires a lot of steps (accounting for about 70% of the total steps (Gong et al., 2019)) to train the whole model, our method is much more time-efficient.

Experimental results demonstrate the effectiveness and efficiency of the proposed method in two aspects: 1) with the same data throughput (same training steps), our method can achieve performance comparable to the original training method but consumes much less training time; 2) with the same training time, our method can achieve better performance than the original method. According to the results, the proposed method achieves more than 110% speedup without significant performance degradation.

To avoid misunderstanding, it should be mentioned that some widely-known methods, such as model compression (Han et al., 2015a;b) and knowledge distillation (Yim et al., 2017; Hinton et al., 2015; Sanh et al., 2019), are designed for network speedup in the inference phase. Namely, these methods are used after the model has been trained, whereas in this paper we focus on speeding up model training." }, { "heading": "2 RELATED WORK", "text": "Based on the bidirectional Transformer (Vaswani et al., 2017) encoder, BERT has shown its great representational ability and achieved state-of-the-art results in eleven NLP tasks. Following BERT, many pre-trained models were proposed, such as RoBERTa (Liu et al., 2019b), XLNet (Yang et al., 2019), KBERT (Liu et al., 2019a) and StructBERT (Wang et al., 2019). Higher accuracy was achieved by these models with more training data, more training steps, or more effective loss functions. However, the BERT models are generally large-scale and they need to be trained on massive datasets (e.g. BERT-Base is trained on BooksCorpus and Wikipedia, a corpus of 3.3 billion words in total). Hence, training a BERT model is challenging in terms of both computation and storage. In the literature, some approaches were proposed to improve the training speed of BERT." }, { "heading": "2.1 DISTRIBUTED TRAINING WITH LARGE BATCH SIZE", "text": "A direct way to reduce the training time is to increase the training batch size by using more machines and to train the model in a distributed manner. However, traditional stochastic gradient descent (SGD) based optimization methods perform poorly with large mini-batch training. Naively increasing the batch size leads to performance degradation and reduced computational benefits (You et al., 2019). An efficient layerwise adaptive large batch optimization technique named LAMB was proposed in You et al. (2019) to address this problem. It allows the BERT model to be trained with an extremely large batch size without any performance degradation. By using 1024 TPUv3 chips, LAMB reduced the BERT training time from 3 days to 76 minutes. Though a tens-of-times speedup is achieved, these methods require a huge amount of computing and storage resources, which are far beyond the reach of common users."
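To make the layerwise adaptivity concrete, the sketch below shows a simplified LAMB update for one parameter tensor. It keeps only the trust-ratio idea from You et al. (2019) and omits bias correction, gradient clipping and other details of the full optimizer; the function name and defaults are our own.

```python
import torch

def lamb_step(param, grad, state, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-6, weight_decay=0.01):
    """One simplified LAMB update for a single parameter tensor (a sketch).

    `state` holds the Adam moments, e.g. {"m": zeros_like(p), "v": zeros_like(p)}.
    """
    with torch.no_grad():
        state["m"] = beta1 * state["m"] + (1 - beta1) * grad          # 1st moment
        state["v"] = beta2 * state["v"] + (1 - beta2) * grad * grad   # 2nd moment
        update = state["m"] / (state["v"].sqrt() + eps) + weight_decay * param
        w_norm, u_norm = param.norm(), update.norm()
        # The layerwise trust ratio rescales the step by ||w|| / ||update||,
        # which is what makes very large batch sizes trainable.
        trust = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
        param.add_(update, alpha=-lr * float(trust))
```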
}, { "heading": "2.2 ALBERT", "text": "ALBERT (Lan et al., 2019) adopted two parameter reduction techniques, namely factorized embedding parameterization and cross-layer parameter sharing, which significantly reduced the model size. In addition, ALBERT adopted the sentence-order prediction loss instead of the next-sentence prediction loss during pre-training, which is demonstrated to be more effective in terms of downstream performance. Since the communication overhead is directly proportional to the number of parameters in the model, ALBERT also improved the communication efficiency in distributed training setting. However, since ALBERT has almost the same computational complexity as BERT, training an ALBERT model is still very time-consuming." }, { "heading": "2.3 PROGRESSIVELY STACKING", "text": "The most related work should be progressively stacking (Gong et al., 2019), which is mainly based on the observation that in a trained BERT model, the attention distributions of many heads from top layers are quite similar to the attention distributions of the corresponding heads from the bottom layers, as shown in Fig. 2. Such a phenomenon implies that the encoder layers in the BERT model have similar functionalities. Utilizing the natural similarity characteristic, to train a N−layer BERT model, the progressively stacking method first trains a N/2−layer model, and then sticks it into N−layer by copying the parameters of the trained N/2 layers. After the N−layer model is constructed, the progressively stacking method continues to train the whole model by updating all the parameters together. By repeatedly using such a strategy, the deep BERT model can be trained more efficiently. According to the results shown in Gong et al. (2019), progressively stacking can achieve the training time about 25% shorter than the original training method (Devlin et al., 2018).\nHowever, we can see that the speedup of progressively stacking mainly comes from the initial stage in which the model depth is small. As the model depth increases in the later stages, the training efficiency also decreases, and according to Gong et al. (2019), to guarantee the performance, more training steps should be assigned in the later stages for training the deep model. Hence, the overall speedup brought by progressively stacking is limited. Such an issue is addressed in this paper. In our work, we also train the BERT model from shallow to deep. In contrast , at each stage, we only train the top few layers and we almost keep a high training efficiency during the whole training process. Hence, much more significant speedup is achieved." }, { "heading": "3 METHODOLOGY", "text": "In this section, we propose an efficient training method to accelerate the training of BERT model." }, { "heading": "3.1 MOTIVATION", "text": "The large depth should be one of the main reasons making the BERT training time-consuming. The original method (Devlin et al., 2018) trains all the encoder layers simultaneously. At each training step, the parameters need to wait for the cost function gradients to propagate backwards across all the layers before update, which is very inefficient, especially when the model is very deep.\nInspired by progressively stacking (Gong et al., 2019), we also consider to train the BERT model from shallow to deep. The main problem of progressively stacking is that its training efficiency decreases as the training goes on. 
We observe that in the progressively stacking strategy, the bottom layers are trained for a longer time than the top layers. For example, the first encoder layer (near the input layer) is updated from beginning to end, while the last encoder layer (near the output layer) is only trained at the last stage. We question whether it is necessary to spend much more time training the bottom layers, since some research implies that the top encoder layers play a much more significant role (Khetan & Karnin, 2020).

In the BERT model, the encoder layers are mainly used to learn the dependencies of the input elements, which can be reflected by the attention distribution. In Gong et al. (2019), the authors showed that the distributions of most attention heads are mixtures of two distributions: one distribution focuses on local positions, and another focuses on the first CLS token. In addition, the authors also observed that the attention distribution of the top layers is very similar to that of the bottom layers. In a similar way, we also visualize some important attention distributions, and we obtain some new findings when using the progressively stacking method to train a 12-layer BERT-Base model. Specifically, we first train a 6-layer model. Then we stack the trained 6-layer model into a 12-layer model and continue to train the whole model until convergence. The attention distributions of the top 6 encoder layers of the final 12-layer BERT-Base model are shown in the first row of Fig. 3. For each layer, we randomly choose an attention head. Then we also show the attention distributions of the corresponding heads of the bottom 6 encoder layers in the second row of Fig. 3. Further, we show the attention distributions of the corresponding heads of the trained 6-layer BERT model before stacking in the third row. As a comparison, we also train a 12-layer BERT-Base model from scratch using the original method, where the parameters of the bottom 6 encoder layers use the same initialization as the above BERT model trained by progressively stacking. The fourth row of Fig. 3 shows the attention distributions of the bottom 6 encoder layers of this original BERT-Base model.

Combined with Fig. 2, we find that:

1. Besides the two obvious distributions found by Gong et al. (2019), namely the distribution focusing on local positions and the distribution focusing on the CLS token, the attention distributions of many heads also focus on the SEP token (for example, the dark vertical line in "L11 H4" in Fig. 2 corresponds to the position of SEP).

2. Comparing the first and second rows of Fig. 3, one can see that for a trained 12-layer BERT model, some bottom layers have attention distributions similar to those of the corresponding top layers, which is in line with the observation in Gong et al. (2019). In addition, there are also some bottom-top layer pairs whose attention distributions are very different. On the other hand, comparing the second and third rows of Fig. 3, we can see that the attention distributions of the bottom layers of the final 12-layer model are almost the same as those of the corresponding layers of the trained 6-layer model before stacking, which implies that the further training of the bottom layers after stacking does not bring substantial optimization. The performance of the bottom encoder layers is not further improved, in terms of capturing the elements' dependencies. Comparing the third and fourth rows of Fig.
3, we see that the attention distributions of the trained 6-layer model are also very similar to those of the bottom layers of the BERT-Base model with all the layers jointly trained from scratch.
Therefore, it is not worth spending too much time training the bottom layers, especially since updating the bottom layers is generally much more expensive than updating the top layers, as backward computation proceeds top-down. An intuitive idea is that, at each stage, the bottom layers that have been trained in the previous stages only participate in the forward computation, and only the newly added top layers as well as the output layer participate in the backward computation. Then the gradient information of the parameters from the bottom layers will not be computed, and also will not be communicated in the distributed training setting. So the time of both computation and communication can be greatly reduced." }, { "heading": "3.2 MULTI-STAGE LAYERWISE TRAINING", "text": "Now we present our MSLT training method. Based on the above observations, to train a deep BERT model with N encoder layers, we decompose the whole training process into k stages, as shown in Fig. 1. We start our training from a shallow model with only N/k encoder layers and gradually add new encoder layers on top of the model (below the output layer). Each time, we add N/k new layers. Except in the first stage, in which we update all the parameters, only the parameters of the newly added top layers and the output layer are updated. Namely, the bottom layers which have been trained in the previous stages only participate in the forward computation to provide the training input for the top layers. In addition, in the BERT model, the word embedding matrices in the input and output layers are shared. So we only update the word embedding matrix in the first stage, and in the later stages the word embedding matrix is fixed.
Similar to Gong et al. (2019), at each stage we initialize the parameters of the newly added N/k layers by copying the parameters of the top N/k layers of the model trained in the previous stage (a minimal sketch of one growth step is given below). To make the final model better behaved, after all the layers are trained, we further retrain the model by updating all the parameters together for a few steps.
In BERT model training, backward computation generally takes much longer than forward computation, especially in the distributed training setting, in which the backward computation also includes gradient synchronization. For example, in our experiment, when using the original method (Devlin et al., 2018) to train a BERT-Base model, the time of backward computation (including communication time for gradient synchronization in the distributed setting) is almost six times that of the forward computation. The proposed MSLT method can greatly reduce the backward computation, since only the top few layers participate in the backward computation during the whole training process, except for the final retraining stage. Hence, the total training time will be much shorter." }, { "heading": "3.3 EXTENDING TO ALBERT", "text": "The proposed MSLT strategy can also be used to speed up the training of ALBERT (Lan et al., 2019). The only problem is that ALBERT shares the parameters across all the encoder layers, so we cannot update only the top layers while keeping the other layers fixed. Hence, before applying MSLT, we make a slight modification to ALBERT.
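A minimal PyTorch-style sketch of one MSLT growth step from Section 3.2 is shown below. The attribute names `encoder.layers` and `output_head` are assumptions about the model implementation rather than the authors' code, and a real implementation would also handle the tied word-embedding matrix explicitly.

```python
import copy
import torch.nn as nn

def mslt_stage(model, n_new: int):
    """Add n_new encoder layers on top, initialized from the current
    top n_new layers, and freeze everything trained so far so that only
    the new top layers and the output head receive gradients."""
    # Previously trained layers (and, after the first stage, the shared
    # word embedding) now only take part in the forward pass.
    for p in model.parameters():
        p.requires_grad = False

    top = list(model.encoder.layers)[-n_new:]
    new_layers = nn.ModuleList(copy.deepcopy(layer) for layer in top)
    for p in new_layers.parameters():
        p.requires_grad = True
    model.encoder.layers.extend(new_layers)

    # The output head stays trainable; the tied embedding matrix inside
    # it is assumed to be excluded, as described in Section 3.2.
    for p in model.output_head.parameters():
        p.requires_grad = True
    return model
```

A fresh optimizer is then constructed over `[p for p in model.parameters() if p.requires_grad]` at the start of each stage, so the frozen layers incur neither gradient computation nor gradient communication.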
We decompose the encoder layers into k groups and share parameters only among the encoder layers within the same group. Then we use a strategy similar to that of the last subsection to efficiently train the ALBERT model. Specifically, at each stage, we add a new group of encoder layers, and only the newly added group of layers is updated in that stage.
Compared with the original ALBERT model, the modified ALBERT model requires more storage resources since it involves more parameters. However, the training speed is significantly improved. In real applications, one should decompose the encoder layers into a suitable number of groups to achieve a computation-storage efficiency balance, according to actual demand." }, { "heading": "4 EXPERIMENT", "text": "" }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "In this section, we perform experiments to demonstrate the effectiveness and efficiency of the proposed method. Following the setup in Devlin et al. (2018), we use English Wikipedia (2,500M words) and BookCorpus (800M words) for pre-training. Though more effective objectives were proposed in Lan et al. (2019); Steinley (2006), to make the comparison as meaningful as possible, in this experiment we still adopt the same Masked Language Model (MLM) and Next Sentence Prediction (NSP) objectives used by the original BERT (Devlin et al., 2018). We set the maximum length of each input sequence to be 128. We focus on the training-time speedup of the proposed method and on whether it achieves performance similar to the original BERT under the same setting. The models considered in this section are BERT-Base and BERT-Large, whose hyperparameter settings can be found in Devlin et al. (2018). For each model, we decompose the training process into k = 4 stages. So at each stage, we train 3 encoder layers for BERT-Base, and 6 encoder layers for BERT-Large. All the experiments are performed on a distributed computing cluster consisting of 32 Tesla V100 GPU cards with 32 GB memory, and the batch size is 32 × 32 = 1024. We use the LAMB optimizer with learning rate 0.00088 (You et al., 2019). All the other settings are the same as Devlin et al. (2018), including the data pre-processing, unless otherwise specified." }, { "heading": "4.2 COMPARISON FOR TRAINING WITH SAME STEPS", "text": "We train both BERT-Base and BERT-Large models using the MSLT method for a total of 1,000,000 steps. Specifically, each stage uses 200,000 steps, and the final model is retrained for the remaining 200,000 steps. We train a BERT-Base and a BERT-Large model with 1,000,000 steps from scratch using the original method (Devlin et al., 2018) as the baselines. For each model, the first 10,000 training steps are used for learning rate warmup. We first show the pre-training loss of all the models in Fig. 4. We can see that for both BERT-Base and BERT-Large, the loss of our method decreases faster than that of the baseline, and the final convergence value of our method is close to that of the baseline.
Finally, our method achieves more than 110% speedup (saving about 55% of the training time), which is a significant improvement compared with the 25% speedup achieved by progressively stacking (Gong et al., 2019).
Then we further evaluate the above models on the widely used General Language Understanding Evaluation (GLUE (Wang et al., 2018)) benchmark and the Stanford Question Answering Dataset (SQuAD1.1 and SQuAD2.0). The sequence length of the SQuAD tasks is 384.
So for the SQuAD tasks, we train the last 10% of the steps using a sequence length of 512 to learn the positional embeddings. To better show the advantage of the proposed method, we also add the progressively stacking method (Gong et al., 2019) as a baseline. Table 1 shows the Dev set results on SQuAD and selected GLUE tasks. Similar to Liu et al. (2019b), all the results are the median of five runs. For each GLUE task, we fine-tune the model with batch size 32 and we perform a grid search on the learning rate set [5e-5, 4e-5, 3e-5, 2e-5]. Following Clark et al. (2020), we fine-tune the model with 10 epochs for STS and RTE, and 3 epochs for all the other tasks. For the SQuAD1.1 task, we fine-tune the model with batch size 12, 2 epochs, and learning rate 3e-5. For the SQuAD2.0 task, we fine-tune the model with batch size 48, 2 epochs, and learning rate 5e-5.
From Table 1, we can see that for both BERT-Base and BERT-Large, the results of our method are close to those of the original BERT. In addition, the results of the baselines are comparable to those shown in Devlin et al. (2018), which confirms the validity of our experimental setup." }, { "heading": "4.3 COMPARISON FOR TRAINING WITH SAME TIME", "text": "In this section, we compare the performance of all the models trained for the same amount of time. Training a BERT-Base model with 1M steps using the MSLT method requires about 40 hours, which is similar to the time of training a BERT-Base model using the original method for 480,000 steps (about 41 hours). In addition, we further train two BERT-Large models using MSLT with 500,000 steps (about 42 hours) and the original method with 230,000 steps (about 42 hours), respectively. The results on selected GLUE tasks are shown in Table 2. We see that with the same training time, the models trained by our method perform much better than the baselines." }, { "heading": "4.4 ACCELERATE TRAINING OF ALBERT", "text": "As shown in Section 3.3, the proposed MSLT method can also be used to speed up the training of ALBERT. Table 3 reports the results on selected GLUE tasks of the original ALBERT-Base model and the modified ALBERT-Base model trained by MSLT. All the models are trained for 1M steps and the other settings are the same as in Section 4.2. We can see that the modified ALBERT model trained by MSLT achieves better performance than the original ALBERT. That is because the modified ALBERT model involves more parameters. Hence, the original ALBERT has higher memory efficiency, while the modified ALBERT trained using MSLT has higher time efficiency and better performance." }, { "heading": "4.5 EFFECT OF JOINTLY RETRAINING", "text": "In the above examples, we left 200,000 steps for jointly retraining all the layers. Here we investigate the impact of the retraining stage. Table 4 shows the results of BERT models with/without retraining. The results imply that the retraining stage can further improve the performance of the BERT model. The training efficiency of the retraining stage is much lower than that of the previous stages, since in this stage all the parameters are updated. However, since the model is already near-optimal after the previous stages, the retraining stage only requires a few steps. In practice, we use 10%∼20% of the total steps for retraining, so the retraining stage will not make the whole training very time-consuming." }, { "heading": "4.6 ONLINE TEST RESULTS ON GLUE", "text": "Lastly, we report the online test results on GLUE tasks (except WNLI) of all the models after fine-tuning, which are shown in Table 5.
All the submitted models are pre-trained with 1,000,000 steps. Following Devlin et al. (2018), for each task we select the best fine-tuning learning rate on the Dev set." }, { "heading": "5 DISCUSSION", "text": "In the last section, we empirically showed the effectiveness and efficiency of the proposed MSLT method. In this section we discuss how it works and why it improves convergence speed." }, { "heading": "5.1 RELATIONSHIP BETWEEN MSLT AND ALTERNATING OPTIMIZATION", "text": "At first glance, MSLT is an extension of the progressively stacking method, and the main difference between MSLT and progressively stacking is that at each stage progressively stacking updates all the parameters, while MSLT freezes most parameters and only updates those of the top few layers. Such a strategy is similar to alternating optimization (Bezdek & Hathaway, 2003), which is widely used for solving multi-variable nonconvex optimization problems, due to its simple implementation, fast convergence, and superb empirical performance (Li et al., 2019). Alternating optimization addresses a multi-variable optimization problem by iteratively updating a small set of variables while keeping the others fixed. The alternating optimization method is quite suitable for problems in which updating a subset of the variables is much easier than updating all of them (Bezdek & Hathaway, 2002).
According to alternating optimization, in the retraining stage we should update the bottom layers again and keep the other layers fixed. However, when the model is stacked deep, to compute the gradients of the parameters of the bottom layers, we have to backpropagate through all the top layers, according to the chain rule. In this situation, updating the parameters of the bottom layers is almost as expensive as updating all the parameters. So we update all the parameters simultaneously in the retraining stage.
Overall, in the MSLT method, we utilize the advantage of alternating optimization to quickly reach a near-optimal state. When the model is deep enough, alternating optimization no longer has an efficiency advantage, so we switch to joint descent to fully utilize the gradient information." }, { "heading": "5.2 AVOID CONTRADICTION OF CHOICE OF LEARNING RATE", "text": "In Section 2.3, we have shown that, though the bottom layers are updated for most steps in the progressively stacking method, they do not change significantly in the later stages. So it is not worthwhile to spend so much computation on updating the parameters of the bottom layers. Another issue is that for adaptive optimizers, such as Adam or LAMB, when a variable is close to its optimal value we should use a small learning rate, and when it is far from optimal we should use a large learning rate. In the progressively stacking strategy, the bottom layers are near-optimal while the newly added top layers are far from optimal since they have not been trained. The two groups therefore have completely different learning-rate requirements. The MSLT method addresses this learning-rate contradiction by first optimizing the untrained top layers while keeping the trained bottom layers fixed. When all the layers are near-optimal, we further retrain all the layers with a small learning rate. Meanwhile, this manner saves a lot of computation." }, { "heading": "6 CONCLUSION", "text": "In this paper, we propose an efficient multi-stage layerwise training method for accelerating the training process of BERT.
We decompose the whole training process into several stages and adopt the progressively stacking strategy, which trains a BERT model from shallow to deep by gradually adding new encoder layers. We find that the attention distributions of the bottom layers tend to be fixed early, and further training in the later stages does not bring significant changes. So in our method, at each stage, we only train the top few layers which are newly added, while the bottom layers only participate in the forward computation. Experimental results show that the proposed training method achieves a significant speedup without noticeable performance loss, compared with the existing training method." } ]
2020
PROGRESSIVELY STACKING 2.0: A MULTI-STAGE LAYERWISE TRAINING METHOD FOR BERT TRAINING
SP:247dfe2208798ffebd81477467ac4dab8661ef3a
[ "The authors contribute to the NAS literature by presenting a framework that works decently well on small ASR tasks, specifically TIMIT. They make judicious decisions regard the macro and micro cells that are then swept over. They also show that there is some correlation between training for TIMIT and tasks that have more data, such as librispeech. The experiments look to have been done carefully." ]
Powered by innovations in novel architecture design, noise tolerance techniques and increasing model capacity, Automatic Speech Recognition (ASR) has made giant strides in reducing word-error-rate over the past decade. ASR models are often trained with tens of thousands of hours of high quality speech data to produce state-of-the-art (SOTA) results. Industry-scale ASR model training thus remains computationally heavy and time-consuming, and consequently has seen little adoption of automatic techniques. On the other hand, Neural Architecture Search (NAS) has gained a lot of interest in recent years thanks to its successes in discovering efficient architectures, often outperforming handcrafted alternatives. However, by changing the standard training process into a bi-level optimisation problem, NAS approaches often require significantly more time and computational power compared to single-model training, and at the same time increase the complexity of the overall process. As a result, NAS has been predominantly applied to problems which do not require as extensive training as ASR, and even then the reproducibility of NAS algorithms is often problematic. Lately, a number of benchmark datasets have been introduced to address reproducibility issues by providing NAS researchers with information about the performance of different models obtained through exhaustive evaluation. However, these datasets focus mainly on computer vision and NLP tasks and thus suffer from limited coverage of application domains. In order to increase diversity in the existing NAS benchmarks, and at the same time provide a systematic study of the effects of architectural choices for ASR, we release NAS-Bench-ASR – the first NAS benchmark for ASR models. The dataset consists of 8,242 unique models trained on the TIMIT audio dataset for three different target epochs, each starting from three different initializations. The dataset also includes runtime measurements of all the models on a diverse set of hardware platforms. Lastly, we show that good cell structures identified in our search space for TIMIT transfer well to the much larger LibriSpeech dataset.
[ { "affiliations": [], "name": "Abhinav Mehrotra" }, { "affiliations": [], "name": "Alberto Gil C. P. Ramos" }, { "affiliations": [], "name": "Sourav Bhattacharya" }, { "affiliations": [], "name": "Łukasz Dudziak" }, { "affiliations": [], "name": "Ravichander Vipperla" }, { "affiliations": [], "name": "Thomas Chau" }, { "affiliations": [], "name": "Samin Ishtiaq" }, { "affiliations": [], "name": "Mohamed S. Abdelfattah" }, { "affiliations": [], "name": "Nicholas D. Lane" } ]
[ { "authors": [ "Dario Amodei", "Sundaram Ananthanarayanan", "Rishita Anubhai", "Jingliang Bai", "Eric Battenberg", "Carl Case", "Jared Casper", "Bryan Catanzaro", "Qiang Cheng", "Guoliang Chen" ], "title": "Deep Speech 2: End-to-End Speech Recognition in English and Mandarin", "venue": null, "year": 2016 }, { "authors": [ "Ahmed Baruwa", "Mojeed Abisiga", "Ibrahim Gbadegesin", "Afeez Fakunle" ], "title": "Leveraging End-toEnd Speech Recognition with Neural Architecture Search", "venue": "International Journal of Scientific Engineering Research,", "year": 2019 }, { "authors": [ "Han Cai", "Chuang Gan", "Song Han" ], "title": "Once for all: Train one network and specialize it for efficient deployment", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Xin Chen", "Lingxi Xie", "Jun Wu", "Qi Tian" ], "title": "Progressive differentiable architecture search: Bridging the depth gap between search and evaluation", "venue": null, "year": 2019 }, { "authors": [ "Yi-Chen Chen", "Jui-Yang Hsu", "Cheng-Kuang Lee", "Hung yi Lee" ], "title": "DARTS-ASR: Differentiable Architecture Search for Multilingual Speech Recognition and Adaptation", "venue": "In Interspeech,", "year": 2020 }, { "authors": [ "Chung-Cheng Chiu", "Colin Raffel" ], "title": "Monotonic chunkwise attention", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "venue": "In ACL,", "year": 2019 }, { "authors": [ "Shaojin Ding", "Tianlong Chen", "Xinyu Gong", "Weiwei Zha", "Zhangyang Wang" ], "title": "AutoSpeech: Neural Architecture Search for Speaker Recognition", "venue": "In Interspeech,", "year": 2020 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Łukasz Dudziak", "Thomas Chau", "Mohamed S. Abdelfattah", "Royson Lee", "Hyeji Kim", "Nicholas D. Lane" ], "title": "BRP-NAS: Prediction-based NAS using GCNs", "venue": "NeurIPS,", "year": 2020 }, { "authors": [ "M. Gales", "K. Knill", "A. Ragni", "Shakti P. Rath" ], "title": "Speech recognition and keyword spotting for low-resource languages: Babel project research at CUED", "venue": "SLTU,", "year": 2014 }, { "authors": [ "Awni Hannun", "Ann Lee", "Qiantong Xu", "Ronan Collobert" ], "title": "Sequence-to-Sequence Speech Recognition with Time-Depth Separable Convolutions", "venue": "In Interspeech,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Liqiang He", "Dan Su", "Dong Yu" ], "title": "Learned transferable architectures can surpass hand-designed architectures for large scale speech recognition", "venue": "arXiv preprint arXiv:2008.11589,", "year": 2020 }, { "authors": [ "Andrew Howard", "Ruoming Pang", "Hartwig Adam", "Quoc Le", "Mark Sandler", "Bo Chen", "Weijun Wang", "Liang-Chieh Chen", "Mingxing Tan", "Grace Chu", "Vijay Vasudevan", "Yukun Zhu" ], "title": "Searching for MobileNetV3", "venue": null, "year": 2019 }, { "authors": [ "J. Kahn", "M. Rivière", "W. Zheng", "E. Kharitonov", "Q. Xu", "P.E. Mazaré", "J. Karadayi", "V. Liptchinsky", "R. Collobert", "C. Fuegen", "T. Likhomanenko", "G. Synnaeve", "A. Joulin", "A. Mohamed", "E. 
Dupoux" ], "title": "Libri-Light: A Benchmark for ASR with Limited or No Supervision", "venue": "In ICASSP,", "year": 2020 }, { "authors": [ "Jihwan Kim", "Jisung Wang", "Sangki Kim", "Yeha Lee" ], "title": "Evolved Speech-Transformer: Applying Neural Architecture Search to End-to-End Automatic Speech Recognition", "venue": "In Interspeech,", "year": 2020 }, { "authors": [ "Kwangyoun Kim", "Kyungmin Lee", "Dhananjaya Gowda", "Junmo Park", "Sungsoo Kim", "Eunhyang S. Kim", "Young-Yoon Lee", "Jinsu Yeo", "Daehyun Kim", "Seokyeong Jung", "Jungin Lee", "Myoungji Han", "Chanwoo Kim" ], "title": "Attention based on-device streaming speech recognition with large speech", "venue": null, "year": 2019 }, { "authors": [ "Nikita Klyuchnikov", "Ilya Trofimov", "Ekaterina Artemova", "Mikhail Salnikov", "Maxim Fedorov", "Evgeny Burnaev" ], "title": "NAS-Bench-NLP: Neural Architecture Search Benchmark for Natural Language Processing", "venue": "arXiv preprint arXiv:2006.07116,", "year": 2020 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In NeurIPS,", "year": 2012 }, { "authors": [ "K-F Lee", "H-W Hon" ], "title": "Speaker-independent phone recognition using hidden markov models", "venue": "IEEE Transactions on Acoustics, Speech, and Signal Processing,", "year": 1989 }, { "authors": [ "Royson Lee", "Łukasz Dudziak", "Mohamed Abdelfattah", "Stylianos Venieris", "Hyeji Kim", "Hongkai Wen", "Nicholas Lane" ], "title": "Journey towards tiny perceptual super-resolution", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Jixiang Li", "Chuming Liang", "Bo Zhang", "Zhao Wang", "Fei Xiang", "Xiangxiang Chu" ], "title": "Neural Architecture Search on Acoustic Scene Classification", "venue": "In Interspeech,", "year": 2020 }, { "authors": [ "Liam Li", "Ameet Talwalkar" ], "title": "Random search and reproducibility for neural architecture search", "venue": "In UAI,", "year": 2019 }, { "authors": [ "Lisha Li", "Kevin Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Hyperband: A novel bandit-based approach to hyperparameter optimization", "venue": null, "year": 2017 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "DARTS: Differentiable architecture search", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Hanna Mazzawi", "Xavi Gonzalvo", "Aleks Kracun", "Prashant Sridhar", "Niranjan Subrahmanya", "Ignacio Lopez Moreno", "Hyun Jin Park", "Patrick Violette" ], "title": "Improving Keyword Spotting and Language Identification via Neural Architecture Search at Scale", "venue": null, "year": 2019 }, { "authors": [ "Tong Mo", "Yakun Yu", "Mohammad Salameh", "Di Niu", "Shangling Jui" ], "title": "Neural Architecture Search for Keyword Spotting", "venue": "In Interspeech,", "year": 2020 }, { "authors": [ "V. Panayotov", "G. Chen", "D. Povey", "S. Khudanpur" ], "title": "Librispeech: An ASR corpus based on public domain audio books", "venue": "In ICASSP,", "year": 2015 }, { "authors": [ "Daniel S. Park", "William Chan", "Yu Zhang", "Chung-Cheng Chiu", "Barret Zoph", "Ekin D. Cubuk", "Quoc V. 
Le" ], "title": "SpecAugment: A simple data augmentation method for Automatic Speech Recognition", "venue": null, "year": 2019 }, { "authors": [ "Hieu Pham", "Melody Guan", "Barret Zoph", "Quoc Le", "Jeff Dean" ], "title": "Efficient Neural Architecture Search via Parameters Sharing", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Vineel Pratap", "Qiantong Xu", "Jacob Kahn", "Gilad Avidov", "Tatiana Likhomanenko", "Awni Hannun", "Vitaliy Liptchinsky", "Gabriel Synnaeve", "Ronan Collobert" ], "title": "Scaling up online speech recognition using convnets", "venue": "In Interspeech,", "year": 2020 }, { "authors": [ "Xiaoyang Qu", "Jianzong Wang", "Jing Xiao" ], "title": "Evolutionary Algorithm Enhanced Neural Architecture Search for Text-Independent Speaker Verification", "venue": "In Interspeech,", "year": 2020 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V. Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Christian Sciuto", "Kaicheng Yu", "Martin Jaggi", "Claudiu Musat", "Mathieu Salzmann" ], "title": "Evaluating the search phase of neural architecture search", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Julien Siems", "Lucas Zimmer", "Arber Zela", "Jovita Lukasik", "Margret Keuper", "Frank Hutter" ], "title": "NASBench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search", "venue": null, "year": 2008 }, { "authors": [ "Gabriel Synnaeve", "Qiantong Xu", "Jacob Kahn", "Edouard Grave", "Tatiana Likhomanenko", "Vineel Pratap", "Anuroop Sriram", "Vitaliy Liptchinsky", "Ronan Collobert" ], "title": "End-to-End ASR: From supervised to semi-supervised learning with modern architectures", "venue": "In ICML: Workshop on Selfsupervision in Audio and Speech,", "year": 2020 }, { "authors": [ "Mingxing Tan", "Quoc V. Le" ], "title": "EfficientNet: Rethinking model scaling for convolutional neural networks", "venue": null, "year": 2019 }, { "authors": [ "Mingxing Tan", "Bo Chen", "Ruoming Pang", "Vijay Vasudevan", "Mark Sandler", "Andrew Howard", "Quoc V. Le" ], "title": "MnasNet: Platform-Aware Neural Architecture Search for Mobile", "venue": null, "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Chris Ying" ], "title": "Enumerating unique computational graphs via an iterative graph invariant", "venue": "arXiv preprint arXiv:1902.06192,", "year": 2019 }, { "authors": [ "Chris Ying", "Aaron Klein", "Eric Christiansen", "Esteban Real", "Kevin Murphy", "Frank Hutter" ], "title": "NASBench-101: Towards Reproducible Neural Architecture Search", "venue": null, "year": 2019 }, { "authors": [ "Arber Zela", "Julien Siems", "Frank Hutter" ], "title": "NAS-Bench-1Shot1: Benchmarking and Dissecting One-shot Neural Architecture Search", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Barret Zoph", "Quoc V. Le" ], "title": "Neural architecture search with reinforcement learning", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Innovations in Deep Neural Network (DNN) architecture design, data augmentation techniques and a continuous increase in the amount of available high quality training datasets, resulted in a massive reduction in ASR word-error-rate over the past decade [Amodei et al., 2016; Kim et al., 2019; Park et al., 2019; Synnaeve et al., 2020]. However, training ASR models to achieve state-of-the-art performance remains challenging as it requires computationally heavy training process, e.g., often thousands of GPU-hours are needed for good convergence [Amodei et al., 2016; Kahn et al., 2020]. Furthermore, the requirement of hyper-parameter optimizations increases the computational loads in ASR training. Despite the system-level complexities in the training procedure, the importance of novel architecture design has proven extremely important in a variety of application domains including ASR [Chiu & Raffel, 2018; Pratap et al., 2020], computer vision [He et al., 2016; Krizhevsky et al., 2012], and natural-language processing (NLP) [Devlin et al., 2019; Vaswani et al., 2017].\nHowever, architecture design is a non-trivial task and often depends on years of experience, domain knowledge of the researchers and is driven by empirical successes.\nOver the past few years, the deep learning community is witnessing a trend in adopting automatic techniques to find neural network architectures over more traditional hand-designed alternatives. NAS algorithms are highly successful in discovering state-of-the-art architectures in various computer vision tasks [Cai et al., 2020; Howard et al., 2019; Lee et al., 2020; Real et al., 2018; Tan et al., 2019; Tan & Le, 2019]. However, many of them suffer from high computational demands, requiring a large number of architecture variations to be trained [Zoph & Le, 2017]. Furthermore, NAS algorithms are often difficult to reproduce by different researchers, mainly due to a non-standard use of training settings, e.g., hyperparameters, and subtle variations in the architecture search spaces [Li & Talwalkar, 2019; Sciuto et al., 2020]. Recently, a number of attempts have been made to mitigate these problems by releasing various benchmark datasets for the NAS research community [Dong & Yang, 2020; Klyuchnikov et al., 2020; Siems et al., 2020; Ying et al., 2019]. These datasets usually provide a direct mapping between an architecture variant and its post training performances, which can be used efficiently by a NAS algorithm speeding up the search process and, at the same time, providing common, fully reproducible environment for assessment and comparison of different algorithms. Initial attempts of creating benchmark datasets predominantly focus on image classification tasks (with only one existing work targeting NLP at the time of this writing), and thus suffer from poor application coverage.\nWe address the lack of coverage problem by introducing a new NAS-benchmak dataset in the domain of ASR, to our best knowledge the very first of its kind. To build the dataset, we have trained 8, 242 unique convolutional neural network architectures on the TIMIT dataset [Garofolo et al., 1993]. We consider convolutional architectures due to their recent successes in the domain of ASR [Pratap et al., 2020; Hannun et al., 2019]. Moreover, convolution based architectures are computationally efficient to run on mobile devices, thus favouring real-time on-device deployment. 
Our dataset contains multiple runs of the entire training procedure of an architecture, spanning three initializations of the network parameters and three target epochs, amounting to a total of 74,178 = 8,242 × 3 × 3 training runs. In addition to the per-epoch validation and final test metrics, such as Phoneme Error Rate (PER) and CTC loss, we also provide run-times of the architectures on desktop and embedded GPUs for varying batch sizes. Furthermore, we compare a number of NAS algorithms [Zoph & Le, 2017; Real et al., 2018; Dudziak et al., 2020; Li et al., 2017; Li & Talwalkar, 2019] on our search space, highlighting potential challenges and differences compared to their performance on existing NAS benchmark datasets. Lastly, we show the transferability of the top architecture cells found on TIMIT to the much larger Librispeech dataset [Panayotov et al., 2015].
In summary, the contributions of this paper are: • Design of ASR NAS Search Space. ASR NAS-Bench is a first-of-its-kind search space for convolutional speech models. It facilitates the reproducible study of ASR through NAS methods and thus fills an important gap in the literature. The associated dataset consists of 8,242 unique cells and contains validation and test metrics along with model parameters, FLOPs and on-device run-times.1
• Enabling NAS for Large-scale ASR. Prohibitive training times for non-toy ASR datasets have prevented NAS from strongly influencing the evolution of ASR architecture design. We show that ASR NAS-Bench is able to support the discovery of cell structures that generalize even to large-scale datasets like Librispeech – a key breakthrough. We believe the methodological decisions in this paper will act as a blueprint for future work, where NAS plays a prominent role in ASR design.
• Validating Existing NAS Algorithm Design. Existing understanding of NAS is heavily influenced by image-based tasks. By systematically benchmarking popular NAS algorithms under a rich ASR search space, our findings provide otherwise-lacking scientific support for prior results.
1The NAS-Bench-ASR dataset and the code can be downloaded from https://github.com/AbhinavMehrotra/nb-asr." }, { "heading": "2 RELATED WORK", "text": "NAS Benchmarks. Ying et al. [2019] introduced the NAS-Bench-101 dataset in an attempt to address the difficulties in reproducing NAS research. It contains over 400K unique image classification models trained on the CIFAR10 dataset. Despite being the biggest NAS dataset yet, not all NAS algorithms can utilize the dataset due to the restrictions it imposes on the maximum number of edges to limit the search space size. To mitigate these limitations, NAS-Bench-201 was introduced, which contains 15K image classification models and includes more diagnostic data [Dong & Yang, 2020]. Concurrently with NAS-Bench-201, NAS-Bench-1shot1 [Zela et al., 2020] and NAS-Bench-301 [Siems et al., 2020] were introduced. NAS-Bench-1shot1 focuses on benchmarking NAS with weight sharing, whereas NAS-Bench-301 points out the need for surrogate functions for scaling and uses the DARTS search space [Liu et al., 2019] with approximately 10^18 models. Lastly, NAS-Bench-NLP [Klyuchnikov et al., 2020] contains models with custom recurrent cells, which are used to replace traditional layers like LSTMs. The dataset contains around 14K different architectures trained on a language modeling task.
NAS in Audio Modeling.
Recently, there has been an increasing interest in applying NAS to speech-related tasks, such as keyword spotting [Mo et al., 2020; Mazzawi et al., 2019], speaker verification [Ding et al., 2020; Qu et al., 2020], and acoustic scene classification [Li et al., 2020]. NAS has also been applied to ASR [He et al., 2020; Chen et al., 2020; Kim et al., 2020; Baruwa et al., 2019]. For example, Chen et al. [2020] used a search methodology based on vanilla DARTS to optimize a CNN-based feature extractor, followed by a fixed Bi-LSTM module and multiple output heads. The authors evaluated the NAS-discovered models under mono- and multi-lingual settings, using the Full Language Pack from IARPA BABEL [Gales et al., 2014], showing improvements over a VGG-based extractor. In contrast, Kim et al. [2020] considered evolution-based search to optimize a micro-cell used within a transformer architecture. The evaluation was done using English and Korean datasets of approximately 100 hours of speech each, under a monolingual setting. Similarly, He et al. [2020] used differentiable search, with P-DARTS [Chen et al., 2019] as the base, to optimize a convolutional model without the recurrent tail. Unlike the previous approaches, the evaluation was done on Mandarin, using various training datasets spanning between 170 and 10K hours of speech. Closest to our work is that of Baruwa et al. [2019], where the authors considered both the TIMIT and LibriSpeech datasets, but studied them independently rather than using TIMIT as a proxy for LibriSpeech. Further differences include variations in search space design, e.g., the authors only considered operations arranged in a fixed feed-forward manner, and allowed interleaving convolutions with recurrent blocks. Lastly, current work on NAS for ASR focuses mainly on using search to improve prediction accuracy; however, it often lacks the solid foundations needed for analysing and reasoning about the overall search process and identifying its limitations. This work presents a large-scale study of the effects of architectural changes on ASR models." }, { "heading": "3 ASR NAS-BENCH", "text": "The main purpose of the ASR NAS-Bench dataset is to provide a direct mapping from an architecture instance in the search space (§3.1) to its training time and final performance metrics. The mapping is designed for any NAS algorithm to quickly navigate the architecture space without requiring the time-consuming and computationally heavy training procedure. For architecture search we use the TIMIT dataset [Garofolo et al., 1993] and conduct a pilot experiment to select suitable hyperparameters used to train 8,242 unique models." }, { "heading": "3.1 ASR ARCHITECTURE SEARCH SPACE", "text": "In line with existing work [Liu et al., 2019; Pham et al., 2018] and NAS benchmarks [Ying et al., 2019; Dong & Yang, 2020], we restrict our search to small feed-forward neural network topologies, commonly known as cells. We repeat and arrange a chosen cell to construct a predefined macro-architecture, which is then trained on the TIMIT dataset.
Micro-Architecture. A micro-architecture, or cell, as shown in Figure 1(a), is represented by a directed acyclic graph (DAG). We consider DAGs with four nodes T1, . . . , T4 with corresponding incident tensors t1, . . . , t4 and allow two types of edges: main and skip connection edges. A main edge connects two successive nodes Ti−1 and Ti in the graph, as shown by the solid line-arrows in Figure 1(a).
A skip connection edge, on the other hand, can connect any two nodes Tj and Ti with the constraint j < i; such edges are depicted as dotted line-arrows in Figure 1(a). Each edge ej→i represents an operation on the tensor tj. The tensor ti is computed by summing the results of the operations applied by all incoming edges on Ti (i.e., ej→i, where j < i); a minimal sketch of this cell computation is given below.
We consider a choice of six operations for the main edges: a linear operation, four convolution operations distinguished by choices of (kernel size, dilation) ∈ {(5, 1), (5, 2), (7, 1), (7, 2)}, and a zero operation, which outputs a tensor of zeros with the same shape and type as its input. Skip connection operations, on the other hand, can be either the identity operation or the zero operation. We use an L2 kernel regularizer with convolution operations and dropout with linear operations. Within the convolution operations, the size of the kernels (5 or 7) is chosen as a trade-off between the audio context duration and the model size. Similarly, two dilation factors (1 or 2) are considered to investigate the trade-off between audio context duration and time resolution. As the final step of a cell, the value of t4 is further passed through a layer normalization to produce the output of the cell. These design choices are made so that the search space can be used by the vast majority of NAS algorithms.
Macro-Architecture. One of our focuses in this benchmark is on-device deployment, so the macro-architecture is made up of convolution and unidirectional LSTM blocks, as they are computationally efficient to run on mobile CPUs. The macro-architecture, illustrated in Figure 1(b), is a sequential computation composed of four blocks followed by a unidirectional LSTM and a linear layer. Individual blocks are composed of a convolution layer, an activation layer (ReLU), a layer normalization and a composition of Ci (i = 1, 2, 3, 4) search cells, with the same micro-architecture across all the blocks. Each block is parametrized by three parameters Fi, Ki and Si. These define the number of filters Fi, the kernel size Ki and the stride Si of the convolution layer. Note that Fi is also the number of filters of the convolution layers inside the search cell. We use the following set of parameters to define the macro-architecture while performing all micro-architecture training on the TIMIT dataset: C1:4 = [3, 4, 5, 6], F1:4 = [600, 800, 1000, 1200], K1:4 = [8, 8, 8, 8] and S1:4 = [1, 1, 2, 2]." }, { "heading": "3.2 TIMIT DATASET", "text": "TIMIT [Garofolo et al., 1993] is one of the earliest datasets designed for evaluating and benchmarking phoneme ASR systems. It is ideally suited for neural architecture search experiments due to its small size and high quality transcriptions. It comprises 6,300 utterances from 630 speakers, amounting to 5.4 hours of speech. We use the standard training partition of 3,696 utterances from 462 speakers. Following Lee & Hon [1989], we split the core test dataset into a test partition, consisting of 24 speakers, and a validation partition. In line with the Kaldi TIMIT s5 recipe, the original 61 phonemes are mapped into a set of 48 phonemes, which form the output layer targets for all the models. These 48 phonemes are further folded to a set of 39 during evaluation [Lee & Hon, 1989]. We use 80-dimensional log-mel spectrograms computed over a 25 ms sliding window with a stride of 10 ms as input features.
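A minimal sketch of the cell computation described in Section 3.1 is given below. It is written in PyTorch style rather than the benchmark's TensorFlow pipeline, and the edge operations are assumed to be modules mapping [batch, time, channels] tensors to tensors of the same shape.

```python
import torch
import torch.nn as nn

class SearchCell(nn.Module):
    """4-node cell: edges["j->i"] holds the op on edge T_j -> T_i
    (j < i, with T_1..T_4 indexed 0..3).  Main edges are 0->1, 1->2
    and 2->3; the remaining pairs may carry identity skips.  Omitted
    edges behave like the zero operation."""

    def __init__(self, channels: int, edges: dict):
        super().__init__()
        self.edges = nn.ModuleDict(edges)       # keys such as "0->2"
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):
        t = [x, None, None, None]               # t[0] is the cell input
        for i in range(1, 4):
            incoming = [self.edges[k](t[j]) for j in range(i)
                        if (k := f"{j}->{i}") in self.edges]
            t[i] = sum(incoming) if incoming else torch.zeros_like(x)
        return self.norm(t[3])
```

Here the zero operation is modeled simply by omitting an edge, which mirrors how the isomorphism reduction in Section 4 treats it.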
During training, we employ a curriculum learning strategy [Amodei et al., 2016], where we begin training by iterating twice over audio utterances shorter than 1 s, then twice over audio utterances shorter than 2 s as a warm-up phase, followed by training on the entire dataset (a minimal sketch of this schedule is given below). For efficiency, we also use a batch bucketing strategy, where a batch size of 64 is used for audio utterances shorter than 2 s, and a batch size of 32 is used otherwise. We use a CTC beam-search decoder with a beam size of 12. Note that we did not use any language model in our experiments, in order to avoid further hyper-parameter optimization (e.g., of weighting factors). This also helped us keep the architecture search for the Acoustic Model (AM) tractable. All training experiments on TIMIT report the PER on the validation and test partitions." }, { "heading": "3.3 PILOT EXPERIMENT", "text": "As different training procedures can potentially lead to substantially different results [Ying et al., 2019], in this work we employ a single general training procedure for all models. Moreover, we use the same parameter set, e.g., F, K, S, learning rate, and decay across all 8,242 models. In order to select good values of the parameters, we conducted a range of training experiments, in which we performed a grid search to find good macro structure parameters and optimizer settings.
Macro Structure Parameter Selection. The macro structure considered in this work has four main blocks (see Figure 1(b)) and we considered the following variations in the pilot study: filters F ∈ {[600, 800, 1000, 1200], [900, 1100, 1300, 1500], [1200, 1300, 1500, 1700]}, kernel sizes K ∈ {[6, 8, 10, 12], [8, 8, 8, 8], [10, 10, 10, 10], [12, 12, 12, 12]}, and time reduction via strides S ∈ {[2, 2, 2, 1], [1, 1, 2, 2], [2, 2, 1, 1]}.
Optimizer Setting for Training. We further explored good ranges of the learning rate, decay factor, and start epoch for an exponential-decay learning rate scheduler used in conjunction with the Adam optimizer. Specifically, we considered the learning rate (LR) ∈ {10^-3, 10^-4, 5×10^-5, 10^-5}, the decay factor ∈ {0.9, 0.95, 0.99}, and the LR decay start epoch ∈ {5, 10, 15}. Finally, we randomly chose five micro-architectures among all possible 8,242 cells and for each cell we conducted the grid search over the aforementioned parameters. Specifically, for each cell we created a model from a particular macro-architecture, which we then trained by selecting a particular optimizer setting on TIMIT. We identified the parametric setting that resulted in the lowest PER across all five cells in the experiments and used the same parameters in all later model training experiments. The best macro structure parameters are presented above (see §3.1), whereas the best LR was 10^-4, and the decay factor and start epoch were: (i) 0.9 and 5 for target epoch 40, (ii) 0.631 and 2 for target epoch 10, and (iii) 0.398 and 1 for target epoch 5, respectively. The decay factor and start epoch are chosen such that the learning rate of the optimizer reaches the same value at the end of the target epoch of training." }, { "heading": "3.4 TRAINING SETUP", "text": "Individual models are trained using a TensorFlow-based training pipeline running on a single GPU. We leveraged NVIDIA V100 and P40 GPUs, and decreased training time by increasing throughput via the bucketing strategy based on the audio length. Metrics such as CTC loss and Phoneme Error Rate (PER) for TIMIT were used to measure model performance.
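A hedged sketch of the curriculum warm-up and batch-bucketing schedule from Section 3.2 follows; the `duration` attribute of the utterance objects is an assumption about the data pipeline, not the benchmark's actual implementation.

```python
def bucketed_batches(utterances):
    """Batch bucketing: batch size 64 for utterances shorter than 2 s,
    and 32 for the longer ones."""
    short = [u for u in utterances if u.duration < 2.0]
    rest = [u for u in utterances if u.duration >= 2.0]
    for group, size in ((short, 64), (rest, 32)):
        for k in range(0, len(group), size):
            yield group[k:k + size]

def curriculum_schedule(dataset, num_epochs):
    """Warm-up with two passes over utterances shorter than 1 s and two
    over those shorter than 2 s, then train on the entire dataset."""
    warm1 = [u for u in dataset if u.duration < 1.0]
    warm2 = [u for u in dataset if u.duration < 2.0]
    for phase in (warm1, warm1, warm2, warm2):
        yield from bucketed_batches(phase)
    for _ in range(num_epochs):
        yield from bucketed_batches(dataset)
```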
Specifically, train and validation metrics were logged at each epoch, and the test metrics were logged at the end of training for the best model (referred to as the final test PER). Our dataset contains logs of each of the 8,242 models trained with three different seeds and for three target epochs (5, 10 and 40), thus generating a total of 74,178 model training traces. Additionally, we computed the number of parameters and floating point operations (FLOPs) for each of the architectures and measured their latency on two commonly used hardware platforms: an NVIDIA GeForce GTX 1080 Ti and a Jetson Nano." }, { "heading": "4 ANALYSIS OF NAS-BENCH-ASR DATASET", "text": "As described in §3.1, our cell search space is constructed by selecting 3 main operations from a set of 6 candidates, and then selecting either the identity or the zero operation for the 6 skip connection edges (see Figure 1(a)). This amounts to 13,824 (6^3 × 2^6) possible instances in the search space. However, the use of zero operations introduces computational graph isomorphism. To identify unique architectures, following [Ying et al., 2019; Dudziak et al., 2020], we first identify nodes with zero operations and disconnect them from their neighbors. Then we perform a reachability test and remove all nodes that cannot be reached from the input or output nodes of the cell (a minimal sketch of this reduction is given below). Finally, we end up with a minimized graph representation, which we hash using an iterative graph invariant approach [Ying, 2019]. After accounting for isomorphism and discarding the empty graph from the list of valid models, we found 8,242 unique architectures, out of which 8,000 are without any zero operations.
Distribution of Model Performances. Analyzing the model training results, we found that the majority of the models converged and achieved a final test PER within a small range. Specifically, 83.3% (6,869) of the models fall below a PER of 26%, with the best model reaching a test PER of 20.83%. The remaining 16.7% of the models form a rather evenly-distributed tail, achieving PER values between 26% and 100%. Because of the high density of points below 26% PER and the overall high range of PER values, we conducted our analysis by considering three subgroups of models: TOP models are the top 1,000 architectures according to test PER, MIDDLE are models at positions between 2,000 and 3,000 in the test PER ranking, and BOTTOM are between 7,000 and 8,000.
Correlation Between Validation and Test PERs. Figure 2 shows the correlation between the test PER and the best validation PER. The results indicate that there is a moderate correlation for top models, weak for middle ones, and strong for the worst ones. An overall summary is also presented in Table 1. In general, the correlation between validation and test accuracy among the top-performing models is significantly lower than for image classification benchmarks, as highlighted in Table 2, which can potentially pose an additional challenge to NAS algorithms.
Distribution of Operations. Figure 3 (first row) presents the distribution of selected operations among main edges across the top, middle and bottom-most models. We found that the top models have the highest number of linear operations on the main edges, while the middle and bottom groups have fewer linear operations but more convolution operations. Figure 3 (second row) also highlights that the worst models (bottom group) have the highest selection of skip-connections.
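A sketch of the isomorphism reduction described at the start of Section 4 is given below, using networkx; the Weisfeiler-Lehman hash is used here as a stand-in for the iterative graph-invariant hash of Ying (2019), and the edge encoding is an assumption.

```python
import networkx as nx

def canonical_hash(edges):
    """edges maps (j, i) node pairs (j < i, nodes T_1..T_4 as 0..3)
    to operation labels.  Zero ops are dropped, nodes off every
    input->output path are pruned, and the remaining DAG is hashed."""
    g = nx.DiGraph()
    g.add_nodes_from(range(4))
    for (j, i), op in edges.items():
        if op != "zero":
            g.add_edge(j, i, op=op)
    # Keep only nodes reachable from the input (node 0) that can also
    # reach the output (node 3).
    keep = (nx.descendants(g, 0) | {0}) & (nx.ancestors(g, 3) | {3})
    g = g.subgraph(keep).copy()
    if g.number_of_edges() == 0:
        return None                 # empty graph: not a valid model
    return nx.weisfeiler_lehman_graph_hash(g, edge_attr="op")
```

Two cells then count as the same architecture exactly when their hashes agree, which is how the 13,824 raw configurations collapse to 8,242 unique models.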
Figure 4 further confirms these observations, showing how significantly worse the models in the search space become as they include fewer linear layers or more than 2 skip connections.
Interestingly, we found that the micro-cells in our search space yielding the best validation PER are similar to the structure found in state-of-the-art convolutional models such as time-depth-separable (TDS) models [Pratap et al., 2020]. Figure 5 shows the TDS-cell and the micro-cells with the best validation and test PERs (from left to right). This indicates the potential of conducting NAS on smaller datasets to discover new architectures yielding state-of-the-art results on larger datasets.
Latency, Params, FLOPs and PER. Figure 6 investigates the relationship among the test PER, the number of parameters and the latency of models in the dataset. FLOP counts are closely correlated with the number of parameters, so they are not shown for simplicity. Although this figure shows only the latency measured on a desktop GPU (NVIDIA GeForce GTX 1080 Ti) with a batch size of 32, NAS-Bench-ASR includes latencies measured on different devices with varying batch sizes. Also shown in the figure are 13 Pareto-optimal models, indicated by purple (x) markers, while the best models on the test and validation datasets are highlighted with red markers. TDS-like models [Pratap et al., 2020] achieve a test PER between 22.7% and 23.45%, and the best validation PER model has a test PER of 22.13%. There are Pareto-optimal models that achieve much lower latency without a significant increase in PER; for instance, the model indicated by the violet (⬩) marker has one-fifth of the latency of the best test-PER model and still achieves a test PER of 21.98%. The distribution of coloured clusters along the y axis suggests that the number of parameters is not a strong determinant of model accuracy. Analogously, models with a similar number of parameters can have very different latencies." }, { "heading": "5 TRANSFERABILITY", "text": "Training an ASR architecture to achieve state-of-the-art WER is a slow process, as it often requires iterating over tens of thousands of hours of speech data, and potentially over various hyper-parameter choices. Even when distributed across multiple GPUs, this process can take days or weeks [Amodei et al., 2016; Kahn et al., 2020]. Since the largest strides in lowering WERs have resulted from new architectures being developed manually by speech experts slowly over decades, it would be beneficial to scale such exploration through NAS. However, performing NAS directly on the large-scale ASR datasets needed for peak performance is not practical due to the significantly higher computational cost, an issue that is further exacerbated when one is also interested in architectures yielding lower latency and computational cost.
For this reason, we explore how well new architectures found by NAS on TIMIT (5 hours) transfer to Librispeech (100 hours). Specifically, we investigate the correlation between the PER of the best/random/worst performing architectures found on TIMIT and the Label Error Rate (LER) (or WER) obtained when training the same architectures on Librispeech. If a high correlation is present, this would mean that one could lower the computational requirements of NAS by running it on a smaller dataset.
In our case this would mean a 20x speedup in NAS, and in general this would open a new avenue for research, namely how much one could save by running NAS on different smaller datasets, or on a subset of a larger dataset extracted by current or next-generation data summarization techniques.
Librispeech Dataset. Librispeech [Panayotov et al., 2015] is a widely used ASR training and benchmarking database of about 1,000 hours of speech derived from audiobooks. The training dataset is partitioned into clean-100, clean-360 and other-500 sets, with the clean partitions exhibiting much higher SNR. The test and dev sets are partitioned similarly. In our work, we have used the clean-100 partition for training and dev-clean and test-clean for validation and testing. The choice of the 100-hour partition strikes a balance between building a decent large-scale model and keeping the computational cost involved in the architecture search experiments feasible. Previous literature indicates that models trained on the 100-hour partition are about 1-1.5% absolute [Panayotov et al., 2015] behind models trained on 1,000 hours when evaluated on the test-clean set.
Training. Librispeech models were trained with 4 GPUs using the Horovod distributed framework. We use 80-dimensional log-mel features computed over 25 ms Hamming windows strided by 10 ms. Guided by latency considerations, we used a vocabulary of 780 sub-word tokens as output. Model performance is measured by tracking the CTC loss, LER, and WER on the validation dataset during training and on the test dataset post-training.
Pilot Experiments. Similarly to §3.3, we also conducted a pilot experiment to determine appropriate macro structure parameters and hyper-parameters to train the models for Librispeech. To promote transferability between datasets at a reasonable compute cost, we took the micro-cell of the best model found on the TIMIT dataset (out of 8,242) and conducted another grid search, this time on the Librispeech dataset. Accordingly, we chose the final number of filters, kernel size, dilation, learning rate, decay factor and start epoch based on the lowest LER/WER on Librispeech.
Correlation Analysis. For a meaningful analysis we selected 100 architectures from our search space based on their TIMIT PER, namely the best 20, 70 random ones, and the worst 10. We then plugged the micro-cell architectures into the macro-architecture for Librispeech. Finally, we trained these models on Librispeech with the hyper-parameters reported earlier and computed their LERs/WERs. We observe that models that did not converge for TIMIT also did not converge for Librispeech. For this reason, when looking at the strength of transferability between TIMIT and Librispeech, we considered models that had a TIMIT PER lower than 50%. As shown in Figure 7, there is a high correlation between TIMIT PER and Librispeech WER. In particular, we observe a correlation of 0.85 between validation metrics and 0.86 between test metrics. Thus, it is possible to do indirect NAS for the cell micro-architecture on Librispeech by first applying NAS on TIMIT, and then reusing the identified cell structures in a macro-architecture suitable for Librispeech." }, { "heading": "6 NAS EXPERIMENTS", "text": "Following existing NAS benchmarks, we include results from running different NAS algorithms on our search space, using NAS-Bench-ASR to query model accuracies (a minimal sketch of such a query-based search loop is given below).
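A hedged sketch of how a search algorithm can use the benchmark in place of real training is shown below; `query_benchmark` and the architecture encoding are hypothetical stand-ins for the actual nb-asr API, which is not reproduced here.

```python
import random

MAIN_OPS = ["linear", "conv5", "conv5d2", "conv7", "conv7d2", "zero"]

def sample_cell():
    """Random point in the search space: 3 main ops and 6 binary skips."""
    return (tuple(random.choice(MAIN_OPS) for _ in range(3)),
            tuple(random.randint(0, 1) for _ in range(6)))

def random_search(query_benchmark, budget):
    """Baseline NAS loop: each query replaces a full training run."""
    best, best_val = None, float("inf")
    for _ in range(budget):
        cell = sample_cell()
        val_per = query_benchmark(cell)   # tabulated validation PER
        if val_per < best_val:
            best, best_val = cell, val_per
    return best, best_val
```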
We consider the following standard NAS algorithms: Regularized Evolution (RE) [Real et al., 2018], Hyperband (HYPB) [Li et al., 2017], REINFORCE with an LSTM-based controller (REINF) [Zoph & Le, 2017], Binary Predictor (BP) [Dudziak et al., 2020], and Random Search (RAND) [Li & Talwalkar, 2019]. We also include an algorithm based on Deep Q-Learning (QLRN), which behaves similarly to RE but tries to chain mutations (actions) and learns which mutations can eventually lead to better results. We report results as functions of “trained models” to unify the notion of cost. Because in the full training setting we trained models for 40 epochs, if an algorithm trains a model for fewer epochs than that (e.g., HYPB) we consider the cost of such a training to be n/40 of a “trained model”, where n is the number of epochs used (0 < n ≤ 40). We run each algorithm 100 times and present results in Figure 8 as the average test PER of the best model found (all algorithms use validation accuracy as their optimization target), and the average validation PER of the most recently trained model (after smoothing the curve using an exponential moving average with weight 0.9). We found that none of the algorithms were able to consistently find the best model; this is somewhat surprising given the moderate size of our search space and the fact that algorithms like RE are able to consistently find the best model on NAS-Bench-201 given a similar budget. Another interesting observation is related to the weak correlation between validation and test accuracies mentioned in the previous sections. By comparing the figures on the right and left sides (Figure 8), we can see that algorithms which tend to better optimize their rewards (validation accuracy) as the search progresses are not necessarily better on the test PER objective; this is especially visible in the case of REINFORCE, which significantly outperforms random search if validation accuracy is considered, but presents comparable results when evaluated using test PER. Another surprise is the weak performance of Hyperband, the only algorithm that can adjust the number of epochs used to train models. From our observations it seems that many of our models converge to relatively good PER values quickly (after 5-10 epochs), so we expected Hyperband to be able to leverage this property. However, it seems that in the case of our search space this is not so helpful, potentially because models may still change their relative ordering as they are trained for more epochs, despite having already achieved decent performance." }, { "heading": "7 CONCLUSIONS", "text": "We introduced NAS-Bench-ASR, the first comprehensive dataset allowing computationally inexpensive training- and test-time evaluation of model performance while conducting NAS research in the domain of ASR. We trained 8,242 unique models on the TIMIT dataset with three initializations and three target epochs. In addition to the common training metrics in ASR, e.g., PER and CTC loss, we also provide information on the number of parameters, FLOPs and the latency of running all the models on two hardware platforms for varying batch sizes. Model run-times are especially beneficial for NAS researchers interested in optimizing architectures for efficient on-device deployment. We presented a comprehensive analysis of our dataset and evaluated the performance of a number of NAS algorithms on it.
Lastly, we show that good cell structures as identified on the TIMIT dataset transfer well to Librispeech, which paves the way for affordable NAS in the ASR domain." }, { "heading": "A APPENDIX", "text": "Our dataset comprises 8,242 unique models, each trained for three target-epoch settings and starting with three different initializations. In this appendix we present the results for the models with different initializations and target epochs.\nA.1 CORRELATION BETWEEN VALIDATION AND TEST PERS FOR THE RUNS WITH DIFFERENT TARGET EPOCHS\nIn this section we present the correlation of validation and test PERs between different target epochs. Specifically, we correlate each run of target epoch 40 (i.e., runs starting with a unique initialization and trained for 40 epochs) with the runs for target epochs 5 and 10. This analysis informs us whether reduced training (i.e., a target epoch of 5 or 10) can be used as a good proxy for NAS. As shown in Table 3, there is a weak correlation between target epochs 40 and 5. However, there is a moderate correlation between target epochs 40 and 10, suggesting that reduced training might be useful for performing NAS.\nA.2 CORRELATION BETWEEN VALIDATION AND TEST PERS\nIn this section we present the correlation between the final test PER and the best validation PER for the three runs for target epoch 40 starting with different initializations, which is an extension of the results presented in Figure 2. As shown in Figure 9, the results for all seeds are similar. Specifically, there is a moderate correlation for the top models, a weak one for the middle ones, and a strong one for the worst ones.\nA.3 CELL ARCHITECTURES IN OUR SEARCH SPACE\nIn this section we present the micro-cell architectures in our search space that achieved the best validation and test PERs for the three runs starting with different initializations. Figure 10 shows the micro-cell architectures that achieved the best validation PERs (top row) and the best test PERs (bottom row). We observe that the micro-cell architectures of top models often start with two convolution layers followed by a linear layer (which is consistent across all architectures).\nA.4 IMPACT OF DIFFERENT OPERATIONS-COUNT ON MODEL PERFORMANCE\nIn this section we present the impact of the number of selected operations on model performance. As shown in Figure 11, models with more convolution layers, or with no linear layer, are ranked among the worst-performing models. At the same time, models with zero ops on the main edge perform poorly. This indicates that the top-performing models are designed such that all nodes comprise linear and convolution layers. We also observe that an increase in the number of skip connections deteriorates performance significantly, although, surprisingly, two skip connections yield results similar to the traditional single skip connection, which aims to learn a perturbation of the identity rather than a generic transformation between the input and output nodes." } ]
2,021
NAS-BENCH-ASR: REPRODUCIBLE NEURAL ARCHITECTURE SEARCH FOR SPEECH RECOGNITION
SP:06c032ed2556090f71a474a5ff4ee340c103d5c2
[ "The key message of this paper is that input-gradients (gradient of the logit wrt to input) or loss-gradients are/might be unrelated to the discriminative capabilities of a DNN. The input-gradient is a key primitive in several interpretability and visualization methods. Until now, it has been taken as a given that these gradients reveal 'why' or what parts of the inputs the model is sensitive to. However, this paper questions this reasoning and says that if the input-gradients can be easily manipulated without changing the generalization ability of the model, then does the input-gradient really contain discriminative signals? " ]
Current methods for the interpretability of discriminative deep neural networks commonly rely on the model’s input-gradients, i.e., the gradients of the output logits w.r.t. the inputs. The common assumption is that these input-gradients contain information regarding pθ(y | x), the model’s discriminative capabilities, thus justifying their use for interpretability. However, in this work we show that these input-gradients can be arbitrarily manipulated as a consequence of the shift-invariance of softmax without changing the discriminative function. This leaves an open question: if input-gradients can be arbitrary, why are they highly structured and explanatory in standard models? We investigate this by re-interpreting the logits of standard softmax-based classifiers as unnormalized log-densities of the data distribution and show that input-gradients can be viewed as gradients of a class-conditional density model pθ(x | y) implicit within the discriminative model. This leads us to hypothesize that the highly structured and explanatory nature of input-gradients may be due to the alignment of this class-conditional model pθ(x | y) with that of the ground truth data distribution pdata(x | y). We test this hypothesis by studying the effect of density alignment on gradient explanations. To achieve this density alignment, we use an algorithm called score-matching, and propose novel approximations to this algorithm to enable training large-scale models. Our experiments show that improving the alignment of the implicit density model with the data distribution enhances gradient structure and explanatory power, while reducing this alignment has the opposite effect. This also leads us to conjecture that unintended density alignment in standard neural network training may explain the highly structured nature of input-gradients observed in practice. Overall, our finding that input-gradients capture information regarding an implicit generative model implies that we need to re-think their use for interpreting discriminative models.
[ { "affiliations": [], "name": "MODEL INTERPRETABILITY" }, { "affiliations": [], "name": "Suraj Srinivas" }, { "affiliations": [], "name": "François Fleuret" } ]
[ { "authors": [ "Julius Adebayo", "Justin Gilmer", "Michael Muelly", "Ian Goodfellow", "Moritz Hardt", "Been Kim" ], "title": "Sanity checks for saliency maps", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Marco Ancona", "Enea Ceolini", "Cengiz Oztireli", "Markus Gross" ], "title": "Towards better understanding of gradient-based attribution methods for deep neural networks", "venue": "In 6th International Conference on Learning Representations (ICLR", "year": 2018 }, { "authors": [ "Haim Avron", "Sivan Toledo" ], "title": "Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix", "venue": "Journal of the ACM (JACM),", "year": 2011 }, { "authors": [ "David GT Barrett", "Benoit Dherin" ], "title": "Implicit gradient regularization", "venue": "arXiv preprint arXiv:2009.11162,", "year": 2020 }, { "authors": [ "John S Bridle" ], "title": "Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition", "venue": "In Neurocomputing,", "year": 1990 }, { "authors": [ "Prasad Chalasani", "Jiefeng Chen", "Amrita Roy Chowdhury", "Somesh Jha", "Xi Wu" ], "title": "Concise explanations of neural networks using adversarial training", "venue": "arXiv, pp", "year": 2018 }, { "authors": [ "Ann-Kathrin Dombrowski", "Maximillian Alber", "Christopher Anders", "Marcel Ackermann", "Klaus-Robert Müller", "Pan Kessel" ], "title": "Explanations can be manipulated and geometry is to blame", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Logan Engstrom", "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial robustness as a prior for learned representations", "venue": null, "year": 1906 }, { "authors": [ "Christian Etmann", "Sebastian Lunz", "Peter Maass", "Carola-Bibiane Schönlieb" ], "title": "On the connection between adversarial robustness and saliency map interpretability", "venue": null, "year": 1905 }, { "authors": [ "Ruth Fong", "Mandela Patrick", "Andrea Vedaldi" ], "title": "Understanding deep networks via extremal perturbations and smooth masks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Amirata Ghorbani", "Abubakar Abid", "James Zou" ], "title": "Interpretation of neural networks is fragile", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Will Grathwohl", "Kuan-Chieh Wang", "Joern-Henrik Jacobsen", "David Duvenaud", "Mohammad Norouzi", "Kevin Swersky" ], "title": "Your classifier is secretly an energy based model and you should treat it like one", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Juyeon Heo", "Sunghwan Joo", "Taesup Moon" ], "title": "Fooling neural network interpretations via adversarial model manipulation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Michael F Hutchinson" ], "title": "A stochastic estimator of the trace of the influence matrix for laplacian smoothing splines", "venue": "Communications in Statistics-Simulation and Computation,", "year": 1990 }, { "authors": [ "Aapo Hyvärinen" ], "title": "Estimation of non-normalized statistical models by score matching", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Daniel Jakubovitz", "Raja 
Giryes" ], "title": "Improving dnn robustness to adversarial attacks using jacobian regularization", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Simran Kaur", "Jeremy Cohen", "Zachary C Lipton" ], "title": "Are perceptually-aligned gradients a general property of robust classifiers", "venue": "arXiv preprint arXiv:1910.08640,", "year": 2019 }, { "authors": [ "Durk P Kingma", "Yann LeCun" ], "title": "Regularized estimation of image statistics by score matching", "venue": "In Advances in neural information processing systems,", "year": 2010 }, { "authors": [ "Aravindh Mahendran", "Andrea Vedaldi" ], "title": "Visualizing deep convolutional neural networks using natural pre-images", "venue": "International Journal of Computer Vision,", "year": 2016 }, { "authors": [ "Anh Nguyen", "Alexey Dosovitskiy", "Jason Yosinski", "Thomas Brox", "Jeff Clune" ], "title": "Synthesizing the preferred inputs for neurons in neural networks via deep generator networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Tianyu Pang", "Taufik Xu", "Chongxuan Li", "Yang Song", "Stefano Ermon", "Jun Zhu" ], "title": "Efficient learning of generative models via finite-difference score matching", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Barak A Pearlmutter" ], "title": "Fast exact multiplication by the hessian", "venue": "Neural computation,", "year": 1994 }, { "authors": [ "Andrew Slavin Ross", "Finale Doshi-Velez" ], "title": "Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients", "venue": "arXiv preprint arXiv:1711.09404,", "year": 2017 }, { "authors": [ "Wojciech Samek", "Alexander Binder", "Grégoire Montavon", "Sebastian Lapuschkin", "Klaus-Robert Müller" ], "title": "Evaluating the visualization of what a deep neural network has learned", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2016 }, { "authors": [ "Shibani Santurkar", "Andrew Ilyas", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Image synthesis with a single (robust) classifier", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ramprasaath R Selvaraju", "Michael Cogswell", "Abhishek Das", "Ramakrishna Vedantam", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Konstantin Shmelkov", "Cordelia Schmid", "Karteek Alahari" ], "title": "How good is my gan", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Avanti Shrikumar", "Peyton Greenside", "Anshul Kundaje" ], "title": "Learning important features through propagating activation differences", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "venue": "arXiv preprint arXiv:1312.6034,", "year": 2013 }, { "authors": [ "Daniel Smilkov", "Nikhil Thorat", "Been Kim", "Fernanda Viégas", "Martin Wattenberg" ], "title": "Smoothgrad: removing noise by adding noise", "venue": "arXiv preprint 
arXiv:1706.03825,", "year": 2017 }, { "authors": [ "Yang Song", "Stefano Ermon" ], "title": "Generative modeling by estimating gradients of the data distribution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yang Song", "Sahaj Garg", "Jiaxin Shi", "Stefano Ermon" ], "title": "Sliced score matching: A scalable approach to density and score estimation", "venue": null, "year": 1905 }, { "authors": [ "Suraj Srinivas", "François Fleuret" ], "title": "Full-gradient representation for neural network visualization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Akshayvarun Subramanya", "Vipin Pillai", "Hamed Pirsiavash" ], "title": "Fooling network interpretation in image classification", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Pascal Vincent" ], "title": "A connection between score matching and denoising autoencoders", "venue": "Neural computation,", "year": 2011 }, { "authors": [ "Max Welling", "Yee W Teh" ], "title": "Bayesian learning via stochastic gradient langevin dynamics", "venue": "In Proceedings of the 28th international conference on machine learning", "year": 2011 }, { "authors": [ "Xinyang Zhang", "Ningfei Wang", "Hua Shen", "Shouling Ji", "Xiapu Luo", "Ting Wang" ], "title": "Interpretable deep learning under fire", "venue": "In 29th {USENIX} Security Symposium ({USENIX} Security", "year": 2020 } ]
[ { "heading": null, "text": "Current methods for the interpretability of discriminative deep neural networks commonly rely on the model’s input-gradients, i.e., the gradients of the output logits w.r.t. the inputs. The common assumption is that these input-gradients contain information regarding pθ(y | x), the model’s discriminative capabilities, thus justifying their use for interpretability. However, in this work we show that these input-gradients can be arbitrarily manipulated as a consequence of the shiftinvariance of softmax without changing the discriminative function. This leaves an open question: if input-gradients can be arbitrary, why are they highly structured and explanatory in standard models? We investigate this by re-interpreting the logits of standard softmax-based classifiers as unnormalized log-densities of the data distribution and show that input-gradients can be viewed as gradients of a class-conditional density model pθ(x | y) implicit within the discriminative model. This leads us to hypothesize that the highly structured and explanatory nature of input-gradients may be due to the alignment of this class-conditional model pθ(x | y) with that of the ground truth data distribution pdata(x | y). We test this hypothesis by studying the effect of density alignment on gradient explanations. To achieve this density alignment, we use an algorithm called score-matching, and propose novel approximations to this algorithm to enable training large-scale models. Our experiments show that improving the alignment of the implicit density model with the data distribution enhances gradient structure and explanatory power while reducing this alignment has the opposite effect. This also leads us to conjecture that unintended density alignment in standard neural network training may explain the highly structured nature of input-gradients observed in practice. Overall, our finding that input-gradients capture information regarding an implicit generative model implies that we need to re-think their use for interpreting discriminative models." }, { "heading": "1 INTRODUCTION", "text": "Input-gradients, or gradients of outputs w.r.t. inputs, are commonly used for the interpretation of deep neural networks (Simonyan et al., 2013). For image classification tasks, an input pixel with a larger input-gradient magnitude is attributed a higher ‘importance’ value, and the resulting maps are observed to agree with human intuition regarding which input pixels are important for the task at hand (Adebayo et al., 2018). Quantitative studies (Samek et al., 2016; Shrikumar et al., 2017) also show that these importance estimates are meaningful in predicting model response to larger structured perturbations. These results suggest that input-gradients do indeed capture relevant information regarding the underlying model. However in this work, we show that input-gradients can be arbitrarily manipulated using the shift-invariance of softmax without changing the underlying discriminative model, which calls into question the reliability of input-gradient based attribution methods for interpreting arbitrary black-box models.\nGiven that input-gradients can be arbitrarily structured, the reason for their highly structured and explanatory nature in standard pre-trained models is puzzling. Why are input-gradients relatively well-\nbehaved when they can just as easily be arbitrarily structured, without affecting discriminative model performance? 
What factors influence input-gradient structure in standard deep neural networks?\nTo answer these questions, we consider the connections between softmax-based discriminative classifiers and generative models (Bridle, 1990; Grathwohl et al., 2020), obtained by viewing the logits of standard classifiers as un-normalized log-densities. This connection reveals an alternate interpretation of input-gradients: they represent the log-gradients of a class-conditional density model which is implicit within standard softmax-based deep models, which we shall call the implicit density model. This connection compels us to consider the following hypothesis: perhaps input-gradients are highly structured because this implicit density model is aligned with the ‘ground truth’ class-conditional data distribution? The core of this paper is dedicated to testing the validity of this hypothesis: whether or not input-gradients do become more structured and explanatory when this alignment increases, and vice versa.\nFor the purpose of validating this hypothesis, we require mechanisms to increase or decrease the alignment between the implicit density model and the data distribution. To this end, we consider a generative modelling approach called score-matching, which reduces the density modelling problem to that of local geometric regularization. Hence by using score-matching, we are able to view commonly used geometric regularizers in deep learning as density modelling methods. In practice, the score-matching objective is known for being computationally expensive and unstable to train (Song & Ermon, 2019; Kingma & LeCun, 2010). To this end, we also introduce approximations and regularizers which allow us to use score-matching on practical large-scale discriminative models.\nThis work is broadly connected to the literature around the unreliability of saliency methods. While most such works consider how the explanations for nearly identical images can be arbitrarily different (Dombrowski et al., 2019; Subramanya et al., 2019; Zhang et al., 2020; Ghorbani et al., 2019), our work considers how one may change the model itself to yield arbitrary explanations without affecting discriminative performance. This is similar to Heo et al. (2019), who show this experimentally, whereas we provide an analytical reason for why this happens, relating to the shift-invariance of softmax.\nThe rest of the paper is organized as follows. We show in § 2 that it is trivial to manipulate input-gradients of standard classifiers using the shift-invariance of softmax without affecting the discriminative model. In § 3 we state our main hypothesis and describe the details of score-matching, presenting a tractable approximation that eliminates the need for expensive Hessian computations. § 4 revisits other interpretability tools from a density modelling perspective. Finally, § 5 presents experimental evidence for the validity of the hypothesis that improved alignment between the implicit density model and the data distribution can improve the structure and explanatory nature of input-gradients." }, { "heading": "2 INPUT-GRADIENTS ARE NOT UNIQUE", "text": "In this section, we show that it is trivial to manipulate input-gradients of discriminative deep networks using the well-known shift-invariance property of softmax. Here we shall make a distinction between two types of input-gradients: logit-gradients and loss-gradients. While logit-gradients are gradients of the pre-softmax output of a given class w.r.t.
the input, loss-gradients are the gradients of the loss w.r.t. the input. In both cases, we only consider outputs of a single class, usually the target class.\nLet x ∈ R^D be a data point, which is the input for a neural network model f : R^D → R^C intended for classification, which produces pre-softmax logits for C classes. The cross-entropy loss function for some class 1 ≤ i ≤ C, i ∈ N, corresponding to an input x is given by ℓ(f(x), i) ∈ R_+, which is shortened to ℓ_i(x) for convenience. Note that here the loss function subsumes the softmax function as well. The logit-gradients are given by ∇_x f_i(x) ∈ R^D for class i, while loss-gradients are ∇_x ℓ_i(x) ∈ R^D. Let the softmax function be p(y = i | x) = exp(f_i(x)) / ∑_{j=1}^{C} exp(f_j(x)), which we denote as p_i for simplicity. Here, we make the observation that upon adding the same scalar function g to all logits, the logit-gradients can change arbitrarily but the loss values do not.\nObservation. Assume an arbitrary function g : R^D → R. Consider another neural network function given by f̃_i(·) = f_i(·) + g(·), for 1 ≤ i ≤ C, for which we obtain ∇_x f̃_i(·) = ∇_x f_i(·) + ∇_x g(·).\nFor this, the corresponding loss values and loss-gradients are unchanged, i.e., ℓ̃_i(·) = ℓ_i(·) and ∇_x ℓ̃_i(·) = ∇_x ℓ_i(·), as a consequence of the shift-invariance of softmax.\nThis explains how the structure of logit-gradients can be arbitrarily changed: one simply needs to add an arbitrary function g to all logits. This implies that individual logit-gradients ∇_x f_i(x) and logits f_i(x) are meaningless on their own, and their structure may be uninformative regarding the underlying discriminative model. Despite this, a large fraction of work in interpretable deep learning (Simonyan et al., 2013; Selvaraju et al., 2017; Smilkov et al., 2017; Fong et al., 2019; Srinivas & Fleuret, 2019) uses individual logits and logit-gradients for saliency map computation. We also provide a similar illustration in the supplementary material for the case of loss-gradients, where we show that it is possible for loss-gradients to diverge significantly even when the loss values themselves do not.\nThese simple observations leave an open question: why are input-gradients highly structured and explanatory when they can just as easily be arbitrarily structured, without affecting discriminative model performance? Further, if input-gradients do not depend strongly on the underlying discriminative function, what aspect of the model do they depend on instead? In the section that follows, we shall consider a generative modelling view of discriminative neural networks that offers insight into the information encoded by logit-gradients." }, { "heading": "3 IMPLICIT DENSITY MODELS WITHIN DISCRIMINATIVE CLASSIFIERS", "text": "Let us consider the following link between generative models and the softmax function. We first define the following joint density on the logits f_i of classifiers: p_θ(x, y = i) = exp(f_i(x; θ)) / Z(θ), where Z(θ) is the partition function. We shall henceforth suppress the dependence of f on θ for brevity. Upon using Bayes’ rule to obtain p_θ(y = i | x), we observe that we recover the standard softmax function. Thus the logits of discriminative classifiers can alternately be viewed as un-normalized log-densities of the joint distribution. Assuming equiprobable classes, we have p_θ(x | y = i) = exp(f_i(x)) / (Z(θ)/C), which is the quantity of interest for us.
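For completeness, the Bayes-rule step referred to above can be written out explicitly; under equiprobable classes p(y = i) = 1/C, the joint density defined earlier recovers the softmax and gives the class-conditional density:

$$p_\theta(y=i \mid x) = \frac{p_\theta(x, y=i)}{\sum_{j=1}^{C} p_\theta(x, y=j)} = \frac{\exp(f_i(x))}{\sum_{j=1}^{C} \exp(f_j(x))}, \qquad p_\theta(x \mid y=i) = \frac{p_\theta(x, y=i)}{p(y=i)} = \frac{\exp(f_i(x))}{Z(\theta)/C}.$$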
Thus while the logits represent un-normalized log-densities, logit-gradients represent the score function, i.e., ∇_x log p_θ(x | y = i) = ∇_x f_i(x), which avoids dependence on the partition function Z(θ), as it is independent of x.\nThis viewpoint naturally leads to the following hypothesis: perhaps the reason for the highly structured and explanatory nature of input-gradients is that the implicit density model p_θ(x | y) is close to the ground truth class-conditional data distribution p_data(x | y). We propose to test this hypothesis explicitly using score-matching as a density modelling tool.\nHypothesis. (Informal) Improved alignment of the implicit density model to the ground truth class-conditional density model improves input-gradient interpretability via both qualitative and quantitative measures, whereas deteriorating this alignment has the opposite effect." }, { "heading": "3.1 SCORE-MATCHING", "text": "Score-matching (Hyvärinen, 2005) is a generative modelling objective that focuses solely on the derivatives of the log density instead of the density itself, and thus does not require access to the partition function Z(θ). Specifically, for our case we have ∇_x log p_θ(x | y = i) = ∇_x f_i(x), which are the logit-gradients.\nGiven i.i.d. samples X = {x_i ∈ R^D} from a latent data distribution p_data(x), the objective of generative modelling is to recover this latent distribution using only the samples X. This is often done by training a parameterized distribution p_θ(x) to align with the latent data distribution p_data(x). The score-matching objective instead aligns the gradients of the log densities, as given below.\n$$J(\theta) = \mathbb{E}_{p_{\mathrm{data}}(x)}\,\frac{1}{2}\,\big\|\nabla_x \log p_\theta(x) - \nabla_x \log p_{\mathrm{data}}(x)\big\|_2^2 \tag{1}$$\n$$\phantom{J(\theta)} = \mathbb{E}_{p_{\mathrm{data}}(x)}\Big(\mathrm{trace}\big(\nabla_x^2 \log p_\theta(x)\big) + \frac{1}{2}\,\big\|\nabla_x \log p_\theta(x)\big\|_2^2\Big) + \mathrm{const} \tag{2}$$\nThe above relationship is proved (Hyvärinen, 2005) using integration by parts. This is a consistent objective, i.e., J(θ) = 0 ⟺ p_data = p_θ. This approach is also appealing because it reduces the problem of generative modelling to that of regularizing the local geometry of functions, i.e., the resulting terms depend only on the point-wise gradients and the Hessian-trace." }, { "heading": "3.2 EFFICIENT ESTIMATION OF HESSIAN-TRACE", "text": "In general, equation 2 is intractable for high-dimensional data due to the Hessian-trace term. To address this, we can use Hutchinson’s trace estimator (Hutchinson, 1990) to efficiently compute an estimate of the trace using random projections, which is given by trace(∇_x² log p_θ(x)) = E_{v∼N(0,I)} vᵀ ∇_x² log p_θ(x) v. This estimator has been previously applied to score-matching (Song et al., 2019), and can be computed efficiently using Pearlmutter’s trick (Pearlmutter, 1994). However, this trick still requires two backward passes for a single Monte Carlo sample, which is computationally expensive. To further improve computational efficiency, we introduce the following approximation to Hutchinson’s estimator using a Taylor series expansion, which applies to small values of σ ∈ R.\n$$\mathbb{E}_{v\sim\mathcal{N}(0,I)}\, v^T \nabla_x^2 \log p_\theta(x)\, v \approx \frac{2}{\sigma^2}\,\mathbb{E}_{v\sim\mathcal{N}(0,\sigma^2 I)}\big(\log p_\theta(x+v) - \log p_\theta(x) - \nabla_x \log p_\theta(x)^T v\big) = \frac{2}{\sigma^2}\,\mathbb{E}_{v\sim\mathcal{N}(0,\sigma^2 I)}\big(\log p_\theta(x+v) - \log p_\theta(x)\big) \tag{3}$$\nHere the first-order term drops out because v has zero mean. Note that equation 3 involves a difference of log probabilities, which is independent of the partition function. For our case, log p_θ(x+v | y = i) − log p_θ(x | y = i) = f_i(x+v) − f_i(x). We have thus considerably simplified and sped up the computation of the Hessian-trace term, which can now be approximated with no backward passes, using only a single additional forward pass.
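A minimal PyTorch sketch of this single-forward-pass estimator (our own naming; `f` is assumed to map a batch of inputs to the scalar logit of the class of interest) might look as follows:

```python
import torch

def taylor_hessian_trace(f, x, sigma=1e-3, n_samples=1):
    # Approximates the Hessian-trace of f at x via equation 3:
    #   tr(H) ~ (2 / sigma^2) * E_{v ~ N(0, sigma^2 I)} [ f(x + v) - f(x) ]
    # Needs only forward passes of f, no backward passes.
    fx = f(x)                             # single clean forward pass
    estimate = torch.zeros_like(fx)
    for _ in range(n_samples):
        v = sigma * torch.randn_like(x)   # v ~ N(0, sigma^2 I)
        estimate = estimate + (f(x + v) - fx)
    return (2.0 / sigma ** 2) * estimate / n_samples
```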
We present details regarding the variance of this estimator in the supplementary material. A concurrent approach (Pang et al., 2020) also presents a similar algorithm; however, it is applied primarily to Noise Contrastive Score Networks (Song & Ermon, 2019) and Denoising Score Matching (Vincent, 2011), whereas we apply it to vanilla score-matching on discriminative models." }, { "heading": "3.3 STABILIZED SCORE-MATCHING", "text": "In practice, a naive application of the score-matching objective is unstable, causing the Hessian-trace to collapse to negative infinity. This occurs because the finite-sample variant of equation 1 causes the model to ‘overfit’ to a mixture-of-Diracs density, which places a Dirac-delta distribution at every data point. Gradients of such a distribution are undefined, causing training to collapse. To overcome this, regularized score-matching (Kingma & LeCun, 2010) and noise conditional score networks (Song & Ermon, 2019) propose to add noise to the inputs for score-matching to make the problem well-defined. However, this did not help in our case. Instead, we use a heuristic where we add a small penalty term proportional to the square of the Hessian-trace. This discourages the Hessian-trace from becoming too large, and thus stabilizes training." }, { "heading": "4 IMPLICATIONS OF THE DENSITY MODELLING VIEWPOINT", "text": "In the previous section we related input-gradients to the implicit density model, thus linking gradient interpretability to density modelling through our hypothesis. In this section, we consider two other interpretability tools: activity maximization and the pixel perturbation test, and show how these can be interpreted from a density modelling perspective. These perspectives also enable us to draw parallels between score-matching and adversarial training." }, { "heading": "4.1 ACTIVITY MAXIMIZATION AS SAMPLING FROM THE IMPLICIT DENSITY MODEL", "text": "The canonical method to obtain samples from score-based generative models is via Langevin sampling (Welling & Teh, 2011; Song & Ermon, 2019), which involves performing gradient ascent on the density model with noise added to the gradients. Without this added noise, the algorithm recovers the modes of the density model.\nWe observe that activity maximization algorithms used for neural network visualizations are remarkably similar to this scheme. For instance, Simonyan et al. (2013) recover inputs which maximize the logits of neural networks, thus exactly recovering the modes of the implicit density model. Similarly, deep-dream-like methods (Mahendran & Vedaldi, 2016; Nguyen et al., 2016; Mordvintsev et al., 2015) extend this by using “image priors” to ensure that the resulting samples are closer to the distribution of natural images, and by adding structured noise to the gradients in the form of jitter, to obtain more visually pleasing samples. From the density modelling perspective, we can alternately view these visualization techniques as biased sampling methods for score-based density models trained on natural images. However, given that they draw samples from the implicit density model, their utility in interpreting discriminative models may be limited." }, { "heading": "4.2 PIXEL PERTURBATION AS A DENSITY RATIO TEST", "text": "A popular test for saliency map evaluation is based on pixel perturbation (Samek et al., 2016).
This involves first selecting the least-relevant (or most-relevant) pixels according to a saliency map representation, ‘deleting’ those pixels, and measuring the resulting change in output value. Here, deleting a pixel usually involves replacing it with a non-informative value such as a random or a fixed constant value. A good saliency method identifies as less relevant those pixels whose deletion does not cause a large change in output value.\nWe observe that this change-in-outputs criterion is identical to the density ratio, i.e., log(p_θ(x+v | y = i)/p_θ(x | y = i)) = f_i(x+v) − f_i(x). Thus when logits are used for evaluating the change in outputs (Samek et al., 2016; Ancona et al., 2018), the pixel perturbation test exactly measures the density ratio between the perturbed image and the original image. Thus if a perturbed image has a density similar to that of the original image under the implicit density model, then the saliency method that generated these perturbations is considered to be explanatory. Similarly, Fong et al. (2019) optimize over this criterion to identify pixels whose removal causes minimal change in logit activity, thus obtaining perturbed images with a high implicit density value, similar to activity maximization. Overall, this test captures the sensitivity of the implicit density model, and not that of the underlying discriminative model which we wish to interpret. We thus recommend that the pixel perturbation test always be used in conjunction with either the change in output probabilities or the change in classification accuracy, rather than the change in logits." }, { "heading": "4.3 CONNECTING SCORE-MATCHING TO ADVERSARIAL TRAINING", "text": "Recent works in adversarial machine learning (Etmann et al., 2019; Engstrom et al., 2019; Santurkar et al., 2019; Kaur et al., 2019; Ross & Doshi-Velez, 2017) have observed that saliency map structure and samples from activation maximization are more perceptually aligned for adversarially trained models than for standard models. However, it is unclear from these works why this occurs. Separate from this line of work, Chalasani et al. (2018) also connect regularization of a variant of integrated gradients with adversarial training, suggesting a close interplay between the two.\nWe notice that these properties are shared with score-matched models, i.e., models trained such that the implicit density model is aligned with the ground truth. Further, we note that both score-matching and adversarial training are often based on local geometric regularization, usually involving regularization of the gradient-norm (Ross & Doshi-Velez, 2017; Jakubovitz & Giryes, 2018), and training both the discriminative model and the implicit density model (Grathwohl et al., 2020) has been shown to improve adversarial robustness. From these results, we conjecture that training the implicit density model via score-matching may have similar outcomes to adversarial training. We leave the verification and proof of this conjecture to future work." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we present experimental results to show the efficacy of score-matching and to validate the hypothesis that density alignment influences gradient explanation quality. For our experiments, we consider the CIFAR100 dataset. We present experiments with CIFAR10 in the supplementary section.
Unless stated otherwise, the network structure we use shall be an 18-layer ResNet that achieves 78.01% accuracy on CIFAR100, and the optimizer used shall be SGD with momentum. All models use the softplus non-linearity with β = 10, which is necessary to ensure that the Hessian is non-zero for score-matching. Before proceeding with our experiments, we shall briefly introduce the score-matching variants we use for comparisons.\nScore-Matching We propose to use the score-matching objective as a regularizer in neural network training to increase the alignment of the implicit density model to the ground truth, as shown in equation 4, with the stability regularizer discussed in §3.3. For this, we use a regularization constant λ = 1e−3. This model achieves 72.20% accuracy on the test set, which is a drop of about 5.8% compared to the original model. In the supplementary material, we perform a thorough hyper-parameter sweep and show that it is possible to obtain better-performing models.\n$$h(x) := \frac{2}{\sigma^2}\,\mathbb{E}_{v\sim\mathcal{N}(0,\sigma^2 I)}\big(f_i(x+v) - f_i(x)\big)$$\n$$\underbrace{\ell_{\mathrm{reg}}(f(x), i)}_{\text{regularized loss}} = \underbrace{\ell(f(x), i)}_{\text{cross-entropy}} + \lambda\underbrace{\Big(\overbrace{h(x)}^{\text{Hessian-trace}} + \frac{1}{2}\overbrace{\|\nabla_x f_i(x)\|_2^2}^{\text{gradient-norm}}\Big)}_{\text{score-matching}} + \overbrace{\mu}^{10^{-4}}\underbrace{h^2(x)}_{\text{stability regularizer}} \tag{4}$$\nAnti-score-matching We would like to have a tool that can decrease the alignment between the implicit density model and the ground truth. To enable this, we propose to maximize the Hessian-trace, in an objective we call anti-score-matching. For this, we use a clamping function on the Hessian-trace, which ensures that its maximization stops once a threshold is reached. We use a threshold of τ = 1000, and a regularization constant λ = 1e−4. This model achieves an accuracy of 74.87%.\nGradient-Norm regularization We propose to use gradient-norm regularized models as another baseline for comparison, using a regularization constant of λ = 1e−3. This model achieves an accuracy of 76.60%." }, { "heading": "5.1 EVALUATING THE EFFICACY OF SCORE-MATCHING AND ANTI-SCORE-MATCHING", "text": "Here we demonstrate that training with score-matching / anti-score-matching is possible, and that such training improves / deteriorates the quality of the implicit density models, respectively, as expected." }, { "heading": "5.1.1 DENSITY RATIOS", "text": "One way to characterize the generative behaviour of models is to compute likelihoods on data points. However, this is intractable for high-dimensional problems, especially for un-normalized models. We observe that although the densities p(x | y = i) themselves are intractable, we can easily compute density ratios p(x+η | y = i)/p(x | y = i) = exp(f_i(x+η) − f_i(x)) for a random noise variable η. Thus, we propose to plot the graph of density ratios locally along random directions. These can be thought of as local cross-sections of the density, sliced along random directions. We plot these values for Gaussian noise η at different standard deviations, averaged across points in the entire dataset.\nIn Figure 1, we plot the density ratios upon training on the CIFAR100 dataset. We observe that the baseline model assigns higher density values to noisy inputs than to real inputs. With anti-score-matching, we observe that the density profile grows still steeper, assigning higher densities to inputs with smaller noise. Gradient-norm regularized models and score-matched models improve on this behaviour, and are robust to larger amounts of added noise.
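These density-ratio cross-sections require only forward passes; below is a sketch under our own naming assumptions (`logit_fn` returns the target-class logit f_i for a batch):

```python
import torch

@torch.no_grad()
def density_ratio(logit_fn, x, stds=(1e-4, 1e-3, 1e-2, 1e-1)):
    # p(x + eta | y=i) / p(x | y=i) = exp(f_i(x + eta) - f_i(x)),
    # averaged over Gaussian noise eta at several standard deviations.
    ratios = {}
    fx = logit_fn(x)
    for s in stds:
        eta = s * torch.randn_like(x)
        ratios[s] = torch.exp(logit_fn(x + eta) - fx).mean().item()
    return ratios
```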
Thus we are able to obtain penalty terms that can both improve and deteriorate the density modelling behaviour within discriminative models." }, { "heading": "5.1.2 SAMPLE QUALITY", "text": "We are interested in recovering modes of our density models while having access only to the gradients of the log density. For this purpose, we apply gradient ascent on the log probability log p(x | y = i) = f_i(x) (up to an additive constant), similar to activity maximization. Our results are shown in Figure 2. We observe that samples from the score-matched and gradient-norm regularized models are significantly less noisy than those from other models.\n[Figure 1: Density ratios p(x+η)/p(x) as a function of the standard deviation of η (10^-4 to 10^-1) for the baseline ResNet and the models trained with Grad-Norm regularization, Score-Matching, and Anti-Score-Matching.]\nFirst, we show results on a discriminative variant of the pixel perturbation test. Second, we visualize the gradient maps to assess qualitative differences between them." }, { "heading": "5.2.1 QUANTITATIVE RESULTS ON DISCRIMINATIVE PIXEL PERTURBATION", "text": "As noted in §4.2, it is recommended to run the pixel perturbation test using accuracy changes, and we call this variant discriminative pixel perturbation. We select the least relevant pixels, replace them with the mean pixel value of the image, and note down the accuracy of the model on the resulting samples. We note that this test has so far only been used to compare different saliency methods for the same underlying model; here, however, we seek to compare saliency methods across models. For this we consider two experiments. First, we perform the pixel perturbation experiment with each of the four trained models on their own input-gradients and plot the results in Figure 3a. These results indicate that the input-gradients of the score-matched and gradient-norm regularized models are better equipped to identify the least relevant pixels. However, it is difficult to completely disentangle the robustness benefits of such score-matched models from the improved identification of less relevant pixels in such a plot.\nTo this end, we conduct a second experiment in Figure 3b, where we use input-gradients obtained from these four trained models to explain the same standard baseline ResNet model. This disentangles the robustness of different models, as inputs to the same model are perturbed in all cases. Here also we find that gradients from the score-matched and gradient-norm regularized models explain the behavior of the standard baseline model better than the gradients of the baseline model itself. Together, these tests show that training with score-matching indeed produces input-gradients that are quantitatively more explanatory than those of baseline models." }, { "heading": "5.2.2 QUALITATIVE GRADIENT VISUALIZATIONS", "text": "We visualize the structure of logit-gradients of different models in Figure 4. We observe that the gradient-norm regularized and score-matched models have highly perceptually aligned gradients when compared to the baseline and anti-score-matched gradients, corroborating the quantitative results." }, { "heading": "6 CONCLUSION", "text": "In this paper, we investigated the cause for the highly structured and explanatory nature of input-gradients in standard pre-trained models, and showed that alignment of the implicit density model with the ground truth data density is a possible cause.
This density modelling interpretation enabled us to view canonical approaches in interpretability, such as gradient-based saliency methods, activity maximization, and the pixel perturbation test, from a density modelling perspective, showing that these capture information relating to the implicit density model, not the underlying discriminative model which we wish to interpret. This calls for a need to re-think the role of these tools in the interpretation of discriminative models. For practitioners, we believe it is best to avoid the use of logit-gradient-based tools for interpretability. If unavoidable, it is recommended to use only gradient-norm regularized or score-matched models, as input-gradients of these models produce more reliable estimates of the gradient of the underlying distribution. As our experiments show, these may be a useful tool even though they are not directly related to the discriminative model.\nHowever, our work still does not answer the question of why pre-trained models may have their implicit density models aligned with the ground truth in the first place. One possible reason could be the presence of an implicit gradient-norm regularizer in standard SGD, similar to that shown independently by Barrett & Dherin (2020). Another open question is to understand why gradient-norm regularized models are able to perform implicit density modelling, as observed in our experiments in § 5.1.2, which leads to improved gradient explanations." }, { "heading": "A FOOLING GRADIENTS IS SIMPLE", "text": "Observation. Assume an arbitrary function g : R^D → R. Consider another neural network function given by f̃_i(·) = f_i(·) + g(·), for 1 ≤ i ≤ C, for which we obtain ∇_x f̃_i(·) = ∇_x f_i(·) + ∇_x g(·). For this, the corresponding loss values and loss-gradients are unchanged, i.e., ℓ̃_i(·) = ℓ_i(·) and ∇_x ℓ̃_i(·) = ∇_x ℓ_i(·).\nProof. The following expressions relate the loss and the neural network outputs, for the case of the cross-entropy loss with the softmax function.\n$$\ell_i(x) = -f_i(x) + \log \sum_{j=1}^{C} \exp(f_j(x)) \tag{5}$$\n$$\nabla_x \ell_i(x) = -\nabla_x f_i(x) + \sum_{j=1}^{C} p_j \nabla_x f_j(x) \tag{6}$$\nUpon replacing f_i with f̃_i = f_i + g, the proof follows.\nA.1 MANIPULATING LOSS-GRADIENTS\nHere, we show how we can also change loss-gradients arbitrarily without significantly changing the loss values themselves. In this case, the trick is to add a high-frequency, low-amplitude sine function to the loss.\nObservation. Consider g(x) = ε sin(mx) and ℓ̃_i(x) = ℓ_i(x) + g(x), for ε, m ∈ R_+ and x ∈ R^D. Then, it is easy to see that |ℓ̃_i(x) − ℓ_i(x)| ≤ ε, and ‖∇_x ℓ̃_i(x) − ∇_x ℓ_i(x)‖_1 ≤ m × ε × D.\nThus two models with losses differing by some small ε can have gradients differing by m × ε × D. For m → ∞ and a fixed ε, the gradients can diverge significantly. Thus, loss-gradients are also unreliable, as two models with very similar loss landscapes, and hence discriminative abilities, can have drastically different loss-gradients.\nThis simple illustration highlights the fact that gradients of high-dimensional black-box models are not well-behaved in general, and that this depends on both the model smoothness and the high dimensionality of the inputs. Further, loss values and loss-gradients for highly confident samples are close to zero. Thus any external noise added (due to stochastic training, for instance) can easily dominate the loss-gradient terms even when smoothness conditions (small m) are enforced."
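The observations of this appendix are easy to verify numerically. Below is a self-contained PyTorch sketch of our own (a random linear "network" and an arbitrary g of our choosing, not the paper's experimental setup), illustrating that adding the same scalar function to all logits leaves softmax outputs and cross-entropy losses untouched while arbitrarily changing logit-gradients:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
D, C = 3072, 10
x = torch.randn(1, D, dtype=torch.float64, requires_grad=True)
W = torch.randn(D, C, dtype=torch.float64)  # random linear "network"
y = torch.tensor([3])

def g(z):
    # An arbitrary scalar function of the input, added to every logit.
    return 100.0 * torch.sin(z).sum(dim=1, keepdim=True)

logits = x @ W                # f_i(x)
logits_tilde = logits + g(x)  # f~_i(x) = f_i(x) + g(x), same shift per class

# Softmax outputs and cross-entropy losses are unchanged ...
assert torch.allclose(F.softmax(logits, dim=1), F.softmax(logits_tilde, dim=1))
assert torch.allclose(F.cross_entropy(logits, y), F.cross_entropy(logits_tilde, y))

# ... but the logit-gradients differ by grad_x g(x), which is arbitrary.
grad = torch.autograd.grad(logits[0, 3], x, retain_graph=True)[0]
grad_tilde = torch.autograd.grad(logits_tilde[0, 3], x)[0]
print((grad - grad_tilde).abs().max())  # ~100: the gradient was manipulated
```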
}, { "heading": "B SCORE-MATCHING APPROXIMATION", "text": "We consider the approximation derived for the estimator of the Hessian trace, which is first derived from Hutchinson’s trace estimator Hutchinson (1990). We replace log pθ(x) terms used in the main text with f(x) terms here for clarity. The Taylor series trick for approximating the Hessian-trace is given below.\nEv∼N (0,I) vT∇2xf(x)v = 1\nσ2 Ev∼N (0,σ2I)vT∇2xf(x)v\n= 2\nσ2 Ev∼N (0,σ2I)\n( f(x+ v)− f(x)−∇xf(x)Tv +O(σ3) ) (7)\nAs expected, the approximation error vanishes in the limit of small σ. Let us now consider the finite sample variants of this estimator, with N samples. We shall call this the Taylor Trace Estimator.\nTaylor Trace Estimator (TTE) = 2\nNσ2 N∑ i=1 ( f(x+ vi)− f(x) ) s.t. vi ∼ N (0, σ2I) (8)\nWe shall henceforth suppress the dependence on i for brevity. For this estimator, we can compute its variance for quadratic functions f , where higher-order Taylor expansion terms are zero. We make the following observation. Observation. For quadratic functions f , the variance of the Taylor Trace Estimator is greater than the variance of the Hutchinson estimator by an amount at most equal to 4σ−2‖∇xf(x)‖2.\nProof.\nVar(T.T.E.) = 1\nσ4 Ev\n( 2\nN N∑ i=1 ( f(x+ v)− f(x) ) −EvvT∇2xf(x)v\n)2\n= 1 σ4 Ev ( 2 N N∑ i=1 ( f(x+ v)− f(x) ) − 1 N N∑ i=1 vT∇2xf(x)v\n+ 1\nN N∑ i=1 vT∇2xf(x)v − EvvT∇2xf(x)v )2\n= 1 σ4 Ev ( 2 N N∑ i=1 ( f(x+ v)− f(x) ) − 1 N N∑ i=1 vT∇2xf(x)v )2\n+ 1 σ4 Ev ( 1 N N∑ i=1 vT∇2xf(x)v − EvvT∇2xf(x)v )2\nThus we have decomposed the variance of the overall estimator into two terms: the first captures the variance of the Taylor approximation, and the second captures the variance of the Hutchinson estimator.\nConsidering only the first term, i.e.; the variance of the Taylor approximation, we have:\n1 Nσ4 Ev ( 2 N∑ i=1 ( f(x+ v)− f(x) ) − N∑ i=1 vT∇2xf(x)v )2 = 4 Nσ4 Ev ( N∑ i=1 ∇xf(x)Tv )2\n≤ 4 σ4 ‖∇xf(x)‖2Ev‖v‖2 = 4σ−2‖∇xf(x)‖2\nThe intermediate steps involve expanding the summation, noticing that pairwise terms cancel, and applying the Cauchy-Schwartz inequality.\nThus we have a trade-off: a large σ results in lower estimator variance but a large Taylor approximation error, whereas the opposite is true for small σ. However for functions with small gradient norm, both the estimator variance and Taylor approximation error is small for small σ. We note that when applied to score-matching Hyvärinen (2005), the gradient norm of the function is also minimized. This implies that in practice, the gradient norm of the function is likely to be low, thus resulting in a small estimator variance even for small σ. The variance of the Hutchinson estimator is given below for reference Hutchinson (1990); Avron & Toledo (2011):\nVar(Hutchinson) = 2\nN ‖∇2xf(x)‖2F" }, { "heading": "C EVALUATING EFFECT OF SCORE-MATCHING ON GRADIENT EXPLANATIONS (ON CIFAR10)", "text": "We repeat the pixel perturbation experiments on the CIFAR10 dataset and we observe similar qualitative trends. In both cases, we observe that score-matched and gradient norm regularized models have more explanatory gradients, while anti-score-matched model contains the least explanatory gradients. We also present visualization results of input-gradients of various models for reference." }, { "heading": "D HYPER-PARAMETER SWEEP ON SCORE-MATCHED TRAINING", "text": "We present results on a hyper-parameter sweep on the λ and µ parameters of score-matching, where we provide both test-set accuracy on CIFAR100 and the corresponding GAN-test scores. 
We find upon performing a hyper-parameter sweep that λ = 1e−5 and µ = 1e−3 seem to perform best, whereas in the main paper we present results for λ = 1e−3 and µ = 1e−4. It is possible that changing the training schedule by increasing the number of epochs or the learning rate may further improve these results, but we did not explore that here." } ]
2,021
null
SP:f0ab80d4f3742a539ea2559845d00e8110ab9e98
[ "This paper proposes a new method for learning subgoal representations in HRL. The method learns a representation that emphasises features that change slowly, through a “slowness objective”. The slowness objective minimises changes in the subgoal representation between low level time steps, while maximising feature changes between the high-level temporal intervals. This objective allows for efficient exploration, which the paper justifies theoretically, and supports with some empirical experiments on challenging control domains." ]
In goal-conditioned Hierarchical Reinforcement Learning (HRL), a high-level policy periodically sets subgoals for a low-level policy, and the low-level policy is trained to reach those subgoals. A proper subgoal representation function, which abstracts a state space to a latent subgoal space, is crucial for effective goal-conditioned HRL, since different low-level behaviors are induced by reaching subgoals in the compressed representation space. Observing that the high-level agent operates at an abstract temporal scale, we propose a slowness objective to effectively learn the subgoal representation (i.e., the high-level action space). We provide a theoretical grounding for the slowness objective. That is, selecting slow features as the subgoal space can achieve efficient hierarchical exploration. As a result of better exploration ability, our approach significantly outperforms state-of-the-art HRL and exploration methods on a number of benchmark continuous-control tasks. Thanks to the generality of the proposed subgoal representation learning method, empirical results also demonstrate that the learned representation and corresponding low-level policies can be transferred between distinct tasks.
[ { "affiliations": [], "name": "SLOW DYNAMICS" }, { "affiliations": [], "name": "Siyuan Li" }, { "affiliations": [], "name": "Lulu Zheng" }, { "affiliations": [], "name": "Jianhao Wang" }, { "affiliations": [], "name": "Chongjie Zhang" } ]
[ { "authors": [ "Amitay Bar", "Ronen Talmon", "Ron Meir" ], "title": "Option discovery in the absence of rewards with manifold analysis", "venue": "arXiv preprint arXiv:2003.05878,", "year": 2020 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Justin A Boyan", "Andrew W Moore" ], "title": "Generalization in reinforcement learning: Safely approximating the value function", "venue": "In Advances in neural information processing systems,", "year": 1995 }, { "authors": [ "Sumit Chopra", "Raia Hadsell", "Yann LeCun" ], "title": "Learning a similarity metric discriminatively, with application to face verification", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05),", "year": 2005 }, { "authors": [ "Peter Dayan", "Geoffrey E Hinton" ], "title": "Feudal reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 1993 }, { "authors": [ "Nat Dilokthanakul", "Christos Kaplanis", "Nick Pawlowski", "Murray Shanahan" ], "title": "Feature control as intrinsic motivation for hierarchical reinforcement learning", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2019 }, { "authors": [ "Chuong B Do" ], "title": "The multivariate gaussian distribution", "venue": "Section Notes, Lecture on Machine Learning, CS,", "year": 2008 }, { "authors": [ "John Duchi" ], "title": "Derivations for linear algebra and optimization", "venue": "Berkeley, California,", "year": 2007 }, { "authors": [ "Zach Dwiel", "Madhavun Candadai", "Mariano Phielipp", "Arjun K Bansal" ], "title": "Hierarchical policy learning is sensitive to goal space design", "venue": null, "year": 1905 }, { "authors": [ "Alberto N Escalante-B", "Laurenz Wiskott" ], "title": "How to solve classification and regression problems on high-dimensional data with a supervised extension of slow feature analysis", "venue": "The Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Chelsea Finn", "Xin Yu Tan", "Yan Duan", "Trevor Darrell", "Sergey Levine", "Pieter Abbeel" ], "title": "Deep spatial autoencoders for visuomotor learning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2016 }, { "authors": [ "Mathias Franzius", "Henning Sprekeler", "Laurenz Wiskott" ], "title": "Slowness and sparseness lead to place, head-direction, and spatial-view cells", "venue": "PLoS Comput Biol,", "year": 2007 }, { "authors": [ "Mathias Franzius", "Niko Wilbert", "Laurenz Wiskott" ], "title": "Invariant object recognition and pose estimation with slow feature analysis", "venue": "Neural computation,", "year": 2011 }, { "authors": [ "Dibya Ghosh", "Abhishek Gupta", "Sergey Levine" ], "title": "Learning actionable representations with goalconditioned policies", "venue": "arXiv preprint arXiv:1811.07819,", "year": 2018 }, { "authors": [ "Ross Goroshin", "Joan Bruna", "Jonathan Tompson", "David Eigen", "Yann LeCun" ], "title": "Unsupervised feature learning from temporal data", "venue": "arXiv preprint arXiv:1504.02518,", "year": 2015 }, { "authors": [ "Ross Goroshin", "Michael F Mathieu", "Yann LeCun" ], "title": "Learning to linearize under uncertainty", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Shixiang Gu", "Ethan Holly", "Timothy Lillicrap", "Sergey Levine" ], 
"title": "Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates", "venue": "IEEE international conference on robotics and automation (ICRA),", "year": 2017 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Offpolicy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "Aren Jansen", "Manoj Plakal", "Ratheet Pandya", "Daniel PW Ellis", "Shawn Hershey", "Jiayang Liu", "R Channing Moore", "Rif A Saurous" ], "title": "Unsupervised learning of semantic audio representations", "venue": "IEEE international conference on acoustics, speech and signal processing (ICASSP),", "year": 2018 }, { "authors": [ "Dinesh Jayaraman", "Kristen Grauman" ], "title": "Slow and steady feature analysis: higher order temporal coherence in video", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Yuu Jinnai", "Jee Won Park", "David Abel", "George Konidaris" ], "title": "Discovering options for exploration by minimizing cover time", "venue": null, "year": 1903 }, { "authors": [ "Yuu Jinnai", "Jee Won Park", "Marlos C Machado", "George Konidaris" ], "title": "Exploration in reinforcement learning with deep covering options", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Rico Jonschkowski", "Oliver Brock" ], "title": "Learning state representations with robotic priors", "venue": "Autonomous Robots,", "year": 2015 }, { "authors": [ "Hyoungseok Kim", "Jaekyeom Kim", "Yeonwoo Jeong", "Sergey Levine", "Hyun Oh Song" ], "title": "Emi: Exploration with mutual information", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Varun Raj Kompella", "Marijn Stollenga", "Matthew Luciw", "Juergen Schmidhuber" ], "title": "Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots", "venue": "Artificial Intelligence,", "year": 2017 }, { "authors": [ "Akshay Krishnamurthy", "Alekh Agarwal", "John Langford" ], "title": "Pac reinforcement learning with rich observations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Tejas D Kulkarni", "Ardavan Saeedi", "Simanta Gautam", "Samuel J Gershman" ], "title": "Deep successor reinforcement learning", "venue": "arXiv preprint arXiv:1606.02396,", "year": 2016 }, { "authors": [ "Robert Legenstein", "Niko Wilbert", "Laurenz Wiskott" ], "title": "Reinforcement learning on slow features of high-dimensional input streams", "venue": "PLoS Comput Biol,", "year": 2010 }, { "authors": [ "Timothée Lesort", "Natalia" ], "title": "Dı́az-Rodrı́guez, Jean-Franois Goudou, and David Filliat. 
State representation learning for control: An overview", "venue": "Neural Networks,", "year": 2018 }, { "authors": [ "Andrew Levy", "George Konidaris", "Robert Platt", "Kate Saenko" ], "title": "Learning multi-level hierarchies with hindsight", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Siyuan Li", "Rui Wang", "Minxue Tang", "Chongjie Zhang" ], "title": "Hierarchical reinforcement learning with advantage-based auxiliary rewards", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Marlos C Machado", "Marc G Bellemare", "Michael Bowling" ], "title": "A laplacian framework for option discovery in reinforcement learning", "venue": "arXiv preprint arXiv:1703.00956,", "year": 2017 }, { "authors": [ "Marlos C Machado", "Clemens Rosenbaum", "Xiaoxiao Guo", "Miao Liu", "Gerald Tesauro", "Murray Campbell" ], "title": "Eigenoption discovery through the deep successor representation", "venue": "arXiv preprint arXiv:1710.11089,", "year": 2017 }, { "authors": [ "Sridhar Mahadevan", "Mauro Maggioni" ], "title": "Proto-value functions: A laplacian framework for learning representation and control in markov decision processes", "venue": "Journal of Machine Learning Research,", "year": 2007 }, { "authors": [ "Piotr Mirowski", "Razvan Pascanu", "Fabio Viola", "Hubert Soyer", "Andrew J Ballard", "Andrea Banino", "Misha Denil", "Ross Goroshin", "Laurent Sifre", "Koray Kavukcuoglu" ], "title": "Learning to navigate in complex environments", "venue": "arXiv preprint arXiv:1611.03673,", "year": 2016 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Ofir Nachum", "Shixiang Shane Gu", "Honglak Lee", "Sergey Levine" ], "title": "Data-efficient hierarchical reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ofir Nachum", "Shixiang Gu", "Honglak Lee", "Sergey Levine" ], "title": "Near-optimal representation learning for hierarchical reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Suraj Nair", "Chelsea Finn" ], "title": "Hierarchical foresight: Self-supervised learning of long-horizon tasks via visual subgoal generation", "venue": "arXiv preprint arXiv:1909.05829,", "year": 2019 }, { "authors": [ "Soroush Nasiriany", "Vitchyr Pong", "Steven Lin", "Sergey Levine" ], "title": "Planning with goal-conditioned policies", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Ian Osband", "Charles Blundell", "Alexander Pritzel", "Benjamin Van Roy" ], "title": "Deep exploration via bootstrapped DQN", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Alexandre Péré", "Sébastien Forestier", "Olivier Sigaud", "Pierre-Yves Oudeyer" ], "title": "Unsupervised learning of goal spaces for intrinsically motivated goal exploration", "venue": "arXiv preprint arXiv:1803.00781,", "year": 2018 }, { "authors": [ "Rahul Ramesh", "Manan 
Tomar", "Balaraman Ravindran" ], "title": "Successor options: An option discovery framework for reinforcement learning", "venue": "arXiv preprint arXiv:1905.05731,", "year": 2019 }, { "authors": [ "Jürgen Schmidhuber", "Reiner Wahnsiedler" ], "title": "Planning simple trajectories using neural subgoal generators", "venue": "In From Animals to Animats 2: Proceedings of the Second International Conference on Simulation of Adaptive Behavior,", "year": 1993 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. nature,", "year": 2016 }, { "authors": [ "Henning Sprekeler" ], "title": "On the relation of slow feature analysis and laplacian eigenmaps", "venue": "Neural computation,", "year": 2011 }, { "authors": [ "Sainbayar Sukhbaatar", "Emily Denton", "Arthur Szlam", "Rob Fergus" ], "title": "Learning goal embeddings via self-play for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1811.09083,", "year": 2018 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Aad W Van der Vaart" ], "title": "Asymptotic statistics, volume 3", "venue": "Cambridge university press,", "year": 2000 }, { "authors": [ "Alexander Sasha Vezhnevets", "Simon Osindero", "Tom Schaul", "Nicolas Heess", "Max Jaderberg", "David Silver", "Koray Kavukcuoglu" ], "title": "Feudal networks for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1703.01161,", "year": 2017 }, { "authors": [ "Laurenz Wiskott", "Terrence J Sejnowski" ], "title": "Slow feature analysis: Unsupervised learning of invariances", "venue": "Neural computation,", "year": 2002 }, { "authors": [ "Yifan Wu", "George Tucker", "Ofir Nachum" ], "title": "The laplacian in rl: Learning representations with efficient approximations", "venue": "arXiv preprint arXiv:1810.04586,", "year": 2018 }, { "authors": [ "Chongjie Zhang", "Sherief Abdallah", "Victor Lesser" ], "title": "Integrating organizational control into multiagent learning", "venue": "In Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems-Volume", "year": 2009 }, { "authors": [ "Tianren Zhang", "Shangqi Guo", "Tian Tan", "Xiaolin Hu", "Feng Chen" ], "title": "Generating adjacencyconstrained subgoals in hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:2006.11485,", "year": 2020 }, { "authors": [ "Yuke Zhu", "Roozbeh Mottaghi", "Eric Kolve", "Joseph J Lim", "Abhinav Gupta", "Li Fei-Fei", "Ali Farhadi" ], "title": "Target-driven visual navigation in indoor scenes using deep reinforcement learning", "venue": "IEEE international conference on robotics and automation (ICRA),", "year": 2017 }, { "authors": [ "NI Y" ], "title": "x;μ,Σ),B = UΛ, and Λ is a diagonal matrix whose entries are the square roots of the corresponding entries from Λ (Do", "venue": null, "year": 2008 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep Reinforcement Learning (RL) has demonstrated increasing capabilities in a wide range of domains, including playing games (Mnih et al., 2015; Silver et al., 2016), controlling robots (Schulman et al., 2015; Gu et al., 2017) and navigation in complex environments (Mirowski et al., 2016; Zhu et al., 2017). Solving temporally extended tasks with sparse or deceptive rewards is one of the major challenges for RL. Hierarchical Reinforcement Learning (HRL), which enables control at multiple time scales via a hierarchical structure, provides a promising way to solve those challenging tasks. Goal-conditioned methods have long been recognized as an effective paradigm in HRL (Dayan & Hinton, 1993; Schmidhuber & Wahnsiedler, 1993; Nachum et al., 2019). In goal-conditioned HRL, higher-level policies set subgoals for lower-level ones periodically, and lower-level policies are incentivized to reach these selected subgoals. A proper subgoal representation function, abstracting a state space to a latent subgoal space, is crucial for effective goal-conditioned HRL, because the abstract subgoal space, i.e., high-level action space, simplifies the high-level policy learning, and explorative low-level behaviors can be induced by setting different subgoals in this compressed space as well.\nRecent works in goal-conditioned HRL have been concentrated on implicitly learning the subgoal representation in an end-to-end manner with hierarchical policies (Vezhnevets et al., 2017; Dilokthanakul et al., 2019), e.g., using a variational autoencoder (Péré et al., 2018; Nair & Finn, 2019; Nasiriany et al., 2019), directly utilizing the state space (Levy et al., 2019) or a handcrafted space (Nachum et al., 2018) as a subgoal space. Sukhbaatar et al. (2018) proposed to learn subgoal embeddings via self-play, and Ghosh et al. (2018) designed a representation learning objective using an actionable distance metric, but both of the methods need a pretraining process. Near-Optimal\n∗Denotes equal contribution 1Videos available at https://sites.google.com/view/lesson-iclr 2Find open-source code at https://github.com/SiyuanLee/LESSON\nRepresentation (NOR) for HRL (Nachum et al., 2019) learns an abstract space concurrently with hierarchical policies by bounding the sub-optimality. However, the NOR subgoal space could not support efficient exploration in challenging deceptive reward tasks.\nIn this paper, we develop a novel method, which LEarns the Subgoal representation with SlOw dyNamics (LESSON) along with the hierarchical policies. Subgoal representation in HRL is not only a state space abstraction, but also a form of high-level action abstraction. Since the high-level agent makes decisions at a low temporal resolution, our method extracts features with slow dynamics from observations as the subgoal space to enable temporal coherence. LESSON minimizes feature changes between adjacent low-level timesteps, in order for the learned feature representation to have the slowness property. To capture dynamic features and prevent the collapse of the learned representation space, we also introduce an additional contrastive objective that maximizes feature changes between high-level temporal intervals. We provide a theoretical motivation for the slowness objective. That is, selecting slow features as the subgoal space can achieve the most efficient hierarchical exploration when the subgoal space dimension is low and fixed. 
We illustrate on a didactic example that our method LESSON accomplishes the most efficient state coverage among all the compared subgoal representation functions. We also compare LESSON with state-of-theart HRL and exploration methods on complex MuJoCo tasks (Todorov et al., 2012). Experimental results demonstrate that (1) LESSON dramatically outperforms previous algorithms and learns hierarchical policies more efficiently; (2) our learned representation with slow dynamics can provide interpretability for the hierarchical policy; and (3) our subgoal representation and low-level policies can be transferred between different tasks." }, { "heading": "2 PRELIMINARIES", "text": "In reinforcement learning, an agent interacts with an environment modeled as an MDP M = (S,A, P,R, γ), where S is a state space, A is an action space. P : S × A × S → [0, 1] is an unknown dynamics model, which specifies the probability P (s′|s, a) of transitioning to next state s′ from current state s by taking action a. R : S × A → R is a reward function, and γ ∈ [0, 1) is a discount factor. We optimize a stochastic policy π(a|s), which outputs a distribution over the action space for a given state s. The objective is to maximize the expected cumulative discounted reward Eπ[ ∑∞ t=0 γ trt] under policy π." }, { "heading": "3 METHOD", "text": "In this section, we present the proposed method for LEarning Subgoal representations with SlOw dyNamics (LESSON). First, we describe a two-layered goal-conditioned HRL framework. We then introduce a core component of LESSON, the slowness objective for learning the subgoal representation of HRL. Finally, we summarize the whole learning procedure." }, { "heading": "3.1 FRAMEWORK", "text": "Following previous work (Nachum et al., 2018; 2019), we model a policy π(a|s) as a two-level hierarchical policy composed of a high-level policy πh(g|s) and a low-level policy πl(a|s, g). The high-level policy πh(g|s) selects a subgoal g in state s every c timesteps. The subgoal g is in a low dimensional space abstracted by representation function φ(s) : S → Rk. The low-level policy πl(a|s, g) takes the high-level action g as input and interacts with the environment every timestep. Figure 1 depicts the execution process of the hierarchical policy.\nLESSON iteratively learns the subgoal representation function φ(s) with the hierarchical policy. To encourage policy πl to reach the subgoal g, we train πl with an intrinsic reward function based on the negative Euclidean distance in the latent space, rl(st, at, st+1, g) = −||φ(st+1) − g||2. Policy πh is trained to optimize the expected extrinsic rewards renvt . We use the off-policy algorithm SAC (Haarnoja et al., 2018) as our base RL optimizer. In fact, our framework is compatible with any standard RL algorithm.\nApparently, a proper subgoal representation φ(s) is critical not only for learning an effective lowlevel goal-conditioned policy but also for efficiently learning an optimal high-level policy to solve a given task. As the feature dimension k is low, φ(s) has a compression property, which is necessary\nto make the hierarchical policy learning easier. If φ(s) is exactly an identity function without any abstraction, the high-level policy πh still needs to explore in a large space and the complicated subgoal g for the low-level policy is hard to reach as well. In this circumstance, the hierarchical structure cannot simplify the MDP and has no advantage over a flat structure." 
}, { "heading": "3.2 LEARNING SUBGOAL REPRESENTATIONS", "text": "Inspired by physics-based priors, features with slow dynamics preserve higher temporal coherence and less noise (Wiskott & Sejnowski, 2002). As the high-level policy acts at a lower temporal resolution compared to the low-level policy, it is sensible to learn a subgoal representation function with a slowness objective. To solve large-scale problems, we parameterize the representation function φ(s) with a neural network to extract slow features. One natural way of learning φ(s) is to minimize the squared difference between feature values at times t and t+ 1,\nmin φ E(st,st+1)∼D[||φ(st)− φ(st+1)||2], (1)\nwhere D is a replay buffer. This loss function eliminates fast features, but can be trivially optimized if we allow lossy representation function φ (e.g., if φ(s) = 0 for ∀s ∈ S). To avoid such trivial solutions and capture dynamic features, we propose a contrastive loss to maximize the distance between high-level state transitions in the latent subgoal space, i.e., minφ E(st,st+c)∼D[−||φ(st) − φ(st+c)||2]. To trade off these two loss functions, we adopt the technique of triplet loss (Chopra et al., 2005), i.e., imposing the latent distance between high-level transitions larger than a margin parameterm, as shown by Eq. 2. If we remove the margin parameterm and themax operator, Eq. 2 will be dominated by the maximizing distance part. Margin m defines a unit of distance in the latent space, which prevents trivial solutions as well.\nmin φ E(st,st+1,st+c)∼D[||φ(st)− φ(st+1)||2 +max(0,m− ||φ(st)− φ(st+c)||2)]. (2)\nThe above learning objective abstracts the state space to a latent subgoal space with slow dynamics. As Eq. 2 optimizes the squared difference between feature values, the learned representation can preserve the spatial locality property of the state space, so a subgoal g can be selected in the neighborhood of φ(s). In the next section, we give a theoretical motivation for the slowness objective. That is, selecting slow features as the subgoal space can promote efficient exploration. Algorithm 1 shows the learning procedure of our method. We update φ(s) and πl at the same frequency so that the low-level reward function varies in a stationary way. The high-level policy is updated less frequently, as the high-level transitions are less." }, { "heading": "4 EFFICIENT EXPLORATION WITH SLOW SUBGOAL REPRESENTATION", "text": "In this section, we provide a theoretical motivation for subgoal representation learning with slow dynamics from a statistical view. To support a formal analysis, we consider selecting a subset of features from the state space as a subgoal space. We prove that, given a fixed subgoal space dimension, selecting slow features as the subgoal space can achieve the most efficient hierarchical exploration. We first define a measure for exploration and describe assumptions of our analysis. Then, we present a theorem about the optimality property and corresponding implications.\nAlgorithm 1 LESSON algorithm 1: Input: Number of training steps N , margin m, replay buffer D. 2: Initialize: Learnable parameters for πh(g|s), πl(a|s, g) and φ(s). 3: for t = 1..N do 4: Collect experience (st, gt, at, st+1, renvt ) under πh and πl. 5: Compute low-level reward rlt = −||φ(st+1)− gt||2. 6: Update the replay buffer D. 7: Optimize πh by maximizing cumulative task rewards with D every c timesteps. 8: Optimize πl by maximizing cumulative low-level rewards with D every timestep. 
9: Sample a batch of state transitions from D and update φ with Eq. 2 every timestep. 10: end for 11: Return: πh, πl and φ." }, { "heading": "4.1 DEFINITIONS AND ASSUMPTIONS", "text": "To develop a theoretical analysis, we give a definition of slow features and a measure of exploration. Then, we formulate the exploration process in goal-conditioned HRL as a random walk in the state space as follows.\nAs our theoretical analysis is broadly applicable to arbitrary feature space, we denote a state st = [s1t , ..., s I t ] T as a vector containing I features3. State st can be factored into slow features sslow and fast features sfast with a one-step feature change metric ∆sit = |sit − sit+1| (1 ≤ i ≤ I). Without loss of generality, we assume that Eπr [∆sit] < Eπr [∆s i+1 t ], where πr is a random policy. The expected one-step feature change of slow features is relatively small. With a limited slow feature dimension k, sslow = [s1, ..., sk]T , and the rest are fast features. For example, the movements of a robot are slow, but the changes of noisy sensory observations are fast. Definition 1 (Measure of Exploration). In goal-conditioned HRL, an effectiveness measure of hierarchical exploration is defined as the Kullback–Leibler (KL) divergence from the distribution of explored states q(x) to a desired state distribution p(x):\nDKL(p‖q) = ∫ ∞ −∞ p(x) log ( p(x) q(x) ) dx. (3)\nIn this definition, the desired state distribution p(x) is a prior state distribution while q(x) is the distribution of the states explored by the agent. According to Definition 1, when the state distribution of exploration q(x) is closer to the target state distribution p(x), the exploration is more effective. Definition 2 (Random Walk). In goal-conditioned HRL, the exploration process in the state space is an I-dimensional random walk when there is no extrinsic reward for the high-level policy and the low-level policy is optimal. Define s0 as the origin of the state space: s0 = 0, and the unit step of the random walk is Xct = st − st−c, t = c, 2c, · · · , which is i.i.d. Denote a sequence of random variables Yn = ∑n i=1 X c ic, then the asymptotic distribution of Yn is q(x): Yn D→ q(x).\nWe aim to solve sparse reward problems, where an agent needs to explore with little extrinsic rewards, so we consider the circumstance with no extrinsic rewards and the optimal low-level policy to analyze the exploration problem. Thus the high-level policy selects subgoals randomly. The agent can move independently and identically in the state space, leading to Xct is i.i.d. In fact, q(x) can be seen as the steady state distribution of the Markov chain induced by the policy. To facilitate the analysis of different subgoal representations, we make the following assumptions throughout this section:\n(a) The transition function P (s′|s, a) is deterministic. (b) The features are all independent. (c) Xct is bounded in the state space: {|xi| ≤ ri, i = 1, · · · , I}, where xi is the i-th element of\nXct , and ri is a fixed upper bound of |xi|. (d) The subgoal g selected by the high-level policy at time t is constrained in the neighbour-\nhood of st: { |gj − sjt | ≤ rg, j = 1, · · · , k } , where gj and s j t are the j-th elements of\ncorresponding vectors, and rg is a fixed bound of subgoals in all dimensions. 3The state here refers to a true Markovian state.\nAssumption (a) is a general technique to simplify theoretical analysis in RL (Krishnamurthy et al., 2016; Boyan & Moore, 1995). 
Assumption (b) makes it possible to analyze the exploration of each feature dimension separately. Assumption (c) means that every c timesteps, the agent can move in dimension i with a step size |xi| ≤ ri, and slower features have a smaller bound: ∀i < i′, ri < ri′ . Taking advantage of the spatial continuity of the state space, subgoals are set in the neighborhood of the current state in the selected subgoal feature dimensions, specified as Assumption (d)." }, { "heading": "4.2 OPTIMALITY AND IMPLICATIONS", "text": "Theorem 1. Assume p(x) is a multivariate Gaussian distribution: p(x) ∼ NI(x; 0,R), where R is a diagonal matrix diag(r2) and r is large enough. Given a fixed subgoal space dimension k, selecting the k slowest features for the subgoal space leads to the optimal hierarchical exploration. Denote the distribution of the explored states in this case as qslow, we have:\nqslow = q ∗ = arg min q∈Q DKL(p‖q), (4)\nwhereQ is the sets of all distributions of explored states brought by different subgoal space selection.\nWithout any prior knowledge, we assume p(x) is an isotropic Gaussian distribution with zero mean. When r is large enough, q(x) approximates a uniform distribution.\nProof sketch. The exploration process is decided by the coverage area of the random walk, as shown in Definition 2, and a larger coverage area leads to better exploration (see Definition 1). We analyze the coverage scale in each dimension separately. The exploration ability varies in different dimensions since the slow-feature dimension has a smaller coverage scale. Notice that the exploration ability in dimension i changes if we select the i-th feature for the subgoal space. Concretely, if we choose slow features for the subgoal space, the coverage area in these dimensions will expand. In contrast, selecting fast features decreases the ability of exploration. We prove that with a fixed subgoal space dimension k, if and only if we select the k slowest features as the subgoal space, DKL(p‖q) is minimized, i.e., achieves the optimal hierarchical exploration defined in Definition 1. See the detailed proof in Appendix A.\nSelecting slow features as the subgoal space can achieve superior exploration shown in Theorem 1. This property indicates that using the slowness objective to learn the subgoal representation can promote more efficient exploration. As real-world tasks are often on a large scale, utilizing neural networks to extract slow features as the subgoal space is more general. To conclude, Theorem 1 is a theoretical grounding for the learning objective of LESSON." }, { "heading": "5 RELATED WORK", "text": "Learning subgoal representations is a challenging problem in HRL (Dwiel et al., 2019). Nachum et al. (2018) and Zhang et al. (2020) predefined a subspace of observations as a subgoal space with domain knowledge. Li et al. (2019) sought for an alternative way of setting advantage-based auxiliary rewards to the low level policy to avoid this difficult problem. Levy et al. (2019) directly used the whole observation space, which is unscalable to high-dimensional tasks. A variational autoencoder (VAE) (Kingma & Welling, 2013) can compress the high-dimensional observations in an unsupervised way, and it has been utilized to learn a subgoal space in (Péré et al., 2018; Nair & Finn, 2019; Nasiriany et al., 2019). However, the features extracted by VAE can hardly capture the transitional relationship in MDPs. In Vezhnevets et al. (2017) and Dilokthanakul et al. 
(2019), a subgoal representation is learned in an end-to-end way with hierarchical policies. Since the resulting representation is under-defined, those methods often underperformed (see Nachum et al. (2018)). Ghosh et al. (2018) proposed to learn representations using an actionable distance metric, assuming that goal-conditioned policies are given. Sukhbaatar et al. (2018) developed a method called HSP to learn subgoal representations via self-play, but HSP requires a pretraining process, and thus it may be inefficient. Near-Optimal Representation (NOR) for HRL (Nachum et al., 2019) outperforms the previous methods by learning representations bounding the sub-optimality of hierarchical policies. However, the optimization of NOR is complicated, and the abstraction of the NOR space does not aim for efficient exploration. In contrast, we develop a simple subgoal space learning method with a slowness objective. Furthermore, we formally show that the slowness objective has a theoretical grounding for better exploration ability.\nSlowness or temporal coherence has been an important prior for learning state representations in continuous control tasks (Bengio et al., 2013; Jonschkowski & Brock, 2015; Lesort et al., 2018). Standard Slow Feature Analysis (SFA) methods learn slow features by solving an optimization problem with constraints (Wiskott, 1999; Wiskott & Sejnowski, 2002). However, their expressivity tends to scale unfavorably in high-dimensional problems. To increase the expressivity, hierarchical SFA (Franzius et al., 2007; 2011; Escalante-B & Wiskott, 2013) composes multiple SFA modules in a layer-wise way. More recent works use neural networks to extract slow features using a slowness loss function. To avoid trivial solutions, another term, such as reconstruction loss (Goroshin et al., 2015a; Finn et al., 2016) or prediction error (Goroshin et al., 2015b), is also included in the loss function. In similarity metric learning, contrastive or triplet loss is investigated to capture slow features in video and audio datasets as well (Jayaraman & Grauman, 2016; Jansen et al., 2018). In reinforcement learning, several approaches exploit the slowly changing bias to extract useful features so that policy learning can be accelerated (Zhang et al., 2009; Legenstein et al., 2010; Oord et al., 2018). To the best of our knowledge, we are the first to utilize the slowness objective in HRL, and our proposed method significantly outperforms state-of-the-art HRL methods on benchmark environments.\nThe inductive bias of slowness has largely been investigated in the skill discovery methods as well. Continual Curiosity driven Skill Acquisition (CCSA) learns a latent space with SFA, and utilizes curiosity-driven rewards in this latent space to train skills (Kompella et al., 2017). Similarly, Machado et al. (2017a), Jinnai et al. (2019) and Bar et al. (2020) proposed to learn options to reach local maxima or minima of the Proto-value functions (PVFs) (Mahadevan & Maggioni, 2007). As pointed out by Sprekeler (2011), the objective functions of SFA and PVFs are equivalent, when the adjacent function of PVFs is the transition function in MDP. But obtaining the full transition function in large scale tasks is nearly infeasible. To solve large scale problems, Machado et al. (2017b) and Ramesh et al. (2019) proposed to replace PVFs with eigenvectors of the deep successor representation (Kulkarni et al., 2016), which equal to scaled PVFs. Jinnai et al. 
(2020) approximated the computation of PVFs with the objective introduced by Wu et al. (2018). Our method and those skill discovery methods share some similarities in learning low-level policies in a smooth or slow latent space. However, the skill discovery methods can be regarded as bottom-up HRL, where a set of task-agnostic low-level skills are first learned with some intrinsic reward functions and then composed to solve downstream tasks. In contrast, our goal-conditioned method can be regarded as top-down HRL, where the high-level policy sets subgoals for the low level while learning a task, and the low-level policy is incentivized to reach those subgoals.
6 EXPERIMENTS
Figure 2: (a) The NChain environment. (b) Results on the 64-link chain environment. Each line is the mean of 20 runs with shaded regions corresponding to 95% confidence intervals.
We conduct experiments to compare our approach to existing state-of-the-art methods in HRL and in efficient exploration. First, we show on a didactic example that LESSON can achieve the most efficient state coverage among all the compared subgoal representations. To demonstrate our strengths in high-dimensional tasks, we then compare with several baselines on a number of benchmark continuous-control tasks. After that, we analyze the dynamic property of the learned subgoal representation and provide an interpretation by visualization. Lastly, we show that both the subgoal representation and the low-level policies learned by our method are transferable." }, { "heading": "6.1 DIDACTIC EXAMPLE: NCHAIN", "text": "The NChain environment was designed to be hard to explore by Osband et al. (2016), as shown in Figure 2(a). Starting from state 0, an agent can move forward (blue arrow) to the next state in the chain or
We conduct experiments comparing to the following methods in hierarchical learning and exploration, and all the learning curves in this section are averaged over 10 runs.\n• NOR: HRL with a learned subgoal space, which is optimized to bound the sub-optimality of the hierarchical policy (Nachum et al., 2019).\n• Oracle: HRL with the oracle subgoal space (x, y position of the agent) in navigation tasks. • DCO: A hierarchical exploration method with deep covering options (Jinnai et al., 2020)4. • EMI: A flat exploration method by predicting dynamics in a latent space (Kim et al., 2019). • SAC: The base RL algorithm used in our method (Haarnoja et al., 2018).\nBenefiting from a better exploration ability, our method with a temporally-coherent subgoal space significantly outperforms baseline methods in terms of speed and quality of convergence. Even when the raw observation is given by using top-down images, our method can achieve high success rate, presented in Figure 3(e), (f). In the Ant Maze task, our method reaches a success rate of 100% at only 1.5 million training steps, which is more than two times faster than the NOR algorithm.\n4For a fair comparison, we use the online version of DCO, as the offline version needs a pretraining process.\nIn the Point Maze task, the flat exploration method EMI shows an equal performance with our approach. However, when the dynamic model is more complex (e.g., for the Ant robot), predicting dynamics becomes much harder, and the performance of EMI degrades dramatically. We evaluate NOR with its published code5, and results show its ineffectiveness of exploration in challenging tasks. The online DCO method can hardly learn successful policies in those tasks, partly because the pretraining of the second eigenfunction in their method is necessary." }, { "heading": "6.3 ANALYSIS OF LEARNED REPRESENTATIONS", "text": "We visualize the subgoal representation and learned hierarchical policies of our method in the Ant Push task in Figure 4. The learned subgoal space highly resembles the oracle (x, y) position space. By setting subgoals in the learned latent space, the high-level policy guides the agent to jump out of the local optimum of moving towards the goal. The Ant robot under the hierarchical policy firstly moves to the left, then pushes the block to the right, and finally reaches the goal. In contrast, the SAC agent without a hierarchical structure easily gets stuck into the local optimum of moving directly to the goal, since the immediate extrinsic reward is given as the negative L2 distance to the environment goal." }, { "heading": "6.4 PARALLEL LEARNING OF THE REPRESENTATION FUNCTION AND POLICIES", "text": "We show the subgoal representation learning process in the Ant Push (Images) task in this section. Figure 5 (a)∼(h) visualize trajectories to a hard goal in the representation spaces and the visited areas in the x, y space at different learning stages. Along with the representation visualization, we evaluate an easy goal as the midpoint of the trajectory to the hard goal.\nThe learning of the hierarchical policy and the subgoal representation could promote each other. At about 0.2 million timesteps, with the distance-to-goal dense rewards, our method approximately learns an inaccurate subgoal representation and the policy to reach the easy goal. 
Since the learned representation is generalizable to the neighborhood of the explored areas to some extent, which facilitates the exploration of the hierarchical policy, the explored areas are expanded little by little. The newly collected samples in the expanded region could be utilized to improve the subgoal representation further." }, { "heading": "6.5 TRANSFERABILITY OF REPRESENTATIONS", "text": "Because of the generality of our representation learning objective with slow dynamics, the learned subgoal space is transferable between different tasks of the same robot. The low-level policy induced by the learned subgoal representation is transferable as well. To verify this transferability, we initialize the representation network and the low-level policy network in a target task with those\n5Code at https://github.com/tensorflow/models/tree/master/research/effici ent-hrl\nweights learned in a source task and further finetune them in the target task. The high-level policy for the target task is randomly initialized. From Figure 6, we can see that transfer learning helps the agent learn more efficiently and achieve better asymptotic performance." }, { "heading": "7 CONCLUSION", "text": "In this work, we propose a self-supervised subgoal representation learning method, LESSON. Our approach is motivated by the slowness prior and supports iterative learning of the representation function and hierarchical policies. In addition, we provide a theoretical grounding for the slowness prior in hierarchical exploration. We test our method on a suite of high-dimensional, continuous control tasks, and it significantly outperforms state-of-the-art HRL and exploration methods. Furthermore, the subgoal representation and low-level policies learned by LESSON are transferable between different tasks. Since the low-level policy learning may result in a non-stationary high-level transition function, combining LESSON with off-policy correction methods to reduce the variance of off-policy learning might be a promising future direction. Furthermore, as the rewards for the continuous control tasks are deceptive and dense, another challenging problem is learning a good subgoal representation and hierarchical policies with extremely sparse rewards." }, { "heading": "ACKNOWLEDGEMENTS", "text": "The authors would like to thank the anonymous reviewers for their valuable comments and helpful suggestions. This work is supported in part by Science and Technology Innovation 2030 – “New Generation Artificial Intelligence” Major Project (No. 2018AAA0100904), and a grant from the Institute of Guo Qiang, Tsinghua University." }, { "heading": "A PROOF", "text": "Theorem 1. Assume p(x) is a multivariate Gaussian distribution: p(x) ∼ NI(x; 0,R), where R is a diagonal matrix diag(r2) and r is large enough. Given a fixed subgoal space dimension k, selecting the k slowest features for the subgoal space leads to the optimal hierarchical exploration. Denote the distribution of the explored states in this case as qslow, we have:\nqslow = q ∗ = arg min q∈Q DKL(p‖q), (4)\nwhereQ is the sets of all distributions of explored states brought by different subgoal space selection.\nFirst, we prove that q(x) is a multivariate Gaussian distribution regardless of the distribution of Xct . Since there is no extrinsic reward, the high-level policy will set subgoals to the low-level randomly every c steps, thus Xcc,X c 2c, . . . 
,X c nc are independent and identically distributed with the same mean vector µ = E [Xcic] ∈ RI and the same covariance matrix ΣI×I . Denote the average of Yn as\n1 n Yn = 1 n n∑ i=1 Xcic = Xn. (5)\nBy Multidimensional Central limit theorem (Van der Vaart, 2000), we have √ n ( Xn − µ ) D→ NI(x; 0,Σ). (6) Plug Eq. 5 into Eq. 6 and consider finite samples, we have\nYn D→ NI (x;nµ, nΣ) . (7)\nWhen n → ∞, the distribution of Yn converges to q(x), which means q(x) is a multivariate Gaussian distribution. Without loss of generality, we consider the case when n = 1, i.e., NI (x;µ,Σ), to compare different KL divergence induced by different subgoal representations. The exploration process can be formulated as a random walk in the state space with continuous action space (Definition 2). The selection of the features for the subgoal space only changes the variance of the unit action Xct in the random walk, furthermore, deciding the covariance matrix of q(x).\nNext, we analyze the statistical characteristics of Xct . Since all features are independent, the joint distribution is the product of all the marginal distributions: fXct (x) = Π I i=1fi(x), where fXct (x) is the Probability density function (PDF) of Xct and fi(x) is the marginal PDF of X c t in dimension i. As Assumption (c) indicates that xi ∈ [−ri, ri], if not selecting the i-th feature for the subgoal space, fi(x) is a continuous uniform distribution U [−ri, ri], so the variance in dimension i can be denoted as σ2i = r2i 3 .\nHowever, if we select the i-th feature for the subgoal space, the distribution is modified since the low-level policy is optimal (i.e., the agent moves to the subgoal as close as possible during c steps). Besides, notice that by Assumption (d), the i-th element of subgoal gi is uniformly distributed in [−rg, rg]. Therefore, when gi lies within the interval [−ri, ri], the agent can reach the subgoal within c steps. When gi > ri (gi < −ri), the agent can only reach as far as ri (−ri). Denote the changed Cumulative Distribution Function (CDF) of Xct in dimension i as F ′ i (x), if rg > ri,\nF ′i (x) = 0 x < −ri rg−ri 2rg\n+ 12rg (x+ ri) −ri ≤ x < ri 1 ri ≤ x . (8)\nIn contrast, if rg ≤ ri, the agent can reach any subgoal within the interval [−rg, rg], so we have\nF ′i (x) = 0 x < −rg x+rg 2rg\n−rg ≤ x ≤ rg 1 rg < x . (9)\nIn both cases, the mean vector µ is still 0, but the variance σ2i will increase to r 2 i − 2r3i 3rg when rg > ri and σ2i will decrease to r2g 3 when rg ≤ ri. Denote the selection operation as an operator S, and we\nhave\nσ2i = r2i 3 ,\nS(σ2i ) =\n{ r2i − 2r3i 3rg\nri ≤ rg r2g 3 ri > rg .\n(10)\nFinally, we want to prove if and only if q(x) = qslow(x), DKL(p‖q) can reach the minimum with the constraint of fixed subgoal space dimension k. Consider a distribution q(x) brought by randomly selecting k features from the state space as the subgoal space, since p(x) and q(x) are both multivariate Gaussian distribution (Assumption (b)), the KL divergence from q(x) to p(x) is\nDKL(p‖q) = 1\n2\n[ log det (Σq)\ndet (Σp) − I + tr\n( Σ−1q Σp ) + (µq − µp)T Σ−1q (µq − µp) ] , (11)\nwhere I is the dimension of q(x), and tr stands for the trace of the matrix (Duchi, 2007). Now we want to prove the KL divergence reaches the minimum if and only if q(x) = qslow(x). Since the covariance matrix Σ is symmetric positive definite, there exists a full rank orthogonal matrix U containing of the eigenvectors of Σ as its columns and a diagonal matrix Λ such that Σ = UΛUT (Horn & Johnson, 2012). 
ThenNI can be transformed into a standard multivariate Gaussian distribution through rotation and stretching.\nZ = B−1(Y − µ),Z ∼ NI(0, I), (12)\nwhere Y ∼ NI (x;µ,Σ),B = UΛ1/2, and Λ1/2 is a diagonal matrix whose entries are the square roots of the corresponding entries from Λ (Do, 2008). Since q(x) is symmetrical, thus rotation won’t change the KL divergence, so we only need to consider the case where Σ is a diagonal matrix, which can be denoted as below.\nΣ = σ21 0 0 · · · 0 0 σ22 0 · · · 0 0 0 σ23 · · · 0 ... ... ... . . . ...\n0 0 0 · · · σ2I , (13) where σ2i is the variance in dimension i. Notice µi = 0, thus Eq. 11 can be rewritten as\nDKL(p‖q) = 1\n2\n[ log σ21σ 2 2 . . . σ 2 I\nr2I − I + I∑ i=1 r2 σ2i + I∑ i=1 µ2i r2\n]\n= 1\n2 [ I∑ i=1 log σ2i r2 − I + I∑ i=1 r2 σ2i ]\n= 1\n2 [ I∑ i=1 ( log σ2i r2 + r2 σ2i ) − I ] .\n(14)\nConsider a function:\nf(σ2i ) = log σ2i r2 + r2 σ2i . (15)\nIt’s easy to find that f(σ2i ) is monotonically decreasing when σ 2 i ∈ (0, r2) (since r is large enough, the condition is easily met), which means increasing the variance in dimension i will decrease DKL(p‖q). Consider feature dimension i and feature dimension j, and ri ≤ rj . Then we prove\nf(S(σ2i )) + f(σ2j ) ≤ f(σ2i ) + f(S(σ2j )), (16)\nwhere σ2i = r2i 3 , σ 2 j = r2j 3 . We consider three cases:\n(1) ri ≤ rg ≤ rj . Recall that selecting the i-th feature for the subgoal space can increase σ2i when rg > ri while decrease σ2i when rg < ri. Therefore, we have:\nf(S(σ2i )) + f(σ2j ) ≤ f(σ2i ) + f(σ2j ) ≤ f(σ2i ) + f(S(σ2j )). (17)\n(2) rg ≤ ri ≤ rj . Since f(S(σ2i )) + f(σ2j ) = f( r2g 3 ) + f(σ 2 j ), and f(σ 2 i ) + f(S(σ2j )) =\nf(σ2i ) + f( r2g 3 ). Notice that f(σ 2 j ) < f(σ 2 i ), then Eq. 16 holds.\n(3) ri ≤ rj ≤ rg . Rewrite Eq. 16 as\nf(S(σ2i ))− f(σ2i ) ≤ f(S(σ2j ))− f(σ2j ) =⇒f ( r2i −\n2r3i 3rg\n) − f ( σ2i ) ≤ f ( r2j −\n2r3j 3rg\n) − f(σ2j )\n=⇒ log (\n3− 2ri rg\n) +\n6r2ri − 6r2rg r2i (3rg − 2ri)\n≤ log (\n3− 2rj rg\n) +\n6r2rj − 6r2rg r2j (3rg − 2rj) .\n(18)\nTo prove Eq. 18, we consider another function:\ng(x) = log (3− 2x) + 6t 2x− 6t2\nx2(3− 2x) , (19)\nwhere t = rrg >> 1. It’s easy to find that g ′(x) > 0 when x ∈ (0, 1), which means g(x) is monotonically increasing when x ∈ (0, 1). Therefore, Eq. 18 holds.\nNow we have proved that Eq. 16 holds under all possible conditions, which means selecting the slower features as the subgoal space will lead to a smaller KL divergence, i.e., better exploration. Thus Theorem 1 follows immediately." }, { "heading": "B EXPERIMENTAL DETAILS", "text": "B.1 ENVIRONMENTS\nThe environments of Point Maze, Ant Maze, Ant Push, and Ant Fall are as described in Nachum et al. (2019), shown in Figure 7. In each navigation task, we create an environment composed of 4× 4× 4 blocks, some movable and some with fixed position. During training, the target locations (x, y) are randomly selected by the environment from all possible points. Final results are evaluated on a single challenging goal denoted by a small green block. For the ‘Images’ versions of these environments, we zero-out the x, y coordinates in the observation and append a low-resolution 5× 5× 3 top-down view of the environment, equal to that used in Nachum et al. (2019).\nThe Ant FourRooms task has a much larger maze structure, which is four times as large as the Ant Maze task. So the maximal episode length for Ant FourRooms is also larger, which equals 1000. 
The maximal episode lengths of the other tasks are 500.\nB.2 NETWORK STRUCTURE\nThe actor network for each level is a Multi-Layer Perceptron (MLP) with two hidden layers of dimension 256 using ReLU activations. The critic network structure for each level is identical to that of the actor network. We scale the outputs of the actor networks of both levels to the range of corresponding action space with tanh nonlinearities. The representation function φ(s) is parameterized by an MLP with one hidden layer of dimension 100 using ReLU activations.\nB.3 TRAINING PARAMETERS\n• Discount factor γ = 0.99 for both levels. • Adam optimizer; learning rate 0.0002. • Soft update targets τ = 0.005 for both levels. • Replay buffer of size 1e6 for both levels. • Reward scaling of 0.1 for both levels. • Entropy coefficient of SAC α = 0.2 for both levels. • Low-level policy length c = 10 for the Point robot and c = 20 for the Ant robot except for\nthe Ant Push task. In the Ant Push task, c = 50. • Subgoal dimension of size 2. We train the high-level policy to output actions in [−10, 10]2\nwhen c = 10 or c = 20 ([−20, 20]2 when c = 50). These actions correspond to desired deltas in state representation.\nWe did not perform a grid search on hyper-parameters, therefore better performances might be possible for these experiments.\nB.4 EVALUATION\nLearned hierarchical policies are evaluated every 25000 timesteps by averaging performance over 10 random episodes." }, { "heading": "C ADDITIONAL EXPERIMENTAL RESULTS", "text": "Table 1 demonstrates that the dynamics of the features learned by our method are slow. NOR has a relatively good performance in Ant Maze and Ant Maze (Images), since the NOR features in these two tasks are slower than those in other tasks. The state space of the Point robot is low-dimensional and contains little information other than the (x, y) position, so the slow features (positions) are easy to be selected by a random strategy. But NOR projects the state space of the Point robot to a latent space with fast dynamics, which results in unsatisfactory performance." } ]
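For quick reference, the hyper-parameters listed in B.3 and the evaluation setting in B.4 can be collected into a single configuration object. This is our own transcription into code; the dataclass and its field names are illustrative, and the defaults reflect the Ant robot maze setting.

```python
from dataclasses import dataclass

@dataclass
class LessonConfig:
    """Hyper-parameters transcribed from Appendix B.3/B.4 (field names are ours)."""
    gamma: float = 0.99            # discount factor, both levels
    lr: float = 2e-4               # Adam learning rate
    tau: float = 0.005             # soft target-update coefficient, both levels
    buffer_size: int = 1_000_000   # replay buffer size, both levels
    reward_scale: float = 0.1      # reward scaling, both levels
    sac_alpha: float = 0.2         # SAC entropy coefficient, both levels
    c: int = 20                    # low-level policy length (10 for Point, 50 for Ant Push)
    subgoal_dim: int = 2           # dimension of the latent subgoal space
    subgoal_bound: float = 10.0    # high-level actions in [-10, 10]^2 (20 when c = 50)
    eval_interval: int = 25_000    # evaluate every 25,000 timesteps over 10 episodes

cfg = LessonConfig()  # defaults correspond to the Ant Maze configuration
```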
2021
null
SP:1d56942da0ed8d8280bd444bf9265b79b33b07eb
[ "This paper introduces “source-aware” GMM attention and applies it to offline, online, long-form ASR. The value of source-aware GMM attention appears to be its ability to “ignore” long segments of silence in the input audio, which could potentially be more difficult to do using other attention mechanisms. Fairly competitive results are presented for offline ASR. For online ASR, the results are state-of-the-art amongst sequence-to-sequence-based models. " ]
Transformers with soft attention have been widely adopted in various sequence-to-sequence (Seq2Seq) tasks. Whereas soft attention is effective for learning semantic similarities between queries and keys based on their contents, it does not explicitly model the order of elements in sequences, which is crucial for monotonic Seq2Seq tasks. Learning monotonic alignments between input and output sequences may be beneficial for long-form and online inference applications that are still challenging for the conventional soft attention algorithm. Herein, we focus on monotonic Seq2Seq tasks and propose a source-aware Gaussian mixture model attention in which the attention scores are monotonically calculated considering both the content and order of the source sequence. We experimentally demonstrate that the proposed attention mechanism improves performance on online and long-form speech recognition without performance degradation in offline in-distribution speech recognition.
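As background for the mechanism this abstract describes, a minimal location-based GMM attention in the style of Graves (2013) can be sketched as follows: the component means are updated with non-negative increments, which is what makes the alignment monotonic. The softplus parameterization and normalized mixture weights follow common practice (e.g., Battenberg et al., 2020); this sketch illustrates the conventional baseline, not the proposed SAGMM, which additionally conditions the mixture parameters on source content.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GMMAttention(nn.Module):
    """Location-based GMM attention (Graves, 2013): for decoder step i,
    alpha_{i,j} = sum_k w_{i,k} * exp(-(j - mu_{i,k})^2 / (2 * sigma_{i,k}^2)),
    with mu_{i,k} = mu_{i-1,k} + softplus(delta_{i,k}), so means move monotonically."""
    def __init__(self, query_dim: int, n_mix: int = 4):
        super().__init__()
        self.n_mix = n_mix
        self.proj = nn.Linear(query_dim, 3 * n_mix)  # -> (w, delta, sigma) parameters

    def forward(self, query, memory, prev_mu):
        # query: (B, D), memory: (B, T, D_enc), prev_mu: (B, K)
        w_hat, delta_hat, sigma_hat = self.proj(query).chunk(3, dim=-1)
        w = torch.softmax(w_hat, dim=-1)       # normalized mixture weights
        mu = prev_mu + F.softplus(delta_hat)   # monotonic (non-decreasing) means
        sigma = F.softplus(sigma_hat) + 1e-4   # strictly positive widths
        j = torch.arange(memory.size(1), device=memory.device).float()  # (T,)
        # (B, K, T) unnormalized Gaussian scores, summed over mixture components
        scores = (w.unsqueeze(-1) *
                  torch.exp(-0.5 * ((j - mu.unsqueeze(-1)) / sigma.unsqueeze(-1)) ** 2)
                  ).sum(dim=1)                 # (B, T)
        context = torch.bmm(scores.unsqueeze(1), memory).squeeze(1)  # (B, D_enc)
        return context, scores, mu
```

At each decoder step, the returned `mu` is fed back as `prev_mu`, so the attention window can only advance along the encoder outputs; this forward-only movement is what enables streaming and long-form decoding.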
[]
[ { "authors": [ "Naveen Arivazhagan", "Colin Cherry", "Wolfgang Macherey", "Chung-Cheng Chiu", "Semih Yavuz", "Ruoming Pang", "Wei Li", "Colin Raffel" ], "title": "Monotonic infinite lookback attention for simultaneous machine translation", "venue": null, "year": 2019 }, { "authors": [ "Eric Battenberg", "RJ Skerry-Ryan", "Soroosh Mariooryad", "Daisy Stanton", "David Kao", "Matt Shannon", "Tom Bagby" ], "title": "Location-relative attention mechanisms for robust long-form speech synthesis", "venue": null, "year": 2020 }, { "authors": [ "William Chan", "Chitwan Saharia", "Geoffrey Hinton", "Mohammad Norouzi", "Navdeep Jaitly" ], "title": "Imputer: Sequence modelling via imputation and dynamic programming", "venue": null, "year": 2020 }, { "authors": [ "Chung-Cheng Chiu", "Colin Raffel" ], "title": "Monotonic chunkwise attention", "venue": null, "year": 2018 }, { "authors": [ "Chung-Cheng Chiu", "Wei Han", "Yu Zhang", "Ruoming Pang", "Sergey Kishchenko", "Patrick Nguyen", "Arun Narayanan", "Hank Liao", "Shuyuan Zhang", "Anjuli Kannan", "Rohit Prabhavalkar", "Zhifeng Chen", "Tara Sainath", "Yonghui Wu" ], "title": "A comparison of end-to-end models for long-form speech recognition", "venue": null, "year": 2019 }, { "authors": [ "Kyunghyun Cho", "Bart van Merrienboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder-decoder for statistical machine", "venue": null, "year": 2014 }, { "authors": [ "Jan Chorowski", "Dzmitry Bahdanau", "Dmitriy Serdyuk", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Attention-based models for speech recognition", "venue": "Neurips,", "year": 2015 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime Carbonell", "Quoc V. Le", "Ruslan Salakhutdinov" ], "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "venue": null, "year": 2019 }, { "authors": [ "Linhao Dong", "Bo Xu" ], "title": "Cif: Continuous integrate-and-fire for end-to-end speech recognition", "venue": null, "year": 2020 }, { "authors": [ "Linhao Dong", "Shuang Xu", "Bo Xu" ], "title": "Speech-transformer: A no-recurrence sequence-to-sequence model for speech recognition", "venue": null, "year": 2018 }, { "authors": [ "Linhao Dong", "Feng Wang", "Bo Xu" ], "title": "Self-attention aligner: A latency-control end-to-end model for asr using self-attention network and chunk-hopping", "venue": null, "year": 2019 }, { "authors": [ "Maha Elbayad", "Laurent Besacier", "Jakob Verbeek" ], "title": "Efficient wait-k models for simultaneous machine", "venue": "translation. 
ArXiv,", "year": 2020 }, { "authors": [ "Alex Graves" ], "title": "Sequence transduction with recurrent neural networks", "venue": "ArXiv, abs/1211.3711,", "year": 2012 }, { "authors": [ "Alex Graves" ], "title": "Generating sequences with recurrent neural networks", "venue": "ArXiv, abs/1308.0850,", "year": 2013 }, { "authors": [ "Alex Graves", "Santiago Fernández", "Faustino Gomez", "Jürgen Schmidhuber" ], "title": "onnectionist tem-poral classification: labelling unsegmented sequence data with recurrent neural networks", "venue": null, "year": 2006 }, { "authors": [ "Awni Hannun", "Ann Lee", "Qiantong Xu", "Ronan Collobert" ], "title": "Sequence-to-sequence speech recognition with time-depth separable convolutions", "venue": "ArXiv, abs/1904.02619,", "year": 2019 }, { "authors": [ "Wenyong Huang", "Wenchao Hu", "Yu Ting Yeung", "Xiao Chen" ], "title": "Conv-transformer transducer: Low latency, low frame rate, streamable end-to-end speech recognition", "venue": null, "year": 2020 }, { "authors": [ "Hirofumi Inaguma", "Masato Mimura", "Tatsuya Kawahara" ], "title": "Enhancing monotonic multihead attention for streaming", "venue": "asr. ArXiv,", "year": 2020 }, { "authors": [ "Shigeki Karita", "Nanxin Chen", "Tomoki Hayashi", "Takaaki Hori", "Hirofumi Inaguma", "Ziyan Jiang", "Masao Someki", "Nelson Enrique Yalta Soplin", "Ryuichi Yamamoto", "Xiaofei Wang", "Shinji Watanabe", "Takenori Yoshimura", "Wangyou Zhang" ], "title": "A comparative study on transformer vs rnn in speech applications", "venue": null, "year": 2019 }, { "authors": [ "Kwangyoun Kim", "Kyungmin Lee", "Dhananjaya Gowda", "Junmo Park", "Sungsoo Kim", "Sichen Jin", "Young-Yoon Lee", "Jinsu Yeo", "Daehyun Kim", "Seokyeong Jung", "Jungin Lee", "Myoungji Han", "Chanwoo Kim" ], "title": "Attention based on-device streaming speech recognition with large speech", "venue": null, "year": 2019 }, { "authors": [ "Suyoun Kim", "Takaaki Hori", "Shinji Watanabe" ], "title": "Joint ctc-attention based end-to-end speech recognition using multi-task learning", "venue": "ArXiv,", "year": 2016 }, { "authors": [ "Taku Kudo" ], "title": "Subword regularization: Improving neural network translation models with multiple subword candidates", "venue": null, "year": 2018 }, { "authors": [ "Hyeonseung Lee", "Woo Hyun Kang", "Sung Jun Cheon", "Hyeongju Kim", "Nam Soo Kim" ], "title": "Gated recurrent context: Softmax-free attention for online encoder-decoder speech recognition", "venue": null, "year": 2007 }, { "authors": [ "Jason Li", "Vitaly Lavrukhin", "Boris Ginsburg", "Ryan Leary", "Oleksii Kuchaiev", "Jonathan M Cohen", "Huyen Nguyen", "Ravi Teja Gadde" ], "title": "Jasper: An end-to-end convolutional neural acoustic model", "venue": null, "year": 2019 }, { "authors": [ "Naihan Li", "Yanqing Liu", "Yu Wu", "Shujie Liu", "Sheng Zhao", "Ming Liu" ], "title": "Robutrans: A robust transformerbased text-to-speech model", "venue": null, "year": 2020 }, { "authors": [ "Vitaliy Liptchinsky", "Gabriel Synnaeve", "Ronan Collobert" ], "title": "Letter-based speech recognition with gated convnets", "venue": "ArXiv,", "year": 2017 }, { "authors": [ "Haoran Miao", "Gaofeng Cheng", "Pengyuan Zhang", "Ta Li", "Yonghong Yan" ], "title": "Online hybrid ctc/attention architecture for end-to-end speech recognition", "venue": null, "year": 2019 }, { "authors": [ "Niko Moritz", "Takaaki Hori", "Jonathan Le Roux" ], "title": "Triggered attention for end-to-end speech recognition", "venue": null, "year": 2019 }, { "authors": [ "Niko Moritz", "Takaaki Hori", "Jonathan 
Le Roux" ], "title": "Streaming automatic speech recognition with the transformer model", "venue": "ArXiv, abs/2001.02674,", "year": 2020 }, { "authors": [ "Vassil Panayotov", "Guoguo Chen", "Daniel Povey", "Sanjeev Khudanpur" ], "title": "Librispeech: an asr corpus based on public domain audio", "venue": "books. ICASSP,", "year": 2015 }, { "authors": [ "Daniel S Park", "William Chan", "Yu Zhang", "Chung-Cheng Chiu", "Barret Zoph", "Ekin D Cubuk", "Quoc V Le" ], "title": "Specaugment: A simple data augmentation method for automatic speech recognition", "venue": null, "year": 2019 }, { "authors": [ "Rohit Prabhavalkar", "Tara N. Sainath", "Yonghui Wu", "Patrick Nguyen", "Zhifeng Chen", "Chung-Cheng Chiu", "Anjuli Kannan" ], "title": "Minimum word error rate training for attention-based sequence-to-sequence models", "venue": null, "year": 2018 }, { "authors": [ "Colin Raffel", "Minh-Thang Luong", "Peter J. Liu", "Ron J. Weiss", "Douglas Eck" ], "title": "Online and linear-time attention by enforcing monotonic alignments", "venue": null, "year": 2017 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J. Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text", "venue": "transformer. ArXiv,", "year": 2019 }, { "authors": [ "Sara Sabour", "William Chan", "Mohammad Norouzi" ], "title": "Optimal completion distillation for sequence learning", "venue": null, "year": 2019 }, { "authors": [ "Peter Shaw", "Jakob Uszkoreit", "Ashish Vaswani" ], "title": "Self-attention with relative position representations", "venue": null, "year": 2018 }, { "authors": [ "Gabriel Synnaeve", "Qiantong Xu", "Jacob Kahn", "Tatiana Likhomanenko", "Edouard Grave", "Vineel Pratap", "Anuroop Sriram", "Vitaliy Liptchinsky", "Ronan Collobert" ], "title": "End-to-end asr: from supervised to semisupervised learning with modern architectures", "venue": "Workshop on Self-supervision in Audio and Speech,,", "year": 2020 }, { "authors": [ "Emiru Tsunoo", "Yosuke Kashiwagi", "Shinji Watanabe" ], "title": "Streaming transformer asr with blockwise synchronous inference", "venue": "ArXiv, abs/2006.149411,", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Neurips,", "year": 2017 }, { "authors": [ "Pete Warden" ], "title": "Speech commands: A dataset for limited-vocabulary speech", "venue": "recognition. ArXiv,", "year": 2018 }, { "authors": [ "Shinji Watanabe", "Takaaki Hori", "Suyoun Kim", "John R. Hershey", "Tomoki Hayashi" ], "title": "Hybrid ctc/attention architecture for end-to-end speech recognition", "venue": "IEEE Journal of Selected Topics in Signal Processing,", "year": 2017 }, { "authors": [ "Ching-Feng Yeh", "Jay Mahadeokar", "Kaustubh Kalgaonkar", "Yongqiang Wang", "Duc Le", "Mahaveer Jain", "Kjell Schubert", "Christian Fuegen", "Michael L. 
Seltzer" ], "title": "Transformer-transducer: End-to-end speech recognition with self-attention", "venue": null, "year": 1910 }, { "authors": [ "Albert Zeyer", "André Merboldt", "Ralf Schlüter", "Hermann Ney" ], "title": "A comprehensive analysis on attention models", "venue": "NIPS: Workshop IRASL,", "year": 2018 }, { "authors": [ "Qian Zhang", "Han Lu", "Hasim Sak", "Anshuman Tripathi", "Erik McDermott", "Stephen Koo", "Shankar Kumar" ], "title": "Transformer transducer: A streamable speech recognition model with transformer encoders and rnn-t", "venue": "loss. ArXiv,", "year": 2020 }, { "authors": [ "Dong" ], "title": "In the speech command dataset experiment, We trained the multi-headed transformer Vaswani et al", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years, transformer models with soft attention have been widely adopted in various sequence generation tasks (Raffel et al., 2019; Vaswani et al., 2017; Parmar et al., 2018; Karita et al., 2019). Soft attention does not explicitly model the order of elements in a sequence and attends all encoder outputs for each decoder step. However, the order of elements is crucial for understanding monotonic sequence-to-sequence (Seq2Seq) tasks, such as automatic speech recognition (ASR), video analysis, and lip reading. Learning monotonic alignments enables the model to attend to a subset of the encoder output without performance degradation in these tasks. In comparison, soft attention is not suitable for streaming inference applications because the softmax operation needs to wait until all encoder outputs are produced. Figure 1 (b) shows the attention plot for soft-attention. Soft attention learns the alignments between queries and keys based on their similarities; it requires all encoder tokens prior to the attention score calculation. Furthermore, soft attention cannot easily decode long-form sequences that are not considered in the training corpus.\nThe Gaussian Mixture Model (GMM) attention Graves (2013); Battenberg et al. (2020); Chiu et al. (2019) have been proposed for learning the monotonic mapping between encoder and decoder states for long-form sequence generation. The GMM attention is a pure location-aware algorithm in which encoder contents are not considered during attention score calculation. However, each element in the encoder output sequence contains different amounts of information and should be attended considering their contents. In figure 1 (c), the GMM attention fails to learn the detailed alignments and attends to many tokens simultaneously.\nIn this study, we adopted the GMM attention mechanism to the modern transformer structure and proposed the Source-Aware Gaussian Mixture Model (SAGMM) attention which considers both contents and orders of source sequences. Each component in the SAGMM is multi-modal and discards non-informative tokens in the attention window. For online inference, we propose a truncated SAGMM (SAGMM-tr) that discards the long-tail of the attention score in the SAGMM. To the best of our knowledge, this is the first\nattempt to adopt a GMM-based attention to online sequence generation tasks. Learning accurate monotonic alignments enables the SAGMM-tr to attend to a relevant subset of sequences for each decoder step and improves the performance of the model in terms of streaming and long-form sequence generation tasks. Figure 1 (d) shows the monotonic alignments learned by the SAGMM-tr, enabling online inference. Experiments involving streaming and long-form ASR showed substantial performance improvements compared with conventional algorithms without performance degradation in offline in-distribution ASR. Furthermore, we tested the SAGMM-tr in a machine translation task and demonstrated the performance of the proposed algorithm in non-monotonic tasks." }, { "heading": "2 SOURCE-AWARE GMM ATTENTION", "text": "" }, { "heading": "2.1 SOFT ATTENTION", "text": "Herein, we abbreviate the head index h during attention score calculation for simplicity. In dot-product multi-head soft attention Vaswani et al. 
(2017) without relative positional encoding, the attention score of soft attention α_Soft is derived from the query matrix Q ∈ R^{I×d} and key matrix K ∈ R^{J×d} as follows:

\alpha_{\mathrm{Soft}} = \mathrm{softmax}\left( QK^\top / \sqrt{d} \right) \quad (1)

where d, I, and J are the feature dimension, the decoder sequence length, and the encoder sequence length, respectively. The attention context matrix from the h-th head, H^h, and the multi-head output M are expressed as

H^h = \alpha^h_{\mathrm{Soft}} V^h \quad (2)
M = \mathrm{concat}\left[ H^1; \ldots; H^{n_h} \right] W^O \quad (3)

where α^h_Soft denotes α_Soft for the h-th head, V^h ∈ R^{J×d} the value matrix for the h-th head, and n_h the number of heads. In this study, we adopted relative positional encoding Shaw et al. (2018); Dai et al. (2019), which provides a stronger baseline for long-form sequence generation, for the self-attention layers." }, { "heading": "2.2 GMM ATTENTION", "text": "The previous studies regarding GMM attention Battenberg et al. (2020); Chiu et al. (2019) were based on early content-based attention (Cho et al., 2014). Li et al. (2020) adopted the GMM attention to the transformer framework, but did not provide detailed descriptions. Herein, we adopt the v2 model in Battenberg et al. (2020), which improved the performance of the original GMM attention mechanism Graves (2013).

We define the GMM attention as a variant of multi-head attention by considering a Gaussian distribution component as the attention score of a single head in a multi-head mechanism. In the study by Battenberg et al. (2020), the value matrix was shared for all Gaussian components, whereas in this study multi-head value matrices were multiplied with the probability from the corresponding components to attend to information from different representation subspaces (Vaswani et al., 2017). Hence, the multi-head GMM attention introduced here is a more generalized algorithm compared with the early GMM attention.

Let us denote the i-th row of Q as Q_i ∈ R^{1×d}. The normal distribution parameters for the i-th step are expressed as

\Delta_i, \sigma_i, \phi_i = \zeta(Q_i W_\Delta),\ \zeta(Q_i W_\sigma),\ Q_i W_\phi \quad (4)
\mu_i = \Delta_i + \mu_{i-1} \quad (5)

where ζ(x) is the softplus function of x; W_Δ ∈ R^{d×1}, W_σ ∈ R^{d×1}, and W_φ ∈ R^{d×1}. The softplus function was adopted, similar to the study of Battenberg et al. (2020), in which softplus activation demonstrated better performances than the exponential operation. A mixture component of the GMM attention, from the i-th decoder step to the j-th encoder token, α_{GMM‖i,j}, is defined as follows:

\mathcal{N}(j; \mu_i, \sigma_i) = \frac{1}{\sqrt{2\pi\sigma_i}} \exp\left( -\frac{(j - \mu_i)^2}{2\sigma_i} \right) \quad (6)
\alpha_{\mathrm{GMM}\|i,j} = \mathcal{N}(j; \mu_i, \sigma_i) \quad (7)
H^h_i = \mathrm{softmax}_h(\phi^h_i) \sum_j \alpha^h_{\mathrm{GMM}\|i,j} V^h_j \quad (8)

where softmax_h denotes the softmax function over heads.

The conventional GMM attention mechanism is analogous to an integral over the source sequence with uniform axis spacing, as shown in Figure 2 (a). In this figure, each rectangle denotes the attention score with the specified Gaussian component parameters. The GMM attention assumes that each encoder output is equally important. However, this assumption is not satisfied for many input modalities, e.g., speech and videos from real environments. Moreover, the number of modes in the GMM attention is limited by the number of mixture components. To learn robust alignments for monotonic Seq2Seq tasks, we propose the SAGMM, which considers both the contents and locations for the attention mechanism.
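To make equations 4-8 concrete, the following minimal NumPy sketch runs one GMM-attention decoder step; it is our own illustrative code, and all names (softplus, gmm_attention_step, W_delta, ...) are ours rather than the authors':

import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def gmm_attention_step(Q_i, V, W_delta, W_sigma, W_phi, mu_prev):
    # Q_i: (n_h, d) decoder queries at step i; V: (n_h, J, d) encoder values.
    delta = softplus(Q_i @ W_delta)                     # equation 4
    sigma = softplus(Q_i @ W_sigma)
    phi = Q_i @ W_phi
    mu = mu_prev + delta                                # equation 5: monotone mean
    j = np.arange(V.shape[1])                           # encoder token indices
    alpha = np.exp(-(j[None, :] - mu[:, None]) ** 2 / (2 * sigma[:, None]))
    alpha /= np.sqrt(2 * np.pi * sigma[:, None])        # equations 6-7
    w = np.exp(phi - phi.max()); w /= w.sum()           # softmax over heads
    H = w[:, None] * np.einsum('hj,hjd->hd', alpha, V)  # equation 8
    return H, mu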
" }, { "heading": "2.3 SOURCE-AWARE GMM ATTENTION", "text": "Figure 2 (b) shows the scheme of the proposed SAGMM attention. Compared with the GMM attention, the SAGMM is analogous to an integral of the normal distribution with non-uniform spacing based on the encoder contents. In Figure 2 (b), the width of the rectangle δ_j allows the model to selectively attend to the informative tokens during the attention score calculation. This content-aware property of the SAGMM enables the model to learn stable monotonic alignments from the training corpus. Furthermore, the SAGMM can easily discard non-informative tokens and aggregate information distributed over several remote tokens.

In the SAGMM, the normal distribution parameters Δ_i, σ_i, φ_i, and μ_i are derived from equations 4 and 5. To encode the contents of the source sequences, the weight for each encoder output δ_j is obtained from the j-th row of the key matrix K as follows:

\delta_j = \mathrm{sigmoid}(K_j W_\delta) \quad (9)

where W_δ ∈ R^{d×1}, and the sigmoid function is introduced to smoothly bound the maximum weight δ_j, similar to Dong & Xu (2020).

Subsequently, the probability of the normal distribution N(ν_j; μ_i, σ_i) is calculated at the cumulative sum ν_j, expressed as

\nu_j = \delta_j + \nu_{j-1} \quad (10)
\mathcal{N}(\nu_j; \mu_i, \sigma_i) = \frac{1}{\sqrt{2\pi\sigma_i}} \exp\left( -\frac{(\nu_j - \mu_i)^2}{2\sigma_i} \right) \quad (11)

Finally, the SAGMM attention score α_SAGMM is defined as follows:

\alpha_{\mathrm{SAGMM}\|i,j} = \delta_j\, \mathcal{N}(\nu_j; \mu_i, \sigma_i) \quad (12)

where δ_j is multiplied to describe the uneven step sizes in the summation.

In the early stage of training, we introduced a length penalty loss to facilitate alignment learning between μ and ν; it is expressed as

L_{\mathrm{length}} = \lambda_{\mathrm{length}} \left( (\mu_I - \min(I, J))^2 + (\nu_J - \min(I, J))^2 \right) \quad (13)

with λ_length = 0.0005, where I and J denote the lengths of the decoder and encoder sequences, respectively. We turned off L_length after 200K training steps in the experiments. Finally, we modified equation 5 as μ_i = μ_{i-1} + min(max(Δ_i, 0), 3) to scale μ_i and ν_j similarly to the token indices. This modification facilitates the interpretability of μ and ν.

The attention scores for the encoder tokens in the SAGMM are determined independently because they do not rely on the softmax operation over the encoder outputs. It is noteworthy that Σ_j δ_j N(ν_j; μ_i, σ_i) approximates the integral of the Gaussian distribution. Hence, the sum of the attention weights is approximately 1, thereby facilitating numerical stability and learning without using softmax." }, { "heading": "2.4 SAGMM-TR FOR ONLINE INFERENCE", "text": "Since the attention score in the SAGMM is generated without softmax normalization over all encoder tokens, we can simply build the attention for streaming inference by cropping the long tail of the Gaussian distribution. In SAGMM-tr, the normal distribution is truncated to limit the past and future contexts as follows:

\mathcal{N}_{tr}(\nu_j; \mu_i, \sigma_i) = \begin{cases} \frac{1}{\sqrt{2\pi\sigma_i}} \exp\left( -\frac{(\nu_j - \mu_i)^2}{2\sigma_i} \right), & \text{if } \mu_i - 2\sqrt{\sigma_i} < \nu_j < \mu_i + 2\sqrt{\sigma_i} \\ 0, & \text{else} \end{cases} \quad (14)

\alpha_{\mathrm{SAGMM\text{-}tr}\|i,j} = \delta_j\, \mathcal{N}_{tr}(\nu_j; \mu_i, \sigma_i) \quad (15)
H^h_i = \mathrm{softmax}_h(\phi^h_i) \sum_{j:\ \mu_i - 2\sqrt{\sigma_i} < \nu_j < \mu_i + 2\sqrt{\sigma_i}} \alpha^h_{\mathrm{SAGMM\text{-}tr}\|i,j} V^h_j \quad (16)

Discarding the tokens with a threshold of 2√σ_i removes approximately 5% of the attention score. In online inference with the SAGMM-tr, H^h_i can be calculated after ν_j exceeds μ_i + 2√σ_i. It is noteworthy that once ν_j ≥ μ_i + 2√σ_i holds for a current token j = β_i, it holds for all j > β_i, so H^h_i can be emitted without waiting for future context.
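For concreteness, the sketch below (ours, with our own variable names, not the authors' implementation) computes the SAGMM score of equations 9-12 for one head and optionally applies the truncation of equations 14-15:

import numpy as np

def sagmm_scores(K, W_delta, mu_i, sigma_i, truncate=True):
    # K: (J, d) encoder keys; returns per-token attention scores alpha: (J,).
    delta = 1.0 / (1.0 + np.exp(-(K @ W_delta)))        # eq. 9: content weights
    nu = np.cumsum(delta)                               # eq. 10: non-uniform axis
    dens = np.exp(-(nu - mu_i) ** 2 / (2 * sigma_i)) / np.sqrt(2 * np.pi * sigma_i)
    if truncate:                                        # eq. 14: crop the long tail
        dens = np.where(np.abs(nu - mu_i) < 2 * np.sqrt(sigma_i), dens, 0.0)
    return delta * dens                                 # eq. 12 / eq. 15

Because the scores form a Riemann sum of the Gaussian over the ν axis, their total is approximately 1 without any softmax over encoder tokens.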
We started from the SAGMM model and fine-tuned the SAGMM-tr until the performance converged. We discovered that the models with the SAGMM-tr demonstrated slightly better performances than the SAGMM; hence, we mainly report the results involving the SAGMM-tr model herein.

For online speech recognition using the SAGMM-tr, we randomly concatenated the 1-vector after the end of the source sequence with a probability p_eos in the training stage and trained the model to emit the end-of-sequence (EOS) token only for the utterances containing the 1-vector. The 1-vector concatenation suppresses the EOS token in long silence parts. Finally, we built a unidirectional encoder whose maximum latency was similar to those of Zhang et al. (2020); Dong et al. (2019) by adopting a block-wise mask on the self-attention layers. A detailed explanation of the block-wise masking is provided in the Appendix. Algorithm 1 in the Appendix shows pseudo code for the SAGMM-tr attention in the inference stage.

The number of tokens required to compute equation 16 was determined by the model parameters (through σ_i). We trained and tested the SAGMM-tr with a fixed attention window width c to demonstrate the performance in environments with a maximum latency constraint. In this version, equations 14-16 were modified as follows:

\gamma_i = \arg\max_{\gamma_{i-1} \le j < J} \mathcal{N}(\nu_j; \mu_i, \sigma_i) \quad (17)
\mathcal{N}_{tr}(\nu_j; \mu_i, \sigma_i) = \begin{cases} \frac{1}{\sqrt{2\pi\sigma_i}} \exp\left( -\frac{(\nu_j - \mu_i)^2}{2\sigma_i} \right), & \text{if } \gamma_i - \frac{c}{2} < j < \gamma_i + \frac{c}{2} \\ 0, & \text{else} \end{cases} \quad (18)
\alpha_{\mathrm{SAGMM\text{-}tr}\|i,j} = \delta_j\, \mathcal{N}_{tr}(\nu_j; \mu_i, \sigma_i) \quad (19)
H^h_i = \sum_{\gamma_i - \frac{c}{2} < j < \gamma_i + \frac{c}{2}} \alpha_{\mathrm{SAGMM\text{-}tr}\|i,j} V_j \quad (20)

In equation 20, we wait c/2 additional tokens before producing the output. We compared the performance of the SAGMM-tr with adaptive and fixed window widths through experiments.
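A compact sketch of this fixed-window variant (equations 17-20) for a single head follows; again, the code and its names are ours, not the original implementation:

import numpy as np

def fixed_window_step(nu, delta, V, mu_i, sigma_i, gamma_prev, c):
    # nu, delta: (J,); V: (J, d); returns context H_i and window center gamma_i.
    dens = np.exp(-(nu - mu_i) ** 2 / (2 * sigma_i)) / np.sqrt(2 * np.pi * sigma_i)
    cand = np.arange(len(nu)) >= gamma_prev             # eq. 17: monotone search
    gamma = int(np.argmax(np.where(cand, dens, -np.inf)))
    j = np.arange(len(nu))
    inside = (j > gamma - c / 2) & (j < gamma + c / 2)  # eq. 18: fixed width c
    alpha = np.where(inside, delta * dens, 0.0)         # eq. 19
    return alpha @ V, gamma                             # eq. 20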
" }, { "heading": "3 RELATED WORKS", "text": "Chiu & Raffel (2018) propose monotonic chunkwise attention (MoChA) based on hard monotonic sampling over the attention window (Raffel et al., 2017). The monotonic multi-head attention Ma et al. (2019); Inaguma et al. (2020) and monotonic infinite lookback attention Arivazhagan et al. (2019) are based on a concept similar to that reported by Chiu & Raffel (2018). MoChA replaces the hard sampling operation with a probability distribution over memory in the training stage. By contrast, the SAGMM uses the same operations for training and inference. Furthermore, MoChA relies on a numerically unstable cumulative product that requires additional effort for training (Miao et al., 2019; Raffel et al., 2017). By contrast, the SAGMM adopts a stable cumulative sum to describe monotonic alignments. Finally, MoChA is not effective for long-form speech recognition compared with the GMM attention Chiu et al. (2019). Meanwhile, our model outperformed the GMM on the "test-long" set in experiments.

Dong & Xu (2020) introduce the encoder output weight for the ASR task. By contrast, our attention mechanism considers the decoder content during the attention score calculation. Furthermore, the SAGMM allows overlap between successive attention windows, facilitating semantics that span several decoder tokens. Lee et al. (2020) proposed an online softmax-free attention mechanism by aggregating the information of the encoded sequence using an update gate. Similar to soft attention, their approach does not explicitly model monotonic alignments. Furthermore, the update gate for online inference in Lee et al. (2020) is uni-modal.

The location-aware attention Chorowski et al. (2015); Watanabe et al. (2017); Moritz et al. (2019), which introduces the previous attention score as an additional argument to the current step, was proposed to employ positional information in a content-based attention mechanism. The location-aware attention improved the performance of offline inference, whereas we introduced monotonicity in our study for online inference.

Non-attentive neural network-based approaches that do not rely on encoder-decoder attention have been investigated based on Connectionist Temporal Classification (CTC) Graves et al. (2006); Liptchinsky et al. (2017); Li et al. (2019), RNN-transducer (RNN-T) Graves (2012), Transformer-transducer (Transformer-T) Zhang et al. (2020); Yeh et al. (2019), and Imputer Chan et al. (2020). Herein, we focus on the Seq2Seq approach, which does not require an assumption regarding sequence lengths and uses the simple beam search algorithm for inference. Block-wise inference has been widely investigated using manually defined decoding blocks Jaitly et al. (2015); Tsunoo et al. (2020) or joint decoding using a CTC model Moritz et al. (2020). We did not use joint CTC decoding Kim et al. (2016) or human supervision for the block-wise inference." }, { "heading": "4 EXPERIMENT", "text": "We conducted experiments on the speech command Warden (2018) and LibriSpeech Panayotov et al. (2015) datasets to compare the performance of our model in standard, online, and long-form ASR tasks. We only compared end-to-end speech recognition models without external language model (LM) rescoring. Furthermore, we performed preliminary experiments on a translation task to demonstrate the performance of the SAGMM-tr on non-monotonic tasks." }, { "heading": "4.1 EXPERIMENTS ON CONCATENATED SPEECH COMMAND DATASET", "text": "First, we performed a speech recognition experiment with a limited vocabulary to demonstrate the difficulty of using the typical soft attention algorithm in decoding utterances with unseen sequence lengths. The speech command dataset consists of 1-second-long speeches from various speakers uttering single words from a vocabulary of 30 words. We built 100K utterances in the training corpus Warden (2018) by concatenating 5-9 randomly selected utterances in the training data and 500 test utterances by concatenating {3, 7, 10, 15, 20} randomly selected words in the test set. We cropped both the start and end of the selected utterances randomly from 0.05s to 0.15s before concatenation to prevent the SAGMM from yielding a trivial solution.

We compared the word error rate (WER) of transformers with soft and SAGMM attentions on the test sets. As shown in Table 1, the soft attention mechanism tends to memorize the sequence length distribution from the training corpus and fails to decode correctly for unseen sequence lengths. Meanwhile, the proposed SAGMM algorithm was robust to sequence length mismatches. Detailed hyperparameters for the experiment and decoded examples for soft attention and the SAGMM are shown in the Appendix.
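As an illustration of the corpus construction above, the following is our own sketch; the 16 kHz sample rate and the helper names are assumptions for illustration, not taken from the paper:

import numpy as np

def concat_utterances(utts, sr=16000, n_min=5, n_max=9, rng=np.random):
    # utts: list of 1-D waveform arrays; returns one concatenated waveform.
    n = rng.randint(n_min, n_max + 1)
    pieces = []
    for idx in rng.choice(len(utts), n):
        u = utts[idx]
        head = int(rng.uniform(0.05, 0.15) * sr)        # random crop at the start
        tail = int(rng.uniform(0.05, 0.15) * sr)        # and at the end
        pieces.append(u[head:len(u) - tail])
    return np.concatenate(pieces)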
" }, { "heading": "4.2 EXPERIMENTS ON LIBRISPEECH DATASET", "text": "We conducted experiments on LibriSpeech, whose training dataset comprises 960h of read audio books (Panayotov et al., 2015). We validated the performance of the models on the development sets and reported the WERs on the test sets with a fixed beam width of 4.

First, we compared the performance of the SAGMM-tr with conventional algorithms in offline speech recognition. We trained the multi-head transformers similarly to the previous experiments. The encoder comprised 10 layers of self-attention blocks with 768 hidden nodes and 3,072 filters. The decoder was constructed by stacking 4 self-attention blocks with the same node and filter sizes. The encoder and decoder self-attention employed soft attention with relative positional encoding (Dai et al., 2019). We trained and tested models that employed soft attention, GMM attention, and SAGMM-tr attention mechanisms as the encoder-decoder attention. We adopted the auxiliary CTC loss on the encoder output with λ_ctc = 0.1 for a fair comparison with the previous attention-based models Lee et al. (2020); Dong & Xu (2020); Kim et al. (2019). The WER of the SAGMM-tr model without CTC loss was 3.84% on the test-clean dataset. Detailed hyperparameters used in the experiment and the performance of the SAGMM-tr without CTC loss are shown in the Appendix.

Table 2 shows the performance of various non-streamable models. Among the non-streamable models, our baseline transformer demonstrated performances similar to those of previous studies considering the number of parameters. Furthermore, we compared the performances of the GMM and SAGMM-tr models with a bidirectional encoder to those of other streamable decoder algorithms. The performance of the SAGMM-tr attention was consistent with that of our transformer model, and it was better than those of other streamable Seq2Seq models. The results show that the SAGMM-tr successfully described the monotonic alignments between speech and transcription without attending to all encoder outputs. Next, we investigated the performance of the SAGMM-tr in an online ASR task. We trained single-head and multi-head SAGMM-tr models using the block-wise masked encoder. Additionally, the truncated GMM model was tested but discarded in the experiments because the model attended to the future context, as shown in Appendix A.4, and failed to learn the correct alignments. In the multi-head SAGMM-tr, several heads in the first decoder layer were not trained and softmax_h(φ^h_i) approached zero. Therefore, we pruned these heads after the performance on the development set had converged and fine-tuned for 30K additional steps. We believe that training and head pruning with φ^h_i can prevent over-fitting when the desired monotonic alignments do not require many heads, particularly in the first decoder layer.

Table 3 shows the performances of models based on various online inference approaches. As shown, the single-head SAGMM-tr outperformed other CTC and attention-based algorithms. The multi-head mechanism further improved the performance of the SAGMM-tr. Adopting a unidirectional encoder with the block-wise mask increased the WER on the test-clean set by 0.31%. Our model demonstrated slightly worse performance than state-of-the-art algorithms with transformer-T (Zhang et al., 2020). Subword regularization Kudo (2018); Hannun et al. (2019) and sequence-level losses Sabour et al. (2019); Prabhavalkar et al. (2018) may enable the SAGMM-tr to achieve performances similar to the best transducer models. Furthermore, the SAGMM-tr can easily be adopted in various monotonic Seq2Seq tasks; this is because it can be inferred via a simple beam search without the probability marginalization used in transducers, and it does not require the assumption that the target should be equal to or shorter than the source.

Figure 3 shows the attention plots of the single-head SAGMM-tr, which successfully captures alignments between speech and transcription for an utterance with a long silence. Furthermore, Figure 1 shows the attention alignments from the multi-head soft attention, truncated GMM attention, and SAGMM-tr attention (the full plots are shown in Appendix A.4).
In soft attention, several heads attend to the speech-absence frames constantly for every decoder step. These heads might facilitate utterance-level normalization of the attention context vectors. In future studies, the role of silence-attending heads in soft attention should be investigated and introduced to the SAGMM-tr attention. The truncated GMM model failed to learn the alignment and attended to a wide range of encoder tokens. Finally, the SAGMM-tr attention learned the monotonic alignments successfully. The SAGMM-tr attention plots in the lower layers were more blurred compared with the higher layers. We suppose that the lower layers found the relevant subsets in the sequence and the higher layers utilized the context vectors from the lower encoder-decoder attentions and were relatively more focused, partly resembling the behavior of soft attention.

The multi-head SAGMM-tr model showed more blurred alignments compared with the single-head model. Our hypothesis for this phenomenon is that several heads attended to larger windows to provide context information. The fixed attention window width in equations 17-20 or regularization on σ_i can minimize the maximum latency of online inference. In this study, we tested the fixed attention window width. We began with the SAGMM-tr models trained for the online ASR experiment and measured the WERs before and after fine-tuning with a constant window width (c = 15 frames). From the results in Table 4 (a), the SAGMM-tr attention mechanism can be adapted to models with a fixed attention window width.

Finally, we concatenated utterances from the same speaker in the "test-clean" set to build a "test-long" set for long-form speech recognition. The average length of the utterances in the test-long set was 54s. In the SAGMM-tr model, the beam search algorithm was slightly modified from Algorithm 1 to suppress the EOS token until the attention window encompassed ν_{h,J}. The performance of the GMM attention did not improve with the same modification since the GMM attention attends to ν_{h,J} prior to the end of a sentence.

As shown in Table 4 (b), the SAGMM-tr outperformed the conventional soft and GMM attention models in long-form speech recognition. Comparing the SAGMM-tr models, the multi-head SAGMM-tr showed worse performance than the single-head model. The performance degradation of the multi-head mechanism on the "test-long" set might arise from accumulated attention window range mismatches. Attention window ranges in multi-head models should be investigated in future studies." }, { "heading": "4.3 EXPERIMENTS ON NON-MONOTONIC TASK", "text": "The proposed algorithm focuses on strictly monotonic tasks that do not allow reordering. However, the self-attention layers in the encoder can reorder the semantics in input sequences before the monotonic alignments from the SAGMM-tr. We conducted machine translation experiments in the WMT EN-DE environment to demonstrate the performance of the SAGMM-tr attention on non-monotonic tasks. While machine translation is a non-monotonic task, various approaches have been proposed to improve the performance of online inference and simultaneous translation (Arivazhagan et al., 2019; Elbayad et al., 2020). In this study, we conduct simple experiments to figure out whether the GMM attention can manage non-monotonic tasks.

We trained the encoder-decoder model similar to the transformer-base in Vaswani et al. (2017), except that the head size was 4 in the encoder-decoder and decoder self-attention.
We trained a model with 4.5M pairs of sentences and measured the BLEU score on the newstest2013 set after 100K training steps. For the uni-directional encoder, a block-wise mask with M = 5 was applied to the SAGMM-tr model with a bi-directional encoder, and the model was then fine-tuned for 40K additional steps.

Table 5 shows the performances of the soft and SAGMM-tr attention models on the newstest2013 set. As expected, the performance of the SAGMM-tr was not consistent with that of conventional transformer models with soft attention. Furthermore, the uni-directional encoder deteriorated the performance of the SAGMM-tr, similar to the speech recognition experiments. Because the SAGMM-tr does not consider local reordering, it is difficult for the proposed algorithm to learn locally non-monotonic functions. In our future studies, we will improve the performance of the SAGMM-tr on local reordering tasks by relaxing the positivity constraint on Δ_i." }, { "heading": "5 CONCLUSION", "text": "We proposed the SAGMM, which attends to a subset of the source sequence according to a normal distribution on a weighted encoder output axis, considering both the contents and the order of elements in sequences. Furthermore, we proposed the SAGMM-tr for online/real-time inference applications. Based on the results on various speech recognition tasks, it was discovered that the proposed attention mechanism can learn monotonic alignments between source and target sequences without human supervision. The performance of a transformer with the SAGMM-tr improved for online and long-form speech recognition without performance degradation in the standard offline ASR task. In future studies, we plan to adopt SAGMM-based attention mechanisms for natural language processing tasks that allow local reordering during sequence generation. We are also interested in latency minimization for online inference by controlling μ and σ in both the training and inference stages." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 BLOCK-WISE MASK FOR ENCODER SELF-ATTENTION LAYER", "text": "The block-wise uni-directional mask was adopted for the encoder self-attention layers to block future context beyond M tokens. Figure 4 shows the block-wise masking for the M = 3 case. In the experiment with the SAGMM-tr, we chose M = 30, whose maximum latency is comparable to other studies with uni-directional encoders (Zhang et al., 2020; Dong et al., 2019). The block-wise masking is different from the lookahead operation: encoder outputs under block-wise masking are not guaranteed to see M future tokens (this depends on their position in the block), whereas the lookahead operation provides the maximum future context for all encoder outputs." }, { "heading": "A.2 PSEUDO CODE FOR ONLINE INFERENCE WITH SAGMM-TR ATTENTION", "text": "Algorithm 1: Decoding process of the SAGMM-tr attention mechanism for the h-th head.
Data: encoder key and value vectors K^h_j, V^h_j for j ∈ {1, 2, ..., J}; i = 1, μ^h_0 = ν^h_0 = 0, y_0 = SOS
while y_{i-1} ≠ EOS do
    Q^h_i = feedforward(y_{i-1})                                        /* lower decoder layers */
    Δ^h_i, σ^h_i, φ^h_i = GMMparam(Q^h_i); μ^h_i = μ^h_{i-1} + min(max(Δ^h_i, 0), 3)   /* equations 4, 5 */
    pos^h_s = μ^h_i − 2√σ^h_i; pos^h_e = μ^h_i + 2√σ^h_i; Attend^h = 0
    for j = 1 to J do
        δ^h_j = sigmoid(K^h_j W^h_δ); ν^h_j = ν^h_{j-1} + δ^h_j
        if ν^h_j > pos^h_e then break
        else if ν^h_j > pos^h_s then
            α^h_{SAGMM-tr‖i,j} = δ^h_j N_tr(ν^h_j; μ^h_i, σ^h_i)        /* equations 14, 15 */
            Attend^h = Attend^h + α^h_{SAGMM-tr‖i,j} V^h_j
    y_i = Output(Decoder(softmax_h(φ^h_i) Attend^h)); i = i + 1" }, { "heading": "A.3 EXPERIMENT CONFIGURATIONS", "text": "For all speech recognition experiments, the 80-dimensional log-Mel filterbank features were extracted with a 25ms window and shifted every 10ms. Each input vector stacked 3 log-Mel feature vectors, downsampled to a 30ms frame rate.

In the speech command dataset experiment, we trained the multi-headed transformer Vaswani et al. (2017); Dong et al. (2018) with soft attention and SAGMM attention for the encoder-decoder alignments. We adopted the relative positional encoding for both the encoder and decoder self-attention (Dai et al., 2019). The encoder consisted of 6 layers of self-attention blocks with 512 nodes and a 2,048 filter size, respectively. The decoder was constructed by stacking 3 self-attention blocks with the same node and filter sizes. The numbers of heads for the encoder self-attention, encoder-decoder attention, and decoder self-attention were {8, 4, 4}, respectively. We employed label smoothing of value ls = 0.15 and applied parallel scheduled sampling with probability 0.2 after 100K steps (Duckworth et al., 2019). We used the Adam optimizer with β_1 = 0.9, β_2 = 0.98, and ε = 10^{-9}. The learning rate was linearly increased to 0.1 until 16K steps and kept constant in the following training steps. We used the TensorFlow framework and trained the model for 500K steps with 4 M40 GPUs. We averaged the 5 last checkpoints before the inference. The dropout ratios for the attention weights, rectified linear units, outputs of sub-layers, and neural network input after positional encoding were 0.2, 0.1, 0.2, and 0.2, respectively. The SpecAugment algorithm Park et al. (2019) was also applied during training. The beam width was set to 4 in all experiments. The output token segments are graphemes of the reference.

In the LibriSpeech experiment, the learning rate was linearly increased to 0.1 until 16K steps and kept constant until the model converged on the dev-clean dataset. Then, we lowered the learning rate to 0.02 and fine-tuned until the model converged. We used the TensorFlow framework, and the model was trained for 7 to 10 days on 8 P40 GPUs. We averaged the {5, 10, 15} last checkpoints before the performance evaluation. The 1K wordpiece vocabulary extracted from the training dataset was used for output token segmentation. Other parameters are the same as those in the speech command dataset experiment." }, { "heading": "A.4 ATTENTION PLOT FOR VARIOUS ENCODER-DECODER MODELS", "text": "To show that the SAGMM-tr is able to learn alignments between the encoder and decoder, we plot the attention scores for the single- and multi-head models.
We used the utterance "Chapter eleven <long pause> the morrow brought a very sober looking morning the sun making only a few efforts to appear and Catherine augured from it everything most favourable to her wishes" in LibriSpeech.

The truncated GMM model fails to learn the true alignment and attends to several distinct windows in Figure 5. Surprisingly, the performance of the truncated GMM was 4.03% and 10.16% for test-clean and test-other, respectively. The decent performance of the truncated GMM implies that the GMM learns to attend to as many frames as possible instead of learning alignments between sequences. Therefore, the truncated GMM attention is not suitable for streaming inference.

In the soft attention model, several heads learn monotonic alignments between inputs and outputs, as shown in Figure 6. However, other heads attend to whole speech-presence or speech-absence intervals. We assume these heads help utterance-level normalization. While a few heads show monotonicity, the soft attention is not suitable for online inference due to the softmax operation over all frames.

The attention alignment figures for the single- and multi-head SAGMM-tr models are shown in Figures 7, 8 and 9. From the figures, both models learn the optimal alignments between the input wave and transcription and distinguish the word boundaries without relying on human knowledge. The attention windows in the multi-head SAGMM-tr with adaptive window width are wider than those in the single-head model. An additional objective function regularizing σ_i would be helpful to reduce the window width in Figure 8.

In the multi-head model with fixed window width, the model learns the multi-head monotonic alignment without increasing the window width. Considering the attention window width as a hyperparameter from external knowledge, it is shown that additional information could improve the model latency of the SAGMM-tr, while the model can also be trained without human knowledge." }, { "heading": "A.5 ABLATION STUDIES", "text": "We tested three ablation experiments on the SAGMM-tr attention in Table 6. First, we tested the model without the CTC loss on the encoder output. The CTC loss improves the word error rate of the model, similar to previous studies, but the model showed decent performance without the CTC loss. Second, we tested the SAGMM-tr model with a uni-directional encoder without initialization from the bi-directional encoder. The result shows slightly worse performance compared to the original SAGMM-tr. Finally, we trained the SAGMM model and decoded with the SAGMM-tr decoding scheme without fine-tuning. The performance degradation in Table 6 shows that the mismatch between the training and test stages should be removed by fine-tuning with the truncated normal distribution."
}, { "heading": "A.6 ATTENTION PLOTS FOR SAGMM-TR IN MACHINE TRANSLATION EXPERIMENTS", "text": "We plot the first three decoder layers of the SAGMM-tr model with bi-directional encoder for an example utterance “The patient really needs to be made to understand the degree of risk of his cancer, by offering him the options available, not necessarily treating prostate cancers that are not long-term life threatening, and opting instead, in such cases, for active monitoring of the disease.” and model output “Der Patient muss wirklich verstanden werden, um den Grad des Risikos seines Krebses zu verstehen, indem er ihm die Möglichkeiten zur Verfügung stellt, nicht notwendigerweise mit Prostatakrebs umzugehen, die nicht langfristiges Leben bedrohen und sich stattdessen in solchen Fällen für eine aktive Überwachung der Krankheit entscheiden.”.\nWe also plot the first three encoder-decoder attention and encoder-self attention layers of the SAGMM-tr model with uni-directional encoder for an example utterance “The patient really needs to be made to understand the degree of risk of his cancer, by offering him the options available, not necessarily treating prostate cancers that are not long-term life threatening, and opting instead, in such cases, for active monitoring of the disease.” and model output “Der Patient muss wirklich verstanden werden, um das Risiko seines Krebses zu verstehen, indem er ihm die Möglichkeiten zur Verfügung stellt, nicht notwendigerweise mit Prostatakrebs umzugehen, die nicht langfristiges Leben bedrohen, und stattdessen in solchen Fällen für eine aktive Überwachung der Krankheit zu entscheiden.”.\nFrom the figures 10, 11 and 12, the transformers with SAGMM-tr approximates the machine translation task as a strict monotonic task and finds optimal alignments under the assumption. Since the machine translation is not strictly monotonic task, the performance of the SAGMM-tr attention deteriorates compared to the soft attention. However, the transformer with SAGMM-tr attention and uni-directional encoder enables online inference for simultaneous translation." }, { "heading": "A.7 DECODED EXAMPLES OF SOFT ATTENTION AND SAGMM MODELS ON SPEECH COMMAND AND LIBRISPEECH DATASETS", "text": "Table 7 and 8 show the decoded examples from the models with unseen sequence lengths. From the tables, soft attention has difficulty to generate the sequences that are significantly shorter or longer than the training corpus distribution. In contrast, the SAGMM-tr is robust to the sequence length mismatch and generates the transcription to arbitrary length." } ]
2020
null
SP:22bf1d0b48da000c80613747d59bc93c1270064e
[ "The paper addresses the problem of adversarial robustness in 3D point cloud representations. It claims that two of the previous defense designs do not prevent adaptive attacks. The authors then propose to use adversarial training (AT) to improve the robustness. It claims that the standard MAX pooling operation within PointNet-derivates contribute to the weaknesses. It then proposes a new pooling operation that improves the robustness under AT." ]
3D point clouds play pivotal roles in various safety-critical fields, such as autonomous driving, which desires the corresponding deep neural networks to be robust to adversarial perturbations. Though a few defenses against adversarial point cloud classification have been proposed, it remains unknown whether they can provide real robustness. To this end, we perform the first security analysis of state-of-the-art defenses and design adaptive attacks on them. Our 100% adaptive attack success rates demonstrate that current defense designs are still vulnerable. Since adversarial training (AT) is believed to be the most effective defense, we present the first in-depth study showing how AT behaves in point cloud classification and identify that the required symmetric function (pooling operation) is paramount to the model's robustness under AT. Through our systematic analysis, we find that the fixed pooling operations used by default (e.g., MAX pooling) generally weaken AT's performance in point cloud classification, while sorting-based parametric pooling operations can significantly improve the models' robustness. Based on the above insights, we further propose DeepSym, a deep symmetric pooling operation, to architecturally advance the adversarial robustness under AT to 47.0% without sacrificing nominal accuracy, outperforming the original design and a strong baseline by 28.5% (∼ 2.6×) and 6.5%, respectively, in PointNet.
[]
[ { "authors": [ "Martı́n Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for largescale machine learning", "venue": "In 12th {USENIX} symposium on operating systems design and implementation ({OSDI}", "year": 2016 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Yulong Cao", "Chaowei Xiao", "Benjamin Cyr", "Yimeng Zhou", "Won Park", "Sara Rampazzi", "Qi Alfred Chen", "Kevin Fu", "Z Morley Mao" ], "title": "Adversarial sensor attack on lidar-based perception in autonomous driving", "venue": "In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2019 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Nicholas Carlini", "Anish Athalye", "Nicolas Papernot", "Wieland Brendel", "Jonas Rauber", "Dimitris Tsipras", "Ian Goodfellow", "Aleksander Madry", "Alexey Kurakin" ], "title": "On evaluating adversarial robustness", "venue": null, "year": 1902 }, { "authors": [ "Xiaoyi Dong", "Dongdong Chen", "Hang Zhou", "Gang Hua", "Weiming Zhang", "Nenghai Yu" ], "title": "Selfrobust 3d point recognition via gather-vector guidance", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020a", "year": 2020 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Jun Zhu", "Xiaolin Hu", "Jianguo Li" ], "title": "Boosting adversarial attacks with momentum", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Yinpeng Dong", "Qi-An Fu", "Xiao Yang", "Tianyu Pang", "Hang Su", "Zihao Xiao", "Jun Zhu" ], "title": "Benchmarking adversarial robustness on image classification", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens Van Der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "arXiv preprint arXiv:1711.00117,", "year": 2017 }, { "authors": [ "Yulan Guo", "Hanyun Wang", "Qingyong Hu", "Hao Liu", "Li Liu", "Mohammed Bennamoun" ], "title": "Deep learning for 3d point clouds: A survey", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2020 }, { "authors": [ "Abdullah Hamdi", "Sara Rojas", "Ali Thabet", "Bernard Ghanem" ], "title": "Advpc: Transferable adversarial perturbations on 3d point clouds", "venue": "arXiv preprint arXiv:1912.00461,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for 
image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Peter J Huber" ], "title": "Robust statistics, volume 523", "venue": null, "year": 2004 }, { "authors": [ "Maximilian Ilse", "Jakub Tomczak", "Max Welling" ], "title": "Attention-based deep multiple instance learning", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Andrew Ilyas", "Logan Engstrom", "Anish Athalye", "Jessy Lin" ], "title": "Black-box adversarial attacks with limited queries and information", "venue": "arXiv preprint arXiv:1804.08598,", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Max Jaderberg", "Karen Simonyan", "Andrew Zisserman" ], "title": "Spatial transformer networks. In Advances in neural information processing", "venue": null, "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": null, "year": 2014 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Alex H Lang", "Sourabh Vora", "Holger Caesar", "Lubing Zhou", "Jiong Yang", "Oscar Beijbom" ], "title": "Pointpillars: Fast encoders for object detection from point clouds", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Juho Lee", "Yoonho Lee", "Jungtaek Kim", "Adam Kosiorek", "Seungjin Choi", "Yee Whye Teh" ], "title": "Set transformer: A framework for attention-based permutation-invariant neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Daniel Liu", "Ronald Yu", "Hao Su" ], "title": "Extending adversarial attacks and defenses to deep 3d point cloud classifiers", "venue": "IEEE International Conference on Image Processing (ICIP),", "year": 2019 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Haggai Maron", "Or Litany", "Gal Chechik", "Ethan Fetaya" ], "title": "On learning sets of symmetric elements", "venue": "arXiv preprint arXiv:2002.08599,", "year": 2020 }, { "authors": [ "Dongyu Meng", "Hao Chen" ], "title": "Magnet: a two-pronged defense against adversarial examples", "venue": "In Proceedings of the 2017 ACM SIGSAC conference on computer and communications security,", "year": 2017 }, { "authors": [ "Naila Murray", "Florent Perronnin" ], "title": "Generalized max pooling", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2014 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In ICML,", "year": 2010 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Xi Wu", "Somesh Jha", "Ananthram Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "In 2016 IEEE Symposium on Security and Privacy (SP),", "year": 
2016 }, { "authors": [ "Charles R Qi", "Hao Su", "Kaichun Mo", "Leonidas J Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Charles Ruizhongtai Qi", "Li Yi", "Hao Su", "Leonidas J Guibas" ], "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Dawn Song", "Kevin Eykholt", "Ivan Evtimov", "Earlence Fernandes", "Bo Li", "Amir Rahmati", "Florian Tramer", "Atul Prakash", "Tadayoshi Kohno" ], "title": "Physical adversarial examples for object detectors", "venue": "In 12th {USENIX} Workshop on Offensive Technologies ({WOOT}", "year": 2018 }, { "authors": [ "Hang Su", "Subhransu Maji", "Evangelos Kalogerakis", "Erik Learned-Miller" ], "title": "Multi-view convolutional neural networks for 3d shape recognition", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Jiachen Sun", "Yulong Cao", "Qi Alfred Chen", "Z. Morley Mao" ], "title": "Towards robust lidar-based perception in autonomous driving: General black-box adversarial sensor attack and countermeasures", "venue": "In 29th USENIX Security Symposium (USENIX Security", "year": 2020 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Florian Tramer", "Nicholas Carlini", "Wieland Brendel", "Aleksander Madry" ], "title": "On adaptive attacks to adversarial example defenses", "venue": "arXiv preprint arXiv:2002.08347,", "year": 2020 }, { "authors": [ "Jonathan Uesato", "Brendan O’Donoghue", "Aaron van den Oord", "Pushmeet Kohli" ], "title": "Adversarial risk and the dangers of evaluating against weak attacks", "venue": "arXiv preprint arXiv:1802.05666,", "year": 2018 }, { "authors": [ "Mikaela Angelina Uy", "Quang-Hieu Pham", "Binh-Son Hua", "Duc Thanh Nguyen", "Sai-Kit Yeung" ], "title": "Revisiting point cloud classification: A new benchmark dataset and classification model on realworld data", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Edward Wagstaff", "Fabian B Fuchs", "Martin Engelcke", "Ingmar Posner", "Michael Osborne" ], "title": "On the limitations of representing functions on sets", "venue": null, "year": 1901 }, { "authors": [ "Yida Wang", "David Joseph Tan", "Nassir Navab", "Federico Tombari" ], "title": "Softpoolnet: Shape descriptor for point cloud completion and classification", "venue": "arXiv preprint arXiv:2008.07358,", "year": 2020 }, { "authors": [ "Yue Wang", "Yongbin Sun", "Ziwei Liu", "Sanjay E. Sarma", "Michael M. Bronstein", "Justin M. 
Solomon" ], "title": "Dynamic graph cnn for learning on point clouds", "venue": "ACM Transactions on Graphics (TOG),", "year": 2019 }, { "authors": [ "Yuxin Wen", "Jiehong Lin", "Ke Chen", "Kui Jia" ], "title": "Geometry-aware generation of adversarial and cooperative point clouds", "venue": "arXiv preprint arXiv:1912.11171,", "year": 2019 }, { "authors": [ "Zhirong Wu", "Shuran Song", "Aditya Khosla", "Fisher Yu", "Linguang Zhang", "Xiaoou Tang", "Jianxiong Xiao" ], "title": "3d shapenets: A deep representation for volumetric shapes", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Chong Xiang", "Charles R Qi", "Bo Li" ], "title": "Generating 3d adversarial point clouds", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Cihang Xie", "Alan Yuille" ], "title": "Intriguing properties of adversarial training at scale", "venue": "arXiv preprint arXiv:1906.03787,", "year": 2019 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Yuyin Zhou", "Lingxi Xie", "Alan Yuille" ], "title": "Adversarial examples for semantic segmentation and object detection", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Cihang Xie", "Mingxing Tan", "Boqing Gong", "Alan Yuille", "Quoc V Le" ], "title": "Smooth adversarial training", "venue": "arXiv preprint arXiv:2006.14536,", "year": 2020 }, { "authors": [ "Yuzhe Yang", "Guo Zhang", "Dina Katabi", "Zhi Xu" ], "title": "Me-net: Towards effective adversarial robustness with matrix estimation", "venue": "arXiv preprint arXiv:1905.11971,", "year": 2019 }, { "authors": [ "Lequan Yu", "Xianzhi Li", "Chi-Wing Fu", "Daniel Cohen-Or", "Pheng-Ann Heng" ], "title": "Pu-net: Point cloud upsampling network", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Tan Yu", "Jingjing Meng", "Junsong Yuan" ], "title": "Multi-view harmonized bilinear network for 3d object recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Manzil Zaheer", "Satwik Kottur", "Siamak Ravanbakhsh", "Barnabas Poczos", "Russ R Salakhutdinov", "Alexander J Smola" ], "title": "URL http://papers.nips.cc/paper/ 6931-deep-sets.pdf", "venue": "Deep sets", "year": 2017 }, { "authors": [ "Huan Zhang", "Hongge Chen", "Chaowei Xiao", "Sven Gowal", "Robert Stanforth", "Bo Li", "Duane Boning", "Cho-Jui Hsieh" ], "title": "Towards stable and efficient training of verifiably robust neural networks", "venue": null, "year": 1906 }, { "authors": [ "Yan Zhang", "Jonathon Hare", "Adam Prügel-Bennett" ], "title": "Fspool: Learning set representations with featurewise sort pooling", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Hang Zhou", "Kejiang Chen", "Weiming Zhang", "Han Fang", "Wenbo Zhou", "Nenghai Yu" ], "title": "Dup-net: Denoiser and upsampler network for 3d adversarial point clouds defense", "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Madry" ], "title": "The step size of the adversarial optimization is 0.01 and we allow at most 500 iterations of optimization in each binary search to find the adversarial examples", "venue": "For the L∞ norm-based PGD attack,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Despite the prominent achievements that deep neural networks (DNN) have reached in the past decade, adversarial attacks (Szegedy et al., 2013) are becoming the Achilles’ heel in modern deep learning deployments, where adversaries generate imperceptible perturbations to mislead the DNN models. Numerous attacks have been deployed in various 2D vision tasks, such as classification (Carlini & Wagner, 2017), object detection (Song et al., 2018), and segmentation (Xie et al., 2017). Since adversarial robustness is a critical feature, tremendous efforts have been devoted to defending against 2D adversarial images (Guo et al., 2017; Papernot et al., 2016; Madry et al., 2018). However, Athalye et al. (2018) suggest that most of the current countermeasures essentially try to obfuscate gradients, which give a false sense of security. Besides, certified methods (Zhang et al., 2019) often provide a lower bound of robustness, which are not helpful in practice. Therefore, adversarial training is widely believed as the most and only effective defense solution.\nThe emergence of 3D point cloud applications in safety-critical areas like autonomous driving raises public concerns about their security of DNN pipelines. A few studies (Xiang et al., 2019; Cao et al., 2019; Sun et al., 2020) have demonstrated that various deep learning tasks on point clouds are indeed vulnerable to adversarial examples. Among them, point cloud classification models have laid solid foundations upon which other complex models are built (Lang et al., 2019; Yu et al., 2018a). While it seems intuitive to extend convolutional neural networks (CNN) from 2D to 3D for point cloud classification, it is actually not a trivial task. The difficulty mainly inherits from that point cloud is an unordered set structure that CNN cannot handle. Modern point cloud classification models (Qi et al., 2017a; Zaheer et al., 2017) address this problem by leveraging a symmetric function, which is permutation-invariant to the order of points, to aggregate local features, as shown in Figure 2.\nRecently, a number of countermeasures have been proposed to defend against 3D adversarial point clouds. However, the failure of gradient obfuscation-based defenses in the 2D space motivates us to re-think whether current defense designs provide real robustness for 3D point cloud classification. Especially, DUP-Net (Zhou et al., 2019) and GvG-PointNet++ (Dong et al., 2020a) claim to improve the adversarial robustness significantly. However, we find that both defenses belong to gradient\nobfuscation through our analysis, hence further design white-box adaptive attacks to break their robustness. Unfortunately, our 100% attack success rates demonstrate that current defense designs are still vulnerable.\nAs mentioned above, adversarial training (AT) is considered the most effective defense strategy; we thus perform the first rigorous study of how AT behaves in point cloud classification by exploiting projected gradient descent (PGD) attacks (Madry et al., 2018). We identify that the default used symmetric function weakens the effectiveness of AT. Specifically, popular models (e.g., PointNet) utilize fixed pooling operations like MAX and SUM pooling as their symmetric functions to aggregate features. 
Different from CNN-based models that usually apply pooling operations with a small sliding window (e.g., 2 × 2), point cloud classification models leverage such fixed pooling operations to aggregate features from a large number of candidates (e.g., 1024). We find that these fixed pooling operations inherently lack flexibility and learnability, which does not suit AT well. Moreover, recent research has also presented parametric pooling operations in set learning (Wang et al., 2020; Zhang et al., 2020), which also preserve permutation invariance. We take a step further to systematically analyze the robustness of point cloud classification models with parametric pooling operations under AT. Experimental results show that the sorting-based pooling design benefits AT well; for instance, it vastly outperforms MAX pooling in adversarial accuracy by 7.3% without hurting the nominal accuracy1.

Lastly, based on our experimental insights, we propose DeepSym, a sorting-based pooling operation that employs deep learnable layers, to architecturally advance the adversarial robustness of point cloud classification models under AT. Experimental results show that DeepSym reaches the best adversarial accuracy with all chosen backbones, which is, on average, a 10.8% improvement compared to the default architectures. We also explore the limits of DeepSym based on PointNet due to its broad adoption (Guo et al., 2020). We obtain the best robustness on ModelNet40, which achieves an adversarial accuracy of 47.0%, significantly outperforming the default MAX pooling design by 28.5% (∼ 2.6×). In addition, we demonstrate that PointNet with DeepSym also reaches the best adversarial accuracy of 45.2% under the most efficient AT on ModelNet10 (Wu et al., 2015), exceeding MAX pooling by 17.9% (∼ 1.7×).

1 In this paper, we use nominal and adversarial accuracy to denote the model's accuracy on clean and adversarially perturbed data, respectively." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "3D point cloud classification. Early works attempt to classify point clouds by adapting deep learning models from the 2D space (Su et al., 2015; Yu et al., 2018b). DeepSets (Zaheer et al., 2017) and PointNet (Qi et al., 2017a) are the first to achieve end-to-end learning on point cloud classification and formulate a general specification (Figure 2) for point cloud learning. PointNet++ (Qi et al., 2017b) and DGCNN (Wang et al., 2019) build upon the PointNet set abstraction to better learn local features. Lately, DSS (Maron et al., 2020) generalizes DeepSets to enable complex functions in set learning. Besides, ModelNet40 (Wu et al., 2015) is the most popular dataset for benchmarking point cloud classification, which consists of 12,311 CAD models belonging to 40 categories. The numerical range of the point cloud data is normalized to [−1, 1] in ModelNet40.

Adversarial attacks and defenses on point clouds. Xiang et al. (2019) perform the first study extending the C&W attack (Carlini & Wagner, 2017) to point cloud classification. Wen et al. (2019) improve the loss function of the C&W attack to realize attacks with smaller perturbations, and Hamdi et al. (2019) present black-box attacks on point cloud classification. Recently, Zhou et al. (2019) and Dong et al. (2020a) propose to defend against adversarial point clouds by input transformation and adversarial detection, respectively. Besides, Liu et al. (2019) conduct a preliminary investigation on extending countermeasures from the 2D space to defend against simple attacks like FGSM (Goodfellow et al., 2014) on point cloud data.
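Since PGD attacks (e.g., PGD-200) serve as the main evaluation tool in the following sections, a generic L∞ PGD loop on a point cloud may look like the sketch below; grad_fn and the budget values are our illustrative assumptions, not the paper's API:

import numpy as np

def pgd_attack(x, y, grad_fn, eps=0.05, alpha=0.01, steps=200):
    # x: (n, 3) point cloud in [-1, 1]; grad_fn(x_adv, y) returns dLoss/dx_adv.
    x_adv = x + np.random.uniform(-eps, eps, x.shape)   # random start
    for _ in range(steps):
        g = grad_fn(x_adv, y)                           # gradient of the xent loss
        x_adv = x_adv + alpha * np.sign(g)              # ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)        # project to the L-inf ball
        x_adv = np.clip(x_adv, -1.0, 1.0)               # stay in the data range
    return x_adv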
In this work, we first design adaptive attacks to break existing defenses and analyze the adversarial robustness of point cloud classification under adversarial training." }, { "heading": "3 BREAKING THE ROBUSTNESS OF EXISTING DEFENSES", "text": "" }, { "heading": "3.1 ADAPTIVE ATTACKS ON DUP-NET", "text": "DUP-Net (ICCV’19) presents a denoiser layer and upsampler network structure to defend against adversarial point cloud classification. The denoiser layer g : X → X′ leverages kNN (k-nearest\n1In this paper, we use nominal and adversarial accuracy to denote the model’s accuracy on clean and adversarially perturbed data, respectively.\nneighbour) for outlier removal. Specifically, the kNN of each point xi in point cloud X is defined as knn(xi, k) so that the average distance di of each point xi to its kNN is denoted as:\ndi = 1\nk ∑ xj∈knn(xi,k) ||xi − xj ||2 , i = {1, 2, . . . , n} (1)\nwhere n is the number of points. The mean µ = 1n ∑n i=1 di and standard deviation σ =√\n1 n ∑n i=1(di − µ)2 of all these distances are computed to determine a distance threshold as µ+α·σ to trim the point clouds, where α is a hyper-parameter. As a result, the denoised point cloud is represented as X′ = {xi | di < µ + α · σ}. The denoised point cloud X′ will be further fed into PU-Net (Yu et al., 2018a), defined as p : X′ → X′′, to upsample X′ to a fixed number of points. Combined with the classifier f , the integrated DUP-Net can be noted as (f ◦ p ◦ g)(X). The hypothesis is that the denoiser layer will eliminate the adversarial perturbations and the upsampler network will re-project the denoised off-manifold point cloud to the natural manifold.\nAnalysis. The upsampler network p (i.e., PU-Net) is differentiable and can be integrated with the classification network f . Therefore, f ◦ p is clearly vulnerable to gradient-based adaptive attacks. Although the denoiser layer g is not differentiable, it can be treated as deterministic masking: M(xi) = 1di<µ+α·σ so that the gradients can still flow through the masked points. By involving M(xi) into the iterative optimization process: ∇xi(f ◦p◦g)(X)|xi=x̂ ≈ ∇xi(f ◦p)(X)|xi=x̂·M(x̂), similar to BPDA (Athalye et al., 2018), attackers may still find adversarial examples.\nExperimentation. We leverage the open-sourced codebase2 of DUP-Net for experimentation. Specifically, a PointNet (Qi et al., 2017a) trained on ModelNet40 is used as the target classifier f . For the PU-Net, the upsampled number of points is 2048, and the upsampling ratio is 2. For the adaptive attacks, we exploit targeted L2 norm-based C&W attack and untargeted L∞ norm-based PGD attack with 200 iterations (PGD-200). Detailed setups are elaborated in Appendix A.1.\nDiscussion. As shown in Table 1, adaptive C&W attacks achieve 100% success rates on DUP-Net. Though the mean distance of adversarial examples targeting DUP-Net is larger than those targeting PU-Net, they are almost indistinguishable by human perception, as visualized in Appendix A.2. We find that naı̈ve PGD attacks are also effective since the upsampler network is sensitive to L∞ norm-based perturbations. The design of DUP-Net is similar to ME-Net (Yang et al., 2019) in the 2D space, which recently has been shown vulnerable to adaptive attacks (Tramer et al., 2020). We hereby demonstrate that such input transformation-based defenses cannot offer real robustness to point cloud classification, either." 
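To make the masking trick above concrete, the sketch below shows one way to code the adaptive attack. It is an illustrative PyTorch re-implementation, not the paper's TensorFlow code: `model_fp` stands for the differentiable composition f ◦ p (classifier plus PU-Net) and is assumed to accept a single variable-size point cloud with a batch dimension; `y` is a length-1 label tensor; the step size ε/10 follows the setup in Appendix A.1. The kNN statistic of Equation 1 is recomputed on detached coordinates at every step, so the outlier mask is treated as a constant during backpropagation, in the spirit of BPDA.

```python
import torch

def knn_mean_dist(points, k=2):
    # points: (n, 3); mean distance of each point to its k nearest neighbours (Eq. 1)
    d = torch.cdist(points, points)                    # (n, n) pairwise distances
    nn_d, _ = d.topk(k + 1, dim=1, largest=False)      # smallest k+1, incl. self (0)
    return nn_d[:, 1:].mean(dim=1)                     # (n,)

def sor_mask(points, k=2, alpha=1.1):
    # statistical outlier removal: keep x_i with d_i < mu + alpha * sigma
    di = knn_mean_dist(points, k)
    return di < di.mean() + alpha * di.std()

def bpda_linf_pgd(model_fp, x, y, eps=0.05, steps=200):
    """L_inf PGD through the non-differentiable denoiser g: the mask is
    recomputed on detached coordinates and treated as a constant, so the
    gradients of (f o p) flow straight to the surviving points."""
    ce = torch.nn.CrossEntropyLoss()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        mask = sor_mask(x_adv.detach())                # g(.), no gradient needed
        logits = model_fp(x_adv[mask].unsqueeze(0))    # f o p on the denoised cloud
        grad = torch.autograd.grad(ce(logits, y), x_adv)[0]
        x_adv = (x_adv + (eps / 10) * grad.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project onto the L_inf ball
    return x_adv
```

Because points removed by the mask receive zero gradient at that step, they simply stay put until the evolving mask lets them re-enter the forward pass; only the kept points are pushed toward higher loss.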
}, { "heading": "3.2 ADAPTIVE ATTACKS ON GVG-POINTNET++", "text": "GvG-PointNet++ (CVPR’20) introduces gather vectors in the 3D point clouds as an adversarial indicator. The original PointNet++ aggregates local features fi hierarchically to make final classification. Gather vectors vi are learned from local features fi to indicate the global center ci of a point cloud sample. If the indicated global center ci is far away from the ground-truth global center cg , the corresponding local feature fi will be masked out:\nci = xci + vi ; Mi = 1||cg−ci||<r ; Fg = {fi · Mi} (2) 2https://github.com/RyanHangZhou/DUP-Net\nwhere xci is the geometry center of the local point set, r is the distance threshold to mask the local feature, and Fg is the cleaned feature set for final classification. To train GvG-PointNet++, it is necessary to optimize a surrogate loss to correctly learn the gather vectors besides the cross-entropy (xent) loss:\nLtotal = Lxent + λ · Lgather , Lgather = n′∑ i=1 ||ci − cg||1 (3)\nwhere n′ is the number of local features and λ is a hyper-parameter. Thus, GvG-PointNet++ essentially applies self-attention to the local features and relies on it for robustness enhancement.\nAnalysis. Dong et al. (2020a) evaluate white-box adversaries on GvG-PointNet++ with naı̈ve L2 norm-based PGD attacks. Specifically, only Lxent is utilized in the adversarial optimization process so that the maskingMi will hinder the gradient propagation. However, sinceMi is learned from the network itself, it is highly possible to further break this obfuscation with Lgather considered. The adaptive attack can be then formulated as an optimization problem with the loss function:\nLadv = Lxent − β · Lgather (4) where β is a hyper-parameter. By maximizing Ladv with L2 norm-based PGD attacks, adversaries strive to enlarge the adversarial effect but also minimize the perturbations on gather vectors. We also find that GvG-PointNet++ is by design vulnerable to PGD attacks on Lgather as such perturbations will potentially affect most gather vector predictions to make gi masked out so that insufficient for final classification.\nExperimentation. We train GvG-PointNet++ based on single-scale grouped PointNet++ (Qi et al., 2017b) on ModelNet40 and set r = 0.08 and λ = 1 as suggested by Dong et al. (2020a). The model is trained by Adam (Kingma & Ba, 2014) optimizer with 250 epochs using batch size 16, and the initial learning rate is 0.01. For the adaptive attack, we use 10-step binary search to find a appropriate β. The setup of L2 norm-based PGD attacks is identical to Dong et al. (2020a), and we also leverage L∞ norm-based PGD-200 in the evaluation. Detailed setups are elaborated in Appendix A.1.\nDiscussion. As shown in Table 2, both adaptive PGD attacks achieve high success rates on GvGPointNet++. we also observe that the L∞ norm-based PGD attack is more effective on Lgather since L∞ norm perturbations assign the same adversarial budget to each point, which can easily impact a large number of gather vector predictions. However, it is hard for the L2 norm-based PGD attack to influence so many gather vector predictions because it prefers to perturb key points rather than the whole point set. GvG-PointNet++ leverages DNN to detect adversarial perturbations, which is similar to MagNet (Meng & Chen, 2017) in the 2D space. We validate that adversarial detection also fails to provide real robustness under adaptive white-box adversaries in point cloud classification." 
}, { "heading": "4 ADVERSARIAL TRAINING WITH DIFFERENT SYMMETRIC FUNCTIONS", "text": "We have so far demonstrated that state-of-the-art defenses against 3D adversarial point clouds are still vulnerable to adaptive attacks. While gradient obfuscation cannot offer real adversarial robust-\nness, adversarial training (AT) is widely believed to be the most effective method. In this section, we conduct the first thorough study showing how AT performs in point cloud classification." }, { "heading": "4.1 ADVERSARIAL TRAINING PRELIMINARIES", "text": "Madry et al. (2018) formulate AT as a paddle point problem in Equation 5, where D is the underlying data distribution, L(·, ·, ·) is the loss function, x is the training data with its label y, is the adversarial perturbation, and S denotes the boundary of such perturbations.\narg min θ E(x,y)∼D [ max ∈S L(x+ ,y,θ) ] (5)\nAdversarial training setups. We choose PointNet (Qi et al., 2017a), DeepSets (Zaheer et al., 2017), and DSS (Maron et al., 2020) as the backbone networks. As shown in Section 3 and demonstrated by Madry et al. (2018), L∞ norm-based PGD attack tends to be a universal first-order adversary. Thus, we select PGD-7 into the training recipe, and empirically set the maximum per-point perturbation = 0.05 out of the point cloud range [−1, 1]. We follow the default PointNet training setting to train models on the ModelNet40 training set. In the evaluation, we utilize PGD-200 to assess their robustness on the ModelNet40 validation set with the same adversarial budget = 0.05. Meanwhile, we also report the nominal accuracy on the clean validation set. Each PGD attack starts from a random point in the allowed perturbation space. More details can be found in Appendix B." }, { "heading": "4.2 ADVERSARIAL TRAINING WITH FIXED POOLING OPERATIONS", "text": "As shown in Figure 2, modern models fundamentally follow a general specification (σ ◦ ρ ◦ Φ)(X) for point cloud classification. Φ(·) represents a set of permutation-equivariant layers to learn local features from each point. ρ(·) is a column-wise symmetric (permutation-invariant) function to aggregate the learned local features into a global feature, and σ(·) are fully connected layers for final classification. PointNet, DeepSets, and DSS leverage different Φ(·) for local feature learning, but all depend on fixed pooling operations as their ρ(·). Specifically, MAX pooling is by default used in DeepSets (for point cloud classification) and PointNet (Zaheer et al., 2017; Qi et al., 2017a), while DSS utilizes SUM pooling (Maron et al., 2020). We also additionally select MEDIAN pooling due to its robust statistic feature (Huber, 2004). Though models with fixed pooling operations have achieved satisfactory accuracy under standard training, they face various difficulties under AT. As shown in Table 3, models with MEDIAN pooling achieve better nominal accuracy among fixed pooling operations, but much worse adversarial accuracy, while SUM pooling performs contrarily. Most importantly, none of them reach a decent balance of nominal and adversarial accuracy.\nAnalysis. AT consists of two stages: 1) inner maximization to find the worst adversarial examples and 2) outer minimization to update model parameters. Fixed pooling operations essentially leverage\na single statistic to represent the distribution of a feature dimension (Murray & Perronnin, 2014). 
Although MEDIAN pooling, as a robust statistic, intuitively should enhance the robustness, we find it actually hinders the inner maximization stage from making progress. We utilize L∞ norm-based PGD attack to maximize the xent loss of standard trained model with three fixed pooling operations. Figure 3 validates that MEDIAN pooling takes many more steps to maximize the loss. Therefore, MEDIAN pooling fails to find the worst adversarial examples in the first stage with limited steps. Though MAX and SUM pooling are able to achieve higher loss value, they encounter challenges in the second stage. MAX pooling backward propagates gradients to a single point at each dimension so that the rest n−1n features do not contribute to model learning. Since n is oftentimes a large number (e.g., 1024), the huge information loss and non-smoothness will fail AT (Xie et al., 2020). While SUM pooling realizes a smoother backward propagation, it lacks discriminability because by applying the same weight to each element, the resulting representations are strongly biased by the adversarial perturbations. Thus, with SUM pooling, the models cannot generalize well on clean data." }, { "heading": "4.3 ADVERSARIAL TRAINING WITH PARAMETRIC POOLING OPERATIONS", "text": "Recent studies have also presented trainable parametric pooling operations for different tasks in set learning, e.g., multiple instance learning, which are also qualified to be the symmetric function ρ(·) in point cloud classification models. Thus, we first group them into two classes: 1) attentionbased and 2) sorting-based pooling, and further benchmark their robustness under AT in point cloud classification. It is worth noting that none of those parametric pooling operations are proposed to improve the adversarial robustness, and we are the first to conduct such an in-depth analysis of how they behave as the symmetric function under AT in point cloud classification." }, { "heading": "4.3.1 ATTENTION-BASED POOLING OPERATIONS", "text": "An attention module can be abstracted as mapping a query and a set of key-value pairs to an output, making the models learn and focus on the critical information (Bahdanau et al., 2014). Figure 4(a) shows the design principle of attention-based pooling, which leverages a compatibility function to learn point-level importance. The aggregated global feature is computed as a column-wise weighted sum of the local features. Two attention-based pooling operations, ATT and ATT-GATE, are first proposed for multiple instance learning (Ilse et al., 2018). Let F = {f1,f2, . . . ,fn} be a set of features, ATT aggregates the global feature g by:\ng = n∑ i=1 ai · fi , ai = exp(w> · tanh(V · f>i ))∑n j=1 exp(w > · tanh(V · f>j )) (6)\nwherew ∈ RL×1 and V ∈ RL×dm are learnable parameters. ATT-GATE improves the expressiveness of ATT by introducing another non-linear activation sigmoid(·) and more trainable parameters into weight learning. Furthermore, PMA (Lee et al., 2019) is proposed for general set learning, which leverages multi-head attention (Vaswani et al., 2017) for pooling. We detail the design and our implementation of ATT, ATT-GATE, and PMA in Appendix B.3, and adversarially train the backbone models with these attention-based pooling operations." }, { "heading": "4.3.2 SORTING-BASED POOLING OPERATIONS", "text": "Sorting has been recently considered in the set learning literature due to its permutation-invariant characteristic, as shown in Figure 4(b). 
Let F ∈ Rn×dm be the matrix version of the feature set F,\nFSPool (Zhang et al., 2020) aggregates F by feature-wise sorting in a descending order:\nF̃i,j = sort↓(F:,j)i ; gj = n∑ i=1 Wi,j · F̃i,j (7)\nwhere W ∈ Rn×dm are learnable parameters. Therefore, the pooled representation is column-wise weighted sum of F̃ . SoftPool (Wang et al., 2020) re-organizes F so that its j-th dimension is sorted in a descending order, and picks the top k point-level embeddings F ′j ∈ Rk×dm to further form F̃ = [F ′1,F ′ 2, . . . ,F ′ dm\n]. Then, SoftPool applies CNN to each F̃j → gj so that the pooled representation is g = [g1, g2, . . . , gdm ]. Implementation details of SoftPool are elaborated in Appendix B.3. We also adversarially train the backbone models with FSPool and SoftPool." }, { "heading": "4.3.3 EXPERIMENTAL ANALYSIS", "text": "Table 4 shows the results of AT with different parametric pooling operations. To meet the requirement of permutation-invariance, attention-based pooling is restricted to learn point-level importance. For example, ATT applies the same weight to all dimensions of a point embedding. As a result, attention barely improves the pooling operation’s expressiveness as it essentially re-projects the point cloud to a single dimension (e.g., fi → ai in ATT) and differentiates them based on it, which significantly limits their discriminability. Therefore, little useful information can be learned from the attention module, explaining why they perform similarly to SUM pooling that applies the same weight to each point under AT, as shown in Table 4. Sorting-based pooling operations naturally maintain permutation-invariance as sort↓(·) re-organizes the unordered feature set F to an ordered feature map F̃ . Thus, FSPool and SoftPool are able to further apply feature-wise linear transformation and CNN. The insight is that feature dimensions are mostly independent of each other, and each point expresses to a different extent in every dimension. By employing feature-wise learnable parameters, the gradients also flow smoother through sorting-based pooling operations. Table 4 validates that sorting-based pooling operations achieve much better adversarial accuracy, e.g., on average, 7.3% better than MAX pooling while maintaining comparable nominal accuracy.\n5 IMPROVING THE ADVERSARIAL ROBUSTNESS WITH DEEPSYM\nIn the above analysis, we have shed light on that sorting-based pooling operations can benefit AT in point cloud classification. We hereby explore to further improve the sorting-based pooling design inspired by existing arts. First, we notice that both FSPool and SoftPool apply sort↓(·) right after a ReLU function (Nair & Hinton, 2010). However, ReLU leads to some neurons being zero (Goodfellow et al., 2016), which makes sort↓(·) unstable. Second, recent studies have shown that AT appreciates deeper neural networks (Xie & Yuille, 2019). Nevertheless, FSPool only employs one linear layer to aggregate features, and SoftPool requires dm to be a small number. The reason is that scaling up the depth in these existing sorting-based pooling designs requires exponential growth of parameters, which will make the end-to-end learning intractable.\nTo address the above limitations, we propose a simple yet effective pooling operation, DeepSym, that embraces the benefits of sorting-based pooling and also applies deep learnable layers to the pooling process. 
Given a feature set after ReLU activation F ∈ R+n×dm , DeepSym first applies another linear transformation to re-map F into Rn×dm so that f ′i = W · fi\n> where W ∈ Rdm×dm and F′ = {f ′1,f ′2, . . . ,f ′n}. Let F ′ be the matrix version of F′, DeepSym also sorts F ′ in a descending order (Equation 7) to get F̃ ′. Afterwards, we apply column-wise shared MLP on F̃ ′:\ngj = MLP(F̃ ′:,j) , j = {1, 2, . . . , dm} (8)\nto learn the global feature representation g. Each layer of the MLP composes of a linear transformation, a batch normalization module (Ioffe & Szegedy, 2015), and a ReLU activation function. Compared to FSPool that applies different linear transformations to different dimensions, DeepSym employs a shared MLP to different dimensions. By doing so, DeepSym deepens the pooling process to be more capable of digesting the adversarial perturbations. DeepSym can also address the problem of SoftPool that is only achievable with limited dm because the MLP is shared by all the feature channels so that it can scale up to a large number of dm with little complexity increases. Moreover, DeepSym generalizes MAX and SUM pooling by specific weight vectors. Therefore, it can also theoretically achieve universality with dm ≥ n (Wagstaff et al., 2019) while being more expressive in its representation and smoother in gradients propagation. To deal with the variable-size point clouds, DeepSym adopts column-wise linear interpolation in F̃ ′ to form a continuous feature map and then re-samples it to be compatible with the trained MLP (Jaderberg et al., 2015). Last but not least, DeepSym is by design flexible with the number of pooled features from each dimension. In the paper, we only allow DeepSym to output a single feature for a fair comparison with others. However, it is hard for other pooling operations to have this ability. For example, it requires a linear complexity increase for FSPool to enable this capability." }, { "heading": "5.1 EVALUATIONS", "text": "We implement a 5-layer DeepSym with [512, 128, 32, 8, 1] hidden neurons on three backbone networks and adversarially train them on ModelNet40 the same way introduced in Section 4.1. Table 4 shows that almost all models with DeepSym reach the best results in both nominal and adversarial accuracy, outperforming the default architecture by 10.8%, on average. Taking PointNet as an example, DeepSym (33.6%) improves the adversarial accuracy by 17.5% (∼ 2.1×) compared to the original MAX pooling architecture. Besides, DeepSym also achieves a 3.5% improvement in adversarial accuracy compared to FSPool and SoftPool. Overall, we demonstrate that DeepSym can benefit AT significantly in point cloud classification.\nWe further leverage various white- and black-box adversarial attacks to cross validate the robustness improvements of DeepSym on PointNet. Specifically, we exploit well-known FGSM (Szegedy et al., 2013), BIM (Kurakin et al., 2016), and MIM (Dong et al., 2018) as the white-box attack methods. We set the adversarial budget = 0.05, and leverage 200 steps for the iterative attacks, as well. For the black-box attacks, we choose two score-based methods: SPSA (Uesato et al., 2018) and NES (Ilyas et al., 2018), and a decision-based evolution attack (Dong et al., 2020b). We still select = 0.05 and allow 2000 queries to find each adversarial example. The detailed setups are elaborated in Appendix C.1. As shown in Table 5, PointNet with DeepSym consistently achieves the best adversarial accuracy under white-box attacks, except for FGSM. 
The reason is that FGSM is a single-step method that has limited ability to find adversarial examples. Besides, we find the black-box attacks are not as effective as the white-box attacks, which also demonstrate that adversarial training with DeepSym is able to improve the robustness of point cloud classification without gradient obfuscation (Carlini et al., 2019).\nSince DeepSym brings deep trainable layers into the original backbones, it is necessary to report its overhead. We leverage TensorFlow (Abadi et al., 2016) and NVIDIA profilers to measure the inference time, the number of trainable parameters, and GPU memory usage on PointNet. Specifically, the inference time is averaged from 2468 objects in the validation set, and the GPU memory is measured on an RTX 2080 with batch size = 8. As shown in Table 6, DeepSym indeed introduces more computation overhead by leveraging the shared MLP. However, we believe the overhead is relatively small and acceptable, compared to its massive improvements on the adversarial robustness. To fur-\nther have a lateral comparison, point cloud classification backbones are much more light-weight than image classification models. For example, ResNet-50 (He et al., 2016) and VGG-16 (Simonyan & Zisserman, 2014) have 23 and 138 million trainable parameters, respectively, and take much longer time to do the inference. The reason that models with SoftPool and PMA have fewer trainable parameters is that they limit the number of dimensions in the global feature by design.\n5.2 EXPLORING THE LIMITS OF DEEPSYM\nThere is a trade-off between the training cost and adversarial robustness in AT. Increasing the number of PGD attack steps can create harder adversarial examples (Madry et al., 2018), which could further improve the model’s robustness. However, the training time also increases linearly with the number of attack iterations increasing. Due to PointNet’s broad adoption (Guo et al., 2020), we here analyze how it performs under various AT settings. Specifically, we exploit the most efficient AT with PGD1 on ModelNet10 (Wu et al., 2015), a dataset consisting of 10 categories with 4899 objects, and a relatively expensive AT with PGD-20 on ModelNet40 to demonstrate the effectiveness of DeepSym. Other training setups are identical to Section 4.1.\nFigure 5 shows the results of the robustness of adversarially trained PointNet with various pooling operations under PGD-200. We demonstrate that PointNet with DeepSym still reaches the best adversarial accuracy of 45.2% under AT with PGD-1 on ModelNet10, which outperforms the original MAX pooling by 17.9% (∼ 1.7×) and SoftPool by 4.0%. Surprisingly, PointNet with DeepSym also achieves the best nominal accuracy of 88.5%. Moreover, DeepSym further advances itself under AT with PGD-20 on ModelNet40. Figure 5(b) shows that PointNet with DeepSym reaches the best 47.0% adversarial accuracy, which are 28.5% (∼ 2.6×) and 6.5% improvements compared to MAX pooling and SoftPool, respectively while maintaining competent nominal accuracy. We also report detailed evaluations using different PGD attack steps and budgets in Appendix C.1." }, { "heading": "6 CONCLUSION", "text": "In this work, we perform the first rigorous study on the adversarial robustness of point cloud classification. We design adaptive attacks and demonstrate that state-of-the-art defenses against adversarial point clouds cannot provide real robustness. 
Furthermore, we conduct a thorough analysis of how the required symmetric function affects the AT performance of point cloud classification models. We are the first to identify that fixed pooling generally weakens the models’ robustness under AT, while, on the other hand, sorting-based parametric pooling benefits AT well. Lastly, we propose DeepSym, which further architecturally advances the adversarial accuracy of PointNet to 47.0% under AT, outperforming the original design and a strong baseline by 28.5% (∼ 2.6×) and 6.5%, respectively." }, { "heading": "A ADAPTIVE ATTACK EXPERIMENTAL SETUP AND VISUALIZATIONS", "text": "" }, { "heading": "A.1 EXPERIMENTAL SETUPS", "text": "Since DUP-Net is open-sourced, we target the publicly released PointNet and PU-Net models. For the L2 norm-based C&W attack, we set the loss function as:

L = (max_{i≠t′} Z(X′)_i − Z(X′)_{t′})_+ + λ · ||X − X′||_2 (9)

where X ∈ R^{n×3} is the matrix version of point cloud X, X′ is the optimized adversarial example, Z(X)_i is the i-th element of the output logits, and t′ is the target class. We leverage a 10-step binary search to find the appropriate hyper-parameter λ from [10, 80]. As suggested by Xiang et al. (2019), we choose 10 distinct classes and pick 25 objects in each class from the ModelNet40 validation set for evaluation. The step size of the adversarial optimization is 0.01 and we allow at most 500 iterations of optimization in each binary search to find the adversarial examples.

For the L∞ norm-based PGD attack, we adopt the formulation in Madry et al. (2018):

X_{t+1} = Π_{X+S}(X_t + α · sign(∇_{X_t} L(X_t, θ, y))) (10)

where X_t is the adversarial example in the t-th attack iteration, Π is the projection function that projects the adversarial example onto the pre-defined perturbation space S, which is the L∞ norm ball in our setup, and α is the step size. We select the boundary of allowed perturbations ε ∈ {0.01, 0.025, 0.05, 0.075} out of the point cloud data range [−1, 1]. Since point cloud data is continuous, we set the step size α = ε/10.

For GvG-PointNet++, we train it based on the single-scale grouping (SSG) PointNet++ backbone. The backbone network has three PointNet set abstraction modules to hierarchically aggregate local features, and we enable gather vectors in the last module, which contains 128 local features (i.e., n′ = 128 in Section 3.2) with 256 dimensions. To learn the gather vectors, we apply three fully connected layers with 640, 640, and 3 hidden neurons, respectively, as suggested by Dong et al. (2020a). Since the data from ModelNet40 is normalized to [−1, 1], the global object center is c_g = [0, 0, 0].

For the L∞ norm-based PGD attack, we leverage the same setup as the attack on DUP-Net. For the L2 norm-based PGD attack, we follow the settings in Dong et al. (2020a) to set the L2 norm threshold ε = δ · √(n × d_in), where δ is selected in {0.08, 0.16, 0.32}, n is the number of points, and d_in is the dimension of the input point cloud (i.e., 3). The attack iteration is set to 50, and the step size α = ε/50." }, { "heading": "A.2 VISUALIZATIONS", "text": "We visualize some adversarial examples generated by adaptive attacks on PU-Net and DUP-Net in Figure 6 and Figure 7. It is expected that adversarial examples targeting DUP-Net are noisier than the ones targeting PU-Net, as the former need to break the denoiser layer. However, as mentioned in Section 3.1, they are barely distinguishable by human perception. We also visualize some adversarial examples generated by untargeted adaptive PGD attacks on GvG-PointNet++ in Figure 8 with different perturbation budgets ε."
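For completeness, a hedged sketch of the targeted C&W objective of Equation 9 and its optimization loop. PyTorch is used for illustration only, `model` is assumed to map a (batch, n, 3) cloud to logits, and λ is fixed here, whereas in the setup above it would be binary-searched over [10, 80]:

```python
import torch

def cw_loss(logits, x, x_adv, target, lam):
    """Targeted C&W objective of Eq. (9): hinge margin plus L2 distortion."""
    t = target.view(-1, 1)
    z_t = logits.gather(1, t).squeeze(1)                             # Z(X')_{t'}
    z_other = logits.scatter(1, t, float('-inf')).max(dim=1).values  # max_{i != t'}
    margin = (z_other - z_t).clamp(min=0)                            # (.)_+
    dist = (x_adv - x).flatten(1).norm(dim=1)                        # ||X - X'||_2
    return (margin + lam * dist).mean()

def cw_attack(model, x, target, lam=45.0, steps=500, lr=0.01):
    # lam would normally be refined by the 10-step binary search of Sec. A.1
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = cw_loss(model(x + delta), x, x + delta, target, lam)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach()
```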
}, { "heading": "B ADVERSARIAL TRAINING SETUP", "text": "" }, { "heading": "B.1 PGD ATTACK IN ADVERSARIAL TRAINING", "text": "We also follow the formulation in Equation 18 to find the worst adversarial examples. Specifically, we empirically select = 0.05 into the training recipe as there is no quantitative study on how much humans can bear the point cloud perturbations. Figure 8 shows that adversarial examples with = 0.05 are still recognizable by human perception. Moreover, because point cloud data is continuous, we set the step size of PGD attacks as:\nα = step , step < 10\n10 , step ≥ 10\n(11)\nin both training and evaluation phases to make sure PGD attacks reach the allowed maximum perturbations." }, { "heading": "B.2 POINT CLOUD CLASSIFICATION MODEL ARCHITECTURE DETAILS", "text": "PointNet, DeepSets, and DSS are the fundamental architectures in point cloud classification. Other models, such as PointNet++ and DGCNN, are built upon PointNet and DeepSets. Moreover, complex models oftentimes apply non-differentiable layers like knn(·) into end-to-end learning, which will make the adversarial training ineffective. In this work, we aim at exploring how the symmetric (permutation-invariant) function can benefit adversarial training. To this end, we choose PointNet, DeepSets, and DSS as the backbone networks. For the ModelNet40 dataset, we follow the default setting to split into 9,843 objects for training and 2,468 objects for validation (Wu et al., 2015). We randomly sample 1024 points from each object to form its point cloud, if not otherwise stated.\nPointNet. We leverage the default architecture in PointNet codebase3 and exclude the transformation nets (i.e., T-Net) and dropout layers for simplicity and reproducibility. PointNet leverages shared fully connected (FC) layers as the permutation-equivariant layer φl : FCl(Fl:,i) → Fl+1:,i and MAX pooling as the symmetric function ρ(·).\n3https://github.com/charlesq34/pointnet\nDeepSets. We leverage the default architecture in DeepSets codebase4. Different from PointNet, DeepSets first applies a symmetric function to each feature map and aggregate it with the original feature map. Afterwards, DeepSets also leverages FC layers to further process the features: φl : FCl(Fl:,i − ζ(Fl)) → Fl+1:,i, where ζ(·) is column-wise MAX pooling in the original implementation. Similarly, MAX pooling is still used as ρ(·) in DeepSets. DSS. DSS generalizes DeepSets architecture and applies another FC layer to ζ(Fl) in DeepSets so that φl : FCl1(Fl:,i) + FCl2(ζ(Fl))→ Fl+1:,i. Different from other two achitectures, DSS utilizes SUM pooling as ρ(·). Since there is no available codebase at the time of writing, we implement DSS by ourselves.\nWe visualize the differences of φ(·) in Figure 9, and summarize the layer information in Table 7." }, { "heading": "B.3 PARAMETRIC POOLING DESIGN AND IMPLEMENTATION", "text": "We have introduced ATT in Section 4.3.1. In our implementation, we choose L = 512 so that V ∈ R512×1024 to train the backbone models.\n4https://github.com/manzilzaheer/DeepSets\nATT-GATE is a variant of ATT with more learnable parameters:\ng = n∑ i=1 ai · fi , ai = exp(w> · (tanh(V · f>i ) sigm(U · f>i )))∑n j=1 exp(w > · (tanh(V · f>j ) sigm(U · f>j ))) (12)\nwhere U ,V ∈ RL×M , sigm(·) is the sigmoid activation function, and is an element-wise multiplication. 
We also choose L = 512 in ATT-GATE to train the backbone models.\nPMA (Lee et al., 2019) adopts multi-head attention into pooling on a learnable set of k seed vectors S ∈ Rk×dm Let F ∈ Rn×dm be the matrix version of the set of features.\nPMAk(F ) = MAB(S,FC(F )) (13)\nMAB(X,Y ) = H + FC(H) (14)\nwhere H = X + Multihead(X,Y ,Y ;w) (15)\nwhere FC(·) is the fully connected layer and Multihead(·) is the multi-head attention module (Vaswani et al., 2017). We follow the implementation in the released codebase5 to choose k = 1, the number of head = 4, and the hidden neurons in FC(·) = 128 to train the backbone models. Since SoftPool (Wang et al., 2020) sorts the feature set in each dimension, it requires the number of dimensions dm to be relatively small. We follow the description in their paper to choose dm = 8 and k = 32 so that each Fj ′ ∈ R32×8. We apply one convolutional layer to aggregate each F ′j into gj ∈ R1×32 so that the final g ∈ R1×256. Therefore, for all backbone networks with SoftPool, we apply the last equivariant layer as φ : n× dm−1 → n× 8 and ρ : n× 8→ 256.\n5https://github.com/juho-lee/set_transformer\nC DEEPSYM ABLATIONS\nIt is worth noting that DeepSym does not require the final layer to have only one neuron. However, to have a fair comparison with other pooling operations that aggregate into one feature from each dimension, our implementation of DeepSym also aggregates into one feature from each dimension." }, { "heading": "C.1 EVALUATION DETAILS", "text": "We also perform extensive evaluations using different PGD attack steps and budgets on PGD20 trained PointNet. Figure 10 shows that PointNet with DeepSym consistently achieves the best adversarial accuracy. We also validate MEDIAN pooling indeed hinders the gradient backward propagation. The adversarial accuracy of PointNet with MEDIAN pooling consistently drops even after PGD-1000. However, the adversarial accuracy of PointNet with other pooling operations usually converges after PGD-200. Figure 11 shows that DeepSym also outperforms other pooling operations under different adversarial budgets .\nWe leverage the default setup in FGSM, BIM, and MIM in our evaluation. FGSM is a single-step attack method, which can be represented as:\nXadv = X + · sign(∇XL(X,θ,y)) (16) The BIM attack is similar to PGD attacks described in Appendix A.1. The differences are 1) the attack starts from the original point cloud X and 2) the step size α = /T , where T is the number of attack steps. The MIM attack introduces momentum terms into the adversarial optimization:\ngt+1 = µ · gt + ∇XtL(Xt,θ,y) ||∇XtL(Xt,θ,y))||1\n(17)\nXt+1 = Xt + α · sign(gt+1) (18)\nSimilar to BIM, the attack starts from the original point cloudX and the step size α = /T . We set µ = 1 following the original setup (Dong et al., 2018).\nDue to the computational resource constraints, we set the sample size = 32 and allow 2000 quires to find each adversarial example in the score-based black-box attack (Uesato et al., 2018; Ilyas et al., 2018). For the evolution attack, we use the default loss L as the fitness score, and initialize 32 sets of perturbations from a Gaussian distributionN (0, 1). 4 sets of perturbations with top fitness scores will remain for the next iteration, while others will be discarded. We also allow 2000 generations of evolution to find the adversarial example." 
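A minimal PyTorch sketch of the DeepSym operator of Section 5, for readers who want the evaluated pooling in code. It assumes fixed-size (batch, n, d) inputs, so the linear-interpolation path for variable-size clouds is omitted; the hidden sizes follow Section 5.1, and omitting batch normalization and ReLU after the final layer is our implementation choice rather than something the text pins down:

```python
import torch
import torch.nn as nn

class DeepSym(nn.Module):
    """Sorting-based symmetric pooling: linear re-map, column-wise descending
    sort, then a column-shared MLP reducing n sorted responses to one value."""
    def __init__(self, n_points=1024, d_model=1024, hidden=(512, 128, 32, 8)):
        super().__init__()
        self.remap = nn.Linear(d_model, d_model)   # re-map features before sorting
        dims = [n_points, *hidden, 1]
        layers = []
        for i in range(len(dims) - 1):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:                  # plain output on the last layer
                layers += [nn.BatchNorm1d(dims[i + 1]), nn.ReLU()]
        self.mlp = nn.Sequential(*layers)          # shared across feature channels

    def forward(self, feats):
        # feats: (B, n, d) point-wise features after the equivariant layers
        b, n, d = feats.shape
        f = self.remap(feats).transpose(1, 2)      # (B, d, n)
        f, _ = f.sort(dim=-1, descending=True)     # column-wise sort (cf. Eq. 7)
        g = self.mlp(f.reshape(b * d, n))          # column-shared MLP (Eq. 8)
        return g.reshape(b, d)                     # pooled global feature
```

Sorting is applied independently per feature channel, and the same MLP is shared by all channels, which is why the parameter count grows with n but not with d.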
}, { "heading": "C.2 EVALUATION ON SCANOBJECTNN", "text": "We also evaluate the adversarial robustness of different pooling operations on a new point cloud dataset, ScanObjectNN (Uy et al., 2019), which contains 2902 objects belonging to 15 categories. We leverage the same adversarial training setup as ModelNet10 (i.e., PGD-1). Table 8 shows the results. We find that PointNet with DeepSym still achieves the best adversarial robustness. Since the point clouds from ScanObjectNN are collected from real-world scenes, which suffers from occlusion and imperfection, both nominal and adversarial accuracy drops compared to the results ModelNet40. We find that even some clean point clouds cannot be correctly recognized by human perception. Therefore, the performance degradation is also expected and we believe the results are not as representative as ones on ModelNet40." }, { "heading": "Pooling Operation Nominal Accuracy Adversarial Accuracy", "text": "" }, { "heading": "C.3 T-SNE VISUALIZATIONS", "text": "We visualize the global feature embeddings of adversarially trained PointNet under PGD-20 with different pooling operations in Figure 12 and their logits in Figure 13. Since it is hard to pick 40 distinct colors, though we put all data from 40 classes into the T-SNE process, we only choose 10 categories from ModelNet40 to realize the visualizations.\nDeepSym pooling operations. Three columns correspond to training data, validation data, and PGD-200 adversarial validation data, from left to right." } ]
2020
null
SP:1dff36cb48bfef13cafeed2e263fa0fd9c85ab08
[ "This paper proposes a method to do medical entity extraction from HER data by fine-tuning a transformer model pretrained on a large EHR dataset. The model combines a two-step process of NER and NEN into a single step on a multi-label classification task by distantly supervised training. The main contribution of this paper is to exploit a single transformer model to perform NER and NEN for HER data simultaneously by using the representation of EHR for a single multi-label classification task. Empirical studies are performed to show the expected recall." ]
Medical entity extraction (EE) is a standard procedure used as a first stage in medical text processing. Usually, medical EE is a two-step process: named entity recognition (NER) and named entity normalization (NEN). We propose a novel method of doing medical EE from electronic health records (EHR) as a single-step multi-label classification task by fine-tuning a transformer model pretrained on a large EHR dataset. Our model is trained end-to-end in a distantly supervised manner using targets automatically extracted from a medical knowledge base. We show that our model learns to generalize to entities that are present frequently enough, achieving human-level classification quality for the most frequent entities. Our work demonstrates that medical entity extraction can be done end-to-end without human supervision and with human quality given the availability of a large enough amount of unlabeled EHR and a medical knowledge base.
[]
[ { "authors": [ "Alan R Aronson", "Olivier Bodenreider", "H Florence Chang", "Susanne M Humphrey", "James G Mork", "Stuart J Nelson", "Thomas C Rindflesch", "W John Wilbur" ], "title": "The nlm indexing initiative", "venue": "In Proceedings of the AMIA Symposium,", "year": 2000 }, { "authors": [ "Guthrie S Birkhead", "Michael Klompas", "Nirav R Shah" ], "title": "Uses of electronic health records for public health surveillance to advance public health", "venue": "Annual review of public health,", "year": 2015 }, { "authors": [ "Olivier Bodenreider" ], "title": "The unified medical language system (umls): integrating biomedical terminology", "venue": "Nucleic acids research,", "year": 2004 }, { "authors": [ "Leyang Cui", "Yue Zhang" ], "title": "Hierarchically-refined label attention network for sequence labeling", "venue": "arXiv preprint arXiv:1908.08676,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Greg Durrett", "Dan Klein" ], "title": "A joint model for entity analysis: Coreference", "venue": "typing, and linking. Transactions of the association for computational linguistics,", "year": 2014 }, { "authors": [ "Jennifer D’Souza", "Vincent Ng" ], "title": "Sieve-based entity linking for the biomedical domain", "venue": "In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume", "year": 2015 }, { "authors": [ "Jason Fries", "Sen Wu", "Alex Ratner", "Christopher Ré" ], "title": "Swellshark: A generative model for biomedical named entity recognition without labeled data", "venue": "arXiv preprint arXiv:1704.06360,", "year": 2017 }, { "authors": [ "Omid Ghiasvand", "Rohit J Kate. R" ], "title": "Uwm: Disorder mention extraction from clinical text using crfs and normalization using learned edit distance patterns", "venue": "In In: Proc. 
SemEval", "year": 2014 }, { "authors": [ "David A Hanauer", "Mohammed Saeed", "Kai Zheng", "Qiaozhu Mei", "Kerby Shedden", "Alan R Aronson", "Naren Ramakrishnan" ], "title": "Applying metamap to medline for identifying novel associations in a large clinical dataset: a feasibility analysis", "venue": "Journal of the American Medical Informatics Association,", "year": 2014 }, { "authors": [ "Wenqi He" ], "title": "Autoentity: automated entity detection from massive text corpora", "venue": null, "year": 2017 }, { "authors": [ "Peter B Jensen", "Lars J Jensen", "Søren Brunak" ], "title": "Mining electronic health records: towards better research applications and clinical care", "venue": "Nature Reviews Genetics,", "year": 2012 }, { "authors": [ "Zongcheng Ji", "Qiang Wei", "Hua Xu" ], "title": "Bert-based ranking for biomedical entity normalization", "venue": "AMIA Summits on Translational Science Proceedings,", "year": 2020 }, { "authors": [ "Yohan Jo", "Natasha Loghmanpour", "Carolyn Penstein Rosé" ], "title": "Time series analysis of nursing notes for mortality prediction via a state transition topic model", "venue": "In Proceedings of the 24th ACM international on conference on information and knowledge management,", "year": 2015 }, { "authors": [ "David C Kaelber", "Wendy Foster", "Jason Gilder", "Thomas E Love", "Anil K Jain" ], "title": "Patient characteristics associated with venous thromboembolic events: a cohort study using pooled electronic health record data", "venue": "Journal of the American Medical Informatics Association,", "year": 2012 }, { "authors": [ "Ning Kang", "Bharat Singh", "Zubair Afzal", "Erik M van Mulligen", "Jan A Kors" ], "title": "Using rule-based natural language processing to improve disease normalization in biomedical text", "venue": "Journal of the American Medical Informatics Association,", "year": 2013 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Mikhail Korobov" ], "title": "Morphological analyzer and generator for russian and ukrainian languages", "venue": "In International Conference on Analysis of Images, Social Networks and Texts,", "year": 2015 }, { "authors": [ "Yuri Kuratov", "Mikhail Arkhipov" ], "title": "Adaptation of deep bidirectional multilingual transformers for russian language", "venue": "arXiv preprint arXiv:1905.07213,", "year": 2019 }, { "authors": [ "Guillaume Lample", "Miguel Ballesteros", "Sandeep Subramanian", "Kazuya Kawakami", "Chris Dyer" ], "title": "Neural architectures for named entity recognition", "venue": "arXiv preprint arXiv:1603.01360,", "year": 2016 }, { "authors": [ "Hoang-Quynh Le", "Mai-Vu Tran", "Thanh Hai Dang", "Nigel Collier" ], "title": "The uet-cam system in the biocreative v cdr task. In Fifth BioCreative challenge evaluation workshop", "venue": null, "year": 2015 }, { "authors": [ "Robert Leaman", "Zhiyong Lu" ], "title": "Taggerone: joint named entity recognition and normalization with semi-markov", "venue": "models. 
Bioinformatics,", "year": 2016 }, { "authors": [ "Robert Leaman", "Rezarta Islamaj Doğan", "Zhiyong Lu" ], "title": "Dnorm: disease name normalization with pairwise learning to rank", "venue": null, "year": 2013 }, { "authors": [ "Paea LePendu", "Yi Liu", "Srinivasan Iyer", "Madeleine R Udell", "Nigam H Shah" ], "title": "Analyzing patterns of drug use in clinical notes for patient safety", "venue": "AMIA Summits on Translational Science Proceedings,", "year": 2012 }, { "authors": [ "Haodi Li", "Qingcai Chen", "Buzhou Tang", "Xiaolong Wang", "Hua Xu", "Baohua Wang", "Dong Huang" ], "title": "Cnn-based ranking for biomedical entity normalization", "venue": "BMC bioinformatics,", "year": 2017 }, { "authors": [ "Jing Li", "Aixin Sun", "Jianglei Han", "Chenliang Li" ], "title": "A survey on deep learning for named entity recognition", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2020 }, { "authors": [ "Peng-Hsuan Li", "Ruo-Ping Dong", "Yu-Siang Wang", "Ju-Chieh Chou", "Wei-Yun Ma" ], "title": "Leveraging linguistic structures for named entity recognition with bidirectional recursive neural networks", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Ying Li", "Hojjat Salmasian", "Santiago Vilar", "Herbert Chase", "Carol Friedman", "Ying Wei" ], "title": "A method for controlling complex confounding effects in the detection of adverse drug reactions using electronic health records", "venue": "Journal of the American Medical Informatics Association,", "year": 2014 }, { "authors": [ "Yanan Lu", "Donghong Ji", "Xiaoyuan Yao", "Xiaomei Wei", "Xiaohui Liang" ], "title": "Chemdner system with mixed conditional random fields and multi-scale word clustering", "venue": "Journal of cheminformatics,", "year": 2015 }, { "authors": [ "Yen-Fu Luo", "Anna Rumshisky" ], "title": "Interpretable topic features for post-icu mortality prediction", "venue": "In AMIA Annual Symposium Proceedings,", "year": 2016 }, { "authors": [ "Yen-Fu Luo", "Weiyi Sun", "Anna Rumshisky" ], "title": "A hybrid method for normalization of medical concepts in clinical narrative", "venue": "IEEE International Conference on Healthcare Informatics (ICHI),", "year": 2018 }, { "authors": [ "Yi Luo", "Guojie Song", "Pengyu Li", "Zhongang Qi" ], "title": "Multi-task medical concept normalization using multi-view convolutional neural network", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Xuezhe Ma", "Eduard Hovy" ], "title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf", "venue": "arXiv preprint arXiv:1603.01354,", "year": 2016 }, { "authors": [ "Frank J Manion", "Marcelline R Harris", "Ayse G Buyuktur", "Patricia M Clark", "Lawrence C An", "David A Hanauer" ], "title": "Leveraging ehr data for outcomes and comparative effectiveness research in oncology", "venue": "Current oncology reports,", "year": 2012 }, { "authors": [ "Jason S Mathias", "Dana Gossett", "David W Baker" ], "title": "Use of electronic health record data to evaluate overuse of cervical cancer screening", "venue": "Journal of the American Medical Informatics Association,", "year": 2012 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Mike Mintz", "Steven Bills", "Rion Snow", "Dan Jurafsky" ], "title": 
"Distant supervision for relation extraction without labeled data", "venue": "In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pp. 1003–1011,", "year": 2009 }, { "authors": [ "Daisuke Okanohara", "Yusuke Miyao", "Yoshimasa Tsuruoka", "Jun’ichi Tsujii" ], "title": "Improving the scalability of semi-markov conditional random fields for named entity recognition", "venue": "In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics,", "year": 2006 }, { "authors": [ "Jyotishman Pathak", "Abel N Kho", "Joshua C Denny" ], "title": "Electronic health records-driven phenotyping: challenges, recent advances, and perspectives", "venue": null, "year": 2013 }, { "authors": [ "Matthew E Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "arXiv preprint arXiv:1802.05365,", "year": 2018 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training, 2018", "venue": null, "year": 2018 }, { "authors": [ "Xiang Ren", "Ahmed El-Kishky", "Chi Wang", "Fangbo Tao", "Clare R Voss", "Jiawei Han" ], "title": "Clustype: Effective entity recognition and typing by relation phrase-based clustering", "venue": "In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2015 }, { "authors": [ "Alan Ritter", "Luke Zettlemoyer", "Mausam", "Oren Etzioni" ], "title": "Modeling missing data in distant supervision for information extraction", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2013 }, { "authors": [ "N Sager", "Carol Friedman", "E Chi", "C Macleod", "S Chen", "S Johnson" ], "title": "The analysis and processing of clinical", "venue": "narrative. Medinfo,", "year": 1986 }, { "authors": [ "Jingbo Shang", "Liyuan Liu", "Xiang Ren", "Xiaotao Gu", "Teng Ren", "Jiawei Han" ], "title": "Learning named entity tagger using domain-specific dictionary", "venue": "arXiv preprint arXiv:1809.03599,", "year": 2018 }, { "authors": [ "Yijun Shao", "April F Mohanty", "Ali Ahmed", "Charlene R Weir", "Bruce E Bray", "Rashmee U Shah", "Douglas Redd", "Qing Zeng-Treitler" ], "title": "Identification and use of frailty indicators from text to examine associations with clinical outcomes among patients with heart failure", "venue": "In AMIA Annual Symposium Proceedings,", "year": 2016 }, { "authors": [ "Emma Strubell", "Patrick Verga", "David Belanger", "Andrew McCallum" ], "title": "Fast and accurate entity recognition with iterated dilated convolutions", "venue": null, "year": 2017 }, { "authors": [ "Maxim Topaz", "Kenneth Lai", "Dawn Dowding", "Victor J Lei", "Anna Zisberg", "Kathryn H Bowles", "Li Zhou" ], "title": "Automated identification of wound information in clinical notes of patients with heart diseases: Developing and validating a natural language processing", "venue": "application. 
International journal of nursing studies,", "year": 2016 }, { "authors": [ "Xiaoyan Wang", "George Hripcsak", "Marianthi Markatou", "Carol Friedman" ], "title": "Active computerized pharmacovigilance using natural language processing, statistics, and electronic health records: a feasibility study", "venue": "Journal of the American Medical Informatics Association,", "year": 2009 }, { "authors": [ "Adam B Wilcox" ], "title": "Leveraging electronic health records for phenotyping", "venue": "In Translational Informatics,", "year": 2015 }, { "authors": [ "Jun Xu", "Hee-Jin Lee", "Zongcheng Ji", "Jingqi Wang", "Qiang Wei", "Hua Xu" ], "title": "Uth ccb system for adverse drug reaction extraction from drug labels at tac-adr 2017", "venue": "In TAC,", "year": 2017 }, { "authors": [ "Mingbin Xu", "Hui Jiang", "Sedtawut Watcharawittayakul" ], "title": "A local detection approach for named entity recognition and mention detection", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "Ying Yu", "Min Li", "Liangliang Liu", "Yaohang Li", "Jianxin Wang" ], "title": "Clinical big data and deep learning: Applications, challenges, and future outlooks", "venue": "Big Data Mining and Analytics,", "year": 2019 }, { "authors": [ "Ping Zhang", "Fei Wang", "Jianying Hu", "Robert Sorrentino" ], "title": "Towards personalized medicine: leveraging patient similarity and drug similarity analytics", "venue": "AMIA Summits on Translational Science Proceedings,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Wide adoption of electronic health records (EHR) in the medical care industry has led to accumulation of large volumes of medical data (Pathak et al., 2013). This data contains information about the symptoms, syndromes, diseases, lab results, patient treatments and presents an important source of data for building various medical systems (Birkhead et al., 2015). Information extracted from medical records is used for clinical support systems (CSS) (Shao et al., 2016) (Topaz et al., 2016) (Zhang et al., 2014), lethality estimation (Jo et al., 2015) (Luo & Rumshisky, 2016), drug side-effects discovery (LePendu et al., 2012) (Li et al., 2014) (Wang et al., 2009), selection of patients for clinical and epidemiological studies (Mathias et al., 2012) (Kaelber et al., 2012) (Manion et al., 2012), medical knowledge discovery (Hanauer et al., 2014) (Jensen et al., 2012) and personalized medicine (Yu et al., 2019). Large volumes of medical text data and multiple applicable tasks determine the importance of accurate and efficient information extraction from EHR.\nInformation extraction from electronic health records is a difficult natural language processing task. EHR present a heterogeneous dynamic combination of structured, semi-structured and unstructured texts. Such records contain patients’ complaints, anamneses, demographic data, lab results, instrumental results, diagnoses, drugs, dosages, medical procedures and other information contained in medical records (Wilcox, 2015). Electronic health records are characterised by several linguistic phenomena making them harder to process.\n• Rich special terminology, complex and volatile sentence structure. • Often missing term parts and punctuation. • Many abbreviations, special symbols and punctuation marks. • Context-dependant terms and large number of synonims. • Multi-word terms, fragmented and non-contiguous terms.\nFrom practical point of view the task of medical information extraction splits into entity extraction and relation extraction. We focus on medical entity extraction in this work. In the case of medical texts such entities represent symptoms, diagnoses, drug names etc.\nEntity extraction, also referred as Concept Extraction is a task of extracting from free text a list of concepts or entities present. Often this task is combined with finding boundaries of extracted entities as an intermediate step. Medical entity extraction in practice divides into two sequential tasks: Named entity recognition (NER) and Named entity normalization (NEN). During NER sequences of tokens that contain entities are selected from original text. During NEN each sequence is linked with specific concepts from knowledge base (KB). We used Unified Medical Language System (UMLS) KB (Bodenreider, 2004) as the source of medical entities in this paper.\nIn this paper we make the following contributions. First, we show that a single transformer model (Devlin et al., 2018) is able to perform NER and NEN for electronic health records simultaneously by using the representation of EHR for a single multi-label classification task. Second, we show that provided a large enough number of examples such model can be trained using only automatically assigned labels from KB to generalize to unseen and difficult cases. Finally, we empirically estimate the number of examples needed to achieve human-quality medical entity extraction using such distantly-supervised setup." 
}, { "heading": "2 RELATED WORK", "text": "First systems for named entity extraction from medical texts combined NER and NEN using term vocabularies and heuristic rules. One of the first such systems was the Linguistic String Project - Medical Language Processor, described in Sager et al. (1986). Columbia University developed Medical Language Extraction and Encoding System (MedLEE), using rule-based models at first and subsequently adding feature-based models (Friedman, 1997). Since 2000 the National Library of Medicine of USA develops the MetaMap system, based mainly on rule-based approaches (Aronson et al., 2000). Rule-based approaches depend heavily on volume and fullness of dictionaries and number of applied rules. These systems are also very brittle in the sense that their quality drops sharply when applied to texts from new subdomains or new institutions.\nEntity extraction in general falls into three broad categories: rule-based, feature-based and deeplearning (DL) based. Deep learning models consist of context encoder and tag decoder. The context encoder applies a DL model to produce a sequence of contextualized token representation used as input for tag decoder which assign entity class for each token in sequence. For a comprehensive survey see (Li et al., 2020). In most entity extraction systems the EE task is explicitly (or for some DL models implicitly) separated into NER an NEN tasks.\nFeature-based approaches solve the NER task as a sequence markup problem by applying such feature-based models as Hidden Markov Models (Okanohara et al., 2006) and Conditional Random Fields (Lu et al., 2015). The downside of such models is the requirement of extensive feature engineering. Another method for NER is to use DL models (Ma & Hovy, 2016) (Lample et al., 2016). This models not only select text spans containing named entities but also extract quality entity representations which can be used as input for NEN. For example in (Ma & Hovy, 2016) authors combine DL bidirectional long short-term memory network and conditional random fields.\nMain approaches for NEN task are: rule-based (D’Souza & Ng, 2015) (Kang et al., 2013), featurebased (Xu et al., 2017a) (Leaman et al., 2013) and DL methods (Li et al., 2017a) (Luo et al., 2018b) and their different combinations (Luo et al., 2018a). Among DL approaches a popular way is to use distance metrics between entity representations (Ghiasvand & Kate, 2014) or ranking metrics (Xu et al., 2017a) (Leaman et al., 2013). In addition to ranking tasks DL models are used to create contextualized and more representative term embeddings. This is done with a wide range of models: Word2Vec (Mikolov et al., 2013), ELMo (Peters et al., 2018), GPT (Radford et al., 2018), BERT (Devlin et al., 2018). The majority of approaches combine several DL models to extract contextaware representations which are used for ranking or classification using a dictionary of reference entity representations (Ji et al., 2020).\nThe majority of modern medical EE systems sequentially apply NER and NEN. Considering that NER and NEN models themselves are often multistage the full EE systems are often complex combinations of multiple ML and DL models. Such models are hard to train end-to-end and if the NER task fails the whole system fails. This can be partially mitigated by simultaneous training of NER and NEN components. In (Durrett & Klein, 2014) a CRF model is used to train NER and NEN simultaneously. In Le et al. 
(2015) proposed a model that merged NER and NEN at prediction time, but not during training. Leaman & Lu (2016) proposed a semi-Markov model architecture that merged NER and NEN both at training and inference time. Even with such merging of NER and NEN, both tasks remain present in the pipeline, which proves problematic in difficult cases with multi-word entities or single entities with non-relevant text insertions.
A number of deep-learning EE models (Strubell et al., 2017), (Li et al., 2017b), (Xu et al., 2017b), (Devlin et al., 2018), (Cui & Zhang, 2019) do not explicitly split the EE task into NER and NEN and use a single linear classification layer over token representations as the tag decoder. Our model is mostly identical to the model described in (Devlin et al., 2018), with the difference that instead of using a contextualized representation of each token to classify it as an entity, we use the representation of the whole text to extract all entities present in the text at once.
Supervised training of EE systems requires large amounts of annotated data; this is especially challenging for domain-specific EE, where domain-expert annotation is costly and/or slow to obtain. To avoid the need for hand-annotated data, various weakly-supervised methods were developed. A particular instance of weak annotation is distant annotation, which relies on an external knowledge base to automatically label texts with entities from the KB (Mintz et al., 2009), (Ritter et al., 2013), (Shang et al., 2018). Distant supervision can be applied to automatically label training data and has seen success in various natural language processing tasks, including entity recognition (Ren et al., 2015), (Fries et al., 2017), (He, 2017). We use distant annotation in this paper to label our train and test datasets." }, { "heading": "3 DATA", "text": "" }, { "heading": "3.1 ELECTRONIC HEALTH RECORDS DATASETS", "text": "In this work we used two proprietary Russian-language EHR datasets containing anonymized information. The first one contains information about 2,248,359 visits of 429,478 patients to two networks of private clinics from 2005 to 2019. This dataset does not contain hospital records and was used for training the model. The second dataset was used for testing purposes and comes from a regional network of public clinics and hospitals. The testing dataset contains 1,728,259 visits of 694,063 patients from 2014 to 2019." }, { "heading": "3.2 MEDICAL KNOWLEDGE BASE", "text": "We used UMLS as our medical KB and as the source of the medical entity dictionary for this paper. A subset of UMLS, the Medical Dictionary for Regulatory Activities (MedDRA), was used to obtain translations of terms into Russian. After merging the synonymous terms, we selected the 10000 medical entities which appeared most frequently in our training dataset. To find the terms we used a distantly supervised labelling procedure as described in the next section. To increase the stability of the presented results we decided to keep only terms that appear at least 10 times in the test dataset, reducing the total number of entities to 4434. Medical terms were grouped according to the UMLS taxonomy; statistics for the group distribution are shown in Table 1." }, { "heading": "3.3 DISTANT SUPERVISION LABELING", "text": "Combining an EHR dataset and a list of terms from the medical KB, we used a simple rule-based model to label the train and test datasets. 
The exact procedure for each record was as follows:
• Input text was transformed to lower case, all known abbreviations were expanded, and all words were lemmatized using pymorphy2 (Korobov, 2015)
• We selected all possible candidates using a sliding window with lengths from 1 to 7 words
• All possible candidates were compared to all known synonyms of medical entities
• Exact matches between candidates and medical terms from the KB were considered to be positive cases (a minimal code sketch of this procedure is given below, after the results discussion)." }, { "heading": "4 MODEL", "text": "In this paper we used a RuBERT model pretrained on general Russian texts (Kuratov & Arkhipov, 2019) and further pretrained on electronic health records. A linear classification layer with 10000 outputs was added as the last model layer (Fig. 1). This layer was initialized with weights from a normal distribution with mean=-0.1 and std=0.11, so that at the start of training all classes had a low prediction probability.
We trained our model with binary cross-entropy loss and the Adam optimizer (Kingma & Ba, 2014) with learning rate 0.00001, making one pass over the training dataset with training batches of size 20. To speed up training we used dynamic class weighting: classes not present in the current batch were given less weight compared to classes present. The model architecture is shown in Figure 1." }, { "heading": "5 RESULTS", "text": "" }, { "heading": "5.1 DISTANT LABELS", "text": "Using distantly-generated labels we calculated the recall of our model on the test dataset. Our expectation was that with enough training examples the model should achieve recall close to 1.
We found that for some categories, like ’Pharmacologic Substance’, the model did not learn to correctly find entities even with a large enough number of training instances. The relation between the number of training examples and recall on the test set for the entity classes Sign or Symptom, Finding and Pharmacological Substance, and for all entities, is shown in Fig. 2.
As can be seen in Table 2, the number of training examples needed to achieve a given recall differs between classes, with some classes needing noticeably more examples. There could be numerous sources of such differences: tokenization, number of synonyms, difficult context (substances are often encountered as lists mixed with dosages) and others. Even for the harder classes, fifty thousand examples are enough to find nearly all distant labels." }, { "heading": "5.2 HUMAN LABELING", "text": "A major disadvantage of our labelling procedure is its incompleteness. Any slight change of a known term, a typo or a rare abbreviation will lead to missed entities. This makes estimating the precision of the model impossible with such labels. To compare our model with a human benchmark we randomly selected 1500 records from the testing dataset for hand labelling by a medical practitioner with 7 years of experience. These records were labeled for the 15 most common entities in the train dataset. After labeling we further analysed the cases where the model disagreed with the human annotator by splitting all instances into the following cases:
From the results presented in Table 3 we can conclude that our model in general extracts the most frequent entities with human-level quality. The large number of annotator errors for the entities ’Illness’ and ’Infection’ stems from their occurrence in multiple acronyms, which makes them easy to miss. 
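(The code sketch promised in Section 3.3.) A minimal, hedged illustration of the distant labeling procedure: lowercase, lemmatize, generate 1-7-word windows, and exact-match against KB synonyms. The `lemmatize` stub and the toy synonym dictionary with made-up entity ids are placeholders; the paper uses pymorphy2 for lemmatization and UMLS/MedDRA synonym lists.

```python
import re

def lemmatize(word: str) -> str:
    # Placeholder; the paper normalizes Russian word forms with pymorphy2.
    return word

def distant_labels(text: str, synonym_to_entity: dict, max_len: int = 7) -> set:
    """Entity ids whose KB synonyms exactly match a 1..max_len-word window."""
    # Lowercase and tokenize; abbreviation expansion would happen before this.
    tokens = [lemmatize(t) for t in re.findall(r"\w+", text.lower())]
    found = set()
    for n in range(1, max_len + 1):              # sliding windows of 1..7 words
        for i in range(len(tokens) - n + 1):
            candidate = " ".join(tokens[i:i + n])
            if candidate in synonym_to_entity:   # exact match against synonyms
                found.add(synonym_to_entity[candidate])
    return found

# Toy usage; the synonym strings and entity ids below are made up:
synonyms = {"pain in the heart": "ENT_HEART_PAIN", "cough": "ENT_COUGH"}
print(distant_labels("Complaints of pain in the heart and cough.", synonyms))
# prints both entity ids
```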
The large number of model errors in the case of ’Coughing’ is due to a single term synonym that was almost completely absent from the train dataset but present in the test dataset.
Table 4: Examples of generalization by the entity extraction model (Original text → Extracted entity — Comment)
• “leakage of urine into the diaper” → Urinary incontinence — A correct entity is extracted even though the form used is not in the list of synonyms from the knowledge base.
• “prickling pains with feeling of pressure in the heart” → Pain in the heart region — Correct entity extraction with extra words inside the entity span.
• “complaints of pain pain in the lumbar joint” → Pain in lumbar spine — Using the word “joint” as an anchor, the model correctly selected the term ’Pain in lumbar spine’ instead of the closely related terms ’Low back pain’ or ’Lumbar pain’.
• “complaints of pain in the abdomen, right hypochondrium” → Right upper quadrant pain ... — The entity is extracted correctly even with the body location ’Abdomen’ in the middle of the phrase.
• “complaints of trembling fingers when excited” → Shaking of hands — Correct extraction of an unusual entity form.
• “blood pressure occasionally rises” → Increase in blood pressure; Blood pressure fluctuation — Using the word ’occasionally’, the model, in addition to the general entity ’Increase in blood pressure’, successfully extracts a correct, more specific entity, ’Blood pressure fluctuation’.
• “a child on disability since 2008 after a cytomegalovirus infection with damage to the heart, hearing organs, vision, central nervous system” → Central nervous system lesion ... — The model correctly connects the word “damage” with the term “central nervous system” even though they are separated by several words and punctuation marks, and extracts the corresponding entity.
• “intercost neurlgia” → Intercostal neuralgia — Typos are ignored when extracting the correct entity (the misspelling is from the original record)." }, { "heading": "5.3 EXAMPLES", "text": "In this section we selected several examples of the model generalising in difficult cases. In Table 4 we provide the original text translated into English and the extracted entity, also in English form, with our comments." }, { "heading": "6 CONCLUSION", "text": "In this paper we show that a single transformer model can be used for one-step medical entity extraction from electronic health records. This model shows excellent classification quality for the most frequent entities and can be further improved by better language pretraining on general or in-domain texts, hyperparameter tuning or applying various ensembling methods. Not all entity classes are easily detected by the model. Some classes, like ’Pharmacological Substance’, are noticeably harder to classify correctly. This can be due to a number of factors, including differences in context, the number of synonyms and the difference between the train and test datasets.
We have shown that 50,000 training examples are enough for achieving near-perfect recall on automatic labels even for hard classes. The most frequent entities, with more than 150,000 training examples,
are classified with human-level quality. We did not explicitly search for the lower limit of training examples needed to achieve such quality, so it can be substantially lower. We also showed that such quality is achieved even when using a testing dataset which greatly differs in entity distribution, geographic location and institution types (clinic vs hospital) from the training dataset. 
This implies the model’s ability to generalize to new, unseen and difficult cases after training on a limited variety of reference strings for each entity.
The number of errors made by the human annotator highlights the hardships that medical entity annotation poses to humans, including the need to find inner entities in complex and abbreviated terms. The markup of the complete medical entities vocabulary is also problematic, due both to the large number of entities possibly present in each record and to the fact that some entities are exceedingly rare: less than half of the training entities appear at least 10 times in the testing dataset. A complete markup of such infrequent entities is not really feasible, as it would involve looking through an extremely large number of records to get a reasonable number of entity occurrences.
The proposed distantly-supervised method can be used to extract a limited number of the most frequent entities with human-level accuracy. This number can be increased both by improving the quality of the model and by adding new unlabeled examples. Distantly supervised entity extraction systems made in line with our model can be used for fast and resource-light extraction of medical entities for any language. While currently limited to a small vocabulary of common terms, such systems show big promise in light of the increasingly available amounts of medical data." } ]
2020
null
SP:efd742fa15a8751c1b97e553bb6259944b2be339
[ "This paper claims that decentralized parallel SGD (DPSGD) performs better than synchronous SGD (SSGD) and noisy version of synchronous SGD (SSGD*) in large batch setting. Theoretically, it shows that the noise in DPSGD is landscape-dependent, which may help generalization. Experimental results on CV and ASR tasks show that DPSGD can outperform baselines when batch size is very large. Meanwhile, DPSGD is observed to adaptively adjust the effective learning rate and converge to flatter minima." ]
Distributed Deep Learning (DDL) is essential for large-scale Deep Learning (DL) training. Using a sufficiently large batch size is critical to achieving DDL runtime speedup. In a large batch setting, the learning rate must be increased to compensate for the reduced number of parameter updates. However, a large batch size may converge to sharp minima with poor generalization, and a large learning rate may harm convergence. Synchronous Stochastic Gradient Descent (SSGD) is the de facto DDL optimization method. Recently, Decentralized Parallel SGD (DPSGD) has been proven to achieve a similar convergence rate as SGD and to guarantee linear speedup for non-convex optimization problems. While there was anecdotal evidence that DPSGD outperforms SSGD in the large-batch setting, no systematic study has been conducted to explain why this is the case. Based on a detailed analysis of the DPSGD learning dynamics, we find that DPSGD introduces additional landscape-dependent noise, which has two benefits in the large-batch setting: 1) it automatically adjusts the learning rate to improve convergence; 2) it enhances weight space search by escaping local traps (e.g., saddle points) to find flat minima with better generalization. We conduct extensive studies over 12 state-of-the-art DL models/tasks and demonstrate that DPSGD consistently outperforms SSGD in the large batch setting; and DPSGD converges in cases where SSGD diverges for large learning rates. Our findings are consistent across different application domains, Computer Vision and Automatic Speech Recognition, and different neural network models, Convolutional Neural Networks and Long Short-Term Memory Recurrent Neural Networks.
[]
[ { "authors": [ "Mahmoud Assran", "Nicolas Loizou", "Nicolas Ballas", "Mike Rabbat" ], "title": "Stochastic gradient push for distributed deep learning", "venue": "In PMLR, Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Carlo Baldassi", "Christian Borgs", "Jennifer T. Chayes", "Alessandro Ingrosso", "Carlo Lucibello", "Luca Saglietti", "Riccardo Zecchina" ], "title": "Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic algorithmic schemes", "venue": "Proceedings of the National Academy of Sciences,", "year": 2016 }, { "authors": [ "Pratik Chaudhari", "Stefano Soatto" ], "title": "Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. 2018 Information Theory and Applications Workshop (ITA), Feb 2018", "venue": "doi: 10.1109/ita.2018.8503224. URL http://dx.doi.org/10.1109/ ita.2018.8503224", "year": 2018 }, { "authors": [ "Pratik Chaudhari", "Anna Choromanska", "Stefano Soatto", "Yann LeCun", "Carlo Baldassi", "Christian Borgs", "Jennifer Chayes", "Levent Sagun", "Riccardo Zecchina" ], "title": "Entropy-sgd: Biasing gradient descent into wide valleys, 2016", "venue": null, "year": 2016 }, { "authors": [ "Jeffrey Dean", "Greg S. Corrado", "Rajat Monga", "Kai Chen", "Matthieu Devin", "Quoc V. Le", "Mark Z. Mao", "Marc’Aurelio Ranzato", "Andrew Senior", "Paul Tucker", "Ke Yang", "Andrew Y. Ng" ], "title": "Large scale distributed deep networks", "venue": "In NIPS,", "year": 2012 }, { "authors": [ "Gintare Karolina Dziugaite", "Daniel M Roy" ], "title": "Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data", "venue": "arXiv preprint arXiv:1703.11008,", "year": 2017 }, { "authors": [ "Yu Feng", "Yuhai Tu" ], "title": "How neural networks find generalizable solutions: Self-tuned annealing in deep learning", "venue": "ArXiv, abs/2001.01678,", "year": 2020 }, { "authors": [ "Rong Ge", "Furong Huang", "Chi Jin", "Yang Yuan" ], "title": "Escaping from saddle points—online stochastic gradient for tensor decomposition", "venue": "In Conference on Learning Theory, pp", "year": 2015 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan" ], "title": "Stochastic first-and zeroth-order methods for nonconvex stochastic programming", "venue": "SIAM Journal on Optimization,", "year": 2013 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross B. Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: training imagenet in 1 hour", "venue": "CoRR, abs/1706.02677,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2015 }, { "authors": [ "Geoffrey Hinton", "Li Deng", "Dong Yu", "George Dahl", "Abdel rahman Mohamed", "Navdeep Jaitly", "Andrew Senior", "Vincent Vanhoucke", "Patrick Nguyen", "Tara Sainath", "Brian Kingsbury" ], "title": "Deep neural networks for acoustic modeling in speech recognition", "venue": "Signal Processing Magazine,", "year": 2012 }, { "authors": [ "Geoffrey E. 
Hinton", "Drew van Camp" ], "title": "Keeping the neural networks simple by minimizing the description length of the weights", "venue": "In Proceedings of the Sixth Annual Conference on Computational Learning Theory,", "year": 1993 }, { "authors": [ "Elad Hoffer", "Itay Hubara", "Daniel Soudry" ], "title": "Train longer, generalize better: closing the generalization gap in large batch training of neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Andrew G. Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "CoRR, abs/1704.04861,", "year": 2017 }, { "authors": [ "G. Huang", "Z. Liu", "L. Van Der Maaten", "K.Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Stanisław Jastrzębski", "Zachary Kenton", "Devansh Arpit", "Nicolas Ballas", "Asja Fischer", "Yoshua Bengio", "Amos Storkey" ], "title": "Three factors influencing minima in sgd", "venue": "arXiv preprint arXiv:1711.04623,", "year": 2017 }, { "authors": [ "Stanislaw Jastrzebski", "Zachary Kenton", "Devansh Arpit", "Nicolas Ballas", "Asja Fischer", "Yoshua Bengio", "Amos J Storkey" ], "title": "Finding flatter minima with sgd", "venue": "In ICLR (Workshop),", "year": 2018 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "arXiv preprint arXiv:1609.04836,", "year": 2016 }, { "authors": [ "D.P. Kingma", "J.L. Ba" ], "title": "ADAM: a method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Robert Kleinberg", "Yuanzhi Li", "Yang Yuan" ], "title": "An alternative view: When does sgd escape local minima", "venue": "arXiv preprint arXiv:1802.06175,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Computer Science Department,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Sameer Kumar", "Victor Bitorff", "Dehao Chen", "Chiachen Chou", "Blake Hechtman", "HyoukJoong Lee", "Naveen Kumar", "Peter Mattson", "Shibo Wang", "Tao Wang", "Yuanzhong Xu", "Zongwei Zhou" ], "title": "Scale MLPerf-0.6 models on Google TPU-v3 Pods", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Hao Li", "Zheng Xu", "Gavin Taylor", "Christoph Studer", "Tom Goldstein" ], "title": "Visualizing the loss landscape of neural nets", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Mu Li", "David G Andersen", "Jun Woo Park", "Alexander J Smola", "Amr Ahmed", "Vanja Josifovski", "James Long", "Eugene J Shekita", "Bor-Yiing Su" ], "title": "Scaling distributed machine learning with the parameter server", "venue": "In 11th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI}", "year": 2014 }, { "authors": [ "Xiangru Lian", "Ce Zhang", "Huan Zhang", "Cho-Jui Hsieh", "Wei Zhang", "Ji Liu" ], "title": "Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Xiangru Lian", "Ce Zhang", "Huan Zhang", "Cho-Jui Hsieh", "Wei Zhang", "Ji Liu" ], "title": "Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Xiangru Lian", "Wei Zhang", "Ce Zhang", "Ji Liu" ], "title": "Asynchronous decentralized parallel stochastic gradient descent", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Kang Liu" ], "title": "Train CIFAR10 with PyTorch, 2020", "venue": "URL https://github.com/kuangliu/ pytorch-cifar. Available at https://github.com/kuangliu/pytorch-cifar", "year": 2020 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "David McAllester", "Nati Srebro" ], "title": "Exploring generalization in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "Nathan Srebro" ], "title": "A pac-bayesian approach to spectrallynormalized margin bounds for neural networks", "venue": "arXiv preprint arXiv:1707.09564,", "year": 2017 }, { "authors": [ "Sashank J. Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the convergence of adam and beyond", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Mark Sandler", "Andrew G. Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation", "venue": "CVPR, abs/1801.04381,", "year": 2018 }, { "authors": [ "K. Simonyan", "A. Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Samuel L Smith", "Quoc V Le" ], "title": "A bayesian perspective on generalization and stochastic gradient descent", "venue": "arXiv preprint arXiv:1710.06451,", "year": 2017 }, { "authors": [ "Mingxing Tan", "Quoc V. 
Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "ICML, abs/1905.11946,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Saining Xie", "Ross B. Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks. CVPR, abs/1611.05431, 2017", "venue": "URL http://arxiv. org/abs/1611.05431", "year": 2017 }, { "authors": [ "Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Scaling SGD batch size to 32k for imagenet training", "venue": "CoRR, abs/1708.03888,", "year": 2017 }, { "authors": [ "Yang You", "Jing Li", "Jonathan Hseu", "Xiaodan Song", "James Demmel", "Cho-Jui Hsieh" ], "title": "Reducing BERT pre-training time from 3 days to 76", "venue": "URL http: //arxiv.org/abs/1904.00962", "year": 1904 }, { "authors": [ "Wei Zhang", "Suyog Gupta", "Xiangru Lian", "Ji Liu" ], "title": "Staleness-aware async-sgd for distributed deep learning", "venue": "In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Wei Zhang", "Suyog Gupta", "Fei Wang" ], "title": "Model accuracy and runtime tradeoff in distributed deep learning: A systematic study", "venue": "In IEEE International Conference on Data Mining,", "year": 2016 }, { "authors": [ "Wei Zhang", "Xiaodong Cui", "Ulrich Finkler", "Brian Kingsbury", "George Saon", "David Kung", "Michael Picheny" ], "title": "Distributed deep learning strategies for automatic speech recognition", "venue": null, "year": 2019 }, { "authors": [ "Wei Zhang", "Xiaodong Cui", "Ulrich Finkler", "George Saon", "Abdullah Kayi", "Alper Buyuktosunoglu", "Brian Kingsbury", "David Kung", "Michael Picheny" ], "title": "A highly efficient distributed deep learning system for automatic speech recognition", "venue": "In INTERSPEECH’2019, Sept 2019b", "year": 2019 }, { "authors": [ "Wei Zhang", "Xiaodong Cui", "Abdullah Kayi", "Mingrui Liu", "Ulrich Finkler", "Brian Kingsbury", "George Saon", "Youssef Mroueh", "Alper Buyuktosunoglu", "Payel Das", "David Kung", "Michael Picheny" ], "title": "Improving efficiency in large-scale decentralized distributed training", "venue": "In ICASSP’2020,", "year": 2020 }, { "authors": [ "Xiangyu Zhang", "Xinyu Zhou", "Mengxiao Lin", "Jian Sun" ], "title": "Shufflenet: An extremely efficient convolutional neural network for mobile devices. CVPR, abs/1707.01083, 2018a", "venue": "URL http: //arxiv.org/abs/1707.01083", "year": 2018 }, { "authors": [ "Yao Zhang", "Andrew M. Saxe", "Madhu S. Advani", "Alpha A. Lee" ], "title": "Energy–entropy competition and the effectiveness of stochastic gradient descent in machine learning", "venue": "Molecular Physics,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep Learning (DL) has revolutionized AI training across application domains: Computer Vision (CV) (Krizhevsky et al., 2012; He et al., 2015), Natural Language Processing (NLP) (Vaswani et al., 2017), and Automatic Speech Recognition (ASR) (Hinton et al., 2012). Stochastic Gradient Descent (SGD) is the fundamental optimization method used in DL training. Due to massive computational requirements, Distributed Deep Learning (DDL) is the preferred mechanism to train large scale Deep Learning (DL) tasks. In the early days, Parameter Server (PS) based Asynchronous SGD (ASGD) training was the preferred DDL approach (Dean et al., 2012; Li et al., 2014) as it did not require strict system-wide synchronization. Recently, ASGD has lost popularity due to its unpredictability and often inferior convergence behavior (Zhang et al., 2016b). Practitioners now favor deploying Synchronous SGD (SSGD) on homogeneous High Performance Computing (HPC) systems. The degree of parallelism in a DDL system is dictated by batch size: the larger the batch size, the more parallelism and higher speedup can be expected. However, large batches require a larger learning rate and overall they may negatively affect model accuracy because 1) large batch training usually converges to sharp minima which do not generalize well (Keskar et al., 2016) and 2) large learning rates may violate the conditions (i.e., the smoothness parameter) required for convergence in nonconvex optimization theory (Ghadimi & Lan, 2013). Although training longer with large batches could lead to better generalization (Hoffer et al., 2017), doing so gives up some or all of the speedup we seek. Through meticulous hyper-parameter design (e.g., learning rate) tailored to each specific task, SSGD-based DDL systems have enabled large batch training and shortened training time for some challenging CV tasks (Goyal et al., 2017; You et al., 2017) and NLP tasks (You et al., 2019) from weeks to hours or less. However, it is observed that SSGD with large batch size leads to large training loss and inferior model quality for ASR tasks (Zhang et al., 2019b), as illustrated in Figure 1a (red curve). In this paper we found for other types of tasks (e.g. CV) and DL models, large batch SSGD has the same problem (Figure 1b and Figure 1c). The cause of this problem could be that training gets trapped at saddle points since large batches reduce the magnitude of noise in the\nstochastic gradient and prevent the algorithm from exploring the whole parameter space. To solve this problem, one may add isotropic noise (e.g., spherical Gaussian) to help SSGD escape from saddle points (Ge et al., 2015). However, this is not a good solution for high-dimensional DL training as shown in the blue curves of Figure 1. One possible reason is that the complexity of escaping a saddle point by adding isotropic noise has a polynomial dependency on the dimension of the parameter space, so adding such noise in a high dimensional space (such as deep learning) does not bring significant benefits. In this paper, we have found that Decentralized Parallel SGD (DPSGD) (Lian et al., 2017b) greatly improves large batch training performance, as illustrated in the green curves in Figure 1. Unlike SSGD, where each learner updates its weights by taking a global average of all learners’ weights, DPSGD updates each learner’s weights by taking a partial average (i.e., across a subset of neighboring learners). 
Therefore, in DPSGD, each learner’s weights differ from the weights of other learners.1 The key difference among SSGD, SSGD with Gaussian noise2 and DPSGD is the source of noise during the update, and this noise directly affects performance in deep learning. This naturally motivates us to study: why does decentralized training outperform synchronous training in the large batch setting? More specifically, we try to understand whether their performance difference is caused by their different noise. We answer these questions from both theoretical and empirical perspectives. Our contributions are:
• We analyze the dynamics of DDL algorithms, including both SSGD and DPSGD. We show, both theoretically and empirically, that the intrinsic noise in DPSGD can 1) reduce the effective learning rate when the gradient is large to help convergence; 2) enhance the search in weight space for flat minima with better generalization.
• We conduct extensive empirical studies of 12 CV and ASR tasks with state-of-the-art CNN and LSTM models. Our experimental results demonstrate that DPSGD consistently outperforms SSGD, across application domains and Neural Network (NN) architectures, in the large batch setting, without any hyper-parameter tuning. To the best of our knowledge, we are unaware of any generic algorithm that can improve SSGD large batch training on this many models/tasks.
The remainder of this paper is organized as follows. Section 2 details the problem formulation and learning dynamics analysis of SSGD, SSGD+Gaussian, and DPSGD; Section 3 and Section 4 detail the empirical results; and Section 5 concludes the paper." }, { "heading": "2 ANALYSIS OF STOCHASTIC LEARNING DYNAMICS AND EFFECTS OF LANDSCAPE-DEPENDENT NOISE", "text": "We first formulate the dynamics of an SGD-based learning algorithm with multiple (n > 1) learners indexed by j = 1, 2, 3, ..., n, following the same theoretical framework established for a single learner (Chaudhari & Soatto, 2018). At each given time (iteration) t, each learner has its own weight vector $\vec{w}_j(t)$, and the average weight vector $\vec{w}_a(t)$ is defined as: $\vec{w}_a(t) \equiv n^{-1}\sum_{j=1}^{n} \vec{w}_j(t)$.
[Footnote 1: The detailed DPSGD algorithm and its learning dynamics are described in Section 2.] [Footnote 2: We use the terms “SSGD with Gaussian noise” and “SSGD$^*$” interchangeably in this paper.]
Each learner j updates its weight vector according to the cross-entropy loss function $L_{\mu_j(t)}(\vec{w})$ for the minibatch $\mu_j(t)$ that is assigned to it at time t. The size of the local minibatch is B, and the overall batch size for all learners is nB. Two multi-learner algorithms are described below.
(1) Synchronous Stochastic Gradient Descent (SSGD): In the synchronous algorithm, the learner $j \in [1, n]$ starts from the average weight vector $\vec{w}_a$ and moves along the gradient of its local loss function $L_{\mu_j(t)}$ evaluated at the average weight $\vec{w}_a$:
$\vec{w}_j(t+1) = \vec{w}_a(t) - \alpha \nabla L_{\mu_j(t)}(\vec{w}_a(t))$, (1)
where $\alpha$ is the learning rate.
(2) Decentralized Parallel SGD (DPSGD): In the DPSGD algorithm (Lian et al., 2017a), learner j computes the gradient at its own local weight $\vec{w}_j(t)$. The learning dynamics follows:
$\vec{w}_j(t+1) = \vec{w}_{s,j}(t) - \alpha \nabla L_{\mu_j(t)}(\vec{w}_j(t))$. 
(2)
where $\vec{w}_{s,j}(t)$ is the starting weight, set to be the average weight of a subset of “neighboring” learners of learner j, which corresponds to the non-zero entries in the mixing matrix defined in (Lian et al., 2017a) (note that $\vec{w}_{s,j} = \vec{w}_a$ if all learners are included as neighbors).
By averaging over all learners, the learning dynamics for the average weight $\vec{w}_a$ for both SSGD and DPSGD can be written formally the same way as: $\vec{w}_a(t+1) = \vec{w}_a(t) - \alpha \vec{g}_a$, where $\vec{g}_a = n^{-1}\sum_{j=1}^{n} \vec{g}_j$ is the average gradient and $\vec{g}_j$ is the gradient from learner j. The difference between SSGD and DPSGD is the weight at which $\vec{g}_j$ is computed: $\vec{g}_j \equiv \nabla L_{\mu_j(t)}(\vec{w}_a(t))$ is computed at $\vec{w}_a$ for SSGD; $\vec{g}_j \equiv \nabla L_{\mu_j(t)}(\vec{w}_j(t))$ is computed at $\vec{w}_j$ for DPSGD. By projecting the weight displacement vector $\Delta\vec{w}_a \equiv -\alpha \vec{g}_a$ onto the direction of the gradient $\vec{g} \equiv \nabla L(\vec{w}_a)$ of the overall loss function L at $\vec{w}_a$, we can write the learning dynamics as:
$\vec{w}_a(t+1) = \vec{w}_a(t) - \alpha_e \vec{g} + \vec{\eta}$, (3)
where $\alpha_e \equiv \alpha \vec{g}_a \cdot \vec{g} / \|\vec{g}\|^2$ is an effective learning rate and $\vec{\eta} = -\alpha \vec{g}_a + \alpha_e \vec{g}$ is the noise term that describes the stochastic weight dynamics in directions orthogonal to $\vec{g}$. The noise term has zero mean, $\langle\vec{\eta}\rangle_\mu = 0$, and its strength is characterized by its variance $\Delta(t) \equiv \|\vec{\eta}\|^2$. $\Delta$ and $\alpha_e$ are related by the equality $\alpha_e^2 \|\vec{g}\|^2 + \Delta = \alpha^2 \|\vec{g}_a\|^2$, which indicates that a higher noise strength leads to a lower effective learning rate $\alpha_e$.
The noise strength (and hence $\alpha_e$) is different in SSGD and DPSGD. The DPSGD noise $\Delta_{DP}$ is larger than the SSGD noise $\Delta_{S}$ by an additional noise $\Delta^{(2)}$ (> 0) that originates from the difference of the local weights ($\vec{w}_j$) from their mean ($\vec{w}_a$): $\Delta_{DP} = \Delta_{S} + \Delta^{(2)}$, see Appendix B for details. By expanding $\Delta^{(2)}$ w.r.t. $\delta\vec{w}_j \equiv \vec{w}_j - \vec{w}_a$, we obtain the average $\Delta^{(2)}$ over the minibatch ensemble $\{\mu\}$:
$\langle\Delta^{(2)}\rangle_\mu \equiv \alpha^2 \langle\| n^{-1}\sum_{j=1}^{n} [\nabla L_{\mu_j}(\vec{w}_j) - \nabla L_{\mu_j}(\vec{w}_a)]\|^2\rangle_\mu \approx \alpha^2 \sum_{k,l,l'} H_{kl} H_{kl'} C_{ll'}$, (4)
where $H_{kl} = \nabla^2_{kl} L$ is the Hessian matrix of the loss function and $C_{ll'} = n^{-2}\sum_{j=1}^{n} \delta w_{j,l}\, \delta w_{j,l'}$ is the weight covariance matrix. It is clear that $\Delta^{(2)}$ depends on the loss landscape – it is larger in rough landscapes and smaller in flat landscapes.
It is important to stress that the noise $\vec{\eta}$ in Eq. 3 is not an artificially added noise. It is intrinsic to the use of minibatches (random subsampling) in SGD-based algorithms and is enhanced by the difference among different learners in DPSGD. The noise strength varies in weight space via its dependence on the loss landscape, as explicitly shown in Eq. 4. However, besides its landscape dependence, SGD noise also depends inversely on the minibatch size B (Chaudhari & Soatto, 2018). With n synchronized learners, the noise in SSGD scales as 1/(nB), which is too small to be effective for a large batch size nB. A main finding of our paper is that the additional landscape-dependent noise $\Delta^{(2)}$ in DPSGD can make up for the small SSGD noise when nB is large and help enhance convergence and generalization in the large batch setting.
In the following, we investigate the effects of this landscape-dependent noise for SSGD and DPSGD using the MNIST dataset, where each learner is a fully connected network with two hidden layers (50 units per layer). We focus on the large batch setting, using nB = 2000 in the experiments." }, { "heading": "2.1 NOISE IN DPSGD REDUCES EFFECTIVE LEARNING RATE TO HELP CONVERGENCE", "text": "First, we study a case with a large learning rate $\alpha = 1$. In this experiment, we used n = 5, and $\vec{w}_{s,j} = \vec{w}_a$ for DPSGD. As shown in the upper panel of Fig. 
2(a), DPSGD converges to a solution with low loss (2.1% test error), but SSGD fails to converge. As shown in Fig. 2(a) (lower panel), the effective learning rate $\alpha_e$ is reduced in DPSGD during early training ($0 \le t \le 700$), while $\alpha_e$ in SSGD remains roughly the same as $\alpha$. This reduction of $\alpha_e$, caused by the stronger noise in DPSGD, is essential for convergence: it avoids overshoots when gradients are large in the beginning of the training process. In the later stage of the training process, when gradients are smaller, the landscape-dependent DPSGD noise decreases and $\alpha_e$ increases back to $\approx \alpha$. To show the importance of the landscape-dependent noise, we introduce a variant of SSGD, SSGD$^*$, by injecting Gaussian noise with a constant variance into the weights in SSGD. However, most choices of this injected noise fail to converge. Only by fine-tuning the injected noise strength can SSGD$^*$ converge, but to an inferior solution with much higher loss and test error (5.7%). The poor performance is likely due to the persistent reduction of $\alpha_e$ even in the later stage of training (see Fig. 2(a) (lower panel)), since the added Gaussian noise in SSGD$^*$ is independent of the loss landscape.
This insight on reducing the learning rate is consistent with nonconvex optimization theory (Ghadimi & Lan, 2013; Lian et al., 2017b). When we use a larger batch size, the stochastic gradient has a smaller variance, and nonconvex optimization is able to choose a larger learning rate without affecting its convergence. However, the learning rate should be limited by $1/l_s$, where $l_s$ is the smoothness parameter. In the very large batch setting, the learning rate under the linear scaling rule (Goyal et al., 2017) may indeed exceed this limit ($1/l_s$). Here, we show that these conflicting requirements can be resolved in DPSGD, where the enhanced landscape-dependent noise adaptively adjusts the effective learning rate by reducing $\alpha_e$ when the loss landscape is rough with large gradients and restoring the original large $\alpha$ when the landscape is smooth. In Appendix E, we consider a simple synthetic problem where we show that the larger noise in DPSGD allows the algorithm to escape saddle points in the loss function landscape while the SSGD algorithm gets stuck for a long time." }, { "heading": "2.2 NOISE IN DPSGD ENHANCES SEARCH TO FIND FLAT MINIMA WITH BETTER GENERALIZATION", "text": "Next, we consider a case with a smaller learning rate $\alpha = 0.2$. Here we used n = 6, and $\vec{w}_{s,j}(t)$ in DPSGD is the average weight of 2 neighbors on each side. In this case, both SSGD and DPSGD can converge to a solution, but their learning dynamics are different. As shown in Fig. 2(b) (upper panel), while the training loss L of SSGD (red) decreases smoothly, the DPSGD training loss (green) fluctuates widely during the time window (1000-3000), when it stays significantly above the SSGD training loss. As shown in Fig. 2(b) (lower panel), these large fluctuations in L are caused by the high and increasing noise level in DPSGD. This elevated noise level in DPSGD allows the algorithm to search in a wider region in weight space. At around time 3000 (batch), the DPSGD loss decreases suddenly and eventually converges to a solution with a similar training loss as SSGD. However, despite their similar final training loss, the DPSGD loss landscape is flatter (contour lines further apart) than the SSGD landscape. Remarkably, the DPSGD solution has a lower test error (2.3%) than the test error of the SSGD solution (2.6%). 
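To make these dynamics concrete, below is a toy NumPy sketch (ours, not the paper's code) of the SSGD and DPSGD update rules of Eqs. (1)-(2), with a noisy quadratic loss standing in for the MNIST network; the ring neighborhood (two neighbors on each side plus the learner itself) and the noise scale are illustrative assumptions. The last lines evaluate the effective learning rate $\alpha_e$ of Eq. (3) for one step.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, alpha = 6, 10, 0.2

def minibatch_grad(wj):
    # Stand-in minibatch gradient of L(w) = ||w||^2 / 2 plus sampling noise.
    return wj + 0.1 * rng.normal(size=wj.shape)

w_ssgd = rng.normal(size=(n, d))
w_dpsgd = w_ssgd.copy()
for t in range(200):
    # SSGD (Eq. 1): every learner starts from the global average w_a and
    # evaluates its own minibatch gradient at w_a.
    w_a = w_ssgd.mean(axis=0)
    w_ssgd = np.stack([w_a - alpha * minibatch_grad(w_a) for _ in range(n)])

    # DPSGD (Eq. 2): learner j starts from the partial average w_{s,j} of a
    # ring neighborhood (assumed: 2 neighbors per side plus itself) and
    # evaluates its gradient at its own local weights w_j.
    w_s = np.stack([w_dpsgd[[(j + k) % n for k in range(-2, 3)]].mean(axis=0)
                    for j in range(n)])
    w_dpsgd = np.stack([w_s[j] - alpha * minibatch_grad(w_dpsgd[j])
                        for j in range(n)])

# Effective learning rate of Eq. (3) for one DPSGD step:
g_a = np.mean([minibatch_grad(wj) for wj in w_dpsgd], axis=0)  # avg gradient
g = w_dpsgd.mean(axis=0)              # exact gradient of the quadratic loss
alpha_e = alpha * np.dot(g_a, g) / np.dot(g, g)
```

Tracking `alpha_e` over such a run reproduces, in miniature, the adaptive reduction of the effective learning rate discussed above.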
We have also tried the SSGD$^*$ algorithm, but the performance (3.9% test error) is worse than both SSGD and DPSGD.
To understand their different generalization performance, we studied the loss function landscape around the SSGD and DPSGD solutions. The contour plots of the loss function L around the two solutions are shown in the two right panels in Fig. 2(b). We found that the loss landscape near the DPSGD solution is flatter than the landscape near the SSGD solution, despite having the same minimum loss. Our observation is consistent with (Keskar et al., 2016), where it was found that SSGD with a large batch size converges to a sharp minimum which does not generalize well. Our results are in general agreement with the current consensus that flatter minima have better generalization (Hinton & van Camp, 1993; Hochreiter & Schmidhuber, 1997; Baldassi et al., 2016; Chaudhari et al., 2016; Zhang et al., 2018b). It was recently suggested that the landscape-dependent noise in SGD-based algorithms can drive the system towards flat minima (Feng & Tu, 2020). However, in the large batch setting, the SSGD noise is too small to be effective. The additional landscape-dependent noise $\Delta^{(2)}$ in DPSGD, which also depends inversely on the flatness of the loss function (see Eq. 4), is thus critical for the system to find flatter minima in the large batch setting." }, { "heading": "3 EXPERIMENTAL METHODOLOGY", "text": "We implemented SSGD and DPSGD using PyTorch, OpenMPI, and NVidia NCCL. We run experiments on a cluster of 8-V100-GPU x86 servers. For CV tasks, we evaluated on CIFAR-10 (50,000 samples, 178MB). For ASR tasks, we evaluate on SWB-300 (300 hours training data, 4,000,000 samples, 30GB) and SWB-2000 (2000 hours training data, 30,000,000 samples, 216GB)3. We evaluate on 12 state-of-the-art NN models: 10 CNN models and 2 6-layer bi-directional LSTM models. We summarize the model size and training time in Table 1. We refer readers to Appendix D for software implementation, hardware configuration, dataset and Neural Network (NN) model details." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "All the large batch experiments are conducted on 16 GPUs (learners) if not stated otherwise. Batches are evenly distributed among learners, e.g., each learner uses a local batch size of 128 when the overall batch size is 2048. A learner randomly picks a neighbor with which to exchange weights in each iteration (Zhang et al., 2020).
4.1 SSGD AND DPSGD COMPARISON ON CV TASKS
For CIFAR-10 tasks, we use the hyper-parameter setup proposed in (Liu, 2020): a baseline batch size 128 and learning rate 0.1 for the first 160 epochs, learning rate 0.01 for the next 80 epochs, and learning rate 0.001 for the remaining 80 epochs. Using the same learning rate schedule, we keep increasing the batch size up to 8192. Table 2 records test accuracy under different batch sizes. Model accuracy consistently deteriorates beyond batch size 1024 because the learning rate is too small for the number of parameter updates.
To improve model accuracy beyond batch size 1024, we apply the linear scaling rule (i.e., linearly increase the learning rate w.r.t. batch size) (He et al., 2015; Zhang et al., 2019a; Goyal et al., 2017; Zhang et al., 2016b;a). We use learning rate 0.1 for batch size 1024, 0.2 for batch size 2048, 0.4 for batch size 4096, and 0.8 for batch size 8192. Table 3 compares SSGD and DPSGD performance running with 16 GPUs (learners). SSGD and DPSGD perform comparably up to batch size 4096. 
When the batch size increases to 8192, DPSGD outperforms SSGD in all but one case. Most noticeably, SSGD diverges in EfficientNet-B0 and SENet-18 when the batch size is 8192. Figure 6 in Appendix C details model accuracy progression w.r.t. epochs in each setting.
[Footnote 3: SWB-2000 training is more challenging than ImageNet. It takes over 200 hours on 1 V100 GPU to finish training SWB-2000. SWB-2000 has 32,000 highly unevenly distributed classes whereas ImageNet has 1000 evenly distributed classes.]
To better understand the loss landscape in SSGD and DPSGD training, we visualize the landscape contour 2D projection and Hessian 2D projection, using the same mechanism as in (Li et al., 2018). For both plots, we randomly select two N-dimensional vectors (where N is the number of parameters in each model) and multiply them with a scaling factor evenly sampled from -0.1 to 0.1 in a $K \times K$ grid to generate $K^2$ perturbations of the trained model. To produce a contour plot, we calculate the testing data loss of the perturbed model at each point in the $K \times K$ grid. Figure 3 depicts the 2D contour plot for representative models (at the end of the 320th epoch) in a $50 \times 50$ grid. DPSGD training leads not only to a lower loss but also to much more widely spaced contours, indicating a flatter loss landscape and a more generalizable solution. For the Hessian plot, we first calculate the maximum eigenvalue $\lambda_{max}$ and minimum eigenvalue $\lambda_{min}$ of the model’s Hessian matrix at each sample point in a 4x4 grid. We then calculate the ratio r between $|\lambda_{min}|$ and $|\lambda_{max}|$. The lower r is, the more likely the point is in a convex region and the less likely it is in a saddle region. We then plot the heatmap of this r value in this 4x4 grid. The corresponding models are trained at the 16th epoch (i.e., the first 5% of the training phase), and the corresponding Hessian plot, Figure 4, indicates DPSGD is much more effective at avoiding early traps (e.g., saddle points) than SSGD.
Summary DPSGD outperforms SSGD for 9 out of 10 CV tasks in the large batch setting. Moreover, SSGD diverges on the EfficientNet-B0 and SENet-18 tasks. DPSGD is more effective at avoiding early traps (e.g., saddle points) and reaching better solutions than SSGD in the large batch setting." }, { "heading": "4.2 SSGD AND DPSGD COMPARISON ON ASR TASKS", "text": "For the SWB-300 and SWB-2000 tasks, we follow the same learning rate schedule proposed in (Zhang et al., 2019a): we use learning rate 0.1 for the baseline batch size 256, and linearly warm up the learning rate w.r.t. the baseline batch size for the first 10 epochs before annealing the learning rate by $\frac{1}{\sqrt{2}}$ for the remaining 10 epochs. For example, when using a batch size of 2048, we linearly warm up the learning rate to 0.8 by the end of the 10th epoch before annealing. Table 4 illustrates the heldout loss for SWB-300 and SWB-2000. In the SWB-300 task, SSGD diverges beyond batch size 2048 and DPSGD converges well until batch size 8192. In the SWB-2000 task, SSGD diverges beyond batch size 4096 and DPSGD converges well until batch size 8192. Figure 7 in Appendix C details heldout loss progression w.r.t. epochs.
Summary For ASR tasks, SSGD diverges whereas DPSGD converges to baseline model accuracy in the large batch setting." }, { "heading": "4.3 NOISE-INJECTION AND LEARNING RATE TUNING", "text": "In 4 out of 12 studied tasks, a large batch setting leads to complete divergence in SSGD: EfficientNet-B0, SENet-18, SWB-300 and SWB-2000. 
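As a companion to the visualization procedure of Section 4.1 above, the following is a minimal PyTorch sketch (ours; the paper follows the mechanism of Li et al., 2018) of the random-direction contour probing. It assumes an arbitrary model and a user-supplied `eval_loss(model)` closure, and omits refinements such as filter normalization.

```python
import torch

def loss_contour(model, eval_loss, K=50, radius=0.1):
    """Loss on a K x K grid of perturbations of `model`'s weights along two
    random directions; `eval_loss(model) -> float` computes the test loss."""
    base = [p.detach().clone() for p in model.parameters()]
    d1 = [torch.randn_like(p) for p in base]      # two random N-dim directions
    d2 = [torch.randn_like(p) for p in base]
    scales = torch.linspace(-radius, radius, K)   # evenly sampled in [-0.1, 0.1]
    grid = torch.empty(K, K)
    for i, a in enumerate(scales):
        for j, b in enumerate(scales):
            with torch.no_grad():
                for p, w, u, v in zip(model.parameters(), base, d1, d2):
                    p.copy_(w + a * u + b * v)    # perturbed weights
            grid[i, j] = eval_loss(model)
    with torch.no_grad():                         # restore the trained weights
        for p, w in zip(model.parameters(), base):
            p.copy_(w)
    return grid                                   # plot with a contour routine
```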
As discussed in Section 2, the intrinsic landscape-dependent noise in DPSGD effectively helps escape early traps (e.g., saddle points) and improves training by automatically adjusting the learning rate. In this section, we demonstrate these facts by systematically adding Gaussian noise (the same as the SSGD$^*$ algorithm in Section 2) and decreasing the learning rate. We find that SSGD might escape early traps but still results in a much inferior model compared to DPSGD.
Noise-injection In Figure 1, we systematically explore Gaussian noise injection with mean 0 and standard deviation (std) ranging from 10 to 0.00001 via binary search (i.e., roughly 20 configurations for each task). We found that in the vast majority of the setups, noise injection cannot escape early traps. In EfficientNet-B0, only when std is set to 0.04 does the model start to converge, but to a very bad loss (test accuracy 22.15% in SSGD vs 91.13% in DPSGD). In SENet-18, when std is set to 0.01, the model converges to a reasonable accuracy (84.86%) but still significantly lags behind its DPSGD counterpart (90.48%). In the SWB-300 case, when std is 0.01, SSGD shows an early sign of converging for the first 3 epochs before it starts to diverge. In the SWB-2000 case, we did not find any configuration that can escape early traps. Figure 1 characterizes our best-effort Gaussian noise tuning and its comparison against SSGD and DPSGD. A plausible explanation is that Gaussian noise injection escapes saddle points very slowly, since Gaussian noise is isotropic and the complexity of finding local minima is dimension-dependent (Ge et al., 2015). Deep Neural Networks are usually over-parameterized (i.e., high-dimensional), so it may take a long time to escape local traps. In contrast, the heightened landscape-dependent noise in DPSGD is anisotropic (Chaudhari & Soatto, 2018; Feng & Tu, 2020) and can drive the system to escape in the right directions.
Learning Rate Tuning Table 5 and Table 6 compare model quality (measured in either test accuracy or held-out loss) for different learning rates in the large batch size setting, for CV and ASR tasks. By using a smaller learning rate, SSGD can escape early traps, yet it consistently lags behind DPSGD in the large batch setting.
Summary By systematically introducing landscape-independent noise and reducing the learning rate, SSGD could escape early traps (e.g., saddle points), but results in much inferior models compared to DPSGD in the large batch setting." }, { "heading": "4.4 END-TO-END RUN-TIME COMPARISON AND ADVICE FOR PRACTITIONERS", "text": "Please refer to Appendix F." }, { "heading": "5 CONCLUSION", "text": "In this paper, we investigate why DPSGD outperforms SSGD in large batch training. Through detailed analysis on small-scale tasks and an extensive empirical study of a diverse set of modern DL tasks, we conclude that the landscape-dependent noise, which is strengthened in the DPSGD system, brings two benefits in the large batch setting: (1) It adaptively adjusts the effective learning rate according to the loss landscape, helping convergence. (2) It enhances search in weight space to find flat minima with better generalization. Based on our findings, we recommend that DDL practitioners consider DPSGD as an alternative when the batch size must be kept large, e.g., when a shorter run time to reach a reasonable solution is desired." } ]
2020
WHY DOES DECENTRALIZED TRAINING OUTPERFORM SYNCHRONOUS TRAINING IN THE LARGE BATCH SETTING?
SP:8ac7287ce4e46fbcfd0b4231d6afab4238e7ca2c
[ "The paper proposes a benchmarking suite to overcome the problem low generalizability with black box optimization algorithm. The benchmarking suite consists of standard academic benchmarks to real world optimization problems. It also covers several scenarios such as dynamic-static, small to large-scale, discrete to mixed-integer etc. This is a relevant contribution to the machine learning however there are several drawbacks which pushes back it's acceptance into ICLR." ]
Existing studies in black-box optimization for machine learning suffer from low generalizability, caused by a typically selective choice of problem instances used for training and testing different optimization algorithms. Among other issues, this practice promotes overfitting and poor-performing user guidelines. To address this shortcoming, we propose in this work a benchmark suite, OptimSuite, which covers a broad range of black-box optimization problems, ranging from academic benchmarks to real-world applications, from discrete over numerical to mixed-integer problems, from small to very large-scale problems, from noisy over dynamic to static problems, etc. We demonstrate the advantages of such a broad collection by deriving from it the Automated Black Box Optimizer (ABBO), a general-purpose algorithm selection wizard. Using three different types of algorithm selection techniques, ABBO achieves competitive performance on all benchmark suites. It significantly outperforms previous state of the art on some of them, including YABBOB and LSGO. ABBO relies on many high-quality base components. Its excellent performance is obtained without any task-specific parametrization. The benchmark collection, the ABBO wizard, its base solvers, as well as all experimental data are reproducible and open source in OptimSuite.
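Since the paper's sections are not included in this record, here is a purely illustrative toy "wizard" in the spirit of ABBO: it dispatches one of several base solvers from simple problem descriptors (dimension and budget). All solvers and selection rules below are our simplified stand-ins, not ABBO's actual components (which are open source in OptimSuite).

```python
import numpy as np

def random_search(f, dim, budget, rng):
    # One-shot baseline: sample `budget` points, keep the best.
    xs = rng.normal(size=(budget, dim))
    return xs[np.argmin([f(x) for x in xs])]

def one_plus_one_es(f, dim, budget, rng):
    # (1+1) evolution strategy with a success-based step-size rule.
    x = np.zeros(dim)
    fx, step = f(x), 1.0
    for _ in range(budget - 1):
        y = x + step * rng.normal(size=dim)
        fy = f(y)
        if fy < fx:
            x, fx, step = y, fy, step * 2.0      # success: grow the step
        else:
            step *= 2.0 ** -0.25                 # failure: shrink the step
    return x

def abbo_like_wizard(f, dim, budget, rng=None):
    """Dispatch a base solver from (dim, budget). The rule below is an
    illustrative guess at the flavor of such a wizard, not ABBO's logic."""
    rng = np.random.default_rng(0) if rng is None else rng
    solver = random_search if budget < 10 * dim else one_plus_one_es
    return solver(f, dim, budget, rng)

x_best = abbo_like_wizard(lambda x: float(np.sum(x ** 2)), dim=5, budget=200)
```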
[]
[ { "authors": [ "Youhei Akimoto", "Nikolaus Hansen" ], "title": "Projection-based restricted covariance matrix adaptation for high dimension", "venue": "In Proc. of Genetic and Evolutionary Computation", "year": 2016 }, { "authors": [ "Anne Auger", "Nikolaus Hansen" ], "title": "Benchmarking the (1+1)-CMA-ES on the BBOB-2009 noisy testbed", "venue": "In Proc. of Genetic and Evolutionary Computation Conference (GECCO’09, Companion Material,", "year": 2009 }, { "authors": [ "Anne Auger", "Marc Schoenauer", "Olivier Teytaud" ], "title": "Local and global order 3/2 convergence of a surrogate evolutionary algorithm", "venue": "In Proc. of Genetic and Evolutionary Computation Conference", "year": 2005 }, { "authors": [ "Nicolas Baskiotis", "Michèle Sebag" ], "title": "C4.5 competence map: a phase transition-inspired approach", "venue": "In Proc. of International Conference on Machine Learning", "year": 2004 }, { "authors": [ "Nacim Belkhir", "Johann Dréo", "Pierre Savéant", "Marc Schoenauer" ], "title": "Per instance algorithm configuration of cma-es with limited budget", "venue": "In Proc. of Genetic and Evolutionary Computation Conference", "year": 2017 }, { "authors": [ "Vincent Berthier" ], "title": "Progressive differential evolution on clustering real world problems", "venue": "In Revised Selected Papers of the 12th International Conference on Artificial Evolution - Volume", "year": 2016 }, { "authors": [ "Hans-Georg Beyer" ], "title": "The Theory of Evolution Strategies", "venue": "Natural Computing Series. Springer, Heideberg,", "year": 2001 }, { "authors": [ "Hans-Georg Beyer", "Hans-Paul Schwefel" ], "title": "Evolution Strategies - A Comprehensive Introduction", "venue": "Natural Computing,", "year": 2002 }, { "authors": [ "Sébastien Bubeck", "Rémi Munos", "Gilles Stoltz" ], "title": "Pure exploration in finitely-armed and continuous-armed bandits", "venue": "Theor. Comput. Sci.,", "year": 2011 }, { "authors": [ "Marie-Liesse Cauwet", "Olivier Teytaud" ], "title": "Population control meets Doob’s martingale theorems: The noise-free multimodal case", "venue": "In Proc. of Genetic and Evolutionary Computation (GECCO’20, Companion Material),", "year": 2020 }, { "authors": [ "Marie-Liesse Cauwet", "Jialin Liu", "Baptiste Rozière", "Olivier Teytaud" ], "title": "Algorithm portfolios for noisy optimization", "venue": "Annals of Mathematics and Artificial Intelligence,", "year": 2016 }, { "authors": [ "Marie-Liesse Cauwet", "Camille Couprie", "Julien Dehos", "Pauline Luc", "Jérémy Rapin", "Morgane Riviere", "Fabien Teytaud", "Olivier Teytaud" ], "title": "Fully parallel hyperparameter search: Reshaped spacefilling", "venue": "arXiv preprint arXiv:1910.08406. To appear in Proc. of ICML 2020,", "year": 2019 }, { "authors": [ "Alexandre Adrien Chotard", "Anne Auger", "Nikolaus Hansen" ], "title": "Cumulative Step-size Adaptation on Linear Functions: Technical Report", "venue": "Research report, Inria Saclay,", "year": 2012 }, { "authors": [ "Yann Collette", "Nikolaus Hansen", "Gilles Pujol", "Daniel Salazar", "Rodolphe Le Riche" ], "title": "On objectoriented programming of optimizers - examples in scilab", "venue": null, "year": 2010 }, { "authors": [ "Rémi Coulom. Clop" ], "title": "Confident local optimization for noisyblack-box parameter tuning", "venue": "In Advances in Computer Games,", "year": 2012 }, { "authors": [ "Duc-Cuong Dang", "Per Kristian Lehre" ], "title": "Self-adaptation of mutation rates in non-elitist populations", "venue": "In Proc. 
of Parallel Problem Solving from Nature (PPSN’16),", "year": 2016 }, { "authors": [ "Jérémie Decock", "Olivier Teytaud" ], "title": "Noisy optimization complexity under locality assumption", "venue": "In Proceedings of the Twelfth Workshop on Foundations of Genetic Algorithms XII, FOGA XII", "year": 2013 }, { "authors": [ "Benjamin Doerr", "Huu Phuoc Le", "Régis Makhmara", "Ta Duy Nguyen" ], "title": "Fast genetic algorithms", "venue": "In Proc. of Genetic and Evolutionary Computation", "year": 2017 }, { "authors": [ "Benjamin Doerr", "Carola Doerr", "Johannes Lengler" ], "title": "Self-adjusting mutation rates with provably optimal success rules", "venue": "In Proc. of Genetic and Evolutionary Computation", "year": 2019 }, { "authors": [ "David Eriksson", "Michael Pearce", "Jacob Gardner", "Ryan D Turner", "Matthias Poloczek" ], "title": "Scalable global optimization via local Bayesian optimization", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Matteo Fischetti", "Michele Monaci" ], "title": "Exploiting erraticism in search", "venue": "Operations Research,", "year": 2014 }, { "authors": [ "Marcus Gallagher", "Sobia Saleem" ], "title": "Exploratory landscape analysis of the mlda problem set", "venue": "In PPSN’18 workshop,", "year": 2018 }, { "authors": [ "Nikolaus Hansen", "Andreas Ostermeier" ], "title": "Completely derandomized self-adaptation in evolution strategies", "venue": "Evolutionary Computation,", "year": 2003 }, { "authors": [ "Nikolaus Hansen", "Anne Auger", "Steffen Finck", "Raymond Ros" ], "title": "Real-parameter black-box optimization benchmarking 2009: Experimental setup", "venue": "Technical Report RR-6828,", "year": 2009 }, { "authors": [ "Nikolaus Hansen", "Raymond Ros", "Nikolas Mauny", "Marc Schoenauer", "Anne Auger" ], "title": "Impacts of Invariance in Search: When CMA-ES and PSO Face Ill-Conditioned and Non-Separable Problems", "venue": "Applied Soft Computing,", "year": 2011 }, { "authors": [ "Verena Heidrich-Meisner", "Christian Igel" ], "title": "Hoeffding and bernstein races for selecting policies in evolutionary direct policy search", "venue": "In Proc. of International Conference on Machine Learning", "year": 2009 }, { "authors": [ "Michael Hellwig", "Hans-Georg Beyer" ], "title": "Evolution under strong noise: A self-adaptive evolution strategy can reach the lower performance bound - the pcCMSA-ES", "venue": "In Proc. of Parallel Problem Solving from Nature (PPSN’16),", "year": 2016 }, { "authors": [ "John H. Holland" ], "title": "Adaptation in Natural and Artificial Systems", "venue": null, "year": 1975 }, { "authors": [ "Max Jaderberg", "Valentin Dalibard", "Simon Osindero", "Wojciech M. Czarnecki", "Jeff Donahue", "Ali Razavi", "Oriol Vinyals", "Tim Green", "Iain Dunning", "Karen Simonyan", "Chrisantha Fernando", "Koray Kavukcuoglu" ], "title": "Population based training of neural networks, 2017", "venue": null, "year": 2017 }, { "authors": [ "Donald R. Jones", "Matthias Schonlau", "William J. Welch" ], "title": "Efficient global optimization of expensive black-box functions", "venue": "Journal of Global Optimization,", "year": 1998 }, { "authors": [ "Pascal Kerschke", "Heike Trautmann" ], "title": "Automated Algorithm Selection on Continuous Black-Box Problems By Combining Exploratory Landscape Analysis and Machine Learning", "venue": "Evolutionary Computation,", "year": 2018 }, { "authors": [ "Pascal Kerschke", "Holger H. 
Hoos", "Frank Neumann", "Heike Trautmann" ], "title": "Automated Algorithm Selection: Survey and Perspectives", "venue": "Evolutionary Computation,", "year": 2018 }, { "authors": [ "Lars Kotthoff" ], "title": "Algorithm Selection for Combinatorial Search Problems: A Survey", "venue": "AI Magazine,", "year": 2014 }, { "authors": [ "Xiaodong Li", "Ke Tang", "Mohammmad Nabi Omidvar", "Zhenyu Yang", "Kai Qin" ], "title": "Benchmark functions for the CEC’2013 special session and competition on large-scale global optimization", "venue": null, "year": 2013 }, { "authors": [ "Jialin Liu", "Antoine Moreau", "Mike Preuss", "Jeremy Rapin", "Baptiste Roziere", "Fabien Teytaud", "Olivier Teytaud" ], "title": "Versatile black-box optimization", "venue": "In Proc. of Genetic and Evolutionary Computation", "year": 2020 }, { "authors": [ "Ilya Loshchilov" ], "title": "A computationally efficient limited memory cma-es for large scale optimization", "venue": "In Proc. of Genetic and Evolutionary Computation", "year": 2014 }, { "authors": [ "Ilya Loshchilov", "T. Glasmachers" ], "title": "Black box optimization competition, 2017", "venue": "URL https: //www.ini.rub.de/PEOPLE/glasmtbl/projects/bbcomp/index.html", "year": 2017 }, { "authors": [ "Ilya Loshchilov", "Tobias Glasmachers", "Hans-Georg Beyer" ], "title": "Large scale black-box optimization by limited-memory matrix adaptation", "venue": "IEEE Transactions on Evolutionary Computation,", "year": 2018 }, { "authors": [ "Katherine Mary Malan", "Andries Petrus Engelbrecht" ], "title": "A Survey of Techniques for Characterising Fitness Landscapes and Some Possible Ways Forward", "venue": "Information Sciences (JIS),", "year": 2013 }, { "authors": [ "Horia Mania", "Aurelia Guy", "Benjamin Recht" ], "title": "Simple random search provides a competitive approach to reinforcement learning, 2018", "venue": null, "year": 2018 }, { "authors": [ "Olaf Mersmann", "Bernd Bischl", "Heike Trautmann", "Mike Preuss", "Claus Weihs", "Günter Rudolph" ], "title": "Exploratory Landscape Analysis", "venue": "In Proc. of Genetic and Evolutionary Computation Conferennce", "year": 2011 }, { "authors": [ "Laurent Meunier", "Carola Doerr", "Jérémy Rapin", "Olivier Teytaud" ], "title": "Variance reduction for better sampling in continuous domains", "venue": "In Proc. of Parallel Problem Solving from Nature (PPSN’20),", "year": 2020 }, { "authors": [ "D. Molina", "M. Lozano", "F. Herrera" ], "title": "Memetic algorithm with local search chaining for continuous optimization problems: A scalability test", "venue": "Ninth International Conference on Intelligent Systems Design and Applications,", "year": 2009 }, { "authors": [ "Mario Andrés Muñoz Acosta", "Yuan Sun", "Michael Kirley", "Saman K. Halgamuge" ], "title": "Algorithm Selection for Black-Box Continuous Optimization Problems: A Survey on Methods and Challenges", "venue": "Information Sciences (JIS),", "year": 2015 }, { "authors": [ "John A. 
Nelder", "Roger Mead" ], "title": "A simplex method for function minimization", "venue": "Computer Journal,", "year": 1965 }, { "authors": [ "Fernando Nogueira" ], "title": "Bayesian Optimization: Open source constrained global optimization tool for Python, 2014", "venue": "URL https://github.com/fmfn/BayesianOptimization", "year": 2014 }, { "authors": [ "Narayana Prasad Padhy" ], "title": "Unit commitment-a bibliographical survey", "venue": "IEEE Transactions on Power Systems,", "year": 2004 }, { "authors": [ "Fabian Pedregosa", "Gaël Varoquaux", "Alexandre Gramfort", "Vincent Michel", "Bertrand Thirion", "Olivier Grisel", "Mathieu Blondel", "Peter Prettenhofer", "Ron Weiss", "Vincent Dubourg", "Jake Vanderplas", "Alexandre Passos", "David Cournapeau", "Matthieu Brucher", "Matthieu Perrot", "Édouard Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Erik Pitzer", "Michael Affenzeller" ], "title": "A Comprehensive Survey on Fitness Landscape Analysis", "venue": "Recent Advances in Intelligent Engineering Systems, Studies in Computational Intelligence, pp", "year": 2012 }, { "authors": [ "Michael J.D. Powell" ], "title": "An efficient method for finding the minimum of a function of several variables without calculating derivatives", "venue": "The Computer Journal,", "year": 1964 }, { "authors": [ "Michael J.D. Powell" ], "title": "A Direct Search Optimization Method That Models the Objective and Constraint Functions by Linear Interpolation, pp. 51–67", "venue": null, "year": 1994 }, { "authors": [ "Nicholas J. Radcliffe", "Patrick D. Surry" ], "title": "Formal memetic algorithms", "venue": "Evolutionary Computing: AISB Workshop,", "year": 1994 }, { "authors": [ "Jeremy Rapin", "Olivier Teytaud" ], "title": "Nevergrad - A gradient-free optimization platform", "venue": "https: //GitHub.com/FacebookResearch/Nevergrad,", "year": 2018 }, { "authors": [ "Jeremy Rapin", "Olivier Teytaud" ], "title": "Dashboard of results for Nevergrad platform. https://dl", "venue": "fbaipublicfiles.com/nevergrad/allxps/list.html,", "year": 2020 }, { "authors": [ "Jérémy Rapin", "Pauline Dorval", "Jules Pondard", "Nicolas Vasilache", "Marie-Liesse Cauwet", "Camille Couprie", "Olivier Teytaud" ], "title": "Openly revisiting derivative-free optimization", "venue": "In Proc. of Genetic and Evolutionary Computation", "year": 2019 }, { "authors": [ "Ingo Rechenberg" ], "title": "Evolutionstrategie: Optimierung Technischer Systeme nach Prinzipien des Biologischen Evolution", "venue": "Fromman-Holzboog Verlag,", "year": 1973 }, { "authors": [ "Benjamin Recht", "Rebecca Roelofs", "Ludwig Schmidt", "Vaishaal Shankar" ], "title": "Do CIFAR-10 classifiers generalize to CIFAR-10", "venue": null, "year": 2018 }, { "authors": [ "John Rischard Rice" ], "title": "The Algorithm Selection Problem", "venue": "Advances in Computers,", "year": 1976 }, { "authors": [ "Hartley Rogers" ], "title": "Theory of Recursive Functions and Effective Computability", "venue": null, "year": 1987 }, { "authors": [ "Raymond Ros", "Nikolaus Hansen" ], "title": "A simple modification in CMA-ES achieving linear time and space complexity", "venue": "In Proc. 
of Parallel Problem Solving from Nature", "year": 2008 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Ralf Salomon" ], "title": "Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions. a survey of some theoretical and practical aspects of genetic", "venue": "algorithms. BioSystems,", "year": 1996 }, { "authors": [ "Ozan Sener", "Vladlen Koltun" ], "title": "Learning to guide random search", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Kate Amanda Smith-Miles" ], "title": "Cross-Disciplinary Perspectives on Meta-Learning for Algorithm Selection", "venue": "ACM Computing Surveys (CSUR),", "year": 2009 }, { "authors": [ "Jasper Snoek", "Hugo Larochelle", "Ryan P. Adams" ], "title": "Practical bayesian optimization of machine learning algorithms", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2012 }, { "authors": [ "Rainer Storn", "Kenneth Price" ], "title": "Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces", "venue": "J. of Global Optimization,", "year": 1997 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "MuJoCo: A physics engine for model-based control", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Mauro Vallati", "Frank Hutter", "Lukás Chrpa", "Thomas Leo McCluskey" ], "title": "On the effective configuration of planning domain models", "venue": "In Proc. of International Joint Conference on Artificial Intelligence", "year": 2015 }, { "authors": [ "Konstantinos Varelas", "Anne Auger", "Dimo Brockhoff", "Nikolaus Hansen", "Ouassim Ait ElHara", "Yann Semet", "Rami Kassab", "Frédéric Barbaresco" ], "title": "A comparative study of large-scale variants of CMA-ES", "venue": "In Proc. of Parallel Problem Solving from Nature (PPSN’18),", "year": 2018 }, { "authors": [ "Linnan Wang", "Rodrigo Fonseca", "Yuandong Tian" ], "title": "Learning search space partition for black-box optimization using Monte Carlo Tree Search. arXiv:2007.00708", "venue": "To appear in Proc. of NeurIPS 2020,", "year": 2020 }, { "authors": [ "Lin Xu", "Frank Hutter", "Holger H. Hoos", "Kevin Leyton-Brown" ], "title": "SATzilla: Portfolio-based algorithm selection for SAT", "venue": "J. Artif. Int. Res.,", "year": 2008 }, { "authors": [ "Li" ], "title": "Additional problems (2): PowerSystems (1806 to 9646 neural decision variables) and LSGO (mix of partially separable, overlapping, shifted cases", "venue": "PowerSystems Lsgo", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION: STATE OF THE ART", "text": "Many real-world optimization challenges are black-box problems; i.e., instead of having an explicit problem formulation, they can only be accessed through the evaluation of solution candidates. These evaluations often require simulations or even physical experiments. Black-box optimization methods are particularly widespread in machine learning (Salimans et al., 2016; Wang et al., 2020), to the point that it is considered a key research area of artificial intelligence. Black-box optimization algorithms are typically easy to implement and easy to adjust to different problem types. To achieve peak performance, however, proper algorithm selection and configuration are key, since black-box optimization algorithms have complementary strengths and weaknesses (Rice, 1976; Smith-Miles, 2009; Kotthoff, 2014; Bischl et al., 2016; Kerschke & Trautmann, 2018; Kerschke et al., 2018). But whereas automated algorithm selection has become standard in SAT solving (Xu et al., 2008) and AI planning (Vallati et al., 2015), a manual selection and configuration of the algorithms is still predominant in the broader black-box optimization context. To reduce the bias inherent to such manual choices, and to support the automation of algorithm selection and configuration, sound comparisons of the different black-box optimization approaches are needed. Existing benchmarking suites, however, are rather selective in the problems they cover. This leads to specialized algorithm frameworks whose performance suffer from poor generalizability. Addressing this flaw in black-box optimization, we present a unified benchmark collection which covers a previously unseen breadth of problem instances. We use this collection to develop a high-performing algorithm selection wizard, ABBO. ABBO uses high-level problem characteristics to select one or several algorithms, which are run for the allocated budget of function evaluations. Originally derived from a subset of the available benchmark collection, in particular YABBOB, the excellent performance of ABBO generalizes across almost all settings of our broad benchmark suite. Implemented as a fork of Nevergrad (Rapin & Teytaud, 2018), the benchmark collection, the ABBO wizard, the base solvers, and all performance data are open source. The algorithms are automatically rerun at certain time intervals and all\nAlgorithm 1 High-level overview of ABBO. Selection rules are followed in this order, first match applied. d = dimension, budget b = number of evaluations. Details in (Anonymous, 2020).\nCase Choice Discrete decision variables only Noisy optimization with categorical variables Genetic algorithm mixed with bandits (Heidrich-Meisner & Igel, 2009; Liu et al., 2020). alphabets of size < 5, sequential evaluations (1 + 1)-Evolutionary Alg. with linearly decreasing stepsize alphabets of size < 5, parallel case Adaptive (1 + 1)-Evolutionary Alg. (Doerr et al., 2019). Other discrete cases with finite alphabets Convert to the continuous case using SoftMax as in (Liu et al., 2020) and apply CMandAS2 (Rapin et al., 2019) Presence of infinite discrete domains FastGA (Doerr et al., 2017) Numerical decision variables only, evaluations are subject to noise d > 100 progressive optimization as in (Berthier, 2016). 
In summary, our contributions are as follows. (1) OptimSuite Benchmark Collection: OptimSuite combines several contributions that recently led to improved reliability and generalizability of black-box optimization benchmarking, among them LSGO (Li et al., 2013), YABBOB (Hansen et al., 2009; Liu et al., 2020; Anonymous, 2020), Pyomo (Hart et al., 2017; Anonymous, 2020), MLDA (Gallagher & Saleem, 2018), MuJoCo (Todorov et al., 2012; Mania et al., 2018), and others (novelty discussed in Section 2). (2) Algorithm Selection Wizard ABBO: Our algorithm selection technique, ABBO, can be seen as an extension of the Shiwa wizard presented in (Liu et al., 2020). It uses three types of selection techniques: passive algorithm selection (choosing an algorithm as a function of a priori available features (Baskiotis & Sebag, 2004; Liu et al., 2020)), active algorithm selection (a bet-and-run strategy which runs several algorithms for some time and stops all but the strongest (Mersmann et al., 2011; Pitzer & Affenzeller, 2012; Fischetti & Monaci, 2014; Malan & Engelbrecht, 2013; Muñoz Acosta et al., 2015; Cauwet et al., 2016; Kerschke et al., 2018)), and chaining (running several algorithms in turn, in an a priori defined order (Molina et al., 2009)). Our wizard combines, among others, algorithms suggested in (Virtanen et al., 2019; Hansen & Ostermeier, 2003; Storn & Price, 1997; Powell, 1964; 1994; Liu et al., 2020; Hellwig & Beyer, 2016; Artelys, 2015; Doerr et al., 2017; 2019; Dang & Lehre, 2016). Another core contribution of our work is a sound comparison of our wizard to Shiwa, and to the long list of algorithms available in Nevergrad." }, { "heading": "2 SOUND BLACK-BOX OPTIMIZATION BENCHMARKING", "text": "We summarize desirable features and common shortcomings of black-box optimization benchmarks and discuss how OptimSuite addresses these.
Generalization. The most obvious issue in terms of generalization is the statistical one: we need sufficiently many experiments for conducting valid statistical tests and for evaluating the robustness of algorithms’ performance. This, however, is probably not the main issue. A biased benchmark, excluding large parts of the industrial needs, leads to biased conclusions, no matter how many experiments we perform.
Inspired by (Recht et al., 2018) in the case of image classification, and similar to the spirit of cross-validation for supervised learning, we use a much broader collection of benchmark problems for evaluating algorithms in an unbiased manner. Another subtle issue in terms of generalization is the case of instance-based choices of (hyper-)parameters: an experimenter modifying the algorithm or its parameters specifically for each instance can easily improve results by a vast margin. In this paper, we consider that only the following problem properties are known in advance (and can hence be used for algorithm selection and configuration): the dimension of the domain, the type and range of each variable, their order, the presence of noise (but not its intensity), the budget, and the degree of parallelism (i.e., the number of solution candidates that can be evaluated simultaneously). To mitigate the common risk of over-tuning, we evaluate algorithms on a broad range of problems, from academic benchmark problems to real-world applications. Each algorithm runs on all benchmarks without any change or task-specific tuning.
Use the ask, tell, and recommend pattern. Formalizing the concept of numerical optimization is typically done through the formalism of oracles or parallel oracles (Rogers, 1987). A recent trend is the adoption of the ask-and-tell format (Collette et al., 2010). The bandit literature pointed out that we should distinguish ask, tell, and recommend: the way we choose a point for gathering more information is not necessarily close to the way we choose an approximation of the optimum (Bubeck et al., 2011; Coulom, 2012b; Decock & Teytaud, 2013). We adopt the following framework: given an objective function f and an optimizer, for i ∈ {1, . . . , T}, do x ← optimizer.ask and optimizer.tell(x, f(x)). Then, evaluate the performance with f(optimizer.recommend). The requirement of a recommend method distinct from ask is critical in noisy optimization. A debate pointed out some shortcomings in the noisy counterpart of BBOB (Auger & Hansen, 2009), which was assuming that ask = recommend: (Beyer, 2012a;b; Coulom, 2012a) have shown that in the noisy case, this difference is particularly critical, and a framework should allow algorithms to “recommend” differently than they “ask”. A related issue is that a run with budget T is not necessarily close to the truncation of a run with budget 10T.
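To make this pattern concrete, a minimal sketch of the ask/tell/recommend loop on top of Nevergrad (the optimizer class chosen here is one of many; method names follow the library's public interface):
import nevergrad as ng
import numpy as np

def f(x):
    return float(np.sum((x - 0.5) ** 2))

opt = ng.optimizers.OnePlusOne(parametrization=2, budget=100)
for _ in range(opt.budget):
    cand = opt.ask()               # where to gather more information
    opt.tell(cand, f(cand.value))  # report the observed objective value
best = opt.provide_recommendation()  # may differ from the last ask, e.g. under noise
print(best.value)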
Translation-invariance. Zero frequently plays a special role in optimization. For example, complexity penalizations often “push” towards zero. In control, numbers far from zero are often more likely to lead to bang-bang solutions (and therefore extract zero signal, leading to a needle-in-the-haystack optimization situation), in particular with neural networks. In one-shot optimization, (Cauwet et al., 2019; Meunier et al., 2020) have shown how much focusing on the center is a good idea, in particular in high dimension. Our experiments in control confirm that the scale of the optimization search is critical, and explains the misleading results observed in some optimization papers (Section 4.2). In artificial experiments, several classical test functions have their optimum at zero. To avoid misleading conclusions, it is now a standard procedure, advocated in particular in (Hansen et al., 2009), to randomly translate the objective functions. This is unfortunately not always applied.
Rotation and symmetrization. Some optimization methods may perform well on separable objective functions but degrade significantly in optimizing non-separable functions. If the dimension of a separable objective function is d, these methods can reduce the objective function into d one-dimensional optimization processes (Salomon, 1996). Therefore, Hansen et al. (2009; 2011) have insisted that objective functions should be rotated to generate more difficult non-separable objective functions. However, Bousquet et al. (2017) pointed out the importance of dummy variables, which are not invariant per rotation; and (Holland, 1975) and more generally the genetic algorithms literature insist that rotation does not always make sense – we lose some properties of a real-world objective function, and in some real-world scenarios rotating would, e.g., mix temperature, distance and electric intensity. Permuting the order of variables is also risky, as their order is sometimes critical – k-point crossovers à la Holland (Holland, 1975) typically assume some order of variables, which would be broken. Also, users sometimes rank variables with the most important first – and some optimization methods do take care of this (Cauwet et al., 2019). In OptimSuite, we do include rotations, but include both cases, rotated or not. For composite functions which use various objective functions on various subsets of variables, we consider the case with rotations – without excluding the non-rotated case. An extension of symmetrization that we will integrate later in ABBO, which makes sense for replicating an objective function without exact identity, consists in symmetrizing some variables: for example, if the i-th variable has range [a, b], we can replace x_i by a + b − x_i. Applying this on various subsets of variables leads to 2^d symmetries of an objective function, if the dimension is d. This variation can reduce the bias toward symmetric search operations (Li et al., 2013).
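As an illustration, such a symmetrization can be implemented as a lightweight wrapper around any objective function (a sketch written by us; the bounds a, b and the flipped subset are free choices):
import numpy as np

def symmetrize(f, a, b, flip_mask):
    # Replace x_i by a_i + b_i - x_i on the coordinates selected by flip_mask.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    flip = np.asarray(flip_mask, dtype=bool)
    def g(x):
        x = np.array(x, dtype=float)
        x[flip] = a[flip] + b[flip] - x[flip]
        return f(x)
    return g

# With d coordinates, the 2**d possible flip masks yield 2**d variants of f.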
Benchmarking in OptimSuite. We summarize in Table 1 some existing benchmark collections and their (desirable) properties. We inherit various advantages from Nevergrad, namely the automatic rerun of experiments and reproducibility in one line. Our fork includes PBT (a small-scale version of Population-Based Training (Jaderberg et al., 2017)), Pyomo (Hart et al., 2017), Photonics (problems related to optical properties and nanometric materials), YABBOB and variants, LSGO (Li et al., 2013), MLDA (Gallagher & Saleem, 2018), PowerSystems, FastGames, 007, Rocket, SimpleTSP, Realworld (Liu et al., 2020), MuJoCo (Todorov et al., 2012) and others, including a (currently small) benchmark of hyperparameters of Scikit-Learn (Pedregosa et al., 2011) and Keras-tuning, all of those being visible for review at the above-mentioned anonymized URL (underlined means: the benchmark is either new, or, in the case of PowerSystems or SimpleTSP, significantly modified compared to previous works, or, in the case of LSGO or MuJoCo, included for the first time inside Nevergrad). For MuJoCo, we believe that interfacing with Nevergrad is particularly useful, to ensure fair comparisons, which rely very much on the precise setup of MuJoCo. We note that, at present, we do not reproduce the extreme black-box nature of Loshchilov & Glasmachers (2017). Still, by integrating such a wide range of benchmarks in a single open source framework, which, in addition, is periodically re-run, we believe that Nevergrad/OptimSuite provides a significant contribution to benchmarking, and this both for the optimization and the machine learning community, where most of the benchmark suites originate from." }, { "heading": "3 A NEW ALGORITHM SELECTION WIZARD: ABBO", "text": "Black-box optimization is sometimes dominated by evolutionary computation. Evolution strategies (Beyer & Schwefel, 2002; Beyer, 2001; Rechenberg, 1973) have been particularly dominant in the continuous case, in experimental comparisons based on the Black-Box Optimization Benchmark BBOB (Hansen et al., 2009) or variants thereof. Parallelization advantages (Salimans et al., 2016) are particularly appreciated in the machine learning context. However, Differential Evolution (Storn & Price, 1997) is a key component of most winning algorithms in competitions based on variants of Large Scale Global Optimization (LSGO (Li et al., 2013)), suggesting a significant difference between these benchmarks. In particular, LSGO is more based on correctly identifying a partial decomposition and scaling to ≥ 1000 variables, whereas BBOB focuses (mostly, except (Varelas et al., 2018)) on ≤ 40 variables. Mathematical programming techniques (Powell, 1964; 1994; Nelder & Mead, 1965; Artelys, 2015) are rarely used in the evolutionary computation world, but they have won competitions (Artelys, 2015) and significantly improved evolution strategies through memetic methods (Radcliffe & Surry, 1994). Algorithm selection was applied to continuous black-box optimization and pushed in Nevergrad (Liu et al., 2020): their optimization algorithm combines many optimization methods and outperforms each of them when averaged over diverse test functions. Closer to machine learning, efficient global optimization (Jones et al., 1998) is widely used, although it suffers from the curse of dimensionality more than other methods (Snoek et al., 2012); (Wang et al., 2020) presented a state-of-the-art algorithm in black-box optimization on MuJoCo, i.e., for the control of various realistic robots (Todorov et al., 2012). We propose ABBO, which extends (Liu et al., 2020) by the following features: (1) Better use of chaining (Molina et al., 2009) and more intensive use of mathematical programming techniques for the last part of the optimization run, i.e., the local convergence, thanks to Meta-Models (in the parallel case) and more time spent in Powell’s method (in the sequential case). This explains the improvement visible in Section 4.1. (2) Better performance in discrete optimization, using additional codes recently introduced in Nevergrad, in particular adaptive step-sizes. (3) Better segmentation of the different cases of continuous optimization. We still entirely rely on the base algorithms as available in Nevergrad; that is, we did not modify the tuning of any method. We acknowledge that our method only works thanks to the solid base components available in Nevergrad, which are based on contributions from various research teams. The obtained algorithm selection wizard, ABBO, is presented in Algorithm 1. The performance of ABBO is summarized in Table 2 and a detailed dashboard is available at https://dl.fbaipublicfiles.com/nevergrad/allxps/list.html.
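The active (bet-and-run) component can be sketched in a few lines, independently of any particular library (the solver objects are assumed to expose the ask/tell/recommend interface of Section 2; evaluation accounting is simplified for brevity):
def bet_and_run(f, solvers, warmup, budget):
    # Phase 1: give every candidate solver the same warm-up budget.
    for s in solvers:
        for _ in range(warmup):
            x = s.ask()
            s.tell(x, f(x))
    # Keep the solver whose current recommendation looks best.
    best = min(solvers, key=lambda s: f(s.recommend()))
    # Phase 2: spend the remaining budget on the winner only.
    for _ in range(budget - warmup * len(solvers)):
        x = best.ask()
        best.tell(x, f(x))
    return best.recommend()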
}, { "heading": "4 EXPERIMENTAL RESULTS", "text": "When presenting results on a single benchmark function, we present the usual average objective function value for different budget values. When a collection comprises multiple benchmark problems, we present our aggregated experimental results with two distinct types of plots: (1) Average normalized objective value for each budget, averaged over all problems. The normalized objective value is the objective value linearly rescaled to [0, 1]. (2) Heatmaps, showing for each pair (x, y) of optimization algorithms the frequency at which Algorithm x outperforms Algorithm y. Algorithms are ranked by average winning frequency. We use red arrows to highlight ABBO." }, { "heading": "4.1 BENCHMARKS IN OPTIMSUITE USED FOR DESIGNING AND VALIDATING ABBO", "text": "YABBOB (Yet Another Black-Box Optimization Benchmark (Rapin et al., 2019)), is an adaptation of BBOB (Hansen et al., 2009), with extensions such as parallelism and noise management. It contains many variants, including noise, parallelism, high-dimension (BBOB was limited to dimension < 50). Several extensions, for the high-dimensional, the parallel or the big budget case, have been developed: we present results in Figures 1 and 4. The high-dimensional one is inspired by (Li et al., 2013), the noisy one is related to the noisy counterpart of BBOB but correctly implements the difference between ask and recommend as discussed in Section 2. The parallel one generalizes YABBOB to settings in which several evaluations can be executed in parallel. Results on PARAMULTIMODAL are presented in Figure 6 (left). In addition, ABBO was run on ILLCONDI & ILLCONDIPARA (ill conditionned functions), HDMULTIMODAL (a multimodal case focusing on high-dimension), NOISY & RANKNOISY (two noisy continuous testbeds), YAWIDEBBOB (a broad range of functions including discrete cases and cases with constraints).\nAllDEs and Hdbo are benchmark collections specifically designed to compare DE variants (AllDEs) and high-dimensional Bayesian Optimization (Hdbo), respectively (Rapin & Teytaud, 2018). These benchmark functions are similar to the ones used in YABBOB. Many variants of DE (resp. BO) are considered. Results are presented in Figure 5. They show that the performance of ABBO, relatively to DE or BO, is consistent over a wide range of parametrizations of DE or BO, at least in their most classical variants. All these variants are publicly visible in Nevergrad and/or in our anonymized branch.\nRealworld: A test of ABBO is performed on the Realworld optimization benchmark suite proposed in (Rapin & Teytaud, 2018). This suite includes testbeds from MLDA (Gallagher & Saleem, 2018) and from (Liu et al., 2020). Results for this suite, presented in Figure 6, confirm that ABBO performs well also on benchmarks that were not explicitly used for its design - however, this benchmark was used for designing Shiwa, which was the basis of our ABBO. A rigorous cross-validation, on benchmarks totally independent from the design of Shiwa, is provided in the next sections." }, { "heading": "4.2 NEW BENCHMARKS IN OPTIMSUITE USED ONLY FOR EVALUATING ABBO", "text": "Pyomo is a modeling language in Python for optimization problems (Hart et al., 2017). It is popular and has been adopted in formulating large models for complex and real-world systems, including energy systems and network resource systems. 
We implemented an interface to Pyomo for Nevergrad and enriched our benchmark problems (Anonymous, 2020), which include discrete variables and constraints. Experimental results are shown in Figure 2. They show that ABBO also performs decently in discrete settings and in constrained cases.
Additional new artificial and real-world functions: LSGO (large-scale global optimization) combines various functions into an aggregated difficult testbed including composite, highly multimodal functions. Correctly decomposing the problem is essential. Various implementations of LSGO exist; in particular, we believe that some of them do not match exactly. Our implementation follows (Li et al., 2013), which introduces functions with subcomponents (i.e., groups of decision variables) having non-uniform sizes and non-uniform, even conflicting, contributions to the objective function. Furthermore, we present here experimental results on SequentialFastgames from the Nevergrad benchmarks, and three newly introduced benchmarks, namely Rocket, SimpleTSP (a set of traveling salesman problems), and power systems (unit commitment problems (Padhy, 2004)). Experimental results are presented in Figures 2, 7, and 8. They show that ABBO performs well on new benchmarks, never used for its design nor for that of the low-level heuristics used inside ABBO.
MuJoCo. Many articles (Sener & Koltun, 2020; Wang et al., 2020) studied the MuJoCo testbeds (Todorov et al., 2012) in the black-box setting. MuJoCo tasks correspond to control problems. As defined in (Wang et al., 2020; Mania et al., 2018), the objective is to learn a linear mapping from states to actions. It turned out that the scaling is critical (Mania et al., 2018): for reasons mentioned in Section 2, solutions are close to 0. We chose to scale all the variables of the problem by the power of 0.1 closest to 1.2/d, for all methods run in Figure 3. We remark that ABBO and Shiwa perform well, including comparatively to gradient-based methods in some cases, while having the ability to work when the gradient is not available. Even when the gradient is available, black-box methods do not require its computation, which saves time.
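Under our reading of this scaling rule, the factor is the power of 0.1 whose logarithm is closest to that of 1.2/d; a tiny helper (ours, for illustration):
import math

def mujoco_scale(d):
    # sigma = 0.1 ** k with integer k minimizing |log10(sigma) - log10(1.2 / d)|
    k = round(-math.log10(1.2 / d))
    return 0.1 ** k

# e.g., d = 100 gives 1.2 / d = 0.012, hence sigma = 0.01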
We use the same experimental setup as Wang et al. (2020) (linear policy, offline whitening of states). We get results better than LA-MCTS, in a setting that does not use any expensive surrogate model (Table 3). Our runs with CMA-ES and Shiwa are better than those in (Wang et al., 2020). We acknowledge that LMRS (Sener & Koltun, 2020) outperforms our method on all MuJoCo tasks, using a deep network as a surrogate model; however, we point out that a part of their code is not open-sourced, making the experiments not reproducible. In addition, when rerunning their repository without the non-open-sourced part, it solved Half-Cheetah within budget 56k, which is larger than ours. For Humanoid, the target was reached at 768k, which is again larger than our budget. Results from ABBO are comparable to, and usually better than (for the 3 hardest problems), results from LA-MCTS, while ABBO is entirely reproducible. In addition, it runs the same method for all benchmarks and is not optimized for each task specifically as in (Sener & Koltun, 2020; Wang et al., 2020). In contrast to ABBO, (Wang et al., 2020) uses different underlying regression methods and sampling methods depending on the MuJoCo task, and it is not run on other benchmarks except for some of the HDMULTIMODAL ones. On the latter, ABBO's performance is significantly better for Ackley and Rosenbrock in dimension 100 (expected results around 100 and 10^-8 after 10k iterations for Rosenbrock and Ackley respectively for ABBO, vs 0.5 and 500 in (Wang et al., 2020)). From the curves in (Wang et al., 2020) and in the present work, we expect LA-MCTS to perform well with an adapted choice of parametrization and with a low budget, for tasks related to MuJoCo, whereas ABBO is adapted for wide ranges of tasks and budgets." }, { "heading": "5 CONCLUSIONS", "text": "This paper proposes OptimSuite, a very broad benchmark suite composed of real-world and artificial benchmark problems. OptimSuite is implemented as a fork of Nevergrad (Rapin & Teytaud, 2018), from which it inherits a strong reproducibility: our (Python) code is open source (Anonymous, 2020), tests are rerun periodically, and up-to-date results are available in the public dashboard (Rapin & Teytaud, 2020). A whole experiment can be done as a one-liner. OptimSuite fixes several issues of existing benchmarking environments. The suite subsumes MuJoCo, Pyomo, LSGO, YABBOB, MLDA, and several new real-world problems. We also propose ABBO, an improved algorithm selection wizard. Despite its simplicity, ABBO shows very promising performance across the whole benchmark suite, often outperforming the previous state-of-the-art, problem-specific solvers: (a) by solving 5 of the 6 cases without any task-specific hyperparameter tuning, ABBO outperforms LA-MCTS, which was specialized for each single task; (b) ABBO outperforms Shiwa on YABBOB and its variants, which is the benchmark suite used to design Shiwa in the first place; (c) ABBO is also among the best methods on LSGO and almost all other benchmarks.
Further work. OptimSuite subsumes most of the desirable features outlined in Section 2, with one notable exception, the true black-box setting, which other benchmark environments have implemented through a client-server interaction (Loshchilov & Glasmachers, 2017). A possible combination between our platform and such a challenge, using the dashboard to publish the results, could be useful, to offer a meaningful way for cross-validation. Further improving ABBO is on the roadmap. In particular, we are experimenting with the automation of the still hand-crafted selection rules. Note, though, that it is important to us to maintain a high level of interpretability, which we consider key for a wide acceptance of the wizard. Another avenue for future work is a proper configuration of the low-level heuristics subsumed by ABBO. At present, some of them are merely textbook implementations, and significant room for improvement can therefore be expected. Newer variants (Loshchilov, 2014; Akimoto & Hansen, 2016; Loshchilov et al., 2018) of CMA-ES, of LMRS (Sener & Koltun, 2020), recent Bayesian optimization libraries (e.g., Eriksson et al. (2019)), as well as per-instance algorithm configuration such as Belkhir et al. (2017), are not unlikely to result in important improvements for various benchmarks. We also plan on extending OptimSuite further, both through interfacing existing benchmark collections/problems, and by designing new benchmark problems ourselves." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank the anonymous reviewers for valuable suggestions that helped us improve the clarity and the presentation of our work."
}, { "heading": "A DETAILS ABOUT PROPERTIES OF BENCHMARKS", "text": "We specify the properties mentioned in Table 1.\n• Large scale: includes dimension ≥ 1000. • Translations: in unbounded continuous domains, a standard deviation σ has to be pro-\nvided, for example for sampling the first and second iterates of the optimization algorithm. Given a standard deviation σ, we consider that there is translation when optimas are randomly translated by a N (0, σ2) shift. Only interesting for artificial cases.\n• Far-optimum: optima are translated far from the optimum, with standard deviation at least N (0, 25× σ2).\n• Symmetrizations / rotations (here assuming an optimum, up to translation, in 0). Rotation: with a random rotation matrix M , the function x 7→ f(x) is replaced by x 7→ f(M(x)). Symmetrization: x 7→ f(x) can be replaced by x 7→ f(S(x)), with S a diagonal matrix with each diagonal coefficient equal to 1 or −1 with probability 50%. We do not request all benchmarks to be rotated: it might be preferable to have both cases considered.\n• One-line reproducibility: Where reproducibility requires significant coding, it is unlikely to be of great use outside of a very small set of specialists. One-line reproducibility is given when the effort to reproduce an entire experiment does not require more than the execution of a single line. We consider this to be an important feature.\n• Periodic automated dashboard: are algorithms re-run periodically on new problem instances? Some platforms do not collect the algorithms, and reproducibility is hence not given. An automated dashboard is convenient also because new problems can be added “on the go” without causing problems, as all algorithms will be executed on all these new problem instances. This feature addresses what we consider to be one of the biggest bottlenecks in the current benchmarking environments.\n• Complex or real-world: Real-world is self-explanatory; complex means a benchmark involving a complex simulator, even if it is not real world. MuJoCo is in the “complex” category.\n• Multimodal: whether the suite contains problems for which there are local optima which are not global optima.\n• Open sourced / no license: Are algorithms and benchmarks available under an open source agreement. BBOB does not collect algorithms, MuJoCo requires a license, LSGO and BBOB are not realworld, Mujoco requires a license, BBComp is no longer maintained, Nevergrad before OptimSuite did not include complex ML problems without license issue before our work: some people have already applied Nevergrad to MuJoCo, but with our work MuJoCo becomes part of Nevergrad so that people can upload their code in Nevergrad and it will be run on all benchmarks, including MuJoCo.\n• Ask/tell/recommend correctly implemented (Collette et al., 2010; Bubeck et al., 2011): The ask and tell idea (developped in Collette et al. (2010)) is that an optimization algorithm should not come under the format Optimizer.minimize(objective− function) because there are many settings in which this is not possible: you might think of agents optimizing concurrently their own part of an objective function, and problems of reentrance, or asynchronicity. All settings can be recovered from an ask/tell optimization method. This becomes widely used. However, as well known in the bandit literature (you can think of pure exploration bandits (Bubeck et al., 2011)), it is necessary to distinguish ask, tell and recommend: the “recommend” method is the one which proposes an approximation of the optimum. 
Let us develop an example explaining why this matters: the domain is {1, 2, 3, 4}, and we have a budget of 20 in a noisy case. NoisyBBOB assumes that the optimum is found when “ask” returns the optimum arm: then, the status remains “found” even if the algorithm has no idea where is the optimum and never comes back nearby. So an algorithm which just iteratively “asks” 1, 2, 3, 4, 1, 2, 3, 4, . . . reaches the optimum in at most 4 iterations. This does not mean anything in the noisy case, as the challenge is to figure out which of the four numbers is the optimum. With a proper ask/tell/recommend, the optimizer chooses an arm at the end of the budget. A simple regret is then computed.\nActually this also matters in the noise-free case, but the issue is much more critical in noisy optimization. The case of continuous noisy optimization also has counter-examples and all the best noisy optimization algorithms use ask/tell/recommend. We add the reference to the paper above. • Human excluded / client-server: The problem instances are truly black-box. Algorithms\ncan only suggest points and observe function values, but neither the algorithm nor its designer have access to any other information about the problem apart from the number of variables, their type, ranges, and order. It is impossible to repeat experiments for tuning hyperparameters without “paying” the budget of the HP tuning. This is something we could not do, as everything is public and open sourced: however, we believe that we mitigate this issue by considering a large number of benchmarks." }, { "heading": "B ADDITIONAL FIGURES", "text": "PARAMULTIMODAL" } ]
2020
BLACK-BOX OPTIMIZATION REVISITED: IMPROVING ALGORITHM SELECTION WIZARDS
SP:7f369156e476623039e657c05ddc65aabdd923a8
[ "This paper proposes a VAE based model for learning latent causal factors given data from multiple domains. Similar to [Kingma and Hyv¨arinen, 2020], it utilizes additional labels as supervision signals and learns the model using a Bayesian optimization approach given a fixed hypothetical causal structure. The identifiability is obtained by assuming the casual mechanism to be domain invariant, which is partially supported by some empirical experiments." ]
Current supervised learning can learn spurious correlations during the data-fitting process, imposing issues regarding interpretability, out-of-distribution (OOD) generalization, and robustness. To avoid spurious correlations, we propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction. Specifically, we introduce latent variables that are separated into (a) output-causative factors and (b) others that are spuriously correlated to the output via confounders, to model the underlying causal factors. We further assume the generating mechanisms from the latent space to the observed data to be causally invariant. We give an identifiability claim for such invariance, particularly the disentanglement of output-causative factors from others, as a theoretical guarantee for precise inference and for avoiding spurious correlation. We propose a Variational-Bayesian-based method for estimation and optimize over the latent space for prediction. The utility of our approach is verified by improved interpretability, predictive power in various OOD scenarios (including healthcare), and robustness in security.
[]
[ { "authors": [ "M. Arjovsky", "L. Bottou", "I. Gulrajani", "D. Lopez-Paz" ], "title": "Invariant risk minimization", "venue": null, "year": 2019 }, { "authors": [ "A.R. Barron", "Sheu", "C.-H" ], "title": "Approximation of density functions by sequences of exponential families", "venue": null, "year": 1991 }, { "authors": [ "A. Bellot", "M. van der Schaar" ], "title": "Generalization and invariances in the presence of unobserved confounding", "venue": null, "year": 2020 }, { "authors": [ "S. Ben-David", "J. Blitzer", "K. Crammer", "F. Pereira" ], "title": "Analysis of representations for domain adaptation, in ‘Advances in neural information processing", "venue": null, "year": 2007 }, { "authors": [ "Y. Bengio" ], "title": "The consciousness prior", "venue": null, "year": 2017 }, { "authors": [ "Y. Bengio", "A. Courville", "P. Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "year": 2013 }, { "authors": [ "I. Biederman" ], "title": "Recognition-by-components: a theory of human image understanding.", "venue": "Psychological review", "year": 1987 }, { "authors": [ "P. Bühlmann" ], "title": "Invariance, causality and robustness", "venue": null, "year": 2018 }, { "authors": [ "M. Davies" ], "title": "Identifiability issues in noisy ica", "venue": "IEEE Signal processing letters", "year": 2004 }, { "authors": [ "C Döbler" ], "title": "Stein’s method of exchangeable pairs for the beta distribution and generalizations", "venue": "Electronic Journal of Probability", "year": 2015 }, { "authors": [ "J. Eriksson", "V. Koivunen" ], "title": "Identifiability and separability of linear ica models revisited", "venue": "in ‘Proc. of ICA’,", "year": 2003 }, { "authors": [ "Y. Ganin", "E. Ustinova", "H. Ajakan", "P. Germain", "H. Larochelle", "F. Laviolette", "M. Marchand", "V. Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "Journal of Machine Learning Research", "year": 2016 }, { "authors": [ "L.A. Gatys", "A.S. Ecker", "M. Bethge" ], "title": "A neural algorithm of artistic style", "venue": null, "year": 2015 }, { "authors": [ "M. Gong", "K. Zhang", "T. Liu", "D. Tao", "C. Glymour", "B. Schölkopf" ], "title": "Domain adaptation with conditional transferable components, in ‘International", "venue": "Conference on Machine Learning’,", "year": 2016 }, { "authors": [ "I.J. Goodfellow", "J. Shlens", "C. Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": null, "year": 2014 }, { "authors": [ "P.J. Grother" ], "title": "Nist special database 19 handprinted forms and characters database", "venue": "National Institute of Standards and Technology", "year": 1995 }, { "authors": [ "R. Guerreiro", "J. Bras" ], "title": "The age factor in Alzheimer’s disease’, Genome medicine", "venue": null, "year": 2015 }, { "authors": [ "Y. He", "Z. Shen", "P. Cui" ], "title": "Towards non-i.i.d. image classification: A dataset and baselines", "venue": null, "year": 2019 }, { "authors": [ "C. Heinze-Deml", "N. Meinshausen" ], "title": "Conditional variance penalties and domain shift robustness", "venue": null, "year": 2017 }, { "authors": [ "P. Hoyer", "D. Janzing", "J.M. Mooij", "J. Peters", "B. Schölkopf" ], "title": "Nonlinear causal discovery with additive noise models’, Advances in neural information processing systems", "venue": null, "year": 2008 }, { "authors": [ "J. Huang", "A. Gretton", "K. Borgwardt", "B. Schölkopf", "A.J. 
Smola" ], "title": "Correcting sample selection bias by unlabeled data, in ‘Advances in Neural Information", "venue": "Processing Systems’,", "year": 2007 }, { "authors": [ "C. Humpel", "T. Hochstrasser" ], "title": "Cerebrospinal fluid and blood biomarkers in Alzheimer’s disease’, World journal of psychiatry", "venue": null, "year": 2011 }, { "authors": [ "A. Hyvarinen", "H. Morioka" ], "title": "Unsupervised feature extraction by time-contrastive learning and nonlinear ica, in ‘Advances in Neural Information", "venue": "Processing Systems’,", "year": 2016 }, { "authors": [ "A. Hyvärinen", "P. Pajunen" ], "title": "Nonlinear independent component analysis: Existence and uniqueness results", "venue": "Neural Networks", "year": 1999 }, { "authors": [ "A. Hyvärinen", "H. Sasaki", "R. Turner" ], "title": "Nonlinear ICA using auxiliary variables and generalized contrastive learning, in ‘The 22nd International Conference on Artificial Intelligence and Statistics", "venue": null, "year": 2019 }, { "authors": [ "M. Ilse", "J.M. Tomczak", "P. Forré" ], "title": "Designing data augmentation for simulating interventions", "venue": null, "year": 2020 }, { "authors": [ "M. Ilse", "J.M. Tomczak", "C. Louizos", "M. Welling" ], "title": "DIVA: Domain invariant variational autoencoders", "venue": null, "year": 2019 }, { "authors": [ "D. Janzing", "J. Peters", "J. Mooij", "B. Schölkopf" ], "title": "Identifying confounders using additive noise models, in ‘Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence (UAI 2009)", "venue": null, "year": 2009 }, { "authors": [ "D. Janzing", "J. Peters", "J. Mooij", "B. Schölkopf" ], "title": "Identifying confounders using additive noise models", "venue": null, "year": 2012 }, { "authors": [ "D. Janzing", "E. Sgouritsa", "O. Stegle", "J. Peters", "B. Schölkopf" ], "title": "Detecting low-complexity unobserved causes", "venue": null, "year": 2012 }, { "authors": [ "F.D. Johansson", "D. Sontag", "R. Ranganath" ], "title": "Support and invertibility in domain-invariant representations, in ‘The 22nd International Conference on Artificial Intelligence and Statistics", "venue": null, "year": 2019 }, { "authors": [ "G. Kang", "X. Dong", "L. Zheng", "Y. Yang" ], "title": "Patchshuffle regularization", "venue": null, "year": 2017 }, { "authors": [ "I. Khemakhem", "D.P. Kingma", "A. Hyvärinen" ], "title": "Variational autoencoders and nonlinear ICA: A unifying framework", "venue": "in ‘Proceedings of the 23th International Conference on Artificial Intelligence and Statistics (AISTATS-23)’,", "year": 2020 }, { "authors": [ "I. Khemakhem", "R.P. Monti", "D.P. Kingma", "A. Hyvärinen" ], "title": "Ice-beem: Identifiable conditional energy-based deep models", "venue": null, "year": 2020 }, { "authors": [ "D.P. Kingma", "M. Welling" ], "title": "Auto-encoding variational Bayes, in ‘Proceedings of the International Conference on Learning Representations (ICLR 2014)", "venue": "ICLR Committee,", "year": 2014 }, { "authors": [ "M. Kocaoglu", "S. Shakkottai", "A.G. Dimakis", "C. Caramanis", "S. Vishwanath" ], "title": "Entropic latent variable discovery", "venue": null, "year": 2018 }, { "authors": [ "M.A. Kramer" ], "title": "Nonlinear principal component analysis using autoassociative neural networks", "venue": "AIChE journal", "year": 1991 }, { "authors": [ "D. Krueger", "E. Caballero", "Jacobsen", "J.-H", "A. Zhang", "J. Binas", "R.L. Priol", "A. 
Courville" ], "title": "Out-ofdistribution generalization via risk extrapolation (rex)", "venue": null, "year": 2020 }, { "authors": [ "K. Kuang", "P. Cui", "S. Athey", "R. Xiong", "B. Li" ], "title": "Stable prediction across unknown environments", "venue": "in ‘Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining’,", "year": 2018 }, { "authors": [ "C.M. Lee", "C. Hart", "J.G. Richens", "S. Johri" ], "title": "Leveraging directed causal discovery to detect latent common causes", "venue": null, "year": 2019 }, { "authors": [ "D. Li", "Y. Yang", "Song", "Y.-Z", "T.M. Hospedales" ], "title": "Learning to generalize: Meta-learning for domain generalization", "venue": null, "year": 2017 }, { "authors": [ "H. Li", "S. Jialin Pan", "S. Wang", "A.C. Kot" ], "title": "Domain generalization with adversarial feature learning", "venue": "in ‘Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition’,", "year": 2018 }, { "authors": [ "C.J. Maddison", "A. Mnih", "Y.W. Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables", "venue": null, "year": 2016 }, { "authors": [ "A. Madry", "A. Makelov", "L. Schmidt", "D. Tsipras", "A. Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": null, "year": 2017 }, { "authors": [ "S. Magliacane", "T. van Ommen", "T. Claassen", "S. Bongers", "P. Versteeg", "J.M. Mooij" ], "title": "Domain adaptation by using causal inference to predict invariant conditional distributions, in ‘Advances", "venue": "Neural Information Processing Systems’,", "year": 2018 }, { "authors": [ "D. Marcos", "M. Volpi", "D. Tuia" ], "title": "Learning rotation invariant convolutional filters for texture classification, in ‘2016", "venue": "23rd International Conference on Pattern Recognition (ICPR)’,", "year": 2016 }, { "authors": [ "J.A. Mortimer" ], "title": "Brain reserve and the clinical expression of Alzheimer’s disease.", "venue": "Geriatrics (Basel, Switzerland)", "year": 1997 }, { "authors": [ "K. Muandet", "D. Balduzzi", "B. Schölkopf" ], "title": "Domain generalization via invariant feature representation, in ‘International", "venue": "Conference on Machine Learning’,", "year": 2013 }, { "authors": [ "S.J. Pan", "I.W. Tsang", "J.T. Kwok", "Q. Yang" ], "title": "Domain adaptation via transfer component analysis", "venue": "IEEE Transactions on Neural Networks", "year": 2010 }, { "authors": [ "J. Pearl" ], "title": "Causality, Cambridge university press", "venue": null, "year": 2009 }, { "authors": [ "J. Peters", "P. Bühlmann", "N. Meinshausen" ], "title": "Causal inference by using invariant prediction: identification and confidence intervals", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology)", "year": 2016 }, { "authors": [ "J. Peters", "D. Janzing", "B. Schölkopf" ], "title": "Elements of causal inference: foundations and learning", "venue": null, "year": 2017 }, { "authors": [ "J. Peters", "J.M. Mooij", "D. Janzing", "B. Schölkopf" ], "title": "Causal discovery with continuous additive noise models", "venue": "Journal of Machine Learning Research", "year": 2014 }, { "authors": [ "M. Rojas-Carulla", "B. Schölkopf", "R. Turner", "J. Peters" ], "title": "Invariant models for causal transfer learning", "venue": "The Journal of Machine Learning Research", "year": 2018 }, { "authors": [ "Romeijn", "J.-W", "J. 
Williamson" ], "title": "Intervention and identifiability in latent variable modelling", "venue": "Minds and machines", "year": 2018 }, { "authors": [ "A. Rossler", "D. Cozzolino", "L. Verdoliva", "C. Riess", "J. Thies", "M. Nießner" ], "title": "Faceforensics++: Learning to detect manipulated facial images", "venue": "in ‘Proceedings of the IEEE International Conference on Computer Vision’,", "year": 2019 }, { "authors": [ "S. Sagawa", "P.W. Koh", "T.B. Hashimoto", "P. Liang" ], "title": "Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization", "venue": null, "year": 2019 }, { "authors": [ "B. Schölkopf" ], "title": "Causality for machine learning", "venue": "arXiv preprint arXiv:1911.10500", "year": 2019 }, { "authors": [ "B. Schölkopf", "D. Janzing", "J. Peters", "K. Zhang" ], "title": "Robust learning via cause-effect models", "venue": null, "year": 2011 }, { "authors": [ "L. Schott", "J. Rauber", "M. Bethge", "W. Brendel" ], "title": "Towards the first adversarially robust neural network model on mnist", "venue": null, "year": 2018 }, { "authors": [ "E. Sgouritsa", "D. Janzing", "J. Peters", "B. Schölkopf" ], "title": "Identifying finite mixtures of nonparametric product distributions and causal inference of confounders, in ‘Proceedings of the 29th Conference on Uncertainty in Artificial Intelligence (UAI 2013)", "venue": null, "year": 2013 }, { "authors": [ "S. Shankar", "V. Piratla", "S. Chakrabarti", "S. Chaudhuri", "P. Jyothi", "S. Sarawagi" ], "title": "Generalizing across domains via cross-gradient training, in ‘Proceedings of the International Conference on Learning Representations (ICLR 2018)", "venue": null, "year": 2018 }, { "authors": [ "S. Shimizu", "P.O. Hoyer", "A. Hyvärinen" ], "title": "Estimation of linear non-gaussian acyclic models for latent factors", "venue": "Neurocomputing 72(7-9),", "year": 2009 }, { "authors": [ "C. Shorten", "T.M. Khoshgoftaar" ], "title": "A survey on image data augmentation for deep learning", "venue": "Journal of Big Data", "year": 2019 }, { "authors": [ "R. Silva", "R. Scheine", "C. Glymour", "P. Spirtes" ], "title": "Learning the structure of linear latent variable models", "venue": "Journal of Machine Learning Research", "year": 2006 }, { "authors": [ "K. Simonyan", "A. Vedaldi", "A. Zisserman" ], "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "venue": null, "year": 2013 }, { "authors": [ "M. Sugiyama", "T. Suzuki", "S. Nakajima", "H. Kashima", "P. von Bünau", "M. Kawanabe" ], "title": "Direct importance estimation for covariate shift adaptation", "venue": "Annals of the Institute of Statistical Mathematics", "year": 2008 }, { "authors": [ "R. Suter", "D. Miladinovic", "B. Schölkopf", "S. Bauer" ], "title": "Robustly disentangled causal mechanisms: Validating deep representations for interventional robustness, in ‘International", "venue": "Conference on Machine Learning’,", "year": 2019 }, { "authors": [ "M. Tan", "Q.V. Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "arXiv preprint,", "year": 2019 }, { "authors": [ "L. Taylor", "G. Nitschke" ], "title": "Improving deep learning using generic data augmentation", "venue": null, "year": 2017 }, { "authors": [ "D. Teney", "E. Abbasnejad", "Hengel", "A. v. d" ], "title": "Unshuffling data for improved generalization", "venue": null, "year": 2020 }, { "authors": [ "T. Teshima", "I. Sato", "M. 
Sugiyama" ], "title": "Few-shot domain adaptation by causal mechanism transfer", "venue": null, "year": 2020 }, { "authors": [ "J. Vina", "A. Lloret" ], "title": "Why women have more Alzheimer’s disease than men: gender and mitochondrial toxicity of amyloid-β peptide", "venue": "Journal of Alzheimer’s disease 20(s2),", "year": 2010 }, { "authors": [ "D.E. Worrall", "S.J. Garbin", "D. Turmukhambetov", "G.J. Brostow" ], "title": "Harmonic networks: Deep translation and rotation equivariance", "venue": "in ‘Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition’,", "year": 2017 }, { "authors": [ "C. Xie", "F. Chen", "Y. Liu", "Z. Li" ], "title": "Risk variance penalization: From distributional robustness to causality", "venue": null, "year": 2020 }, { "authors": [ "C. Zhang", "S. Bengio", "M. Hardt", "B. Recht", "O. Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": null, "year": 2016 }, { "authors": [ "K. Zhang", "B. Schölkopf", "K. Muandet", "Z. Wang" ], "title": "Domain adaptation under target and conditional shift, in ‘International", "venue": "Conference on Machine Learning’,", "year": 2013 }, { "authors": [ "H. Zhao", "Combes", "R.T. d", "K. Zhang", "G.J. Gordon" ], "title": "On learning invariant representation for domain adaptation", "venue": null, "year": 2019 }, { "authors": [ "b̃z" ], "title": "The left is to prove that Mz and Ms are invertible matrices. Denote x̄ = f−1(x)", "venue": "Applying the (Khemakhem, Kingma and Hyvärinen,", "year": 2020 }, { "authors": [ "Teshima" ], "title": "Another difference lies in the definition of Y", "venue": null, "year": 2020 }, { "authors": [], "title": "εY where εY satisfies Gaussian distribution and S denotes the subset of covariates", "venue": "ofX . The Rojas-Carulla et al", "year": 2018 }, { "authors": [ "Krueger" ], "title": "2020) proposed to enforce the similar behavior of m classifiers with variance", "venue": null, "year": 2020 }, { "authors": [ "Arjovsky" ], "title": "2019) to learn invariant information for classifier. Recently, the Bellot and van der Schaar (2020) also assumes the invariance to be generating mechanisms and can generalize the capability of IRM when unobserved confounder exist. However, this work also lacks the analysis of identifiability result. We finish this section with the following summary of methods in section 7.7.4 and the IRM, in terms", "venue": null, "year": 2020 }, { "authors": [ "Arjovsky" ], "title": "Invariant Causation v.s. Invariant Correlation by Flipping", "venue": "As illustrated in Handwritting Sample Form in Fig", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Current data-driven deep learning models, revolutionary in various tasks though, heavily rely on i.i.d data to exploit all types of correlations to fit data well. Among such correlations, there can be spurious ones corresponding to biases (e.g., selection or confounding bias due to coincidence of the presence of the third factor) inherited from the data provided. Such data-dependent spurious correlations can erode the (i) interpretability of decision-making, (ii) ability of out-of-distribution (OOD) generalization, i.e., extrapolation from observed to new environments, which is crucial especially in safety-critical tasks such as healthcare, and (iii) robustness to small perturbation (Goodfellow et al., 2014).\nRecently, there is a Renaissance of causality in machine learning, expected to pursue causal prediction (Schölkopf, 2019). The so-called “causality” is pioneered by Judea Pearl (Pearl, 2009), as a mathematical formulation of this metaphysical concept grasped in the human mind. The incorporation of a priori about cause and effect endows the model with the ability to identify the causal structure which entails not only the data but also the underlying process of how they are generated. For causal prediction, the old-school methods (Peters et al., 2016; Bühlmann, 2018) causally related the output label Y to the observed input X , which however is NOT conceptually reasonable in scenarios with sensory-level observed data (e.g. modeling pixels as causal factors of Y does not make much sense).\nFor such applications, we rather adopt the manner in Bengio et al. (2013); Biederman (1987) to relate the causal factors of Y to unobserved abstractions denoted by S, i.e., Y ← fy(S, εy) via mechanism fy. We further assume existence of additional latent components denoted as Z, that together with S generates the input X via mechanism fx as X ← fx(S,Z, εx). Taking image classification as an example, the S and Z respectively refer to object-related abstractions (e.g., contour, texture, color) and contextual information (e.g., light, view). Such an assumption is similarly adopted in the literature of nonlinear Independent Components Analysis (ICA) (Hyvarinen and Morioka, 2016; Hyvärinen et al., 2019; Khemakhem, Kingma and Hyvärinen, 2020; Teshima et al., 2020) and latent generative models (Suter et al., 2019), which are however without separation of output (y)-causative factors (a.k.a, S) and other correlating factors (a.k.a, Z) that can both be learned in data-fitting process.\nWe encapsulate these assumptions into a novel causal model, namely Latent Causal Invariance Model (LaCIM) as illustrated in Fig. 1, in which we assume the structural equations fx (associated with S,Z → X), fy (associated with S → Y ) to be the Causal Invariant Mechanisms (CIMe) that hold under any circumstances with P(S,Z) allowed to be varied across domains. The incorporation of these\npriories can explain the spurious correlation embedded in the back-door path from Z to Y (contextual information to the class label in image classification). To avoid learning spurious correlations, our goal is to identify the intrinsic CIMe fx, fy. Specifically, we first prove the identifiability (i.e., the possibility to be precisely inferred up to an equivalence relation) of the CIMe. 
Notably, far beyond the scope of the existing literature (Khemakhem, Kingma and Hyvärinen, 2020), our results are the first to implicitly disentangle the output-causative factors (a.k.a. S) from others (a.k.a. Z) for prediction, ensuring the isolation of undesired spurious correlation. Guaranteed by this result, we propose to estimate the CIMe by extending the Variational Auto-Encoder (VAE) (Kingma and Welling, 2014) to the supervised scenario. For OOD prediction, we propose to optimize over the latent space under the identified CIMe. To verify the correctness of our identifiability claim, we conduct a simulation experiment. We further demonstrate the utility of our LaCIM via highly explainable learned semantic features, improved prediction power in various OOD scenarios (including tasks with confounding and selection bias, and healthcare), and robustness on security.

We summarize our contributions as follows: (i) Methodologically, we propose in section 4.1 a latent causal model in which only a subset of latent components are causally related to the output, to avoid spurious correlation and benefit OOD generalization; (ii) Theoretically, we prove the identifiability (in theorem 4.3) of the CIMe fx, fy from latent variables to observed data, which disentangles output-causative factors from others; (iii) Algorithmically, guided by the identifiability, we reformulate in section 4.3 the Variational Bayesian method to estimate the CIMe during training and optimize over the latent space at test time; (iv) Experimentally, LaCIM outperforms others in terms of prediction power on OOD tasks and interpretability in section 5.2, and robustness to tiny perturbations in section 5.3." }, { "heading": "2 RELATED WORK", "text": "Invariance/causal learning proposes to learn an assumed invariance for transfer. For the invariance learning methods in Krueger et al. (2020) and Schölkopf (2019), the “invariance” can refer to a stable correlation rather than causation, which lacks interpretability and impedes generalization to a broader set of domains. For causal learning, Peters et al. (2016); Bühlmann (2018); Kuang et al. (2018); Heinze-Deml and Meinshausen (2017) assume the causal factors to be the observed input, which is inappropriate for sensory-level observational data. In contrast, our LaCIM introduces latent components as causal factors of the input; more importantly, we explicitly separate them into output-causative features and others, to avoid spurious correlation. Further, we provide an identifiability claim for the causal invariant mechanisms. In independent and concurrent works, Teshima et al. (2020) and Ilse et al. (2020) also explore latent variables in causal relations. In comparison, Teshima et al. (2020) did not differentiate S from Z; and Ilse et al. 
(2020) proposed to augment intervened data, which can be intractable in real cases.

Other works conceptually related to ours include, as a non-exhaustive review: (i) transfer learning, which also leverages invariance in the context of domain adaptation (Schölkopf et al., 2011; Zhang et al., 2013; Gong et al., 2016) or domain generalization (Li et al., 2018; Shankar et al., 2018); (ii) causal inference (Pearl, 2009; Peters et al., 2017), which proposes a structural causal model to incorporate intervention via “do-calculus” for cause-effect reasoning and counterfactual learning; and (iii) latent generative models, which also assume generation from a latent space to observed data (Kingma and Welling, 2014; Suter et al., 2019) but aim at learning the generator in the unsupervised scenario." }, { "heading": "3 PRELIMINARIES", "text": "Problem Setup & Notation. Let X,Y respectively denote the input and output variables. The training data {De}e∈Etrain are collected from the set of multiple environments Etrain, where each domain e is associated with a distribution Pe(X,Y ) over X × Y and De = {xei , yei , de}i∈[ne] i.i.d∼ Pe with [k] := {1, ..., k} for any k ∈ Z+. The de ∈ {0, 1}m denotes the one-hot encoded domain index for e, where 1 ≤ m := |Etrain| ≤ n := ∑e∈Etrain ne. Our goal is to learn a model f : X 7→ Y that learns the output (y)-causative factors for prediction and performs well on the set of all environments E ⊃ Etrain, which is aligned with existing OOD generalization works (Arjovsky et al., 2019; Krueger et al., 2020). We respectively use an upper-case letter, a lower-case letter and a calligraphic letter to denote a random variable, an instance and the space, e.g., a is an instance in the space A of the random variable A. The [f ]A denotes f restricted to the dimensions of A. The Sobolev space W k,p(A) contains all f such that ∫A |∂αf(a)|p da < ∞, ∀|α| ≤ k.

Structural Causal Model. The structural causal model (SCM) is defined as a causal graph assigned with structural equations. The causal graph encodes the assumptions in the missing arrows of a directed acyclic graph (DAG): G = (V,E) with V,E respectively denoting the node set and the edge set. The Pa(k) denotes the set of parent nodes of Vk for each Vk ∈ V, and X → Y ∈ E indicates the causal effect of X on Y . The structural equations {Vk ← fk(Pa(k), εk)}Vk∈V quantify the causal effects shown in the causal graph G. By assuming independence among the exogenous variables {εk}k, the Causal Markov Condition states that P({Vk = vk}Vk∈V ) = ΠkP(Vk = vk|Pa(k) = pa(k)). A back-door path from Va to Vb is defined as a path that ends with an arrow pointing to Va (Pearl, 2009)." }, { "heading": "4 METHODOLOGY", "text": "We build our causal model associated with the Causal Invariant Mechanisms (CIMe, i.e., fx, fy) and a priori knowledge about the generating process in section 4.1, followed by our identifiability result for the CIMe in section 4.2. Finally, we introduce our learning method to estimate the CIMe in section 4.3.

4.1 LATENT CAUSAL INVARIANCE MODEL

We introduce latent variables to model the abstractions/concepts that serve as causal factors generating the observed variables (X,Y ), which is more reasonable than assuming X to be the direct cause of Y in scenarios with sensory-level data. We explicitly separate the latent variables into two parts: the S and Z that respectively denote the y (output)-causative and y-non-causative factors, as shown by the arrow S → Y in Fig. 1. 
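For the graph in Fig. 1, the Causal Markov Condition stated in section 3 yields the factorization below; this is only a restatement, in LaTeX, of the conditionals that Def. 4.1 will induce, with no assumption beyond the stated graph:

```latex
p^{e}(c,s,z,x,y) \;=\; p_{f_c}(c \mid d^{e})\, p_{f_s}(s \mid c)\, p_{f_z}(z \mid c)\, p_{f_x}(x \mid s,z)\, p_{f_y}(y \mid s).
```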
Besides, the X and Y are respectively generated by S,Z and S, via structural equations (with noise) fx, fy, which are denoted as Causal Invariant Mechanisms (CIMe) that hold across all domains. The output Y denotes the label generated by human knowledge, e.g., the semantic shape, the contour to discern the object, etc. Hence, we assume the Y as the outcome/effect of these high-level abstractions (Biederman, 1987) rather than the cause (detailed comparison with Y → S is left in supplementary 7.7.1). We call the model associated with the causal\ngraph in Fig. 1 as Latent Causal Invariance Model (LaCIM), with formal definition given in Def. 4.1.\nAs an illustration, we consider the image classification in which X,Y denote the image and the class label. Instead of X , i.e., the pixels, it is more reasonable to assume the causal factors (of X,Y ) as latent concepts (S,Z) that can denote light, angle, the shape of the object to generate X following the physical mechanisms. Among these concepts, only the ones that are causally related to the object, i.e., S (e.g., shape) are causal factors of the object label, i.e., Y . Following the physical or natural law, the mechanisms S,Z → X,S → Y invariantly hold across domains. The S := Rqs ,Z := Rqz denote the space of S,Z, with Pe(S,Z) (that characterizes the correlation between S and Z) varying across E (e.g., the object is more associated with a specific scene than others). We assume that the y-non-causative factor (i.e., Z) is associated with (but not causally related to) S,Y through the confounder C, which is allowed to take a specific value for each sample unit. Therefore, the back-door path Z←C→S→Y induces the correlation between Z and Y in each single domain. Rather than invariant causation, this correlation is data-dependent and can vary across domains, which is known as “spurious correlation”. In real applications, this spurious correlation corresponds to the bias inherited from data, e.g. the contextual information in object classification. This domain-specific S-Z correlation, can be explained by the source variable D, which takes a specific and fixed value for each domain and functions the prior of distribution of the confounder C, as illustrated in Fig. 1. This source variable D can refer to attributes/parameters that characterize the distribution of S,Z in each domain. When such attributes are unobserved, we use the domain index as a substitute. Consider the cat/dog classification task as an illustration, the animal in each image is either associated with the snow or grass. The S,Z respectively denote the concepts of animals and scenes. The D denotes the sampler, which can be described by the proportions of scenes associated with the cat and those associated with the dog. The D generates the C that denotes the (time, weather) to go outside and\ncollect samples. Since each sampler may have a fixed pattern (e.g. gets used to going out in the sunny morning (or in the snowy evening)), the data he/she collects, may have sample selection bias (e.g. with dogs (cats) more associated with grass (snow) in the sunny morning (or snowy evening) ). In this regard, the scene concepts Z can be correlated with the animal concepts S, and also the label Y .\nDefinition 4.1 (LaCIM). 
The Latent Causal Invariance Model (LaCIM) for e ∈ E is defined as a SCM characterized by (i) the causal graph, i.e., the G = (V,E) with V = {C, S, Z,X, Y } and E = {C → S,C → Z,Z → X,S → X,S → Y }; and (ii) structural equations with causal mechanisms {fc, fz, fs, fx, fy} embodying the quantitative causal information: c ← fc(de, εc), z ← fz(c, εz), s← fs(c, εs);x← fx(s, z, εx); y ← fy(s, εy), in which {εc, εz, εs, εx, εy} are independent exogenous variables that induce pfc(c|de), pfz (z|c), pfs(s|c), pfx(x|s, z), pfy (y|s). The CIMe fx, fy are assumed to be invariant across E . We call the environment-dependent parts: Pe(S,Z) and Pe(S,Z|X) as S,Z-prior and S,Z-inference in the following. Remark 1. We denote LaCIM-ds and LaCIM-d as two versions of LaCIM, with the source variable ds with practical meaning (e.g. attributes or parameters of P(S,Z)) observed or not. The observation of ds can be possible in some applications (e.g., age, gender that characterizes population in medical diagnosis). As for the LaCIM-d with ds unobserved, we use domain index D as a substitute.\nDenote C as the space of C. We assume that the C is finite union of disjoint sets {Cr}Rr=1, i.e. C := ∪Rr=1Cr, such that for any cr,i 6= cr,j ∈ Cr, it holds that p(s, z|cr,i) = p(s, z|cr,j) for any (s, z). Returning to the cat/dog classification example, the C denotes the range of time to collect samples, i.e., 00 : 00-24 : 00. The C can be divided into several time periods C1, ..., CR, such that the proportion of concepts of (animal,scene) given any c in the same period is unchanged, e.g., the dog often comes up on the grass in the morning. Further, since p(x, y|s, z) = p(x|s, z)p(y|s) is invariant, we have for each Cr that p(x, y|cr,i) = ∫ p(x, y|s, z)p(s, z|cr,i)dsdz = ∫ p(x, y|s, z)p(s, z|cr,j)dsdz = p(x, y|cr,j) for any (x, y). That is, the {p(x, y|cr}cr∈Cr for each (x, y) collapse to a single point, namely p(x, y|cr). In this regard, we have pe(x, y) := p(x, y|de) = ∑R r=1 p(x, y|cr)p(cr|de). Besides, we assume the Additive Noise Model (ANM) for X , i.e., fx(s, z, εx) = f̂x(s, z) + εx (we replace f̂x with fx without loss of generality), which has been widely adopted to identify the causal factors (Janzing et al., 2009; Peters et al., 2014; Khemakhem, Kingma and Hyvärinen, 2020). We need to identify the CIMe (i.e., fx, fy), guaranteed by the identifiability that ensures the learning method to distinguish S from Z to avoid spurious correlation, as presented in section 4.2. Traditionally speaking, the identifiability means the parameter giving rise to the observational distribution pθ?(x, y|de) can be uniquely determined, i.e., pθ(x, y|de) = pθ̃(x, y|de) =⇒ θ = θ̃. Instead of strict uniqueness, we rather identify an equivalent class of θ? (in Def. 4.2) that suffices to disentangle the y-causative features S from Z to avoid learning spurious correlation. To achieve this goal, we first narrow our interest in case when p(s, z|c) is exponential family in Eq. (1), in which we can respectively identify the S,Z up to linear and point-wise transformations given by theorem 4.3; then we generalize to any p(s, z|c) as long as it belongs to Sobolev space, as explained in theorem 4.4. A reformulated VAE is proposed to learn the CIMe practically. 
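As a concrete illustration of the generating process in Def. 4.1 above, the following is a minimal numerical sketch; the particular functional forms (a scalar confounder, a linear fx, a thresholding fy) are toy assumptions for illustration only, not the parameterization used in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
q_s, q_z, q_x = 2, 2, 5
# f_x and f_y are drawn once and shared by every environment (the CIMe);
# only d_e, and hence P^e(S, Z), changes across environments.
W_x = rng.normal(size=(q_s + q_z, q_x))

def sample_domain(d_e, n):
    c = d_e + rng.normal(size=n)                                              # c <- f_c(d_e, eps_c)
    s = np.tanh(c)[:, None] * np.ones(q_s) + 0.1 * rng.normal(size=(n, q_s))  # s <- f_s(c, eps_s)
    z = np.sin(c)[:, None] * np.ones(q_z) + 0.1 * rng.normal(size=(n, q_z))   # z <- f_z(c, eps_z)
    x = np.concatenate([s, z], axis=1) @ W_x + 0.1 * rng.normal(size=(n, q_x))  # x = f_x(s, z) + eps_x
    y = (s.sum(axis=1) + 0.1 * rng.normal(size=n) > 0).astype(int)            # y <- f_y(s, eps_y)
    return x, y

# Two environments: same W_x (invariant mechanism), different d_e; S and Z are
# correlated through the shared confounder c, with domain-dependent strength.
x1, y1 = sample_domain(d_e=-1.0, n=1000)
x2, y2 = sample_domain(d_e=+2.0, n=1000)
```

A predictor fit on pooled (x, y) from such environments can exploit the z-part of x through the c-induced spurious correlation, which is exactly what the identifiability analysis is designed to rule out.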
For generalization, note that the gap between two environments in terms of prediction given x, i.e.,\n∣∣Epe2 [Y |X = x] − Epe1 [Y |X = x] ∣∣ = ∫S ∣∣pe2(s|x) − pe1(s|x)∣∣pfy (y|s)ds, is mainly due to the inconsistency of S,Z-inference, i.e., pe(s, z|x) 6= pe′(s, z|x) for e′ 6= e (for details please refer to theorem 7.1 in supplement 7.1). Therefore, one cannot directly apply the trained {pe(s, z|x), pe(y|x)}e∈Etrain to the inference model of new environment, i.e. pe ′ (s, z|x), pe′(y|x) for e′ /∈ Etrain. To solve this problem and generalize to new environment, we note that since pfx(x|s, z) and pfy (y|s) are shared among all environments, we propose to inference s, z that give rise to the test sample x via maximizing the identified pfx(x|s, z), as a pseudo-likelihood of x given (s, z), rather than using S,Z-inference model which is inconsistent among environments. Then, we feed estimated s into invariant predictor pfy (y|s) for prediction." }, { "heading": "4.2 IDENTIFIABILITY OF CAUSAL INVARIANT MECHANISMS", "text": "We present the identifiability claim about the CIMe fx, fy, which implicitly distinguishes the ycausative factors (a.k.a, S) from others (a.k.a, Z) for prediction, to provide a theoretical guarantee for avoiding spurious correlations. Notably, the S and Z play “asymmetric roles” in terms of generating process, as reflected in additional generating flow from S to Y . This “information intersection” property of S, i.e., f−1y (ȳ) = [f −1 x ]S(x̄) for any (x̄, ȳ) ∈ fx(S,Z) × fy(S) if y = fy(s) + εy, is exploited to disentangle S from Z. Such a disentanglement analysis, is crucial to causal prediction\nbut lacked in existing literature about identifiability, such as those identifying the discrete latent confounders (Janzing, Sgouritsa, Stegle, Peters and Schölkopf, 2012; Sgouritsa et al., 2013); or those relying on ANM assumption (Janzing, Peters, Mooij and Schölkopf, 2012); linear ICA (Eriksson and Koivunen, 2003); (Khemakhem, Kingma and Hyvärinen, 2020; Khemakhem, Monti, Kingma and Hyvärinen, 2020; Teshima et al., 2020) (Please refer to supplement 7.6 for more broad reviews). Besides, our analysis extends the scope of Khemakhem, Kingma and Hyvärinen (2020) to categorical Y and general forms of P(S,Z|C = c) that belongs to Sobolev space, in theorem 4.4. Note that our analysis does NOT require observing the original source variable ds.\nWe first narrow our interest to a family class of LaCIM denoted as Pexp in which any p ∈ Pexp satisfies that (i) the S,Z belong to the exponential family; and that (ii) the Y is generated from the ANM. We will show later that Pexp can approximate any P(S,Z|c) ∈W r,2(S × Z) for some r ≥ 2:\nPexp = { LaCIM with y = fy(s) + εy, p(s, z|c) := pTz,Γzc (z|c)pTs,Γsc(s|c) }\nwith pTt,Γtc(t) := qt∏ i=1 exp ( kt∑ j=1 T ti,j(ti)Γ t c,i,j +Bi(ti)−Atc,i ) for t = s, z, and e ∈ E ,\n(1)\nwhere {T ti,j(ti)}, {Γtc,i,j} denote the sufficient statistics and natural parameters, {Bi}, {Atc,i} denote the base measures and normalizing constants to ensure the integral of distribution equals to 1. Let Tt(t) := [Tt1(t1), ...,Ttqt(tqt)] ∈ R kt×qt ( Tti(ti) := [T t i,1(ti), ..., T t i,kt(ti)], ∀i ∈ [qt] ) ,\nΓtc := [ Γtc,1, ...,Γ t c,qt ] ∈Rkt×qt ( Γtc,i := [Γ t c,i,1, ...,Γ t c,i,kt ],∀i ∈ [qt] ) . We define the ∼p-identifiability for θ := {fx, fy,Ts,Tz} as: Definition 4.2 (∼p-identifiability). 
We define a binary relation on the parameter space of X × Y: θ ∼p θ̃ if there exist two sets of permutation matrices and vectors, (Ms, as) and (Mz, az) for s and z respectively, such that for any (x, y) ∈ X × Y ,\nTs([f−1x ]S(x)) = MsT̃ s([f̃−1x ]S(x)) + as, T z([f−1x ]Z(x)) = MzT̃ z([f̃−1x ]Z(x)) + az, pfy (y|[f−1x ]S(x)) = pf̃y (y|[f̃ −1 x ]S(x)),\nWe say that θ is ∼p-identifiable, if for any θ̃, peθ(x, y) = peθ̃(x, y) ∀e ∈ Etrain, implies θ ∼p θ̃.\nIt can be shown that∼p satisfies the reflective property (θ∼p θ), the symmetric property (if θ∼p θ̃ then θ̃ ∼p θ), and the transitive property (if θ1 ∼p θ2 and θ2 ∼p θ3, then θ1 ∼p θ3), and hence is an equivalence relation (details in supplement 7.2). This definition states that the S,Z can be identified up to permutation and point-wise transformation, which is sufficient for disentanglement of S and identifying the predicting mechanism pfy (y|[f−1x ]S(x)). Specifically, the definition regarding fx implies the separation of S and Z unless the extreme case when S can be represented by Z, i.e., there exists a function h : S → Z such that [f−1x ]S(x) = h([f−1x ]Z(x)). This definition is inspired by but beyond the scope of unsupervised scenario considered in nonlinear ICA (Hyvärinen et al., 2019; Khemakhem, Kingma and Hyvärinen, 2020) to further distinguish of S from Z. Besides, the pfy (y|[f−1x ]S(x)) = pf̃y (y|[f̃ −1 x ]S(x)) further guarantees the identifiability of prediction: predict using fy(s) with s obtained from fx. The following theorem presents the ∼p-identifiability for Pexp: Theorem 4.3 (∼p-identifiability). For θ in the LaCIM peθ(x, y) ∈ Pexp for any e ∈ Etrain, we assume that i) CIMe satisfies that fx, f ′x and f ′′x are continuous and that fx, fy are bijective; ii) the T ti,j are twice differentiable for any t = s, z, i ∈ [qt], j ∈ [kt]; iii) the exogenous variables satisfy that the characteristic functions of εx, εy are almost everywhere nonzero. Under the diversity condition on A := [P>de1 , ..., P > dem ]\n> ∈ Rm×R with Pde := [p(c1|de), ..., p(cR|de)] that the A and[ [Γt=s,zc2 − Γ t=s,z c1 ] T, ..., [Γt=s,zcR − Γ t=s,z c1 ] T ]T\nhave full column rank for both t = s and t = z, we have that the θ := {fx, fy,Ts,Tz} are ∼p identifiable.\nThe bijectivity of fx and fy have been widely assumed in Janzing et al. (2009); Peters et al. (2014; 2017); Khemakhem, Kingma and Hyvärinen (2020); Teshima et al. (2020) as a basic condition for identifiability. It naturally holds for fx to be bijective since the latent components S,Z, as high-level abstractions which can be viewed as embeddings in auto-encoder (Kramer, 1991), lies in lower-dimensional space compared with input X which is supposed to have more variations, i.e., (qs + qz < qx). For categorical Y , the fy which generates the classification result, i.e., p(y = k|s) = [fy]k(s)/ ( ∑ k[fy]k(s)), will be shown later to be identifiable.\nThe diversity condition implies that i) m ≥ R ≥ max(kz ∗ qz, ks ∗ qs) + 1; and that ii) different environments are variant enough in terms of S-Z correlation (which is also assumed in Arjovsky et al. (2019)), as a necessary for the invariant one to be identified. As noted in the formulation, a larger m would be easier to satisfy the condition, which agrees with the intuition that more environments can provide more complementary information for the identification of the invariant mechanisms. Remark 2. The dimensions of the ground-truth S,Z are unknown, making the check about whether m is large enough impossible. 
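When the natural parameters are known, as in a controlled simulation, the diversity condition of theorem 4.3 reduces to two rank computations; below is a sketch with synthetic (entirely hypothetical) Γ matrices and mass functions P(cr|de):

```python
import numpy as np

rng = np.random.default_rng(0)
m, R = 6, 5                 # number of environments and of confounder values (toy)
k_q = {"s": 4, "z": 4}      # k_t * q_t for t = s and t = z (toy)

A = rng.dirichlet(np.ones(R), size=m)      # row e holds [P(c_1|d_e), ..., P(c_R|d_e)]
holds = np.linalg.matrix_rank(A) == R      # A must have full column rank (needs m >= R)

for t, d in k_q.items():
    Gamma = rng.normal(size=(R, d))        # row r: natural parameters Gamma_{c_r} (synthetic)
    diffs = Gamma[1:] - Gamma[0]           # [Gamma_{c_r} - Gamma_{c_1}] for r = 2, ..., R
    holds = holds and np.linalg.matrix_rank(diffs) == d   # needs R - 1 >= k_t * q_t
print("diversity condition holds:", holds)
```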
Besides, in some real applications, the training environments are passively observed and may not satisfy the condition. However, we empirically find the improvement of LaCIM in terms of both OOD prediction and interpretability, if the multiple environments provided are diverse enough. Besides, a training environment can be the mixture of many sub-environments, which motivates to splitting the data according to their source ID or clustering results (Teney et al., 2020) to obtain more environments, making the condition easier to satisfy.\nExtension to the general form of LaCIM. We generalize the identifiable result in theorem 4.3 to any LaCIM as long as its P(S,Z|C = c) ∈ W r,2(S × Z) (for some r ≥ 2) and categorical Y , in the following theorem. This is accomplished by showing that any such LaCIM can be approximated by a sequence of distributions in Pexp, motivated by the facts in Barron and Sheu (1991) that the exponential family is dense in the set of distributions with bounded support, and in Maddison et al. (2016) that the continuous variable with multinomial logit model can be approximated by a series of distributions with i.i.d Gumbel noise as the temperature converges to infinity. Theorem 4.4 (Asymptotic ∼p-identifiability). Consider a LaCIM satisfying that pfx(x|s, z) and pfy (y|s) are smooth w.r.t s, z and s respectively. For each e and c ∈ C, suppose Pe(S,Z|C = c) ∈ W r,2(S × Z) for some r ≥ 2, we have that P is asymptotically ∼p-identifiable defined as: ∀ > 0, ∃ ∼p-identifiable P̃θ ∈ Pexp, s.t. dPok(pe(x, y), p̃eθ(x, y)) < ,∀e ∈ Etrain, (x, y) ∈ X × Y 1." }, { "heading": "4.3 CAUSAL SUPERVISED VARIATIONAL AUTO-ENCODER", "text": "Guided by identifiability, we first provide the training method to learn fx, fy by reformulating VAE in a supervised scenario, followed by optimization over latent space for inference and test.\nTraining. To learn the CIMe and pfx(x|s, z), pfy (y|s) for invariant prediction, we implement the generative model to fit {pe(x, y)}e∈Etrain , which has been guaranteed by theorem 4.3, 4.4 to be able to identify the ground-truth predicting mechanism. Specifically, we reformulate the objective of VAE, as a generative model proposed in (Kingma and Welling, 2014), in supervised scenario. For unsupervised learning, the VAE introduces the variational distribution qψ parameterized by ψ to approximate the intractable posterior by maximizing the following Evidence Lower Bound (ELBO): −Lφ,ψ = Ep(x) [ Eqψ(z|x) log pφ(x,z)\nqψ(z|x)\n] , as a tractable surrogate of maximum likelihood Ep(x) log pφ(x).\nSpecifically, the ELBO is less than and equal to Ep(x) [ log pφ(x) ]\nand the equality can only be achieved when qψ(z|x)=pφ(z|x). Therefore, maximizing the ELBO over pφ and qψ will drive (i) qψ(z|x) to learn pφ(z|x); (ii) pφ to learn the ground-truth model p (including pφ(x|z) to learn p(x|z)). In our supervised scenario, we introduce the variational distribution qeψ(s, z|x, y) and the corresponding ELBO for any e is −Leφ,ψ=Epe(x,y) [ Eqe\nψ (s,z|x,y) log\npeφ(x,y,s,z) qe ψ (s,z|x,y) ] . Similarly, minimizing Leφ,ψ can\ndrive pφ(x|s, z), pφ(y|s) to learn the CIMe (i.e. pfx(x|s, z), pfy (y|s)), and also qeψ(s, z|x, y) to learn peφ(s, z|x, y). In other words, the qψ can inherit the properties of pφ. As peφ(s, z|x, y)= peφ(s,z|x)pφ(y|s) pe φ (y|x) for our DAG in Fig. 1, we can similarly reparameterize qeψ(s, z|x, y) as qeψ(s,z|x)qψ(y|s)\nqeψ(y|x) . According\nto Causal Markov Condition, we have that peφ(x, y, s, z) = pφ(x|s, z)peφ(s, z)pφ(y|s). 
Substituting the above reparameterizations into the ELBO with qψ(y|s) replaced by pφ(y|s), the Leφ,ψ can be rewritten as:

Leφ,ψ = Epe(x,y)[ − log qeψ(y|x) − Eqeψ(s,z|x) (pφ(y|s)/qeψ(y|x)) log( pφ(x|s, z) peφ(s, z) / qeψ(s, z|x) ) ], (2)

where qeψ(y|x) = ∫S qeψ(s|x)pφ(y|s)ds. The overall loss function is Lφ,ψ = ∑e∈Etrain Leφ,ψ. The training datasets {De}e∈Etrain are applied to optimize the prior model peφ(s, z), the inference model qeψ(s, z|x) and the generative models pφ(x|s, z), pφ(y|s) in Eq. (2). The generative models pφ(x|s, z), pφ(y|s) are shared among all environments, while the peφ(s, z), qeψ(s, z|x) are respectively pφ(s, z|des), qψ(s, z|x, des) and pφ(s, z|de), qψ(s, z|x, de) for LaCIM-ds and LaCIM-d. (Footnote 1: the dPok denotes the Prokhorov distance, and limn→∞ dPok(µn, µ) → 0 ⇐⇒ µn d→ µ.)

Inference & Test. When de′s can be acquired during test for e′ ∈ Etest, we can predict y as arg maxy pφ(y|x, de′s ) = ∫ qψ(s|x, de′s )pφ(y|s)ds. Otherwise, for LaCIM-d with ds unobserved, we first optimize s, z via (s⋆, z⋆) := arg maxs,z log pφ(x|s, z) and predict y as arg maxy qψ(y|s⋆). Specifically, we adopt the optimization strategy of Schott et al. (2018): we first sample initial points and select the one with the maximum log pφ(x|s, z), then optimize for 50 iterations using Adam. The implementation details and the optimization effect are shown in supplement 7.9." }, { "heading": "5 EXPERIMENTS", "text": "We evaluate LaCIM on (I) synthetic data, to verify the identifiability in theorem 4.3; (II) OOD challenges: object classification with sample selection bias (Non-I.I.D. Image dataset with Contexts, NICO), hand-writing recognition with confounding bias (Colored MNIST, CMNIST), and prediction of Alzheimer’s Disease (Alzheimer’s Disease Neuroimaging Initiative, ADNI, www.loni.ucla.edu/ADNI); and (III) robustness in detecting images with small perturbations (FaceForensics++)." }, { "heading": "5.1 SIMULATION", "text": "To verify the identifiability claim and the effectiveness of our learning method, we implement LaCIM on synthetic data. The data generating process is provided in supplement 7.8. The domain index D ∈ Rm is denoted as a one-hot encoded vector with m = 5. To verify the utility of training on multiple domains (m > 1), we also run LaCIM by pooling the data from all m domains together, namely pool-LaCIM, for comparison. We randomly generate m = 5 datasets and run 20 times for each. We compute the mean correlation coefficient (MCC) metric adopted in Khemakhem, Kingma and Hyvärinen (2020) to measure the goodness of identifiability under permutation, by introducing cost optimization to assign each learned component to the source component. This measurement is aligned with the goal of ∼p-identifiability, which allows us to distinguish S from Z. Table 5.1 shows the superiority of our LaCIM-d and LaCIM-ds over pool-LaCIM in terms of the CIMe relating to S,Z under permutation, by means of multiple diverse experiments. Besides, we consider LaCIM-d on m = 3, 5, 7 with the same total number of samples. The results show that more environments perform better, and that even m = 3 still performs much better than pool-LaCIM. To illustrate the learning effect, we visualize the learned Z in Fig. 7.8, with S left to supplement 7.8 due to the space limit." }, { "heading": "5.2 REAL-WORLD OOD CHALLENGE", "text": "We present our LaCIM’s results on three OOD tasks, with different environments associated with different values of ds. We implement both versions of LaCIM, i.e., LaCIM-ds and LaCIM-d, with a task-dependent definition of ds. 
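Throughout these tasks, LaCIM-d predicts at test time with the latent optimization of section 4.3: sample initial points, keep the one with the largest log pφ(x|s, z), then run Adam for 50 iterations. Below is a minimal PyTorch sketch; the module names decoder and predictor and all hyper-parameters other than the 50 Adam steps are placeholder assumptions:

```python
import torch

@torch.no_grad()
def init_latents(x, decoder, q_s, q_z, n_init=64):
    # Draw candidate latent codes, keep the one maximizing log p_phi(x | s, z).
    cand = torch.randn(n_init, q_s + q_z)
    recon = decoder(cand)                                        # (n_init, *x.shape)
    log_px = -((recon - x.unsqueeze(0)) ** 2).flatten(1).sum(1)  # Gaussian log-lik., up to a constant
    return cand[log_px.argmax()].clone()

def predict(x, decoder, predictor, q_s, q_z, steps=50, lr=0.1):
    sz = init_latents(x, decoder, q_s, q_z).requires_grad_(True)
    opt = torch.optim.Adam([sz], lr=lr)
    for _ in range(steps):                  # (s*, z*) = argmax_{s,z} log p_phi(x | s, z)
        opt.zero_grad()
        loss = ((decoder(sz.unsqueeze(0))[0] - x) ** 2).sum()
        loss.backward()
        opt.step()
    s_star = sz.detach()[:q_s]              # only s* enters the invariant predictor
    return predictor(s_star.unsqueeze(0)).argmax(dim=1)   # argmax_y p_phi(y | s*)
```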
In CMNIST, the ds (digit color) is a fully observed confounder, and LaCIM-ds in this case is the ceiling of LaCIM-d under the same implementation. In NICO and ADNI, LaCIM-d even outperforms LaCIM-ds when the source variables are only partially observed.

Dataset. We describe the datasets as follows (X denotes the image; Y denotes the label):

NICO: we evaluate cat/dog classification on the “Animal” dataset in NICO, a benchmark for non-i.i.d. problems in He et al. (2019). Each animal is associated with the “grass” and “snow” contexts in different proportions, denoted as ds ∈ R4 (cat, dog in grass, snow). We set m = 8 and m = 14. The C,Z, S respectively denote the (time, weather) of sampling, the context, and the semantic shape of the cat/dog.

CMNIST: We relabel the digits 0-4 and 5-9 as y = 0 and y = 1, based on MNIST. Then we color a fraction pe (1 − pe) of the images with y = 0 (y = 1) green and color the others red. We set m = 2 with pe1 = 0.9, pe2 = 0.8. The des is pe, which describes the intensity of the spurious correlation caused by color. We do not flip y with probability 25% as in Arjovsky et al. (2019) (we also conduct this experiment with flipped y in supplement 7.11), since doing so would cause the digit to be correlated with, rather than causally related to, the label, which is beyond our scope. The Z, S respectively represent the color and the number. The C can also denote the (time, weather) at which the painter draws the number and color, e.g., the painter tends to draw a red 0 more often than a green 1 in the sunny morning.

ADNI: The data are obtained from the ADNI dataset; the Y := {0, 1, 2} with 0, 1, 2 respectively denoting AD, Mild Cognitive Impairment (MCI) and Normal Control (NC). The X is a structural Magnetic Resonance Image (sMRI). We set m = 2. We consider two types of ds: Age and TAU (a biomarker; Humpel and Hochstrasser (2011)). The S (Z) denote the disease-related (-unrelated) brain regions. The C denotes the hormone level that can affect brain structure development.

Compared Baselines. We compare with (i) Cross-Entropy (CE) from X → Y (CE X → Y ), (ii) the domain-adversarial neural network (DANN) for domain adaptation Ganin et al. (2016), (iii) Maximum Mean Discrepancy with Adversarial Auto-Encoder (MMD-AAE) for domain generalization Li et al. (2018), (iv) Domain Invariant Variational Autoencoders (DIVA) Ilse et al. (2019), (v) Selecting Data Augmentation (SDA) Ilse et al. (2020), (vi) Invariant Risk Minimization (IRM) Arjovsky et al. (2019), (vii) CE (X, ds) → Y , and (viii) a VAE with causal graph C → V → {X,Y } with V mixing S,Z, which we call sVAE for simplicity. We only implement SDA on CMNIST, since the intervened-data generation of SDA requires explicitly extracting the S,Z, which is intractable in ADNI and NICO. For a fair comparison, we keep the model capacity (number of parameters) at the same level.

Implementation Details. For each domain e, we implement the reparameterization with ρes, ρez: s′, z′ = ρes(s), ρez(z), to transform the pe(s, z) into an isotropic Gaussian; the generative models are then correspondingly modified to {pφ(x|(ρes)−1(s), (ρez)−1(z)), pφ(y|(ρes)−1(s))} according to the change-of-variables rule. The optimized parameters are {{qeψ(s, z|x)}e, pφ(x|s, z), pφ(y|s), {ρet=s,z}e}, with the encoder qeψ(s, z|x) being sequentially composed of: i) a sequential of Conv-BN-ReLU-MaxPool blocks that is shared among Etrain, followed by ii) a sequential of ReLU-FC layers for the mean and log-variance of S,Z that are specific to e. The structure of ρet=s,z is FC-ReLU-FC. The decoder pφ(x|s, z) is a sequential of upsampling, several TConv-BN-ReLU blocks and a Sigmoid. 
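With the encoder, the flows ρ and the decoder just described, and writing pred for the predictor pφ(y|s) introduced next, one training step on domain e can evaluate Eq. (2) by Monte Carlo. The following is a sketch assuming a diagonal-Gaussian posterior and prior (i.e., after the flows ρ have standardized pe(s, z)); all names and shapes are placeholders:

```python
import torch
import torch.nn.functional as F

Q_S = 16  # number of s-dimensions at the front of the latent code (placeholder)

def gauss_logpdf(v, mu, logvar):
    # log N(v; mu, diag(exp(logvar))), up to an additive constant
    return -0.5 * (logvar + (v - mu) ** 2 / logvar.exp()).sum(1)

def lacim_loss(x, y, enc_e, dec, pred, prior_mu, prior_logvar, n_mc=4):
    """Monte-Carlo estimate of L^e_{phi,psi} in Eq. (2) for one mini-batch."""
    mu, logvar = enc_e(x)                  # q^e_psi(s, z | x): diagonal Gaussian
    qy, inner = 0.0, 0.0
    for _ in range(n_mc):
        sz = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterized (s, z)
        py_s = F.softmax(pred(sz[:, :Q_S]), dim=1).gather(1, y[:, None]).squeeze(1)  # p_phi(y | s)
        log_px = -(((dec(sz) - x).flatten(1)) ** 2).sum(1)      # log p_phi(x | s, z), up to a constant
        log_ratio = log_px + gauss_logpdf(sz, prior_mu, prior_logvar) - gauss_logpdf(sz, mu, logvar)
        qy = qy + py_s / n_mc                                   # q^e_psi(y | x) = E_q[p_phi(y | s)]
        inner = inner + py_s * log_ratio / n_mc                 # E_q[p_phi(y|s) log(p(x|s,z) p^e(s,z) / q)]
    qy = qy.clamp_min(1e-8)
    return (-qy.log() - inner / qy).mean()
```

The overall loss Lφ,ψ then sums this term over e ∈ Etrain, as stated above.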
The predictor pφ(y|s) is a sequential of FC→BN→ReLU blocks, followed by a Softmax (or Sigmoid) for classification. The network structure and the output channel sizes for CMNIST, NICO and ADNI are introduced in supplement 7.11, 7.12, 7.13 and Tab. 13, 14. We use SGD as the optimizer: learning rate (lr) 0.5 and weight decay (wd) 1e-5 for CMNIST; lr 0.01, decaying by 0.2× every 60 epochs, and wd 5e-5 for NICO and ADNI (for ADNI, wd is 2e-4). The batch sizes are set to 256, 30 and 4 for CMNIST, NICO and ADNI. “FC” and “BN” stand for Fully-Connected and Batch-Normalization.

Results. We report accuracy over three runs for each method. As shown in Tab. 2, our LaCIM-d performs comparably to or better than the others on all applications, except the 99.3 achieved by SDA on CMNIST, which is comparable to the result on the original MNIST. This is because, during training, SDA implements data augmentation with random colors, which decorrelates color and label. In the general case where S cannot be explicitly extracted, SDA is not tractable.

Table 2: Accuracy (%) on NICO, CMNIST and ADNI.
Method | NICO (m = 8) | NICO (m = 14) | CMNIST (m = 2) | ADNI (C: Age) | ADNI (C: TAU)
CE X → Y | 60.67 ± 2.52 | 59.00 ± 1.73 | 97.87 ± 0.19 | 63.06 ± 2.26 | 64.58 ± 0.90
DANN | 59.33 ± 4.93 | 60.00 ± 2.65 | 97.42 ± 0.13 | 60.84 ± 1.83 | 64.58 ± 0.90
MMD-AAE | 61.33 ± 2.89 | 66.33 ± 3.21 | 81.23 ± 7.80 | 62.43 ± 2.42 | 65.62 ± 0.00
DIVA | 60.67 ± 2.08 | 58.67 ± 1.53 | 97.97 ± 0.19 | 61.37 ± 3.30 | 65.10 ± 0.90
SDA | - | - | 99.37 ± 0.03 | - | -
IRM | 61.67 ± 4.16 | 65.00 ± 3.00 | 98.18 ± 0.22 | 63.49 ± 1.59 | 65.10 ± 0.90
CE X, ds → Y | 57.33 ± 6.03 | 64.00 ± 1.00 | 98.03 ± 0.27 | 62.43 ± 0.92 | 65.62 ± 0.00
sVAE | 59.67 ± 3.79 | 64.33 ± 0.58 | 97.89 ± 0.61 | 63.67 ± 1.87 | 66.67 ± 0.91
LaCIM-ds (Ours) | 62.00 ± 1.73 | 68.00 ± 2.64 | 98.81 ± 0.14 | 65.08 ± 1.59 | 66.14 ± 0.91
LaCIM-d (Ours) | 62.67 ± 0.58 | 68.67 ± 2.64 | 98.78 ± 0.20 | 64.44 ± 0.96 | 68.23 ± 0.90
(The ADNI columns use m = 2. On NICO, we additionally implement ConvNet with Batch Balancing, a benchmark designed specifically for NICO in He et al. (2019); it achieves 60 ± 1 on m = 8 and 62.33 ± 3.06 on m = 14.)

On NICO, a larger m (with the total number of samples n fixed) can bring further benefit, which may be due to the easier satisfaction of the diversity condition in theorem 4.3. One thing worth particular mention is that on NICO and ADNI (when ds denotes TAU), our LaCIM-d performs comparably to and even better than LaCIM-ds, due to the source variables being only partially observed. For example, each ds contains only one attribute at a time in ADNI. For completeness, we conduct experiments with fully observed confounders in supplement 7.13. Besides, we apply our method to intervened data, the results of which further validate the robustness of LaCIM, as shown in supplement 7.12.

Interpretability. We visualize the learned S as side evidence of interpretability. Specifically, we select the s∗ that has the highest correlation with y among all dimensions of S, and visualize the derivatives of s∗ with respect to the image. For CE x → y and CE (x, ds) → y, we visualize the derivatives of the predicted class scores with respect to the image. As shown in Fig. 5.2, LaCIM (the 4th column) can identify more explainable semantic features, which verifies the identifiability and the effectiveness of the learning method. Supplement 7.12 provides more results.

5.3 ROBUSTNESS ON SECURITY

We consider the DeepFake-related security problem, which targets detecting slightly perturbed fake images that can spread fake news. Rossler et al. (2019) provide the FaceForensics++ dataset with 1000 Youtube videos for training and 1,000 benchmark images from other sources (OOD) for testing. 
We split the train data into m = 2 environments according to video ID. The considerable result in Tab. 5.3 verifies the potential value of LaCIM on security tasks." }, { "heading": "6 CONCLUSIONS & DISCUSSIONS", "text": "We incorporate the causal structure as prior knowledge in the proposed LaCIM, by introducing: (i) latent variables, explicitly separated into y-causative factors (a.k.a. S) and others (a.k.a. Z) which are spuriously correlated with the output; (ii) the source variable ds that explains the distributional inconsistency among domains. When the environments are diverse and numerous enough, we can successfully identify the causal invariant mechanisms, and also the y-causative factors for prediction without mixing in others. Our LaCIM shows potential value regarding robustness on OOD tasks with confounding bias, selection bias and others, such as healthcare and security. A possible drawback of our model lies in the required number of environments for identifiability (which may not be satisfied in some scenarios); relaxing this requirement is left for future work." }, { "heading": "7 SUPPLEMENTARY MATERIALS", "text": "" }, { "heading": "7.1 OOD GENERALIZATION ERROR BOUND", "text": "Denote Ep[y|x] := ∫Y y p(y|x)dy for any x, y ∈ X × Y . We have Epe [y|s] = ∫Y y p(y|s)dy; since p(y|s) is invariant across E , we can omit pe in Epe [y|s] and denote g(S) := E[Y |S]. Then, the OOD gap |Epe1 (y|x) − Epe2 (y|x)|, ∀(x, y), is bounded as follows:

Theorem 7.1 (OOD generalization error). Consider two LaCIMs Pe1 and Pe2 , and suppose that their densities, i.e., pe1(s|x) and pe2(s|x), are absolutely continuous with support (−∞,∞). For any (x, y) ∈ X × Y , assume that
• g(S) is a Lipschitz-continuous function;
• πx(s) := pe2(s|x)/pe1(s|x) is differentiable and Epe1 [πx(S)|g(S) − µ1|] < ∞ with µ1 := Epe1 [g(S)|X = x] = ∫S g(s)pe1(s|x)ds;
then we have |Epe1 (y|x) − Epe2 (y|x)| ≤ ‖g′‖∞‖π′x‖∞Varpe1 (S|X = x).

When e1 ∈ Etrain and e2 ∈ Etest, theorem 7.1 describes the generalization error on e2 for the strategy trained on e1. The bound is mainly affected by: (i) the Lipschitz constant of g, i.e., ‖g′‖∞; (ii) ‖π′x‖∞, which measures the difference between pe1(s, z) and pe2(s, z); and (iii) the Varpe1 (S|x), which measures the intensity of x → (s, z). These terms can be roughly categorized into two classes: (i) and (iii), which are related to properties of the CIMe and leave little room for improvement; and (ii), which describes the distributional change between two environments. Specifically, for the first class, (i) measures the smoothness of E(y|s) with respect to s. A smaller value of ‖g′‖∞ implies flatter regions giving rise to the same prediction result, hence easier transfer from e1 to e2 and vice versa. For term (iii), consider the deterministic setting εx = 0 (which leads to Varpe1 (S|x) = 0); then s can be determined from x for generalization if fx is a bijective function. The term (ii) measures the distributional change between the posterior distributions pe1(s|x) and pe2(s|x), which contributes to the difference during prediction: |Epe1 (y|x) − Epe2 (y|x)| = ∫S(pe1(s|x) − pe2(s|x))pfy (y|s)ds. Such a change is due to the inconsistency between the priors pe1(s, z) and pe2(s, z), which is caused by different values of the confounder ds.

Proof. 
In the following, we will derive the upper bound∣∣Epe1 [Y |X=x]− Epe2 [Y |X=x] ∣∣ ≤ ‖g′‖∞‖π′x‖∞Varpe1 (S|X = x) , where πx(s) =: pe2 (s|x) pe1 (s|x) and g(s) is assumed to be Lipschitz-continuous.\nTo begin with, note that E[Y |X] = E[E(Y |X,S)|X] = E[g(S)|X] = ∫ g(s)p(s|x)ds.\nLet p1(s|x) = pe1(s|x), p2(s|x) = pe2(s|x). For ease of notations, we use P1 and P2 denote the distributions with densities p1(s|x) and p2(s|x) and suppose S1 ∼ P1 and S2 ∼ P2, where x is omitted as the following analysis is conditional on a fixed X=x.\nThen we may rewrite the difference of conditional expectations as\nEpe2 [Y |X = x]− Epe1 [Y |X = x] = E(g(S2))− E(g(S1)), where E[g(Sj))] = ∫ g(s)pj(s|x)ds denotes the expectation over Pj .\nLet µ1 := Epe1 [g(S)|X = x] = E[g(S1)] = ∫ g(s)p1(s|x)ds. Then\nEpe2 [Y |X = x]− Epe1 [Y |X = x] = E(g(S2))− E(g(S1)) = E [g(S2)− µ1] .\nFurther, we have the following transformation E [g(S2)− µ1] = ∫ (g(s)− µ1)πx(s)p1(s|x)ds = E [(g(S1)− µ1)πx(S1)] . (3)\nIn the following, we will use the results of the Stein kernel function. Please refer to Definition 7.2 for a general definition. Particularly, for the distribution P1 ∼ p1(s|x), the Stein kernel τ1(s) is\nτ1(s) = 1\np1(s|x) ∫ s −∞ (E(S1)− t)p1(t|x)dt, (4)\nwhere E(S1) = ∫ s · p1(s|x)ds. Further, we define (τ1 ◦ g)(s) as\n(τ1 ◦ g)(s)= 1\np1(s|x) ∫ s −∞ (E(g(S1))− g(t))p1(t|x)dt= 1 p1(s|x) ∫ s −∞ (µ1 − g(t))p1(t|x)dt. (5)\nUnder the second condition listed in Theorem 7.1, we may apply the result of Lemma 7.3. Specifically, by the equation (8), we have\nE [(g(S1)− µ1)πx(S1)] = E [(τ1 ◦ g)(S1)π′x(S1)] .\nThen under the first condition in Theorem 7.1, we can obtain the following inequality by Lemma 7.4, E [(τ1 ◦ g)(S1)π′x(S1)]= E [(\n(τ1 ◦ g) τ1 π′xτ1\n) (S1) ] ≤ E [∣∣∣ (τ1 ◦ g) τ1 (S1) ∣∣∣ · ∣∣∣π′xτ1(S1)∣∣∣]\n≤ ‖g′‖∞E [| (π′xτ1) (S1)|] ≤ ‖g′‖∞‖π′x‖∞E [|τ1(S1)|] . (6)\nIn the following, we show that the Stein kernel is non-negative, which enables E [|τ1(S1)|] = E [τ1(S1)]. According to the definition, τ1(s) = 1p1(s|x) ∫ s −∞(E(S1)− t)p1(t|x)dt, where E(S1)=∫∞\n−∞ t · p1(t|x)dt. Let F1(s) = ∫ s −∞ p1(t|x)dt be the distribution function for P1. Note that∫ s\n−∞ E(S1)p1(t|x)dt = F1(s)E(S1) = F1(s) E(S1),∫ s −∞ tp1(t|x)dt = F1(s) ∫ s −∞ t p1(t|x) F1(s) dt = F1(s) E(S1|S1 ≤ s) ≤ F1(s) E(S1),\nThe last inequality is based on E(S1|S1 ≤ s)− E(S1) ≤ 0 that can be proved as the following∫ s −∞ t p1(t|x) F1(s) dt− ∫ ∞ −∞ tp1(t|x)dt = ∫ s −∞ t ( 1 F1(s) − 1 ) p1(t|x)dt− ∫ ∞ s tp1(t|x)dt\n≤ s ∫ s −∞ ( 1 F1(s) − 1 ) p1(t|x)dt− s ∫ ∞ s p1(t|x) = 0.\nTherefore, τ1(s) ≥ 0 and hence E [|τ1(S1)|] = E [τ1(S1)] in (6). Besides, by equation (9), the special case of Lemma 7.3, we have\nE [τ1(S1)] = Var(S1) = Varpe1 (S|X = x).\nTo sum up,\nE [(τ1 ◦ g)(S1)π′x(S1)] ≤ ‖g′‖∞‖πx‖∞E [τ1(S1)] = ‖g′‖∞‖π′x‖∞Varpe1 (S|X = x).\nDefinition 7.2 (the Stein Kernel τP of distribution P ). Suppose X∼P with density p. The Stein kernel of P is the function x 7→ τP (x) defined by\nτP (x) = 1\np(x) ∫ x −∞ (E(X)− y)p(y)dy, (7)\nwhere Id is the identity function for Id(x) = x. More generally, for a function h satisfying E[|h(X)|] <∞, define (τP ◦ h)(x) as\n(τP ◦ h)(x) = 1\np(x) ∫ x −∞ (E(h(X))− h(y))p(y)dy.\nLemma 7.3. For a differentiable function ϕ such that E[|(τP ◦ h)(x)ϕ′(X)|] <∞, we have E [(τP ◦ h)(x)ϕ′(X)] = E[(h(X)− E(h(X))ϕ(X)]. (8)\nProof. Let µh =: E(h(X)). 
As E(h(X)− µh) = 0,\n(τP ◦ h)(x) = 1\np(x) ∫ x −∞ (µh − h(y))p(y)dy = −1 p(x) ∫ ∞ x (µh − h(y))p(y)dy.\nThen E [(τP ◦ h)(x)ϕ′(X)]= ∫ 0 −∞ (τP ◦ h)(x)ϕ′(x)p(x)dx+ ∫ ∞ 0 (τP ◦ h)(x)ϕ′(x)p(x)dx\n= ∫ 0 −∞ ∫ x −∞ (µh − h(y))p(y)ϕ′(x)dydx− ∫ ∞ 0 ∫ ∞ x (µh − h(y))p(y)ϕ′(x)dydx\n= ∫ 0 −∞ ∫ 0 y (µh − h(y))p(y)ϕ′(x)dxdy − ∫ ∞ 0 ∫ y 0 (µh − h(y))p(y)ϕ′(x)dxdy\n= ∫ 0 −∞ ∫ y 0 (h(y)− µh)p(y)ϕ′(x)dxdy + ∫ ∞ 0 ∫ y 0 (h(y)− µh)p(y)ϕ′(x)dxdy\n= ∫ ∞ −∞ (h(y)− µh)p(y) (∫ y 0 ϕ′(x)dx ) dy= ∫ ∞ −∞ (h(y)− µh)p(y)(ϕ(y)− ϕ(0))dy\n= ∫ ∞ −∞ (h(y)− µh)p(y)(ϕ(y))dy=E[(h(X)− E(h(X))ϕ(X)]\nParticularly, taking h(X) = X and ϕ(X) = X − E(X), we immediately have E(τP (X)) = Var(X) (9)\nLemma 7.4. Assume that E(|X|) < ∞ and the density p is locally absolutely continuous on (−∞,∞) and h is a Lipschitz continuous function. Then we have |fh| ≤ ‖h′‖∞ for\nfh(x) = (τP ◦ h)(x) τP (x) = ∫ x −∞(E(h(X))− h(y))p(y)dy∫ x −∞(E(X)− y)p(y)dy .\nProof. This is a special case of Corollary 3.15 in Döbler et al. (2015), taking the constant c = 1." }, { "heading": "7.2 PROOF OF THE EQUIVALENCE OF DEFINITION 4.2", "text": "Proposition 7.5. The binary relation ∼p defined in Def. 4.2 is an equivalence relation.\nProof. The equivalence relation should satisfy three properties as follows:\n• Reflexive property: The θ ∼p θ with Mz , Ms being identity matrix and as, az being 0.\n• Symmtric property: If θ ∼p θ̃, then there exists block permutation matrices Mz and Ms such that\nTs([fx] −1 S (x)) = MsT̃ s([f̃x] −1 S (x)) + as, T z([fx] −1 Z (x)) = MzT̃ z([f̃x] −1 Z (x)) + az, pfy (y|[fx]−1S (x)) = pf̃y (y|[f̃x] −1 S (x)).\nThe we have M−1s and M −1 z are also block permutation matrices and such that:\nT̃s([f̃x] −1 S (x)) = M −1 s T s([fx] −1 S (x)) + (−as), T̃ s([f̃x] −1 Z (x)) = M −1 z T s([fx] −1 Z (x)) + (−az), pf̃y (y|[f̃x] −1 S (x)) = pfy (y|[fx] −1 S (x)).\nTherefore, we have θ̃ ∼p θ.\n• Transitive property: if θ1 ∼p θ2 and θ2 ∼p θ3 with θi := {f ix, f iy,Ts,1,Tz,1,Γs,i,Γz,i}, then we have\nTs,1((f1x,s) −1(x)) = M1sT s,2((f2x,s) −1(x)) + a1s, Tz,1((f1x,z) −1(x)) = M1zT z,2((f2x,z) −1(x)) + a2z, Ts,2((f2x,s) −1(x)) = M2sT s,3((f3x,s) −1(x)) + a2s, Tz,2((f2x,z) −1(x)) = M2zT z,3((f3z ) −1(x)) + a3x,z\nfor block permutation matrices M1s ,M 1 z ,M 2 s ,M 2 z and vectors a 1 s, a 2 s, a 1 z, a 2 z . Then we have\nTs,1((f1x,s) −1(x)) = M2sM 1 sT s,3((f3x,s) −1(x)) + (M2s a 1 s) + a 2 s, Tz,1((f1x,z) −1(x)) = M2zM 1 zT z,3((f3x,z) −1(x)) + (M2z a 1 z) + a 2 z.\nBesides, it is apparent that\npf1y (y|(f 1 x) −1 s (x)) = pf2y (y|(f 2 x) −1 s (x)) = pf3y (y|(f 3 x) −1 s (x)). (10)\nTherefore, we have θ1 ∼p θ3 since M2sM1s and M2zM1z are also permutation matrices.\nWith above three properties satisfied, we have that ∼p is a equivalence relation." }, { "heading": "7.3 PROOF OF THEOREM 4.3", "text": "In the following, we write pe(x, y) as p(x, y|de) and also Γt=s,zc := Γt=s,z(de), Sc,i = Si(d e), Zc,i = Zi(d e). To prove the theorem 4.3, we first prove the theorem 7.6 for the simplest case when c|de = de, then we generalize to the case when C := ∪rCr. The overall roadmap is as follows: we first prove the ∼A-identifiability in theorem 7.9, and the combination of which with lemma 7.12, 7.11 give theorem 7.6 in the simplest case when c|de = de. Then we generalize the case considered in theorem 7.6 to the more general case when C := ∪rCr. Theorem 7.6 (∼p-identifiability). 
For θ in the LaCIM peθ(x, y) ∈ Pexp for any e ∈ Etrain, we assume that (1) the CIMe satisfies that fx, f ′x and f ′′x are continuous and that fx, fy are bijective; (2) that the T ti,j are twice differentiable for any t = s, z, i ∈ [qt], j ∈ [kt]; (3) the exogenous variables satisfy that the characteristic functions of εx, εy are almost everywhere nonzero; (4) the number of environments, i.e., m ≥ max(qs ∗ ks, qz ∗ kz) + 1 and [ Γt=s,zde2 − Γ t=s,z de1 , ...,Γ t=s,z dem − Γ t=s,z de1 ] have full column rank for both t = s and t = z, we have that the parameters θ := {fx, fy,Ts,Tz} are ∼p identifiable.\nTo prove theorem 7.6, We first prove the ∼A-identifiability that is defined as follows: Definition 7.7 (∼A-identifiability). The definition is the same with the one defined in 4.2, with Ms,Mz being invertible matrices which are not necessarily to be the permutation matrices in Def. 4.2. Proposition 7.8. The binary relation ∼A defined in Def. 7.7 is an equivalence relation.\nProof. The proof is similar to that of proposition 7.5.\nThe following theorem states that any LaCIM that belongs to Pexp is ∼A-identifiable. Theorem 7.9 (∼A-identifiability). For θ in the LaCIM peθ(x, y) ∈ Pexp for any e ∈ Etrain, we assume (1) the CIMe satisfies that fx, fy are bijective; (2) the T ti,j are twice differentiable for any t = s, z, i ∈ [qt], j ∈ [kt]; (3) the exogenous variables satisfy that the characteristic functions of εx, εy are almost everywhere nonzero; (4) the number of environments, i.e., m ≥ max(qs ∗ ks, qz ∗ kz) + 1 and [ [Γtde2 − Γtde1 ]T, ..., [Γtdem − Γtde1 ]T ]T have full column rank for t = s, z, we have that the parameters {fx, fy,Ts,Tz} are ∼p identifiable.\nProof. Suppose that θ = {fx, fy,Ts,Tz} and θ̃ = {f̃x, g̃y, T̃s, T̃z} share the same observational distribution for each environment e ∈ Etrain, i.e.,\npfx,fy,Ts,Γs,Tz,Γz (x, y|de) = pf̃x,f̃y,T̃s,Γ̃s,T̃z,Γ̃z (x, y|d e). (11)\nThen we have\npfx,fy,Ts,Γs,Tz,Γz (x|de) = pf̃x,f̃y,T̃s,Γ̃s,T̃z,Γ̃z (x|d e) (12) =⇒ ∫ S×Z pfx(x|s, z)pTs,Γs,Tz,Γz (s, z|de)dsdz = ∫ S×Z pf̃x(x|s, z)pT̃s,Γ̃s,T̃z,Γ̃z (s, z|d e)dsdz\n(13) =⇒ ∫ X pεx(x− x̄)pTs,Γs,Tz,Γz (f−1x (x̄)|de)volJf−1x (x̄)dx̄ (14)\n= ∫ X pεx(x− x̄)pT̃s,Γ̃s,T̃z,Γ̃z (f̃ −1 x (x̄)|de)volJf̃−1x (x̄)dx̄ (15)\n=⇒ ∫ X p̃Ts,Γs,Tz,Γz,fx(x̄|de)pεx(x− x̄)dx̄ = ∫ X p̃T̃s,Γ̃s,T̃z,Γ̃z,f̃x(x̄|d e)pεx(x− x̄)dx̄ (16) =⇒ (p̃Ts,Γs,Tz,Γz,fx ∗ pεx)(x|de) = (p̃T̃s,Γ̃s,T̃z,Γ̃z,f̃x) ∗ pεx(x|d e) (17) =⇒ F [p̃Ts,Γs,Tz,Γz,fx ](ω)ϕεx(ω) = F [p̃T̃s,Γ̃s,T̃z,Γ̃z,f̃x ](ω)ϕεx(ω) (18) =⇒ F [p̃Ts,Γs,Tz,Γz,fx ](ω) = F [p̃T̃s,Γ̃s,T̃z,Γ̃z,f̃x ](ω) (19) =⇒ p̃Ts,Γs,Tz,Γz,fx(x|de) = p̃T̃s,Γ̃s,T̃z,Γ̃z,f̃x(x|d e) (20)\nwhere volJf (X) := det(Jf (X)) for any square matrix X and function f with “J” standing for the Jacobian. The p̃Ts,Γs,Tz,Γz,fx(x) in Eq. (16) is denoted as pTs,Γs,Tz,Γz (f −1 x (x|de)volJf−1(x). The ’*’ in Eq. (17) denotes the convolution operator. The F [·] in Eq. (18) denotes the Fourier transform, where φεx(ω) = F [pεx ](ω). Since we assume that the ϕεx(ω) is non-zero almost everywhere, we can drop it to get Eq. (20). 
Similarly, we have that:\npfy,Ts,Γs(y|de) = pf̃y,T̃s,Γ̃s(y|d e) (21) =⇒ ∫ S pfy (y|s)pTs,Γs(s|de)ds = ∫ S pf̃y (y|s)pT̃s,Γ̃s(s|d e)ds (22)\n=⇒ ∫ Y pεy (y − ȳ)pTs,Γs(f−1y (ȳ)|de)volJf−1y (ȳ)dȳ (23)\n= ∫ Y pεy (y − ȳ)pT̃s,Γ̃s(f̃ −1 y (ȳ)|de)volJg̃−1(ȳ)dȳ (24)\n=⇒ ∫ S p̃Ts,Γs,fy (ȳ|de)pεy (y − ȳ)dȳ = ∫ S p̃T̃s,Γ̃s,f̃y (ȳ|d e)pεy (y − ȳ)dȳ (25) =⇒ (p̃Ts,Γs,fy ∗ pεy )(y|de) = (p̃T̃s,Γ̃s,f̃y ∗ pεy )(y|d e) (26)\n=⇒ F [p̃Ts,Γs,fy ](ω)ϕεy (ω) = F [p̃T̃s,Γ̃s,f̃y ](ω)ϕεy (ω) (27) =⇒ F [p̃Ts,Γs,fy ](ω) = F [p̃T̃s,Γ̃s,f̃y ](ω) (28) =⇒ p̃Ts,Γs,fy (y) = p̃Ts,Γs,f̃y (y), (29)\nand that\npfx,fyTs,Γs,Tz,Γz (x, y|de) = pf̃x,f̃y,T̃s,Γ̃s,T̃z,Γ̃z (x, y|d e) (30) =⇒ ∫ S×Z pfx(x|s, z)pfy (y|s)pTs,Γs,Tz,Γz (s, z|de)dsdz\n= ∫ S×Z pf̃ (x|s, z)pf̃y (y|s)pT̃s,Γ̃s,T̃z,Γ̃z (s, z|d e)dsdz (31)\n=⇒ ∫ V pε(v − v̄)pTs,Γs,Tz,Γz (h−1(v̄)|de)volJh−1(v̄)dv̄ (32)\n= ∫ V pε(v − v̄)pT̃s,Γ̃s,T̃z,Γ̃z (h̃ −1(v̄)|de)volJh̃−1(v̄)dv̄ (33)\n=⇒ ∫ S×Z p̃Ts,Γs,Tz,Γz,h,c(v̄|d)pε(v − v̄)dv̄ = ∫ S×Z p̃T̃s,Γ̃s,T̃z,Γ̃z,h̃,de(v̄|d e)pε(v − v̄)dv̄ (34)\n=⇒ (p̃Ts,Γs,Tz,Γz,h ∗ pε)(v) = (p̃T̃s,Γ̃s,T̃z,Γ̃z,h̃ ∗ pε)(v) (35)\n=⇒ F [p̃Ts,Γs,Tz,Γz,h](ω)ϕε(ω) = F [p̃T̃s,Γ̃s,T̃z,Γ̃z,h̃](ω)ϕε(ω) (36) =⇒ F [p̃Ts,Γs,Tz,Γz,h](ω) = F [p̃T̃s,Γ̃s,T̃z,Γ̃z,h̃](ω) (37) =⇒ p̃Ts,Γs,Tz,Γz,h(v) = p̃Ts,Γs,Tz,Γz,h(v), (38)\nwhere v := [x>, y>]>, ε := [ε>x , ε > y ] >, h(v) = [[fx]−1Z (x) >, f−1y (y) >]>. According to Eq. (29), we have\nlog volJfy (y) + qs∑ i=1 logBi(f−1y,i (y))− logAi(de) + ks∑ j=1 T si,j(f −1 y,i (y))Γ s i,j(d e) = log volJf̃y (y) + qs∑ i=1 log B̃i(f̃−1y,i (y))− log Ãi(de) + ks∑ j=1 T̃ si,j(f̃ −1 y,i (y))Γ̃ s i,j(d e) (39)\nSuppose that the assumption (4) holds, then we have\n〈Ts(f−1y (y)),Γ s (dek)〉+ ∑ i log Ai(d e1) Ai(dek) = 〈T̃s(f̃−1y (y)), Γ̃ s (dek)〉+ ∑ i log Ãi(d e1) Ãi(dek) (40)\nfor all k ∈ [m], where Γ̄(d) = Γ(d)−Γ(de1). Denote b̃s(k) = ∑ i Ãi(d e1 )Ai(d ek )\nÃi(d ek )Ai(de1 ) for k ∈ [m], then we have\nΓ s,> Ts(f−1y (y)) = Γ̃ s,> T̃s(f̃−1y (y)) + b̃s, (41)\nSimilarly, from Eq. (20) and Eq. (38), there exists b̃z, b̃s such that\nΓ s,> Ts([fx] −1 S (x)) + Γ z,> Tz([fx] −1 Z (x)) = Γ̃ s,> T̃s([f̃x] −1 S (x)) + Γ̃ z,> T̃z([f̃x] −1 Z (x)) + b̃z + b̃s,\n(42)\nwhere b̃z(k) = ∑ i Z̃i(d e1 )Zi(d ek )\nZ̃i(d ek )Zi(de1 )\nfor k ∈ [m]; and that,\nΓ s,> Ts(f−1y (y)) + Γ z,> Tz([f−1x ]Z(x)) = Γ̃ s,> T̃s(f̃−1y (y)) + Γ̃ z,> T̃z([f̃−1x ]Z(x)) + b̃z + b̃s.\n(43)\nSubstituting Eq. (41) to Eq. (42) and Eq. (43), we have that\nΓ z,> Tz([f−1x ]Z(y)) = Γ̃ z,> T̃z([f̃−1x ]Z(y)) + b̃z, Γ s,> Ts([f−1x ]S(y)) = Γ̃ s,>\nT̃s([f̃−1x ]S(y)) + b̃s. (44)\nAccording to assumption (4), the Γ s,> and Γ z,> have full column rank. Therefore, we have that\nTz([f−1x ]Z(x)) = ( Γ z Γ z,>)−1 Γ̃ z,> T̃z([f̃−1x ]Z(x)) + ( Γ z Γ z,>)−1 b̃z (45)\nTs([f−1x ]S(x)) = ( Γ s Γ s,>)−1 Γ̃ s,> T̃s([f̃−1x ]S(x)) + ( Γ s Γ s,>)−1 b̃s. (46)\nTs(f−1y (y)) = ( Γ s Γ s,>)−1 Γ̃ s,> T̃s(f̃−1y (y)) + ( Γ s Γ s,>)−1 b̃s. (47)\nDenote Mz := ( Γ z Γ z,>)−1 Γ̃ z,> , Ms := ( Γ s Γ s,>)−1 Γ̃ s,> and as = ( Γ s Γ s,>)−1\nb̃s, az =( Γ z Γ z,>)−1\nb̃z . The left is to prove that Mz and Ms are invertible matrices. Denote x̄ = f−1(x). Applying the (Khemakhem, Kingma and Hyvärinen, 2020, Lemma 3) we have that there exists ks points x̄1, ..., x̄ks , ˜̄x1, ..., ˜̄xkz such that ( (Ts)′i([f −1 x ]Si(x 1 i )), ..., (T s)′i([f −1 x ]Si(x ks i )) ) for each i ∈\n[qs] and ( (Tz)′i([f −1 x ]Zi(x̃ 1 i )), ..., (T z)′i([f −1 x ]Si(x̃ kz i )) ) for each i ∈ [qt] are linearly independent.\nBy differentiating Eq. (45) and Eq. 
(46) for each x̄i with i ∈ [qs] and ˜̄xi with i ∈ [qz] respectively, we have that (\nJTs(x̄ 1), ..., JTs(x̄ ks) ) = Ms ( JTs◦f̃−1x ◦fx(x̄ 1), ..., JTs◦f̃−1x ◦f (x̄ ks) )\n(48)( JTz (˜̄x 1), ..., JTz (˜̄x kz ) ) = Mz ( JTz◦f̃−1x ◦fx( ˜̄x1), ..., JTz◦f̃−1x ◦fx( ˜̄xkz ) ) . (49)\nThe linearly independence of (\n(Ts)′i([f −1 x ]Si(x 1 i )), ..., (T s)′i([f −1 x ]Si(x ks i )) ) and(\n(Tz)′i([f −1 x ]Zi(x̃ 1 i )), ..., (T z)′i([f −1 x ]Si(x̃ kz i ))\n) imply that the ( JTs(x̄ 1), ..., JTs(x̄ ks) )\nand( JTz (˜̄x 1), ..., JTz (˜̄x kz ) )\nare invertible, which implies the invertibility of matrix Ms and Mz . The rest is to prove pfy (y|[fx]−1S (x)) = pf̃y (y|[f̃x] −1 S (x)). This can be shown by applying Eq. (31) again. Specifically, according to Eq. (31), we have that∫ X pεx(x− x̄)p(y|[fx]−1S (x̄))pTs,Γs,Tz,Γz (f −1(x̄)|de)volJf−1(x̄)dx̄\n= ∫ X pεx(x− x̄)p(y|[f̃x]−1S (x̄))pTs,Γs,Tz,Γz (f̃ −1(x̄)|de)volJf̃−1(x̄)dx̄. (50)\nDenote lTs,Γs,Tz,Γz,fy,fx,y(x) := pfy (y|[fx]−1S (x̄))pTs,Γs,Tz,Γz (f−1(x̄)|de)volJf−1x (x̄), we have∫ X pεx(x− x̄)lTs,Γs,Tz,Γz,fy,fx,y(x̄)dx̄ = ∫ X pεx(x− x̄)lT̃s,Γ̃s,T̃z,Γ̃z,f̃y,f̃x,y(x̄)dx̄ (51)\n=⇒(lTs,Γs,Tz,Γz,fy,fx,y ∗ pεx)(x|de) = (lT̃s,Γ̃s,T̃z,Γ̃z,f̃y,f̃x,y ∗ pεx)(x|d e) (52)\n=⇒F [lT̃s,Γ̃s,T̃z,Γ̃z,f̃y,f̃x,y](ω)ϕεx(ω) = F [lTs,Γs,Tz,Γz,fy,fx,y](ω)ϕεx(ω) (53) =⇒F [lTs,Γs,Tz,Γz,fy,fx,y](ω) = F [lT̃s,Γ̃s,T̃z,Γ̃z,f̃y,f̃x,y](ω) (54) =⇒lTs,Γs,Tz,Γz,fy,fx,y(x) = lT̃s,Γ̃s,T̃z,Γ̃z,f̃y,f̃x,y(x) (55) =⇒pfy (y|[fx]−1S (x))pTs,Γs,Tz,Γz (f −1(x)|de)volJf−1x (x)\n= pf̃y (y|[f̃x] −1 S (x))pT̃s,Γ̃s,T̃z,Γ̃z (f̃ −1(x)|de)volJf̃−1x (x). (56)\nTaking the log transformation on both sides of Eq. (56), we have that\nlog pfy (y|[fx]−1S (x)) + log pTs,Γs,Tz,Γz (f −1(x)|de) + log volJf−1x (x)\n= log pf̃y (y|[f̃x] −1 S (x)) + log pT̃s,Γ̃s,T̃z,Γ̃z (f̃ −1(x)|de) + log volJf̃−1x (x). (57)\nSubtracting Eq. (57) with y2 from Eq. (57) with y1, we have\npfy (y2|[fx]−1S (x)) pfy (y1|[fx]−1S (x)) = pf̃y (y2|[f̃x] −1 S (x)) pf̃y (y1|[f̃x] −1 S (x))\n(58)\n=⇒ ∫ Y pfy (y2|[fx]−1S (x)) pfy (y1|[fx]−1S (x)) dy2 = ∫ Y pf̃y (y2|[f̃x] −1 S (x)) pf̃y (y1|[f̃x] −1 S (x)) dy2 (59) =⇒pfy (y1|[fx]−1S (x)) = pf̃y (y1|[f̃x] −1 S (x)), (60)\nfor any y1 ∈ Y . This completes the proof.\nUnderstanding the assumption (4) in Theorem 7.9 and 7.6. Recall that we assume the confounder ds in LaCIM is the source variable for generating data in corresponding domain. Here we also use the C to denote the space of ds (since ds := c), then we have the following theoretical conclusion that the as long as the image set of C is not included in any sets with Lebesgue measure 0, the assumption (4) holds. This conclusion means that the assumption (4) holds generically.\nTheorem 7.10. Denote ht=s,z(d) := (\nΓt1,1(d)− Γt1,1(de1), ...,Γtqt,kt(d)− Γ t 1,1(d\ne1) )>\n, h(C) := hs(S) ⊕ hz(Z) ⊂ Rqz∗kz ⊕ Rqs∗ks , then assumption (4) holds if h(C) is not included in any zero-measure set of Rqz∗kz ⊕ Rqs∗ks . Denote rs := qs ∗ ks and rz := qz ∗ kz .\nProof. With loss of generality, we assume that rs ≤ rz . Denote Q as the set of integers q such that there exists de2 , ..., dq+1 that the rank([hz(de2), ..., hz(deq+1)]) = min(q, rz) and rank([hs(de2), ..., hs(deq+1)]) = min(q, rs). Denote u := max(Q). We discuss two possible cases for u, respectively:\n• Case 1. u < rs ≤ rz . Then there exists de2 , ..., deu+1 s.t. hz(de2), ..., hz(deu+1) and hs(de2), ..., hs(deu+1) are linearly independent. Then ∀c, we have hz(d) ∈ L(hz(de2), ..., hz(deu+1)) or hs(d) ∈ L(hs(de2), ..., hs(deu+1)). 
Therefore, so we have hz(d) ⊕ hs(d) ∈ [L(hz(de2), ..., hz(deu+1))⊕ Rrs ] ∪ [Rrz ⊕ L(hs(de2), ..., hs(deu+1))], which has measure 0 in Rrz ⊕ Rrs .\n• Case 2. rs ≤ u < rz . Then there exists de2 , ..., deu+1 s.t. hz(de2), ..., hz(deu+1) are linearly independent and rank([hs(de1), ..., hs(deu)]) = rs. Then ∀c, we have hz(d) ∈ L(hz(de1), ..., hz(deu+1)), which means that hz(d)⊕hs(d) ∈ L(hz(de1), ..., hz(deu+1))⊕ Rrs , which has measure 0 in Rrz ⊕ Rrs .\nThe above two cases are contradict to the assumption that h(C) is not included in any zero-measure set of Rrz ⊕ Rrs .\nLemma 7.11. Consider the cases when ks ≥ 2. Then suppose the assumptions in theorem 7.9 are satisfied. Further assumed that\n• The sufficient statistics Tsi,j are twice differentiable for each i ∈ [qs] and j ∈ [ks].\n• fy is twice differentiable.\nThen we have Ms in theorem 7.9 is block permutation matrix.\nProof. Directly applying (Khemakhem, Kingma and Hyvärinen, 2020, Theorem 2) with fx, A, b,T, x replaced by fy,Ms, as,Ts, y.\nLemma 7.12. Consider the cases when ks = 1. Then suppose the assumptions in theorem 7.9 are satisfied. Further assumed that\n• The sufficient statistics Tsi are not monotonic for i ∈ [qs].\n• g is smooth.\nThen we have Ms in theorem 7.9 is block permutation matrix.\nProof. Directly applying (Khemakhem, Kingma and Hyvärinen, 2020, Theorem 3) with fx, A, b,T, x replaced by fy,Ms, as,Ts, y.\nProof of Theorem 7.6. According to theorem 7.9, there exist invertible matrices Ms and Mz such that\nT(f−1x (x)) = AT̃(f̃ −1 x (x)) + b Ts([f−1x ]S(x)) = MsT̃ s([f̃−1x ]S(x)) + as. Ts(f−1y (y)) = MsT̃ s(f̃−1y (y)) + as,\nwhere T = [Ts,>,Tz,>]>, and\nA = ( Ms 0 0 Mz ) . (61)\nBy further assuming that the sufficient statistics Tsi,j are twice differentiable for each i ∈ [qs] and j ∈ [ks] for ks ≥ 2 and not monotonic for ks = 1. Then we have that Ms is block permutation matrix. By further assuming that Tzi,j are twice differentiable for each i ∈ [nz] and j ∈ [kz] for kz ≥ 2 and not monotonic for kz = 1 and applying the lemma 7.11 and 7.12 respectively, we have that A is block permutation matrix. Therefore, Mz is also a block permutation matrix.\nProof of Theorem 4.3. We consider the general case when C := ∪Rr=1Cr, in which each Cr can be simplified as a representative point cr. For environment de, let Pde = [P(C = c1|de), · · · ,P(C = cR|de)] be the vector of probability mass of C in the environment de. And Etrain hasm environments with indexes de1 , · · · , dem . The latent factors (S,Z) belongs to the exponential family distribution p(s, z|c) = pTz,Γz(d)(z)pTs,Γs(d)(s). Suppose that θ = {fx, fy,Ts,Tz} and θ̃ = {f̃x, g̃y, T̃s, T̃z} share the same observational distribution for each environment, i.e., pθ(x, y|de) = pθ̃(x, y|de), then we have that\nR∑ r=1 pθ(x, y|cr) P(C=cr|de) = R∑ r=1 pθ̃(x, y|cr) P(C=cR|d e). (62)\nLet ∆x,y = [pθ(x, y|c1)−pθ̃(x, y|c1), · · · , pθ(x, y|cm)−pθ̃(x, y|cm)]T, then Eq. (62) can be written as A∆x,y = 0. Denote A := P>de1 ∈ Rm×R. According the diversity condition, we have that A and the [[Γt(c2)−Γt(c1)]T, ..., [Γt(cm)−Γt(c1)]T]T have full column rank, therefore we have that ∆x,y = 0, i.e. pθ(x, y|cr) = pθ̃(x, y|cr) for each r ∈ [R]. The left proof is the same with the one in theorem 7.6." }, { "heading": "7.4 PROOF OF THEOREM 4.4", "text": "Proof of Theorem 4.4. Due to Eq. (62), it is suffices to prove the conclusion for every cr ∈ {cr}r∈[R]. 
Motivated by Barron and Sheu (1991, Theorem 2) that the distribution pe(s, z) defined on bounded set can be approximated by a sequence of exponential family with sufficient statistics denoted as polynomial terms, therefore the Tt=s,z are twice differentiable hence satisfies the assumption (2) in theorem 4.3 and assumption (1) in lemma 7.11. Besides, the lemma 4 in Barron and Sheu (1991) informs us that the KL divergence between pθ0(s, z|cr) (θ0 := (fx, fy,T z,T s,Γz0,Γs0) and pθ1(s, z|cr) (θ1 := (fx, fy,T z,T s,Γz1,Γs1) (the pθ0(s, z|cr), pθ1(s, z|cr) belong to exponential family with polynomial sufficient statistics terms) can be bounded by the `2 norm of [(Γs(cr) − Γs1(cr))\n>, (Γz0(cr) − Γz1(cr))>]>. Therefore, ∀ > 0, there exists a open set of Γ(cr) such that the DKL(p(s, z|cr), pθ(s, z|cr)) < . Such an open set is with non-zero Lebesgue measurement therefore can satisfy the assumption (4) in theorem 4.3, according to result in theorem 7.10. The left is to prove that for any p defined by a LaCIM following Def. 4.1, there is a sequence of {pm}n ∈ Pexp such that the dPok(p, pn)→ 0 that is equivalent to pn d→ p. For any A,B, we consider to prove that\nIn ∆ = ∣∣∣∣p(x ∈ A, y ∈ B|cr)− pn(x ∈ A, yn ∈ B|cr)∣∣∣∣→ 0, (63) where pn(x ∈ A, yn ∈ B|cr) = ∫ S ∫ Z p(x ∈ A|s, z)p(yn ∈ B|s)pn(s, z|cr)dsdz with\nyn(i) = exp((fy,i(s) + εy,i)/Tn)∑ i exp((fy,i(s) + εy,i)/Tn) , i = 1, ..., k, (64)\nfor y ∈ Rk denoting the k-dimensional one-hot vector for categorical variable and εy,1,...,k are Gumbel i.i.d. According to (Maddison et al., 2016, Proposition 1) that the yn(i) d→ y(i) with\np(y(i) = 1) = exp(fy,i(s))∑ i exp((fy,i(s)) , as Tn → 0. (65)\nAs long as fy is smooth, we have that the p(yn|s) is continuous. We have that In = ∣∣∣p(x ∈ A, y ∈ B|cr)− ∫\nS×Z p(x ∈ A|s, z)p(yn ∈ B|s)pn(s, z|cr)dsdz ∣∣∣ ≤ ∣∣∣p(x ∈ A, y ∈ B|cr)− p(x ∈ A, yn ∈ B|cr)∣∣∣\n+ ∣∣∣p(x ∈ A, yn ∈ B|cr)− ∫\nS×Z p(x ∈ A|s, z)p(yn ∈ B|s)pn(s, z|cr)dsdz ∣∣∣ = ∣∣∣ ∫ S×Z p(x ∈ A|s, z) (p(y ∈ B|s)− p(yn ∈ B|s)) p(s, z|cr)dsdz ∣∣∣\n+ ∣∣∣ ∫ S×Z p(x ∈ A|s, z)p(yn ∈ B|s) (p(s, z|cr)− pn(s, z|cr)) ∣∣∣\n≤ ∣∣∣ ∫ Ms×Mz p(x ∈ A|s, z) (p(y ∈ B|s)− p(yn ∈ B|s)) p(s, z|cr)dsdz ∣∣∣︸ ︷︷ ︸\nIn,1 + ∣∣∣ ∫\n(Ms×Mz)cr p(x ∈ A|s, z) (p(y ∈ B|s)− p(yn ∈ B|s)) p(s, z|cr)dsdz ∣∣∣︸ ︷︷ ︸ In,2\n+ ∣∣∣ ∫ Ms×Mz p(x ∈ A|s, z)p(yn ∈ B|s) (p(s, z|cr)− pn(s, z|cr)) ∣∣∣︸ ︷︷ ︸\nIn,3 + ∣∣∣ ∫\n(Ms×Mz)cr p(x ∈ A|s, z)p(yn ∈ B|s) (p(s, z|cr)− pn(s, z|cr)) ∣∣∣︸ ︷︷ ︸ In,4 . (66)\nFor In,1, if y is itself additive model with y = fy(s) + εy, then we just set yn d = y, then we have that In,1 = 0. Therefore, we only consider the case when y denotes the categorical variable with softmax distribution, i.e., Eq. (65). ∀cr ∈ C := {c1, ..., cR} and ∀ > 0, there exists M crs and M crz such that p(s, z ∈ M crs ×M crz |cr) ≤ ; Denote Ms ∆ = ∪mk=1M crs and Mz ∆ = ∪mk=1M crz , we have that p(s, z ∈Ms ×Mz|c) ≤ 2 for all cr ∈ C. Since ∀s1 ∈Ms, ∃Ns1 such that ∀n ≥ Ns1 , we have that\n∣∣∣p(y ∈ B|s1) − p(y ∈ B|s1)| ≤ from that yn d→ y. Besides, there exists open set Os1 such that ∀s ∈ Os1 and∣∣∣p(y ∈ B|s1)− p(y ∈ B|s1)| ≤ , ∣∣∣p(yn ∈ B|s1)− p(yn ∈ B|s1)| ≤ . Again, according to Heine–Borel theorem, there exists finite s, namely s1, ..., sl such that Ms ⊂ ∪li=1O(si). Then there exists N\n∆ = max{Ns1 , ..., Nsl} such that ∀n ≥ N , we have that∣∣p(y ∈ B|s)− p(yn ∈ B|s)∣∣ ≤ 3 , ∀s ∈Ms. (67)\nTherefore, In,1 ≤ ∫ Ms×Mz 3 p(x ∈ A|s, z)p(s, z|c)dsdz ≤ 3 . Hence, In,1 → 0 as n → ∞.\nBesides, we have that In,2 ≤ ∫ Ms×Mz 2 p(s, z|cr)dsdz ≤ 2 . 
Therefore, we have that ∣∣ ∫ S×Z p(x ∈\nA|s, z) (p(y ∈ B|s)− p(yn ∈ B|s)) p(s, z|cr)dsdz ∣∣→ 0 as n→∞. For In,3, we have that\nIn,3 = ∣∣∣∣ ∫ Ms×Mz p(x ∈ A|s, z)p(yn ∈ B|s)1(s, z ∈Ms ×Mz) (p(s, z|cr)− pn(s, z|cr)) dsdz ∣∣∣∣\n≤ ∣∣∣∣ ∫ Ms×Mz p(x ∈ A|s, z)p(yn ∈ B|s)p(s, z|cr) (\n1 p(s, z ∈Ms ×Mz|cr) − 1 ) dsdz ∣∣∣∣︸ ︷︷ ︸ In,3,1\n+ ∣∣∣∣ ∫ Ms×Mz p(x ∈ A|s, z)p(yn ∈ B|s)p(s, z|cr) (\n1 p(s, z ∈Ms ×Mz|cr) − 1 ) dsdz ∣∣∣∣︸ ︷︷ ︸ In,3,2 .\n(68)\nThe In,3,1 ≤ 1− . Denote p̃(s, z|cr) := p(s,z|cr)1(s,z∈Ms×Mz)\np(s,z∈Ms×Mz|cr) , according to (Barron and Sheu, 1991, Theorem 2), there exists a sequence of pn(s, z|c) defined on a compact support Ms ×Mz such that ∀cr ∈ C, we have that\npn(s, z|cr) d→ p(s, z|cr).\nApplying again the Heine–Borel theorem, we have that ∀ , ∃N such that ∀n ≥ N , we have∣∣∣p̃(s, z|cr)− pn(s, z|cr)∣∣∣ ≤ , (69)\nwhich implies that In,3,2 → 0 as n→∞ combining with the fact that p(x, y|s, z) is continuous with respect to s, z. For In,4, we have that\nIn,4 = ∣∣∣∣ ∫ Ms×Mz p(x ∈ A|s, z)p(yn ∈ B|s)p(s, z|cr) ∣∣∣∣ ≤ ∣∣∣∣ ∫ Ms×Mz p(s, z|cr) ∣∣∣∣ ≤ , (70) where the first equality is from that the pn(s, z|cr) is defined on Ms ×Mz . Then we have that∣∣∣∣ ∫\nS×Z p(x ∈ A|s, z)p(yn ∈ B|s) (p(s, z|cr)− pn(s, z|c)) ∣∣∣∣→ 0, as n→∞. (71) The proof is completed.\n7.5 REPARAMETERIZATION FOR LACIM-d\nWe provide an alternative training method to avoid parameterization of prior p(s, z|de) to increase the diversity of generative models in different environments. Specifically, motivated by Hyvärinen and Pajunen (1999) that any distribution can be transformed to isotropic Gaussian with the density denoted by pGau, we have that for any e ∈ Etrain, we have\npe(x, y) = ∫ S×Z pfx(x|s, z)pfy (y|s)p(s, z|de)dsdz\n= ∫ S×Z p(x|(ρes)−1(s′), (ρez)−1(z′))p(y|ρs(s′))pGau(s′, z′)ds′dz′,\nwith s′, z′ := ρes(s), ρ e z(z) ∼ N (0, I). We can then rewrite ELBO for LaCIM-d for environment e as:\nLeφ,ψ,ρe = Epe(x,y) [ − log qeψ(y|x) ] + Epe(x,y) [ −Eqeψ(s,z|x)\nqψ(y|(ρes)−1(s)) qeψ(y|x) log pφ((ρ e s) −1(s), (ρez) −1(z))pGau(s, z) qeψ(s, z|x)\n] .\n(72)" }, { "heading": "7.6 IDENTIFIABILITY", "text": "Earlier works that identify the latent confounders rely on strong assumptions regarding the causal structure, such as the linear model from latent to observed variable or ICA in which the latent component are independent Silva et al. (2006), or noise-free model Shimizu et al. (2009); Davies (2004). The Hoyer et al. (2008); Janzing, Peters, Mooij and Schölkopf (2012) extend to the additive noise model (ANM) and other causal discovery assumptions. Although the Lee et al. (2019) relaxed the constraints put on the causal structure, it required the latent noise is with small strength, which does not match with many realistic scenarios, such as the structural MRI of Alzheimer’s Disease considered in our experiment. The works which also based on the independent component analysis (ICA), i.e., the latent variables are (conditionally) independent, include Davies (2004); Eriksson and Koivunen (2003); recently, a series of works extend the above results to deep nonlinear ICA (Hyvarinen and Morioka, 2016; Hyvärinen et al., 2019; Khemakhem, Kingma and Hyvärinen, 2020; Khemakhem, Monti, Kingma and Hyvärinen, 2020; Teshima et al., 2020). However, these works require that the value of confounder of these latent variables is fixed, which cannot explain the spurious correlation in a single dataset. In contrast, our result can incorporate these scenarios by assuming that each sample has a specific value of the confounder. 
Other works assume discrete distribution for latent variables, such as Janzing, Sgouritsa, Stegle, Peters and Schölkopf (2012); Kocaoglu et al. (2018); Sgouritsa et al. (2013). However, in the literature, no existing works can disentangle the prediction-causative features from others, in the scenario of avoiding spurious correlation in order for OOD generalization." }, { "heading": "7.7 COMPARISON WITH EXISTING WORKS", "text": "" }, { "heading": "7.7.1 Y → S OR S → Y ?", "text": "Many existing works Rojas-Carulla et al. (2018); Khemakhem, Monti, Kingma and Hyvärinen (2020); Ilse et al. (2020; 2019) assumed Y → S(X) as the causal direction. Such an difference from ours\ncan mainly be contributed to the generating process of Y . Different understanding leads to different causal graph. The example of digital hand-writing in Peters et al. (2017) provides a good explanation. Consider the case that the writer is provided with a label first (such as ”2”) before writing the digit (denoted as X), then it should be Y → X . Consider another case, when the writing is based on the incentive (denoted as S) of which digit to write, then the writer record the label Y and the digit X concurrently, in which case it should be X ← S → Y . For Y → S, the Y is thought to be the source variable that generates the latent components and is observed before X . In contrast, we define Y as ground-truth labels given by humans. Taking image classification as an example, it is the human that give the classification of all things such as animals. In this case, it can be assumed that the label given by humans are ground-truth labels. This assumption can be based by the work Biederman (1987) in the field of psychology that humans can factorize the image X by many components due to the powerful perception learning ability of human beings. These components which denoted as S, can be accurately detected by humans, therefore we can approximately assume that it is the S generating the label Y . Consider the task of early prediction in Alzheimer’s Disease, the disease label is given based on the pathological analysis and observed after the MRI X . Such a labelling outcome can be regarded as the ground-truth which itself is defined by medical science. The corresponding pathology features, as the evidences for labelling, can also thought as the generators of X . In these cases, it is more appropriate to assume the Y as the outcome than the cause. For example, the Peters et al. (2016); Kuang et al. (2018) assumed XS → Y . As an adaptation to sensory-level data such as image, we assume S → Y with S are latent variables to model high-level explanatory factors, which coincides with existing literature Teshima et al. (2020). Another difference lies in the definition of Y . The Invariant Risk Minimization (we will give a detailed comparison later) Arjovsky et al. (2019) assumes that X → S̃ → Y by defining the Y as the label with noise. The S̃ denoted as the extracted hidden components by observer." }, { "heading": "7.7.2 COMPARISONS WITH DATA AUGMENTATION & ARCHITECTURE DESIGN", "text": "The goal of data augmentation Shorten and Khoshgoftaar (2019) is increase the variety of the data distribution, such as geometrical transformation Kang et al. (2017); Taylor and Nitschke (2017), flipping, style transfer Gatys et al. (2015), adversarial robustness Madry et al. (2017). 
On the other way round, an alternative kind of approaches is to integrate into the model corresponding modules that improve the robustness to some types of variations, such as Worrall et al. (2017); Marcos et al. (2016).\nHowever, these techniques can only make effect because they are included in the training data for neural network to memorize Zhang et al. (2016); besides, the improvement is only limited to some specific types of variation considered. As analyzed in Xie et al. (2020); Krueger et al. (2020), the data augmentation trained with empirical risk minimization or robust optimization Ben-Tal et al. (2009) such as adversarial training Madry et al. (2017); Sagawa et al. (2019) can only achieve robustness on interpolation (convex hull) rather than extrapolation of training environments." }, { "heading": "7.7.3 COMPARISONS WITH EXISTING WORKS IN DOMAIN ADAPTATION", "text": "Apparently, the main difference lies in the problem setting that (i) the domain adaptation (DA) can access the input data of the target domain while ours cannot; and (ii) our methods need multiple training data while the DA only needs one source domain. For methodology, our LaCIM shares insights but different with DA. Specifically, both methods assume some types of invariance that relates the training domains to the target domain. For DA, one stream is to assume the same conditional distribution shared between the source and the target domain, such as covariate shift Huang et al. (2007); Ben-David et al. (2007); Johansson et al. (2019); Sugiyama et al. (2008) in which P (Y |X) are assumed to be the same across domains, concept shift Zhang et al. (2013) in which the P (X|Y ) is assumed to be invariant. Such an invariance is related to representation, such as Φ(X) in Zhao et al. (2019) and P (Y |Φ(X)) in Pan et al. (2010); Ganin et al. (2016); Magliacane et al. (2018). However, these assumptions are only distribution-level rather than the underlying causation which takes the data-generating process into account. Taking the image classification again as an example, our method first propose a causal graph in which the latent factors are introduced as the explanatory/causal factors of the observed variables. These are supported by the framework of generative model Khemakhem, Kingma and Hyvärinen (2020); Khemakhem, Monti, Kingma and Hyvärinen (2020); Kingma and Welling (2014); Suter et al. (2019) which has natural connection with the causal\ngraph Schölkopf (2019) that the edge in the causal graph reflects both the causal effect and also the generating process. Until now, perhaps the most similar work to us are Romeijn and Williamson (2018) and Teshima et al. (2020) which also need multiple training domains and get access to a few samples in the target domain. Both work assumes the similar causal graph with us but unlike our LaCIM, they do not separate the latent factors which can not explain the spurious correlation learned by supervised learning Ilse et al. (2020). Besides, the multiple training datasets in Romeijn and Williamson (2018) refer to intervened data which may hard to obtain in some applications. We have verified in our experiments that explicitly disentangle the latent variables into two parts can result in better OOD prediction power than mixing them together." 
}, { "heading": "7.7.4 COMPARISONS WITH DOMAIN GENERALIZATION", "text": "For domain generalization (DG), similar to the invariance assumption in DA, a series of work proposed to align the representation Φ(X) that assumed to be invariant across domains Li et al. (2017; 2018); Muandet et al. (2013). As discussed above, these methods lack the deep delving of the underlying causal structure and precludes the variations of unseen domains.\nRecently, a series of works leverage causal invariance to enable OOD generalization on unseen domains, such as Ilse et al. (2019) which learns the representation that is domain-invariant. Notably, the Invariant Causal Prediction Peters et al. (2016) formulates the assumption in the definition of Structural Causal Model and assumes that Y = XSβ?S + εY where εY satisfies Gaussian distribution and S denotes the subset of covariates ofX . The Rojas-Carulla et al. (2018); Bühlmann (2018) relaxes such an assumption by assuming the invariance of fy and noise distribution εy in Y ← fy(XS , εy) which induces P (Y |XS). The similar assumption is also adopted in Kuang et al. (2018). However, these works causally related the output to the observed input, which may not hold in many real applications in which the observed data is sensory-level, such as audio waves and pixels. It has been discussed in Bengio et al. (2013); Bengio (2017) that the causal factors should be high-level abstractions/concepts. The Heinze-Deml and Meinshausen (2017) considers the style transfer setting in which each image is linear combination of shape-related variable and contextual-related variable, which respectively correspond to S and Z in our LaCIM in which the nonlinear mechanism (rather than linear combination in Heinze-Deml and Meinshausen (2017)) is allowed. Besides, during testing, our method can generalize to the OOD sample with intervention such as adversarial noise and contextual intervention.\nRecently, the most notable work is Invariant Risk Minimization Arjovsky et al. (2019), which will be discussed in detail in the subsequent section." }, { "heading": "7.7.5 COMPARISONS WITH INVARIANT RISK MINIMIZATION ARJOVSKY ET AL. (2019) AND REFERENCES THERE IN", "text": "The Invariant Risk Minimization (IRM) Arjovsky et al. (2019) assumes the existence of invariant representation Φ(X) that induces the optimal classifier for all domains, i.e., the E[Y |Pa(Y )] is domain-independent in the formulation of SCM. Similar to our LaCIM, the Pa(Y ) can refer to latent variables. Besides, to identify the invariance and the optimal classifier, the training environments also need to be diverse enough. As aforementioned, this assumption is almost necessary to differentiate the invariance mechanism from the variant ones. To learn such an invariance, a regularization function is proposed.\nThe difference of our LaCIM with IRM lies in two aspects: the direction of causal relation and the methodology. For the direction, as aforementioned in section 7.7.1, the IRM assumes X → S rather than the S,Z → X in our LaCIM. This is because the IRM defines Y as label with noise while ours definie the Y as the ground-truth label hence should be generated by the ground-truth hidden components that generating S. Such an inconsistency can be reflected by experiment regarding to the CMNIST in which the number is the causal factors of the label Y , rather than only invariant correlation. 
Besides, in terms of methodology, the theoretical claim of IRM only holds in linear case; in contrast, the CIMe fx, fy are allowed to be nonlinear.\nSome other works share the similar spirit with or based on IRM. The Risk-Extrapolation (REx) Krueger et al. (2020) proposed to enforce the similar behavior of m classifiers with variance of which proposed as the regularization function. The work in Xie et al. (2020) proposed a Quasidistribution framework that can incorporate empirical risk minimization, robust optimization and\nREx. It can be concluded that the robust optimization only generalizes the convex hull of training environments (defined as interpolation) and the REx can generalize extrapolated combinations of training environments. This work lacks model of underlying causal structure, although it performs similarly to IRM experimentally. Besides, the Teney et al. (2020) proposed to unpool the training data into several domains with different environment and leverages Arjovsky et al. (2019) to learn invariant information for classifier. Recently, the Bellot and van der Schaar (2020) also assumes the invariance to be generating mechanisms and can generalize the capability of IRM when unobserved confounder exist. However, this work also lacks the analysis of identifiability result.\nWe finish this section with the following summary of methods in section 7.7.4 and the IRM, in terms of causal factor, invariance type, direction of causal relation, theoretical judgement and the ability to generalize to intervened data." }, { "heading": "7.8 IMPLEMENTATION DETAILS AND MORE RESULTS FOR SIMULATION", "text": "Data Generation We set m = 5, ne = 1000 for each e. The generating process of ds ∈ Rqds , Z ∈ Rqz , S ∈ Rqs , X ∈ Rqx and Y ∈ Rqy is introduced in the supplement 7.8. We set qds = qs = qz = qy = 2 and qx = 4. For each environment e ∈ [m] with m = 5, we generate 1000 samples De = {xi, yi} i.i.d∼ ∫ pfx(x|s, z)pfy (y|s)pe(s, z|des)dsdz. The des = ( N (0, Iqds×qds ) + 5 ∗ e ) ∗ 2;\nthe s, z|des ∼ N ( µφ?s,z (s, z|d e s), σ 2 φ?s,z (s, z|des) ) with µφ?s,z = A µ s,z ∗ des and log σφ?s,z = A σ s,z ∗ des\n(Aµs,z , A σ s,z are random matrices); the x|s, z ∼ N ( µφ?x(x|s, z), σ 2 φ?x (x|s, z) ) with µφ?s,z = h(A µ,3 x ∗ h(Aµ,2x ∗ h(Aµ,2x ∗ [s>, z>]>]))) and log σφ?s,z = h(A σ,3 x ∗ h(Aσ,2x ∗ h(Aσ,2x ∗ [s>, z>]>]))) (h is LeakyReLU activation function with slope = 0.5 and Aµ,i=1,2,3x ,A σ,i=1,2,3 x are random matrices); the y|s is similarly to x|s, z with Aµ,i=1,2,3x ,Aσ,i=1,2,3x respectively replaced by Aµ,i=1,2,3y ,Aσ,i=1,2,3y .\nImplementation Details We parameterize pθ(s, z|d), qφ(s, z|x, y, d), pθ(x|s, z) and pθ(y|s) as 3- layer MLP with the LeakyReLU activation function. The Adam with learning rate 5 × 10−4 is implemented for optimization. We set the batch size as 512 and run for 2,000 iterations in each trial.\nVisualization. As shown from the visualization of S is shown in Fig. 7.8, our LaCIM can identify the causal factor S.\nThe setting when C can take a value in a sample-level. We consider the generation process of De\nas De = {xi, yi} i.i.d∼ ∫ pfx(x|s, z)pfy (y|s)p(s, z|c)p(c|de)dsdzdc, with qc := 2. The generation is the same except that the after obtaining ds, we additionally generate c with c := N (ds, I). The results are summarized in Tab. 
7.8.\n7.9 IMPLEMENTATION DETAILS FOR OPTIMIZATION OVER S,Z\nRecall that we first optimize s∗, z∗ according to\ns∗, z∗ = arg max s,z log pφ(x|s, z).\nWe first sample some initial points from each posterior distribution qeψ(s|x) and then optimize for 50 iterations. We using Adam as optimizer, with learning rate as 0.002 and weight decay 0.0002. The\nFig. 7.9 shows the optimization effect of one run in CMNIST. As shown, the test accuracy keeps growing as iterates. For time saving, we chose to optimize for 50 iterations.\nFigure 5: The optimization effect in CMNIST, starting from the point with initial sampling from inference model q of each branch. As shown, the test accuracy increases as iterates." }, { "heading": "7.10 IMPLEMENTATIONS FOR BASELINE", "text": "For the CE X → Y and the CE X, ds → Y , they both composed of two parts: (i) feature extractor, followed by (ii) classifier. The network structure of the feature extractor for CE X → Y is the same with that of our encoder; while the extracted features for CE X, d→ Y is the concatenation of the features encoded from X → S,Z via the network with the same network structure of our encoder; and the network with the same structure of our prior network for LaCIM-d. The network structures of the classifier for both methods are the same to that of our pφ(y|s). The IRM and SDA adopt the same structure as CE X → Y . DANN adopt the same structure of CE X → Y and a additional domain classifier which is the same as that of pφ(y|s). sVAE adopt the same structure as LaCIM-ds with the exception that the pφ(y|s) is replaced by pφ(y|z, s). MMD-AAE adopt the same structure of encoder, decoder and classifier as LaCIM-d and a additional 2-layer MLP with channel 256-256-dimz is used to extract latent z. The detailed number of parameters and channel size on each dataset for each method are summarized in Tab. 13, 14." }, { "heading": "7.11 SUPPLEMENTARY FOR COLORED MNIST", "text": "Implementation details The network structure for inference model is composed of two parts, with the first part shared among all environments and multiple branches corresponding to each environment\nfor the second part. The network structure of the first-part encoder is composed of four blocks, each block is the sequential of Convolutional Layer (Conv), Batch Normalization (BN), ReLU and maxpooling with stride 2. The output number of feature map is accordingly 32, 64, 128, 256. The second part network structure that output the mean and log-variance of S,Z is Conv-bn-ReLU(256) → Adaptive (1)→ FC(256, 256)→ ReLU→ FC(256, qt=s,z) with FC stands for fully-connected layer. The structure of ρt=s,z in Eq. (72) is FC(qt, 256)→ ReLU→ FC(256, qt). The network structure for generative model pφ(x|s, z) is the sequential of three modules: (i) Upsampling with stride 2; (ii) four blocks of Transpose-Convolution (TConv), BN and ReLU with respective output dimension being 128, 64, 32, 16; (iii) Conv-BN-ReLU-Sigmoid with number of channels in the output as 3, followed by cropping step in order to make the image with the same size as input dimension, i.e., 3× 28× 28. The network structure for generative model pφ(y|s) is commposed of FC (512)→ BN→ ReLU→ FC (256)→ BN→ ReLU→ FC (|Y|). The qt=s,z is set to 32. We implement SGD as optimizer with learning rate 0.5, weight decay 1e− 5 and we set batch size as 256. The total training epoch is 80. 
We first explain why we do not flip y with 25% in the manuscript, and then provide further exploration of our method for the setting with flipping y.\nInvariant Causation v.s. Invariant Correlation by Flipping y in Arjovsky et al. (2019) The y is further flipped with 25% to obtain the final label in IRM setting and this step is omitted in ours. The difference lies in the definition of invariance. Our LaCIM defines invariance as the causal relation between S and the label Y , while the one in IRM can be correlation. As illustrated in Handwritting Sample Form in Fig. 7.11 in Grother (1995), the generting direction should be Y → X . If we denote the variable by flipping Y as Ỹ (a.k.a, the final label in IRM), then the causal graph should be X ← Y → Ỹ . In this case, the Ỹ is correlated rather than causally related to the digit X . For our LaCIM, we define the label as interpretable human label (which can approximate to y for any image x) and represented by Y in our experiments. The reason why we do not define the Y as ground-truth label is that (i) the prediction is only based on the extracted components of image which may be determined not only by the ground-truth label; (ii) the learning of ground-truth is interpretable that relevant to human. For example, if a writer is provided with digit “2” but he wrote it mistakenly as “4”, then it is more interpretable that we can predict the digit as “4” rather than “2”. For the digit with ambiguous label from the perspective of image, even if we predict it mistakenly, it is also interpretable in terms of prediction given the information of only digit. Returning back to the IRM setting, the label is flipping without reference to the semantic shape of digit. Therefore, the flipping may happen to noiseless digits rather than noisy and unsure ones, making the shape of number less semantically related to the label.\nExperiment with IRM setting We further conduct the experiment on IRM setting, with the final label y defined by flipping original label with 25%, and further color pe proportions of digits with corresponding color-label mapping. If we assume the original ground-truth label to be the effect of the digit number of S, then the anti-causal relation with Z and Y can make the identifiability of S difficult in this flipping scenario. Note that the causal effect between S and Y is invariant across domains, therefore we adopt to regularize the branch of inferring S to be shared among inference models for multiple environments. Besides, we regularize the causal effect between S and Z to be shared among different environments via pairwise regularization. The combined loss is formulated as:\nL̃ψ,φ = Lψ,φ + Γ\n2m2 m∑ i=1 m∑ j=1 ‖E(x,y)∼pei (x,y)[y|x]− E(x,y)∼pej (x,y)[y|x]‖22,\nwith qeψ(s, z|x) in Eq. (72) factorized as qψez (z)qψs(s) and ρs shared among m environments. The appended loss is coincide with recent study Risk-Extropolation (REx) in Krueger et al. (2020), with the difference of separating y-causative factors S from others. We name such a training method as LaCIM-REx. For implementation details, in addition to shared encoder regarding S, we set learning rate as 0.1, weight decay as 0.0002, batch size as 256. we have that p(y|x) = ∫ S qψs(s|x)pφ(y|ρs(s)) for any x. We consider two settings: setting#1 with m2 and pe1 = 0.9, pe2 = 0.8; and setting#2 with m = 4 with pe1 = 0.9, pe2 = 0.8, pe3 = 0.7, pe4 = 0.6. We only report the number of IRM since the cross entropy performs poorly in both settings. 
As shown, our model performs comparably with LaCIM-ds and better than IRM Arjovsky et al. (2019) due to separation of S znd Z." }, { "heading": "7.12 SUPPLEMENTARY FOR NICO", "text": "Implementation Details Due to size difference among images, we resize each image into 256×256. The network structure of pθ(z, s|ds), qφ(z, s|x, ds), pθ(x|z, s), pθ(y|s) for cat/dog classification is the same with the one implemented in early prediction of Alzheimer’s Disease with exception of 3D convolution/Deconvolution replaced by 2D ones. For each model, we train for 200 epochs using sgd, with learning rate (lr) set to 0.01, and after every 60 epochs the learning rate is multiplied by lr decay parameter that is set to 0.2. The weight decay coefficients parameter is set to 5× 10−4. The\nbatch size is set to 30. The training environments which is characterized by c can be referenced in\nTable 7.12. For visualization, we implemented the gradient-based method Simonyan et al. (2013) to visualize the neuron (in fully connected layer for both CE x→ y and CE (x, ds)→ y; in s layer for LaCIM-ds) that is most correlated to label y.\nThe ds form environments We summarize the ds ofm = 8 andm = 14 environments in Table 7.12. As shown, the value of ds in the test domain is the extrapolation of the training environments, i.e., the dtests is not included in the convex hull of {dei}14i=1. More Visualization Results Fig. 7 shows more visualization results.\nResults on Intervened Data. We test our model and the baseline on intervened data, in which each image is generated by intervention on Z, i.e., taking a specific value of Z. This intervention breaks the correlation between S and Z, thus the distribution of which can be regarded as a specific type of OOD. Specifically, we replace the scene of an image with the scene from the another image, as shown in Fig. 8. We generate 120 images, including 30 images of types: cat on grass, dog on grass, cat on snow, and dog on grass. We evaluate LaCIM-d, CE X → Y , IRM, DANN, NCBB, MMD-AAE, and DIVA methods on this intervened dataset. As shown in Tab 9, our LaCIM-d can performs the best among all methods, which validate the robustness of our LaCIM." }, { "heading": "7.13 DISEASE PREDICTION OF ALZHEIMER’S DISEASE", "text": "Dataset Description. The dataset contains in total 317 samples with 48 AD, 75 NC, and 194 MCI.\nDenotation of Attributes ds. The C ∈ R9 includes personal attributes (e.g., age Guerreiro and Bras (2015), gender Vina and Lloret (2010) and education years Mortimer (1997) that play as potential\nrisks of AD), gene (ε4 allele), and biomarkers (e.g., changes of CSF, TAU, PTAU, amyloidβ , cortical amyloid deposition (AV45) Humpel and Hochstrasser (2011)).\nImplementation Details For LaCIM-ds, we parameterize inference model qψ(s, z|x, ds), pφ(s, z|ds), pφ(x|z, s) and pφ(y|s) and S,Z ∈ R64. For qψ(s, z|x, ds), we concatenate outputs of feature extractors of X and ds: the feature extractor for x is composed of four Convolution-Batch\nNormalization-ReLU (CBNR) blocks and four Convolution-Batch Normalization-ReLU-MaxPooling (CBNR-MP) blocks with structure 64 BNR→ 128 CBNR-MP→ 128 CBNR→ 256 CBNR-MP → 256 CBNR→ 512 CBNR-MP→ 512 CBNR→ 1024 CBNR-MP; the feature extractor of c is composed of three Fully Connection-Batch Normalization-ReLU (FC-BNR) blocks with structure 128→ 256→ 512. The concatenated features are further transformed by four 64 FC-BNR to generate µs,z(x, ds) and log σs,z(x, ds). 
For the prior model pθ(s, z|ds), it shares the same structure without feature extractor of x. For pφ(x|s, z), the network is composed of three DeConvolution-Batch Normalization-ReLU (DCBNR) blocks and three Convolution-Batch Normalization-ReLU (CBNR) blocks, followed by a convolutional layer, with structure 256 DCBNR→ 256 CBNR→ 128 DCBNR → 128 CBNR→ 64 DCBNR→ 64 CBNR→ 48 Conv. For pφ(y|s), the network is composed of 256 FC-BNR→ 512 FC-BNR→ 3 FC-BNR. For prior model pφ(s, z|ds)N (µs,z(ds),diag(σ2s,z(ds))) the µs,z(x, ds) and log σs,z(x, ds) are parameterized by Multi Perceptron Neural Network (MLP). The decoders pφ(x|s, z) are pφ(y|s) parameterized by Deconvolutional neural network. For all methods, we train for 200 epochs using SGD with weight decay 2× 10−4 and learning rate 0.01 and is multiplied by 0.2 after every 60 epochs. The batch size is set to 4. For each variable in biomarker vector C ∈ R9, each person may have multiple records, and we take its median as representative to avoid extreme values due to device abnormality.\nAs for LaCIM-d, we adopt the same decoder pφ(x|z, s) and classifier pφ(y|s). For qψ(s, z|x, d), we adopt the same network for the shared part; for the part specific to each domain, µs,z(x, d) and log σs,z(x, d) are generated by the sub-network which is composed of 1024 FC-BNR→ 1024 FC-BNR→ qz,s FC-BNR. The z, s can be reparameterized by µs,z(x, d) and log σs,z(x, d) are fed into a sub-network which is composed of qz,s FC-BNR→ 1024 FC-BNR→ qz,s FC-BNR to get rid of the constraint of Gaussian distribution. Then the reconstructed images and predicted label are computed by pφ(x|z, s) and pφ(y|s) which have the same network structure of LaCIM-C with the z, s.\nThe ds variable in training and test. The selected attributes include Education Years, Age, Gender (0 denotes male and 1 denotes female), AV45, amyloidβ and TAU. We split the data into m = 2 training environments and test according to different value of ds. The Tab. 7.13 describes the data distribution in terms of number of samples, the value of ds (Age and TAU)." }, { "heading": "7.13.1 EXPERIMENTS WITH COMPLETE OBSERVABLE SOURCE VARIABLE", "text": "In image-based diagnosis, the personal attributes, genes and biomarkers are often available. Therefore, we consider the setting when ds can be fully observed. In this case, the value of ds is person-byperson. Therefore, the number of environments m is equal to the number of samples. In this case, the dataset turns to {xi, yi, dis}ni=1. The expected risk turns to:\nLψ,φ = Ep(x,y|ds) [ − log qψ(y|x, ds)− Eqψ(s,z|x,ds) [ pφ(y|s)\nqψ(y|x, ds) log pφ(x|s, z)pφ(s, z|ds) qψ(s, z|x, ds)\n]] .\n(73)\nAnd the corresponding empirical risk is:\nL̃ψ,φ = 1\nn\n[ − log qψ(yi|xi, ds,i)− Eqψ(s,z|xi,ds,i) [ qψ(yi|s)\nqψ(yi|xi, ds,i) log pφ(xi|s, z)pφ(s, z|ds,i) qψ(s, z|xi, ds,i)\n]] .\n(74)\nThe ds here is re-defined as the 9-dimensional vector that includes all attributes, genes and biomarkers mentioned above. We re-split the data into 80% train and 20% test, according to different average value of specific variable in the whole vector ds.\nThe ds variable in training and test We implemented OOD tasks in which the value of ds is different between training and test. Specifically, we repeatedly split the dataset into the training and the test according to a selected attribute in ds for three times. 
The average value of these attributes in train and test are recorded in Table 7.13.1.\nExperimental Results We conduct OOD experiments with source variables Age, Gender, amyloidβ and TAU different between training data and the test. The results are shown in Table 12." }, { "heading": "7.14 SUPPLEMENTARY FOR DEEPFAKE", "text": "Implementation Details. We implement data augmentations, specifically images with 30 angle rotation, with flipping horizontally with 50% probability. We additionally apply random compressing techniques, such as JpegCompression. For inference model, we adopt Efficient-B5 Tan and Le (2019), with the detailed network structure as: FC(2048, 2048)→ BN→ ReLU→ FC(2048, 2048)→ BN → ReLU→ FC(2048, qt=s,z). The structure of reparameterization, i.e., ρt=s,z is FC(qt=s,z , 2048) → BN→ ReLU→ FC(2048, 2048)→ BN→ ReLU→ FC(2048, qt=s,z). The network structure for generative model, i.e., pψ(x|s, z) is TConv-BN-ReLU(qt=s,z , 256)→ TConv-BN-ReLU(256, 128)→ TConv-BN-ReLU(128, 64)→ TConv-BN-ReLU(64, 32)→ TConv-BN-ReLU(32, 32)→ TConv-BN-ReLU(32, 16) → TConv-BN-ReLU(16, 16) → Conv-BN-ReLU(16, 3) → Sigmoid,\nfollowed by cropping the image to the same size 3×224×224. We set qt=s,z as 1024. We implement SGD as optimizer, with learning rate 0.02, weight decay 0.00005, and run for 9 epochs.\nTa bl\ne 13\n:G en\ner al\nfr am\new or\nk ta\nbl e\nfo ro\nur m\net ho\nd an\nd ba\nse lin\nes on\nD a ta ∈ {C\nM N\nIS T ,N\nIC O ,A\nD N\nI, D\nee pF\nak e}\nD at\nas et\n.W e\nde no\nte th\ne di\nm en\nsi on\nof z\nor z s as di m z ,z s .W e lis tt he ou tp ut di m en si on (e .g .t he ch an ne ln um be r) of ea ch m od ul e, if it is di ff er en tf ro m th e on e in Ta b. 14 .\nD at\nas et\nM et\nho d\nC E X → Y\nC E X , d → Y\nM M\nD -A\nA E\nD A\nN N\nD IV\nA L\naC IM\n-d s\nL aC\nIM -d\nD at\na: C\nM N\nIS T\nE nc\nD a t a\nx FC (2 56\n,d im z ) D ec -C E D a t a y\nE nc\nD a t a x ;E\nnc D\na t a\nd FC (5 12 ,d im z ) D ec -C E D a t a y\nE nc\nD a t a\nx FC -B N -R eL\nU (2\n56 ,2\n56 )\nFC (2\n56 ,2\n56 )→\nz\nD ec\nD a t a y ;D\nec D\na t a\nx\nE nc\nD a t a\nx D A N N -C L SD a t a\ny ;D\nA N\nN -C\nL SD\na t a\ny\np D\na t a θ (x |z d , z x , z y\n)\np D\na t a\nθ d\n(z d |d\n)\np D\na t a\nθ y\n(z y |y\n)\nq D\na t a\nφ d\n(z d |x\n)\nq D\na t a\nφ x\n(z x |x\n)\nq D\na t a\nφ y\n(z y |x\n)\nE nc\nD a t a x ;E\nnc D\na t a\nd FC -B N -R eL U (5\n12 ,2\n56 )\nFC (2\n56 ,d\nim z s ) D ec D a t a y ;D ec D\na t a\nx\npr io\nr: E\nnc D\na t a\nd\nE nc\nD a t a\nx E nc D\na t a z ,s × m\nΦ D\na t a z ,s × m\nD ec\nD a t a y ;D\nec D\na t a\nx\n# of\nPa ra\nm s\n1. 12\nM 1.\n16 M\n1. 23\nM 1.\n1M 1.\n69 M\n1. 09\nM 0.\n92 M\nhy pe\nrPa\nra m\ns lr\n:0 .1\nw d:\n0. 00\n00 5\nlr :0 .2 w d: 0.\n00 05\nlr :0\n.0 1\nw d:\n0. 00\n01 lr\n:0 .1\nw d:\n0. 00\n02 lr\n:0 .0 01 w d: 0. 00 00\n1 lr\n:0 .1\nw d:\n0. 
00\n01 lr\n:0 .0 1 w d: 0.\n00 02\nD at\na: N\nIC O\nE nc\nD a t a\nx FC (1 02\n4, di\nm z ) D ec -C E D a t a y\nE nc\nD a t a x ;E\nnc D\na t a\nd FC (5 12 ,d im z ) D ec -C E D a t a y\nE nc\nD a t a\nx FC -B N -R eL\nU (1\n02 4,\n10 24 ) FC (1 02 4, 10 24 )→ z D ec D a t a y ;D ec D a t a x\nE nc\nD a t a\nx D A N N -C L SD a t a\ny ;D\nA N\nN -C\nL SD\na t a\ny\np D\na t a θ (x |z d , z x , z y\n)\np D\na t a\nθ d\n(z d |d\n)\np D\na t a\nθ y\n(z y |y\n)\nq D\na t a\nφ d\n(z d |x\n)\nq D\na t a\nφ x\n(z x |x\n)\nq D\na t a\nφ y\n(z y |x\n)\nE nc\nD a t a x ;E\nnc D\na t a\nd FC (1 53 6, di\nm z s ) D ec D a t a y ;D ec D a t a x pr io r: E nc D a t a d\nE nc\nD a t a\nx E nc D\na t a z ,s × m\nΦ D\na t a z ,s × m\nD ec\nD a t a y ;D\nec D\na t a\nx\n# of\nPa ra\nm s\n(m =\n8 )\n18 .0\n8M 19\n.0 1M\n19 .7\n0M 19\n.1 3M\n14 .8\n6M 16\n.3 1M\n18 .2 5M # of Pa ra m s (m = 1 4 ) 18 .0 8M 19 .0 1M 19 .7 0M 26 .4 9M 14 .8 7M 18 .0 8M 19 .7 0M hy pe rPa ra m s lr :0 .0 1 w d: 0. 00 02 lr :0 .0 1 w d: 0. 00 02 lr :0 .2 w d: 0. 00 01 lr :0 .0 5 w d: 0. 00 05 lr :0 .0 01 w d: 0. 00 01 lr :0 .0 1 w d: 0. 00 05 lr :0\n.0 1\nw d:\n0. 00\n01\nD at\na: A\nD N\nI E\nnc D\na t a\nx FC (1 02\n4, di\nm z ) D ec -C E D a t a y\nE nc\nD a t a x ;E\nnc D\na t a\nd FC (1 53 6, di\nm z ) D ec -C E D a t a y\nE nc\nD a t a\nx FC -B N -R eL\nU (1\n02 4,\n10 24 ) FC (1 02 4, 10 24 )→ z D ec D a t a y ;D ec D a t a x\nE nc\nD a t a\nx D A N N -C L SD a t a\ny ;D\nA N\nN -C\nL SD\na t a\ny\np D\na t a θ (x |z d , z x , z y\n)\np D\na t a\nθ d\n(z d |d\n)\np D\na t a\nθ y\n(z y |y\n)\nq D\na t a\nφ d\n(z d |x\n)\nq D\na t a\nφ x\n(z x |x\n)\nq D\na t a\nφ y\n(z y |x\n)\nE nc\nD a t a x ;E\nnc D\na t a\nd FC (1 53 6, di\nm z s ) D ec D a t a y ;D ec D a t a x pr io r: E nc D a t a d\nE nc\nD a t a\nx E nc D\na t a z ,s × m\nΦ D\na t a z ,s × m\nD ec\nD a t a y ;D\nec D\na t a\nx\n# of\nPa ra\nm s\n28 .2\n7M 28\n.2 7M\n36 .6\n8M 30\n.2 1M\n33 .2\n2M 33\n.0 7M\n37 .7\n8M\nhy pe\nrPa\nra m\ns lr\n:0 .0 1 w d: 0.\n00 02\nlr :0\n.0 1\nw d:\n0. 00\n02 lr\n:0 .0 05 w d: 0.\n00 02\nlr :0\n.0 1\nw d:\n0. 00\n02 lr\n:0 .0 05 w d: 0.\n00 01\nlr :0\n.0 05\nw d:\n0. 00\n02 lr\n:0 .0 1 w d: 0.\n00 02" } ]
2,020
LATENT CAUSAL INVARIANT MODEL
SP:d021dc94272c00ac362f53e3deb239da1292a734
[ "This paper introduces a fast way to get Bayesian posterior by using a pretrained deterministic model. Specifically, the authors first train a standard DNN model and then use it to initialize the variational parameters. Finally the variational parameters are optimized through standard variational inference (VI) training. To further improve uncertainty estimate, the authors propose an uncertainty regularization which maximizes the prediction inconsistency on out-of-distribution (OOD) data. Experiments including image classification and uncertainty estimates are conducted to demonstrate the proposed method." ]
Despite their theoretical appealingness, Bayesian neural networks (BNNs) are falling far behind in terms of adoption in real-world applications compared with deterministic NNs, mainly due to their limited scalability in training and low fidelity in uncertainty estimates. In this work, we develop a new framework, named BayesAdapter, to address these issues and bring Bayesian deep learning to the masses. The core notion of BayesAdapter is to adapt pre-trained deterministic NNs to be BNNs via Bayesian fine-tuning. We implement Bayesian fine-tuning with a plug-and-play instantiation of stochastic variational inference, and propose exemplar reparameterization to reduce gradient variance and stabilize the finetuning. Together, they enable training BNNs as if one were training deterministic NNs with minimal added overheads. During Bayesian fine-tuning, we further propose an uncertainty regularization to supervise and calibrate the uncertainty quantification of learned BNNs at low cost. To empirically evaluate BayesAdapter, we conduct extensive experiments on a diverse set of challenging benchmarks, and observe satisfactory training efficiency, competitive predictive performance, and calibrated and faithful uncertainty estimates.
[ { "affiliations": [], "name": "BAYESIAN FINE-TUNING" } ]
[ { "authors": [ "Martı́n Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for largescale machine learning", "venue": "In 12th {USENIX} symposium on operating systems design and implementation ({OSDI}", "year": 2016 }, { "authors": [ "Anoop Korattikara Balan", "Vivek Rathod", "Kevin P Murphy", "Max Welling" ], "title": "Bayesian dark knowledge", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Norman Bleistein", "Richard A Handelsman" ], "title": "Asymptotic expansions of integrals", "venue": "Courier Corporation,", "year": 1986 }, { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural network", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Qiong Cao", "Li Shen", "Weidi Xie", "Omkar M Parkhi", "Andrew Zisserman" ], "title": "Vggface2: A dataset for recognising faces across pose and age", "venue": "In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG", "year": 2018 }, { "authors": [ "Sharan Chetlur", "Cliff Woolley", "Philippe Vandermersch", "Jonathan Cohen", "John Tran", "Bryan Catanzaro", "Evan Shelhamer" ], "title": "cudnn: Efficient primitives for deep learning", "venue": "arXiv preprint arXiv:1410.0759,", "year": 2014 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Stefan Depeweg", "José Miguel Hernández-Lobato", "Finale Doshi-Velez", "Steffen Udluft" ], "title": "Learning and policy search in stochastic dynamical systems with Bayesian neural networks", "venue": "arXiv preprint arXiv:1605.07127,", "year": 2016 }, { "authors": [ "Terrance DeVries", "Graham W Taylor" ], "title": "Improved regularization of convolutional neural networks with cutout", "venue": "arXiv preprint arXiv:1708.04552,", "year": 2017 }, { "authors": [ "Ricard Durall", "Margret Keuper", "Janis Keuper" ], "title": "Watch your up-convolution: Cnn based generative deep neural networks are failing to reproduce spectral distributions", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Stanislav Fort", "Huiyi Hu", "Balaji Lakshminarayanan" ], "title": "Deep ensembles: A loss landscape perspective", "venue": "arXiv preprint arXiv:1912.02757,", "year": 2019 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Alex Graves" ], "title": "Practical variational inference for neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2011 }, { "authors": [ "Kathrin Grosse", "David Pfaff", "Michael Thomas Smith", "Michael Backes" ], 
"title": "The limitations of model uncertainty in adversarial settings", "venue": "arXiv preprint arXiv:1812.02606,", "year": 2018 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "arXiv preprint arXiv:1706.04599,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "José Miguel Hernández-Lobato", "Ryan Adams" ], "title": "Probabilistic backpropagation for scalable learning of Bayesian neural networks", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Gary B Huang", "Marwan Mattar", "Tamara Berg", "Eric Learned-Miller" ], "title": "Labeled faces in the wild: A database forstudying face recognition in unconstrained environments", "venue": "Technical report,", "year": 2007 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "arXiv preprint arXiv:1710.10196,", "year": 2017 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What uncertainties do we need in Bayesian deep learning for computer vision", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Mohammad Emtiyaz Khan", "Didrik Nielsen", "Voot Tangkaratt", "Wu Lin", "Yarin Gal", "Akash Srivastava" ], "title": "Fast and scalable Bayesian deep learning by weight-perturbation in adam", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Durk P Kingma", "Tim Salimans", "Max Welling" ], "title": "Variational dropout and the local reparameterization trick", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Agustinus Kristiadi", "Matthias Hein", "Philipp Hennig" ], "title": "Being bayesian, even just a bit, fixes overconfidence in relu networks", "venue": "arXiv preprint arXiv:2002.10118,", "year": 2020 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Samuli Laine", "Timo Aila" ], "title": "Temporal ensembling for semi-supervised learning", "venue": "arXiv preprint arXiv:1610.02242,", "year": 2016 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Christian Leibig", "Vaneeda Allken", "Murat Seçkin Ayhan", "Philipp Berens", "Siegfried Wahl" ], "title": "Leveraging uncertainty information from deep neural networks for disease detection", "venue": "Scientific Reports,", "year": 2017 }, { "authors": [ "Qiang Liu", "Dilin Wang" ], "title": "Stein variational gradient descent: A general purpose Bayesian inference algorithm", 
"venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Christos Louizos", "Max Welling" ], "title": "Structured and efficient variational deep learning with matrix gaussian posteriors", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Christos Louizos", "Max Welling" ], "title": "Multiplicative normalizing flows for variational Bayesian neural networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "David JC MacKay" ], "title": "A practical Bayesian framework for backpropagation networks", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "Wesley J Maddox", "Pavel Izmailov", "Timur Garipov", "Dmitry P Vetrov", "Andrew Gordon Wilson" ], "title": "A simple baseline for bayesian uncertainty in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "arXiv preprint arXiv:1802.05957,", "year": 2018 }, { "authors": [ "Stylianos Moschoglou", "Athanasios Papaioannou", "Christos Sagonas", "Jiankang Deng", "Irene Kotsia", "Stefanos Zafeiriou" ], "title": "Agedb: the first manually collected, in-the-wild age database", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Radford M Neal" ], "title": "Bayesian Learning for Neural Networks", "venue": "PhD thesis, University of Toronto,", "year": 1995 }, { "authors": [ "Kazuki Osawa", "Siddharth Swaroop", "Anirudh Jain", "Runa Eschenhagen", "Richard E Turner", "Rio Yokota", "Mohammad Emtiyaz Khan" ], "title": "Practical deep learning with Bayesian principles", "venue": "arXiv preprint arXiv:1906.02506,", "year": 2019 }, { "authors": [ "Kazuki Osawa", "Yohei Tsuji", "Yuichiro Ueno", "Akira Naruse", "Rio Yokota", "Satoshi Matsuoka" ], "title": "Large-scale distributed second-order optimization using kronecker-factored approximate curvature for deep convolutional neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, highperformance deep learning library", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Nick Pawlowski", "Andrew Brock", "Matthew CH Lee", "Martin Rajchl", "Ben Glocker" ], "title": "Implicit weight uncertainty in neural networks", "venue": "arXiv preprint arXiv:1711.01297,", "year": 2017 }, { "authors": [ "Hippolyt Ritter", "Aleksandar Botev", "David Barber" ], "title": "A scalable laplace approximation for neural networks", "venue": "In 6th International Conference on Learning Representations, ICLR 2018-Conference Track Proceedings,", "year": 2018 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", 
"venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Soumyadip Sengupta", "Jun-Cheng Chen", "Carlos Castillo", "Vishal M Patel", "Rama Chellappa", "David W Jacobs" ], "title": "Frontal to profile face verification in the wild", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2016 }, { "authors": [ "Jiaxin Shi", "Shengyang Sun", "Jun Zhu" ], "title": "Kernel implicit variational inference", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jiaxin Shi", "Shengyang Sun", "Jun Zhu" ], "title": "A spectral approach to gradient estimation for implicit distributions", "venue": "arXiv preprint arXiv:1806.02925,", "year": 2018 }, { "authors": [ "Lewis Smith", "Yarin Gal" ], "title": "Understanding measures of uncertainty for adversarial example detection", "venue": "arXiv preprint arXiv:1803.08533,", "year": 2018 }, { "authors": [ "Shengyang Sun", "Changyou Chen", "Lawrence Carin" ], "title": "Learning structured weight uncertainty in Bayesian neural networks", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Shengyang Sun", "Guodong Zhang", "Jiaxin Shi", "Roger Grosse" ], "title": "Functional variational Bayesian neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Justus Thies", "Michael Zollhofer", "Marc Stamminger", "Christian Theobalt", "Matthias Niebner" ], "title": "Face2face: Real-time face capture and reenactment of rgb videos", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Sheng-Yu Wang", "Oliver Wang", "Richard Zhang", "Andrew Owens", "Alexei A Efros" ], "title": "Cnn-generated images are surprisingly easy to spot.", "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Max Welling", "Yee W Teh" ], "title": "Bayesian learning via stochastic gradient langevin dynamics", "venue": "In Proceedings of the 28th international conference on machine learning", "year": 2011 }, { "authors": [ "Yeming Wen", "Paul Vicol", "Jimmy Ba", "Dustin Tran", "Roger Grosse" ], "title": "Flipout: Efficient pseudoindependent weight perturbations on mini-batches", "venue": "arXiv preprint arXiv:1803.04386,", "year": 2018 }, { "authors": [ "Florian Wenzel", "Kevin Roth", "Bastiaan S Veeling", "Jakub Swiatkowski", "Linh Tran", "Stephan Mandt", "Jasper Snoek", "Tim Salimans", "Rodolphe Jenatton", "Sebastian Nowozin" ], "title": "How good is the bayes posterior in deep neural networks really", "venue": "arXiv preprint arXiv:2002.02405,", "year": 2020 }, { "authors": [ "Dong Yi", "Zhen Lei", "Shengcai Liao", "Stan Z Li" ], "title": "Learning face representation from scratch", "venue": "arXiv preprint arXiv:1411.7923,", "year": 2014 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Guodong Zhang", "Shengyang Sun", "David Duvenaud", "Roger Grosse" ], "title": "Noisy natural gradient as variational inference", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Rep", "2018. Tianyue Zheng", "Weihong Deng", "Jiani Hu" ], "title": "Cross-age lfw: A database for studying cross-age", "venue": null, "year": 2018 } ]
[ { "heading": null, "text": "Despite their theoretical appealingness, Bayesian neural networks (BNNs) are falling far behind in terms of adoption in real-world applications compared with deterministic NNs, mainly due to their limited scalability in training and low fidelity in uncertainty estimates. In this work, we develop a new framework, named BayesAdapter, to address these issues and bring Bayesian deep learning to the masses. The core notion of BayesAdapter is to adapt pre-trained deterministic NNs to be BNNs via Bayesian fine-tuning. We implement Bayesian fine-tuning with a plug-and-play instantiation of stochastic variational inference, and propose exemplar reparameterization to reduce gradient variance and stabilize the finetuning. Together, they enable training BNNs as if one were training deterministic NNs with minimal added overheads. During Bayesian fine-tuning, we further propose an uncertainty regularization to supervise and calibrate the uncertainty quantification of learned BNNs at low cost. To empirically evaluate BayesAdapter, we conduct extensive experiments on a diverse set of challenging benchmarks, and observe satisfactory training efficiency, competitive predictive performance, and calibrated and faithful uncertainty estimates." }, { "heading": "1 INTRODUCTION", "text": "Much effort has been devoted to developing flexible and efficient Bayesian deep models to make accurate, robust, and well-calibrated decisions (MacKay, 1992; Neal, 1995; Graves, 2011; Blundell et al., 2015), with Bayesian neural networks (BNNs) as popular examples. The principled uncertainty quantification inside BNNs is critical for realistic decision-making, well evaluated in scenarios ranging from model-based reinforcement learning (Depeweg et al., 2016) and active learning (Hernández-Lobato & Adams, 2015), to healthcare (Leibig et al., 2017) and autonomous driving (Kendall & Gal, 2017). BNNs are also known to be capable of resisting over-fitting.\nHowever, there are fundamental obstacles posed in front of ML practitioners when trying to push the limit of BNNs to larger datasets and deeper architectures: (i) The scalability of the existing BNNs is generally restrictive owing to the essential difficulties of learning a complex, non-degenerate distribution over parameters in a high-dimensional and over-parameterized space (Liu & Wang, 2016; Louizos & Welling, 2017; Sun et al., 2019). (ii) The Bayes posteriors learned from scratch are often systematically worse than their point-estimate counterparts in terms of predictive performance when “cold posterior” strategies are not applied (Wenzel et al., 2020). (iii) It is shown that the BNNs have the possibility to assign low (epistemic) uncertainty for realistic out-of-distribution (OOD) data (e.g., adversarial examples), rendering their uncertainty estimates unreliable in safety-critical scenarios (Grosse et al., 2018).\nTo solve these problems, we present a scalable workflow, named BayesAdapter, to learn more reliable BNNs. In a holistic view, we unfold the learning of a BNN into two steps: deterministic pre-training of the deep neural network (DNN) counterpart of the BNN followed by several-round Bayesian fine-tuning. This enables us to learn a principled BNN with slightly more efforts than training a regular DNN, and provides us with the opportunities to embrace qualified off-the-shelf pre-trained DNNs (e.g., those on PyTorch Hub). 
The converged parameters of the deterministic model serve as a strong starting point for Bayesian fine-tuning, allowing us to bypass the extensive local optima suffered by directly learning a BNN1. To render the fine-tuning in the style of training normal NNs, we resort to stochastic variational inference (VI) to update the approximate posterior. We develop optimizers with built-in weight decay for the parameters of the variational distribution to absorb the regularization effects from the prior, and develop exemplar reparameterization to reduce the gradient variance. Moreover, to make the uncertainty estimation of the learned models reliable, we propose to additionally and explicitly regularize the model to behave uncertainly on representative foreseeable OOD data during fine-tuning. This regularization takes the form of a margin loss, and is readily applicable to most of the existing BNNs. Figure 1 depicts the whole framework of BayesAdapter. Extensive empirical studies validate the efficiency and effectiveness of our workflow. In summary, our contributions are as follows:

1. We propose BayesAdapter, to quickly and cheaply adapt a pre-trained DNN to be Bayesian without compromising performance when facing new tasks.

2. We provide an easy-to-use instantiation of stochastic VI, which allows learning a BNN as if training a deterministic NN and frees users from the tedious details of BNNs.

3. We augment the fine-tuning with a generally applicable uncertainty regularization term to rectify the predictive uncertainty according to a collection of OOD data.

4. Extensive studies validate that BayesAdapter is scalable; the delivered BNN models are high-quality; and the acquired uncertainty quantification is calibrated and transferable." }, { "heading": "2 BAYESADAPTER", "text": "In this section, we first motivate BayesAdapter by drawing a connection between maximum a posteriori (MAP) and Bayesian inference. We then describe the proposed procedure, Bayesian fine-tuning, and a practical and robust implementation of stochastic VI to realize it. Figure 1 illustrates the overall workflow of BayesAdapter." }, { "heading": "2.1 FROM DNNS TO BNNS", "text": "Let $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{n}$ be a given training set, where $x_i \in \mathbb{R}^d$ and $y_i \in \mathcal{Y}$ denote the input data and label, respectively. A DNN model can be fit via MAP as follows:

$$\max_{w}\ \frac{1}{n}\sum_i \log p(y_i|x_i;w) + \frac{1}{n}\log p(w). \tag{1}$$

We use $w \in \mathbb{R}^p$ to denote the high-dimensional model parameters, and $p(y|x;w)$ as the predictive distribution associated with the model. The prior term $p(w)$, when taking the form of an isotropic Gaussian, reduces to the common L2 weight decay regularizer in optimization. Despite their wide adoption, DNNs are known to be prone to over-fitting, to generate over-confident predictions, and to be unable to convey valuable information on the trustworthiness of their predictions. Naturally, Bayesian neural networks (BNNs) come into the picture to address these limitations.

1Here the BNN mainly refers to mean-field variational BNNs, and the results in Sec 4.1 testify to this point.

Typically, a BNN imposes a prior $p(w)$ on model parameters, which is put together with the likelihood $p(\mathcal{D}|w)$ to infer the posterior $p(w|\mathcal{D})$. Among the wide spectrum of BNN algorithms (MacKay, 1992; Neal, 1995; Graves, 2011; Blundell et al., 2015; Liu & Wang, 2016; Gal & Ghahramani, 2016; Louizos & Welling, 2017), variational BNNs are particularly promising due to their ease of training compared with other BNN variants.
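As a concrete reference point for what follows, the MAP pre-training of Eq. (1) is ordinary training with L2 weight decay; a minimal PyTorch-style sketch, where `model`, `loader`, and the coefficient `lam` are placeholders rather than names from the paper:

```python
import torch
import torch.nn.functional as F

# Eq. (1) in practice: maximizing the log-likelihood plus a Gaussian log-prior
# is ordinary training with L2 weight decay. `model`, `loader`, and `lam` are
# placeholders for the network, the data pipeline, and the decay coefficient.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=lam)
for x, y in loader:
    optimizer.zero_grad()
    F.cross_entropy(model(x), y).backward()  # negative log-likelihood term
    optimizer.step()                         # weight decay realizes the prior
```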
Formally, variational BNNs derive a θ-parameterized variational distribution $q(w|\theta)$ to approximate the true posterior $p(w|\mathcal{D})$, by maximizing the evidence lower bound (ELBO) (scaled by $1/n$):

$$\max_{\theta}\ \underbrace{\mathbb{E}_{q(w|\theta)}\Big[\frac{1}{n}\sum_i \log p(y_i|x_i;w)\Big]}_{\mathcal{L}_{ell}} \;-\; \underbrace{\frac{1}{n} D_{\mathrm{KL}}\big(q(w|\theta)\,\|\,p(w)\big)}_{\mathcal{L}_c}, \tag{2}$$

where $\mathcal{L}_{ell}$ is the expected log-likelihood and $\mathcal{L}_c$ is the complexity loss. By casting posterior inference into optimization, Eq. (2) makes the training of BNNs more approachable. However, most existing BNNs2 trained under such a criterion exhibit limitations in scalability and performance (Osawa et al., 2019a; Wenzel et al., 2020) compared with their deterministic counterparts, mainly attributed to the higher difficulty of learning high-dimensional distributions than point estimates, and to the challenges of finding non-degenerate optima of the highly nonlinear functions characterized by NNs.

Given that MAP converges to the mode of the Bayesian posterior, it might be plausible to adapt pretrained deterministic DNNs to be Bayesian economically. Following this hypothesis, we propose to repurpose the converged parameters $w^*$ of MAP, and use them to instantiate $q(w|\theta)$ as a Gaussian $\mathcal{N}(w;\theta)$ with $\theta = (\mu, \Sigma)$, where $\mu$ is initialized as $w^*$ and $\Sigma \in \mathbb{R}^{p\times p}$ denotes the covariance. Then, we arrive at a BNN with the posterior predictive:

$$p(y|x,\mathcal{D}) = \mathbb{E}_{\mathcal{N}(w;\mu,\Sigma)}\, p(y|x;w) \approx \frac{1}{S}\sum_{s=1}^{S} p(y|x;w^{(s)}), \quad w^{(s)} \sim \mathcal{N}(w;\mu,\Sigma),\ s = 1,\dots,S. \tag{3}$$

Eq. (3) is also called the Bayes ensemble, where $\mu$ is perturbed and the predictions from multiple likely models are assembled; $\Sigma$ controls the magnitude of the perturbation. A classic method to generate an informative $\Sigma$ is Laplace approximation (Bleistein & Handelsman, 1986), but it is more of a post-processing procedure, lacking the flexibility to jointly adapt the mean and covariance of the Gaussian posterior w.r.t. the data, and its naive implementation without strong assumptions may be computationally prohibitive. Instead, we suggest a more practical workflow – one that fine-tunes the approximate posterior $\mathcal{N}(w;\mu,\Sigma)$ by maximizing the ELBO with a randomly initialized $\Sigma$." }, { "heading": "2.2 BAYESIAN FINE-TUNING IN THE STYLE OF FINE-TUNING DNNS", "text": "We develop practical learning algorithms under the stochastic VI scheme to fine-tune the imperfect variational posterior, and to cope with contemporary ML frameworks. In the following, we discuss how to deal with each term in Eq. (2). Algorithm 1 gives an overview of BayesAdapter.

Complexity loss $\mathcal{L}_c$. Without loss of generality, we assume an isotropic Gaussian prior $p(w) = \mathcal{N}(w; 0, \sigma_0^2 I)$. Then the complexity loss is derived as:

$$\mathcal{L}_c = -\frac{1}{n} D_{\mathrm{KL}}\big(\mathcal{N}(w;\mu,\Sigma)\,\|\,\mathcal{N}(w;0,\sigma_0^2 I)\big) = -\frac{\mu^\top\mu + \mathrm{tr}(\Sigma)}{2\sigma_0^2 n} + \frac{\log\det\Sigma}{2n} + c, \tag{4}$$

where tr and det are the matrix trace and determinant, respectively, and $c$ is a constant. The gradients of $\mathcal{L}_c$ w.r.t. $\mu$ and $\Sigma$ can be computed exactly as:

$$\nabla_{\mu}\mathcal{L}_c = -\frac{\mu}{\sigma_0^2 n}, \qquad \nabla_{\Sigma}\mathcal{L}_c = \frac{\sigma_0^2\,\Sigma^{-1} - I}{2\sigma_0^2 n}. \tag{5}$$

Eq. (5) indicates that $\max_{\mu}\mathcal{L}_c$ amounts to applying a weight decay regularizer with coefficient $\lambda = \frac{1}{\sigma_0^2 n}$ on $\mu$, which can be conveniently optimized by leveraging the built-in weight decay modules in ML frameworks such as TensorFlow (Abadi et al., 2016) or PyTorch (Paszke et al., 2019). Directly computing $\nabla_{\Sigma}\mathcal{L}_c$, however, involves matrix inversion.
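In code, absorbing Eq. (5) into the optimizers can be sketched as follows; a minimal, assumption-laden PyTorch illustration, where `sigma0`, `n`, `mu_params`, `psi_params`, and the learning rates are placeholders, and the diagonal parameterization of Σ via ψ is the one introduced next:

```python
import torch

lam = 1.0 / (sigma0 ** 2 * n)  # weight decay coefficient from Eq. (5)

# mu: the prior's gradient is exactly built-in L2 weight decay.
opt_mu = torch.optim.SGD(mu_params, lr=lr_mu, momentum=0.9, weight_decay=lam)

# psi (log-std of the diagonal posterior defined next): after backprop of the
# data term, add the prior's contribution lam * exp(2*psi) - 1/n to the loss
# gradient (an "exponential weight decay"), then step as usual.
opt_psi = torch.optim.SGD(psi_params, lr=lr_psi, momentum=0.9)

def apply_psi_decay():
    with torch.no_grad():
        for p in psi_params:
            p.grad.add_(lam * torch.exp(2 * p) - 1.0 / n)
```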
Implementing the posterior as a matrix-variate Gaussian is an alternative, but existing algorithms for matrix-variate Gaussian posteriors typically exhibit high complexity in time or memory, limited compatibility with contemporary NN building-block operations (e.g., convolution), and struggle to scale with data-parallel distributed training (Louizos & Welling, 2016; Sun et al., 2017; Osawa et al., 2019b). To simplify the implementation and boost scalability, we assume a fully factorized Gaussian variational distribution by devising $\Sigma$ as $\mathrm{diag}(\exp(2\psi))$, where $\psi \in \mathbb{R}^p$ is the parameter to be optimized along with $\mu$ (i.e., $\theta = (\mu, \psi)$).

2We use BNNs equivalently with variational BNNs in the following text when there is no ambiguity.

Injecting this into Eq. (5) gives a more concise gradient estimator: $\nabla_{\psi}\mathcal{L}_c = 1/n - \lambda\exp(2\psi)$, meaning that $\max_{\psi}\mathcal{L}_c$ adds an exponential weight decay on $\psi$ with coefficient $\lambda$, which can be realized by modifying only two lines of code on top of de facto DL frameworks (see Figure 1).

Expected log-likelihood $\mathcal{L}_{ell}$. With the complexity loss expressed as weight decay, we now develop efficient ways of calculating $\mathcal{L}_{ell}$ at the end of the forward pass, and of performing backpropagation afterwards. In particular, we derive a Monte Carlo (MC) estimate of $\mathcal{L}_{ell}$ based on reparameterization (Kingma & Welling, 2013): we sample a $p$-dimensional Gaussian noise $\epsilon \sim \mathcal{N}(0, I)$, then obtain the sampled parameters for the whole mini-batch $B$ of data via $w = \mu + \exp(\psi) \odot \epsilon$, given which we approximate $\mathcal{L}_{ell}$ with

$$\mathcal{L}'_{ell} = \frac{1}{|B|}\sum_{(x_i,y_i)\in B} \log p(y_i|x_i;w).$$

The gradients of $\mu$ and $\psi$ can be derived automatically with autodiff libraries, so the training resembles that of normal DNNs.

However, gradients derived from $\mathcal{L}'_{ell}$ might exhibit high variance, caused by sharing one set of sampled parameters $w$ across all the training instances in $B$. Local reparameterization has been proposed to reduce the variance, but it requires at least 2× the forward-backward FLOPS of vanilla reparameterization (refer to Kingma et al. (2015) for more details). Flipout (Wen et al., 2018) is an alternative solution, but it is only suitable for perturbation-based MC estimation, and its modeling assumptions make Flipout unable to handle complex variational posteriors like a flow (Louizos & Welling, 2017) or an implicit model (Shi et al., 2018b); besides, it is still as slow as local reparameterization. To mitigate these issues, we propose exemplar reparameterization (ER), which samples a separate set of parameters for every exemplar in the mini-batch. Formally, for every $x_i \in B$, we draw $w^{(i)} = \mu + \exp(\psi) \odot \epsilon^{(i)}$, where $\epsilon^{(i)} \sim \mathcal{N}(0, I)$, and approximate the expected log-likelihood by

$$\mathcal{L}^{*}_{ell} = \frac{1}{|B|}\sum_{(x_i,y_i)\in B} \log p(y_i|x_i;w^{(i)}).$$

Obviously, ER is distribution-agnostic and readily applicable to various variational distributions. While ER generates more parameters during training, they are mostly temporary, and the resulting computational FLOPS are provably identical to those of vanilla reparameterization. The challenge of ER is to cope with today's ML frameworks and maintain computing efficiency, because the off-the-shelf computation kernels in autodiff libraries typically assume that a batch of instances shares a common set of parameters. We present an example in Figure 2 of how the standard convolution op can be converted into its exemplar version without compromising computational efficiency.
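A sketch of the conversion shown in Figure 2, under the assumption of a PyTorch backend (the padding choice is illustrative, not prescribed by the paper):

```python
import torch
import torch.nn.functional as F

def exemplar_conv2d(x, weight):
    # x: (B, C_in, H, W); weight: (B, C_out, C_in, k, k), one sampled kernel
    # per exemplar. Fold the batch into the channel axis and run ONE grouped
    # convolution: group b sees exemplar b's channels and exemplar b's kernel.
    B, C_in, H, W = x.shape
    C_out, k = weight.shape[1], weight.shape[-1]
    x = x.reshape(1, B * C_in, H, W)
    w = weight.reshape(B * C_out, C_in, k, k)
    out = F.conv2d(x, w, groups=B, padding=k // 2)  # 'same' padding, illustrative
    return out.reshape(B, C_out, out.shape[-2], out.shape[-1])
```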
The key insight here is that multiple exemplar convolutions can be expressed as a single group convolution, which can be performed in parallel using one group convolution kernel, leveraging the optimized implementations provided by various device-proprietary kernel backends (e.g., cuDNN (Chetlur et al., 2014)). Other common operators such as matrix multiplication are straightforward to handle (refer to Appendix A).

With this insight, BayesAdapter makes it possible to obtain a BNN at only minor computational cost on top of pre-training, and can immediately benefit from the availability of higher-performance computational kernels (e.g., a more powerful group convolution kernel)." }, { "heading": "3 CALIBRATE THE UNCERTAINTY ESTIMATION", "text": "So far we have developed an inexpensive fine-tuning procedure to obtain BNNs from deterministic NNs. While BNNs can offer uncertainty estimates, these uncertainty measures are highly non-smooth due to the non-convexity of NNs – they might exhibit high uncertainty for data from faraway out-of-distribution (OOD) regions, but become vulnerable on OOD samples close to the normal ones (Grosse et al., 2018), rendering BNNs unable to react to potentially harmful inputs. We quantify this phenomenon in Section 4.2. To address this problem, we next develop methods to further calibrate the uncertainty estimation of naively trained BNNs. Inspired by recent work on OOD detection (Wang et al., 2020; Durall et al., 2020), we propose to additionally incorporate uncertainty regularization on top of the above fine-tuning procedure. The idea is to force BNNs to generate inconsistent predictions for each sample from a cheaply collected OOD sample set, so that they acquire the ability to yield high uncertainty on OOD samples with similar fingerprints.

Algorithm 1: BayesAdapter
Input: normal training set $\mathcal{D}$ of size $n$, OOD training set $\mathcal{D}^\dagger$, weight decay coefficient $\lambda$ for both the pre-training and the fine-tuning, threshold $\gamma$, learning rates $lr_\mu$ and $lr_\psi$, fine-tuning epochs $T$
1: Pre-train the DNN counterpart of the target BNN on $\mathcal{D}$ by MAP; denote the converged parameters as $\mu$
2: Create randomly initialized parameters $\psi$; make the computation modules Bayesian (see Figure 2)
3: Build optimizers $opt_\mu$ and $opt_\psi$ (see Figure 1) with learning rates $lr_\mu$ and $lr_\psi$ for $\mu$ and $\psi$, respectively
4: for epoch = 1, 2, ..., T do
5:     for mini-batch $B = \{(x_i, y_i)\}_{i=1}^{|B|}$ in $\mathcal{D}$ and mini-batch $B^\dagger = \{x_i^\dagger\}_{i=1}^{|B^\dagger|}$ in $\mathcal{D}^\dagger$ do
6:         Build the whole mini-batch $\{x_1, \dots, x_{|B|}, x_1^\dagger, \dots, x_{|B^\dagger|}^\dagger, x_1^\dagger, \dots, x_{|B^\dagger|}^\dagger\}$ (the OOD batch appears twice, once per MC sample), and feed it into the model
7:         Given the predictive distribution and the labels $\{y_i\}_{i=1}^{|B|}$, compute $\mathcal{L}^*_{ell}$ and $\mathcal{L}_{unc}$
8:         Derive the gradients of $\mathcal{L}^*_{ell} + \mathcal{L}_{unc}$ w.r.t. $\mu$ and $\psi$ via AutoGrad
9:         Update the parameters $\mu$ and $\psi$ with the optimizers $opt_\mu$ and $opt_\psi$

To achieve this, we start by defining a differentiable uncertainty metric in terms of mutual information, following Smith & Gal (2018):

$$I(w, y|x,\mathcal{D}) \approx H\Big(\frac{1}{S}\sum_{s=1}^{S} p(y|x;w^{(s)})\Big) - \frac{1}{S}\sum_{s=1}^{S} H\big(p(y|x;w^{(s)})\big), \quad w^{(s)} \sim \mathcal{N}(w;\mu,\psi),\ s = 1,\dots,S, \tag{6}$$

where $H$ is the Shannon entropy. $I$ correlates highly with softmax variance (Smith & Gal, 2018), and measures the epistemic uncertainty, which describes uncertainty in the model and can be used to identify OOD instances. Then, assuming access to an OOD dataset $\mathcal{D}^\dagger = \{x_i^\dagger\}_{i=1}^{n^\dagger}$, we enforce the model to behave uncertainly on each of its elements by optimizing a margin loss with threshold $\gamma$:

$$\max_{\theta}\ \mathcal{L}_{unc} = \frac{1}{|B^\dagger|}\sum_{x_i^\dagger \in B^\dagger} \min\big(I(w, y|x_i^\dagger,\mathcal{D}),\ \gamma\big), \tag{7}$$

where $B^\dagger$ refers to a mini-batch of OOD data.
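A minimal sketch of Eqs. (6) and (7), assuming `probs` stacks the S Monte Carlo predictive distributions with shape (S, batch, classes); the returned objective is to be maximized (or negated when used as a loss):

```python
import torch

def mutual_information(probs, eps=1e-12):
    # probs: (S, B, C) softmax outputs from S sampled parameter sets, Eq. (6).
    mean = probs.mean(dim=0)                                  # predictive distribution
    h_mean = -(mean * (mean + eps).log()).sum(-1)             # entropy of the mean
    mean_h = -(probs * (probs + eps).log()).sum(-1).mean(0)   # mean of the entropies
    return h_mean - mean_h                                    # epistemic uncertainty

def uncertainty_objective(probs_ood, gamma=0.75):
    # Margin objective of Eq. (7): push OOD uncertainty up to the threshold gamma.
    return torch.clamp(mutual_information(probs_ood), max=gamma).mean()
```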
For efficiency, we adopt S = 2 MC samples for estimating I(w, y|x†i ,D) in Eq. (7) in the training. While this loss has a seemingly opposite form from the consistency-promoting loss in semi-supervised learning (SSL) (Laine & Aila, 2016), they share the same design philosophy: Lunc maximizes the prediction inconsistency of OOD instances so as to distinguish them from in-distribution instances, while SSL minimizes the prediction inconsistency of unlabeled data so to classify them without labels. Put it in the context of autonomous driving: if the model is trained on data only containing scenes in regular weather, we can take a small set of scene data of extreme weather, e.g., tornado and sandstorm, to regularize the training following Eq. (7). Then the model will learn to identify these abnormal scenes based on predictive uncertainty, thus can refuse to make unreliable decisions in these scenes.\nConstructing the OOD dataset D† is flexible and application-specific. In discriminative tasks, two types of OOD data of particular concerns are adversarial and fake samples, which can be both collected trivially following procedures described below.\nAdversarial samples. Directly generating adversary samples following methods like PGD (Madry et al., 2017) might be expensive. We propose a more cost-effective alternative based on a key observation: given a valid perturbation space [−δm, δm]d where δm is the maximum norm under the l∞ threat model, we can see that uniform noises δ ∼ U(−δm, δm)d radically encompass the adversarial perturbations which usually reside at local optimas. Thus we can add uniformly perturbed samples into uncertainty training to direct the model to behave uncertainly on randomly contaminated data, bypassing the potential cost of generating real adversary samples. The results in Sec 4.2 surprisingly confirm the effectiveness of uniform noises, and imply a strong connection between uniform noises and adversarial ones, which deserves a future investigation.\nFake samples. Fake samples can be obtained by utilizing pretrained state-of-the-art GANs (Miyato et al., 2018; Brock et al., 2018), DeepFake (Deepfakes, 2018), and FaceSwap (Faceswap, 2018). We use only 1000 random fake samples for Bayesian fine-tuning on diverse benchmarks.\nFor both, we empirically find the proposed uncertainty regularization is data efficient – with access to a proxy set of adversarial samples and a small set of fake samples, the model can acquire reliable, transferable uncertainty quantification." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we evaluate BayesAdapter on a diverse set of challenging benchmarks.\nSettings. We first conduct experiments on CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009) using wide-ResNet-28-10 (Zagoruyko & Komodakis, 2016) and ResNet-50 (He et al., 2016), respectively. Besides, we train face recognition models on CASIA (Yi et al., 2014) with MobileNetV2 (Sandler et al., 2018), and perform comprehensive evaluation on face verification datasets including LFW (Huang et al., 2007), CPLFW (Zheng & Deng, 2018), CALFW (Zheng et al., 2017), CFP (Sengupta et al., 2016), VGGFace2 (Cao et al., 2018), and AgeDB-30 (Moschoglou et al., 2017). We pre-train DNNs following standard protocols, and perform Bayesian fine-tuning for 40, 12, and 16 epochs with weight decay coefficients (i.e., λ) 2e-4, 1e-4, and 5e-4 on these benchmarks respectively. 
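As an aside on the OOD construction described above, the uniform-noise proxy for adversarial data is essentially a one-liner; a sketch assuming image inputs normalized to [0, 1] and an l∞ budget `delta_m`:

```python
import torch

def uniform_ood(x, delta_m):
    # Cheap adversarial proxy: uniform noise inside the valid l-infinity ball,
    # which encompasses the perturbations an attacker could use.
    delta = torch.empty_like(x).uniform_(-delta_m, delta_m)
    return (x + delta).clamp(0.0, 1.0)
```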
We set the uncertainty threshold γ according to the observation that normal examples usually present < 0.75 mutual information uncertainty across the studied scenarios. We therefore use γ = 0.75 in the regularization to push both the adversarial and the fake data to exhibit uncertainty distinguishable from that of the normal data. We initialize ψ uniformly from $[-6, -5]^p$ and use 20-step PGD as validation adversaries. On the three benchmarks, the perturbation budgets $\delta_m$ are set to 0.031, 16/255, and 16/255, and the fake samples are obtained from SNGAN (Miyato et al., 2018), BigGAN (Brock et al., 2018), and DeepFake, respectively. We perform intensive data augmentation for the fake training data with a random strategy including Gaussian blur, JPEG compression, etc. We defer more details to Appendix B. We run every experiment 3 times on 8 RTX 2080 Ti GPUs and report the average.

Baselines. We compare the full BayesAdapter to extensive baselines including: (1) MAP; (2) Laplace Approx.: Laplace approximation with a diagonal Fisher information matrix; (3) MC dropout (detailed in Appendix B); (4) BNN: BNNs trained from scratch by solving Eq. (2) without uncertainty regularization; (5) BayesAdapter-: a variant of BayesAdapter without uncertainty regularization. We also include Deep Ensemble (Lakshminarayanan et al., 2017), one of the state-of-the-art BNNs, and SWAG (Maddox et al., 2019), whose performance is not worse than SGLD (Welling & Teh, 2011), KFAC Laplace (Ritter et al., 2018), and temperature scaling (Guo et al., 2017), into the comparison on CIFAR-103.

3Currently, we have not scaled Deep Ensemble and SWAG up to ImageNet due to resource constraints.

Metrics. We consider (i) the posterior predictive performance with S = 100 MC samples; (ii) the average precision (AP) of directly using the uncertainty estimated by Eq. (6) (S = 20) to distinguish OOD test samples (labeled 1) from normal test samples (labeled 0). Eq. (6) is 0 for the deterministic baseline MAP, so we take the predictive entropy as an alternative uncertainty measure for MAP." }, { "heading": "4.1 PREDICTIVE PERFORMANCE", "text": "We compare the prediction performance, which is of central importance in practice, of the various methods in Tables 1 and 2. Deep Ensemble shows superior classification performance because the ensemble candidates can investigate diverse function modes, but it is orders of magnitude more expensive than BayesAdapter. BayesAdapter- notably surpasses MAP, especially in NLL, verifying the modeling superiority of a Bayesian formulation and highlighting the practical value of our workflow. Laplace Approx. is consistently worse than MAP. In all settings, BNN is significantly defeated by BayesAdapter-, confirming our claim that performing Bayesian fine-tuning from converged deterministic checkpoints is beneficial to bypass the local optima potentially encountered by direct Bayesian inference. The popular baselines MC dropout and SWAG show weaker performance on ImageNet and CIFAR-10, respectively, revealing limited applicability. Also of note is that no method shows dominant performance on face recognition, probably due to the diversity of these validation sets. Across these tasks, BayesAdapter is slightly worse than its regularization-free version BayesAdapter-. This is reasonable since such a regularization enforces the model to trade partial capacity for fidelity of uncertainty estimates. Nevertheless, BayesAdapter is substantially better than its fine-tuning starting point MAP and the BNN trained from scratch in most settings.

Speedup.
BayesAdapter is a much more economical way to obtain BNNs. To interpret the speedup of BayesAdapter over BNN, we assume the deterministic ResNet-50 takes one unit of time t for one epoch of training on ImageNet, and observe that the Bayesian ResNet-50 takes ≈ 2.1t per epoch. Thus, a BNN trained from scratch consumes 189t for 90-epoch training, while BayesAdapter- needs t × 90 + 2.1t × 12 = 115.2t, saving 73.8t (around 40%) of training time compared with BNN.4

4In practice, BayesAdapter would be a little slower than BayesAdapter- due to the incorporation of the OOD training set, but still much more efficient than BNN." }, { "heading": "4.2 QUALITY OF UNCERTAINTY ESTIMATES", "text": "We study the effects of the proposed uncertainty regularization by visualizing the predictive uncertainty of BayesAdapter and BayesAdapter- on validation data in Figure 3. On both CIFAR-10 and ImageNet, BayesAdapter yields evidently higher uncertainty for OOD data than for normal data, while BayesAdapter- shows the opposite pattern, indicating that the regularization can effectively calibrate the predictive uncertainty.

To precisely evaluate its efficacy, we quantitatively assess the quality of the predictive uncertainty of the various methods by estimating AP, which reflects whether the model knows what it knows. As stated, we take adversarial samples crafted by PGD and fake samples from GANs and DeepFake as proxies of harmful OOD data. We report the results in Table 3 and in Table 6 in Appendix C. As shown, SWAG, Laplace Approx., MC dropout, BNN, and BayesAdapter- all perform as badly as MAP across settings, except that MC dropout is capable of partially detecting OOD data on face tasks5. Despite impressive prediction accuracy, Deep Ensemble also yields unreliable uncertainty estimates on these two kinds of challenging OOD data. These results echo our concern about the reliability of existing BNNs' predictive uncertainty. By contrast, BayesAdapter, which is fine-tuned upon MAP for only several rounds based on low-cost supervision, achieves near-perfect results in detecting OOD instances on CIFAR-10 and face recognition, and also detects most of the OOD instances on ImageNet (see Appendix D for some samples).

5We speculate this may relate to where dropout is added in the NN architecture, but leave it for future study." }, { "heading": "4.3 ABLATION STUDY", "text": "Model calibration. Model calibration is another important aspect of uncertainty estimation. Following pioneering works, we take the Expected Calibration Error (ECE) (Guo et al., 2017) as the measure of calibration, and report the ECE of the studied methods in Table 4. The ECE of BayesAdapter is on par with the MC dropout, Deep Ensemble, and BNN baselines, but BayesAdapter can meanwhile offer much better uncertainty for detecting risky OOD data.

Transferability of uncertainty quantification. One may wonder whether the uncertainty quantification learned from specialized OOD data can generalize to other OOD data. To investigate this, we evaluate the BayesAdapter trained on CIFAR-10 on 10000 samples from PGGAN (Karras et al., 2017), whose patterns are unseen during training. We compute their uncertainty and calculate the AP metric, obtaining 0.932. For comparison, the APs of MAP, Deep Ensemble, SWAG, Laplace Approx., MC dropout, BNN, and BayesAdapter- on such data are 0.789, 0.797, 0.809, 0.800, 0.792, 0.793, and 0.803, respectively. On the other hand, we craft adversarial examples by the fast gradient sign method (FGSM) (Goodfellow et al., 2014) against the ResNet-152 DNN model with 1000 validation images from ImageNet.
Then we estimate the AP on these instances, and obtain 0.011, 0.125, 0.027, 0.202, 0.019, and 0.882 for MAP, Laplace Approx., MC dropout, BNN, BayesAdapter-, and BayesAdapter respectively. These studies validate the transferability of our uncertainty estimation.\nThe effectiveness of exemplar reparameterization. We build a toy model with only a Bayesian convolutional layer, fixing the model input and target output, and computing the variance of stochastic gradients across 500 runs. We average the gradient variance ofµ andψ over all their coordinates, and observe that standard reparameterization typically introduces more than 100× variance than exemplar reparameterization, despite with the same FLOPS.\nAblation study on uncertainty threshold γ. We perform an ablation study regarding γ on CIFAR10 to evaluate the hyper-parameter tolerance of the proposed method. Table 5 presents the results. The results reveal that values of γ ∈ [0.75, 1.0] may be good choices for OOD detection, and also echo the observation that normal examples usually present < 0.75 uncertainty.\nThe impacts of ensemble number. We draw the change of test accuracy w.r.t. the number of MC samples S for estimating Eq. (3) in Figure 4. The model is trained by BayesAdapter on ImageNet. The points on the red line represent the individual accuracies of the 100 parameter samples. The yellow dashed line refers to the deterministic inference with only the Gaussian mean. The green line displays the effects of Bayes ensemble – the predictive performance increases from < 74% to > 76% quickly before seeing 20 parameter samples, and gradually saturate after that. That is why we use 20 samples for estimating uncertainty and crafting adversarial samples.\nUncertainty-based rejective decision. In practice, we expect our models can reject to predict for data with relatively large uncertainty, and only care about the data that they are certain about. In this spirit, we sort the validation data of ImageNet w.r.t. the uncertainty provided by BayesAdapter, and split them into 10 buckets of equal size. We depict the average accuracy of each bucket in Figure 5. As expected, our BNN is more accurate for instances with smaller uncertainty. Quantitatively, there are 95% instances with uncertainty less than 0.45, and their accuracy is 78.6%; there are 90% instances with uncertainty less than 0.37, and their accuracy is\n80.7%; there are 80% instances with uncertainty less than 0.25, and their accuracy is 84.8%." }, { "heading": "5 RELATED WORK", "text": "Fruitful works have emerged in the BNN community in the last decade (Graves, 2011; Welling & Teh, 2011; Blundell et al., 2015; Kingma & Welling, 2013; Balan et al., 2015; Liu & Wang, 2016; Kendall & Gal, 2017). However, most of the existing works cannot achieve the goal of practicability. For example, Liu & Wang (2016); Louizos & Welling (2016; 2017); Shi et al. (2018a); Sun et al. (2019) trade learning efficiency for flexible variational posteriors, leading to restrictive scalability. Khan et al.; Zhang et al.; Osawa et al. build Adam-like optimizers to do variational inference, but their parallel training throughput and compatibility with data augmentation are inferior to SGD. Empirical Bayes methods, e.g., Monte Carlo (MC) dropout (Gal & Ghahramani, 2016), deep ensemble (Lakshminarayanan et al., 2017), and SWAG (Maddox et al., 2019), usually maintain impressive predictive performance, but suffer from degenerated uncertainty estimates (Fort et al., 2019) or expensive training/storage cost. 
What’s worse, the existing works usually evaluate on impractical OOD data (Louizos & Welling, 2017; Pawlowski et al., 2017) to show the promise of Bayesian principle. Instead, we offer a new evaluation standard in this work, which may benefit the following works.\nLaplacian approximation (Bleistein & Handelsman, 1986; Ritter et al., 2018) is a known approach to transform a DNN to a BNN, but it is inflexible due to its postprocessing nature and some strong assumptions made for practical concerns. Alternatively, BayesAdapter works in the style of finetuning, which is more natural and economical for deep networks. Bayesian modeling the last layer of a DNN is proposed recently (Kristiadi et al., 2020), and its combination with BayesAdapter deserves an investigation. BayesAdapter connects to MOPED (Krishnan et al.) in that their variational configurations are both based on MAP. Yet, beyond MOPED, BayesAdapter is further designed to achieve good user-friendliness, improved learning stability, and trustable uncertainty estimation, by virtue of optimizers with built-in weight decay, exemplar reparameterization, and uncertainty regularization, respectively, which significantly boost the practicability of BayesAdapter, especially in real-world and large-scale settings." }, { "heading": "6 CONCLUSION", "text": "In this work, we propose a scalable BayesAdapter framework to learn practical BNNs. Our core idea is to learn a BNN by first pre-training its DNN counterpart and then performing Bayesian finetuning. In BayesAdapter, we develop a plug-and-play instantiation of stochastic VI, and propose exemplar reparameterization to reduce the gradient variance. We also propose a generic uncertainty regularization to calibrate the uncertainty quantification given low-cost supervisions. We evaluate BayesAdapter in diverse realistic scenarios and report promising results." }, { "heading": "A THE EXEMPLAR VERSION OF POPULAR OPERATORS", "text": "As introduced in Sec 2.2, the regular convolution can be elegantly converted into an exemplar version by resorting to group convolution. The other popular operators are relatively easy to handle. For example, we substitute the qualified batch matrix multiplication, which is highly optimized in the well-known autodiff libraries, for matrix multiplication. For the affine transformation in batch normalization (Ioffe & Szegedy, 2015), we can at first sample dedicated affine weight and bias for every exemplar in the batch, then perform transformation with these two batches of parameters by just not broadcasting on the batch dimension." }, { "heading": "B MORE EXPERIMENTAL DETAILS", "text": "The only two important hyper-parameters are the weight decay coefficient λ and the uncertainty threshold γ. Other hyper-parameters for defining PGD or specifying learning rates, etc., all follow standard practice in the DL community. The number of fake data training (1000) and the number of MC samples for evaluation (S) are flexible and not tuned.\nFor λ, we keep it consistent between pre-training and fine-tuning (stated in Algorithm 1), without elaborated tuning, for example, λ = 2e − 4 for the wide-ResNet-28-10 architecture on CIFAR-10, λ = 1e−4 for ResNet-50 architecture on ImageNet, and λ = 5e−4 for MobileNet-V2 architecture on CASIA. These values correspond to isotropic Gaussian priors with σ20 as 0.1, 0.0078, and 0.0041 on CIFAR-10, ImageNet, and CASIA, respectively. It is notable that for a “small” dataset like CIFAR-10, a flatter prior is preferred. 
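These values follow from the relation λ = 1/(σ₀²n) of Section 2.2; a quick check, where the training-set sizes are the commonly quoted figures for each benchmark and are our assumption, not stated in the text:

```python
# sigma_0^2 = 1 / (lambda * n); n values below are approximate training-set sizes.
for name, lam, n in [("CIFAR-10", 2e-4,    50_000),
                     ("ImageNet", 1e-4, 1_281_167),
                     ("CASIA",    5e-4,   494_414)]:
    print(f"{name}: sigma_0^2 = {1.0 / (lam * n):.4f}")
# -> 0.1000, 0.0078, 0.0040 (the last matches the quoted 0.0041 up to rounding)
```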
While on larger datasets with stronger data evidence, we need a sharper prior for regularization.\nFor γ, we use γ = 0.75 for training across all the scenarios. But it is not used for OOD detection in the testing phase. For estimating the results of OOD detection, we use the non-parametric metric average precision (see the metric part of Section 4), which is the Area Under the Precision-Recall Curve and is more suitable than the ROC-AUC metric when there is class imbalance.\nFor the pre-training, we follow standard protocols available online. On CIFAR-10, we perform CutOut (DeVries & Taylor, 2017) transformation upon popular resize/crop/flip transformation for data augmentation. On ImageNet, we leverage the ResNet-50 checkpoint on PyTorch Hub as the converged deterministic model. On face tasks, we train MobileNetV2 following popular hyperparameter settings, and the pre-training takes 90 epochs. We use the same weight decay coefficients in both the pre-training and the fine-tuning.\nFor the fine-tuning, we set lrψ to decay at 1/4, 1/2, and 3/4 of the total fine-tuning steps from 0.1, and set lrµ to be the final value of lrψ on the CIFAR-10, ImageNet, and face recognition benchmarks. We add a coefficient 3 before the Lunc term in Line 8 of Algorithm 1 for Bayesian fine-tuning on ImageNet to achieve better uncertainty calibration. For models on face recognition, we utilize the features before the last FC layer of the MobileNetV2 architecture to conduct feature distance-based face classification in the validation phase, due to the open-set nature of the validation data. The Bayes ensemble is similarly achieved by assembling features from multiple runs as the final feature for estimating predictive performance. But we still adopt the output from the last FC layer for uncertainty estimation (i.e., calculating Eq. (6)).\nThe training perturbation budget is identical to the evaluation budget on CIFAR-10 and ImageNet. But we set the budget of the uniform noise used for training in face tasks to be 1/4 of the evaluation budget to make the models more sensitive to the perturbed data. We adopt PGD for generating adversarial samples in the validation phase. Concretely, we attack the posterior predictive objective, i.e., Eq. (3), with S = 20 MC samples. On CIFAR-10, we set δm = 0.031 and perform PGD for 20 steps with step size at 0.003. On ImageNet and face recognition, we set δm = 16/255 and perform PGD for 20 steps with step size at 1/255.\nRegarding the fake data, we craft 1000 fake samples for training and 10000 ones for evaluation with SNGAN (Miyato et al., 2018) on CIFAR-10; we craft 1000 fake samples for training and 1000 ones for evaluation with BigGAN (Brock et al., 2018) on ImageNet; we randomly sample 1000 fake samples for training and 10000 ones for evaluation from DeepFakes (Deepfakes, 2018), FaceSwap (Faceswap, 2018) and Face2Face (Thies et al., 2016) on face recognition.\nAs for the MC dropout, we add dropout-0.3 (0.3 denotes the dropout rate) before the second convolution in the residual blocks in wide-ResNet-28-10, dropout-0.2 after the second and the third convolutions in the bottleneck blocks in ResNet-50, and dropout-0.2 before the last fully connected (FC) layer in MobileNetV2.\nFor reproducing Deep Ensemble, we train 5 MAPs separately, and assemble them for prediction and uncertainty quantification. For reproducing SWAG, we take use of its official implementation, and leverage 20 MC samples for decision making." 
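For concreteness, the validation adversaries described above amount to standard PGD on the Bayes-ensemble objective of Eq. (3); a minimal sketch, where `ensemble_logprob` is an assumed helper returning the log of the S-sample averaged predictive, and the step size varies per benchmark as stated in the text:

```python
import torch
import torch.nn.functional as F

def pgd_on_ensemble(ensemble_logprob, x, y, delta_m, steps=20, step_size=0.003):
    # ensemble_logprob(x): log of the S-sample averaged predictive of Eq. (3).
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.nll_loss(ensemble_logprob(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = x + (x_adv - x).clamp(-delta_m, delta_m)  # project to the l-inf ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```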
}, { "heading": "C MORE RESULTS FOR UNCERTAINTY ESTIMATION", "text": "We provide the comparison on the quality of uncertainty estimates on face recognition in Table 6. It is an immediate observation that BayeAdapter outperforms the extensive baselines significantly, and can detect almost all the OOD instances across the validation datasets. By contrast, BayeAdapter-, MAP, and BNN are similarly unsatisfactory. Surprisingly, MC dropout exhibits some capacity to detect adversarial instances and DeepFake ones in the face tasks. Comparing these results with those of MC dropout on CIFAR-10 and ImageNet, we speculate that such results may stem from the location of deploying dropout in the architecture, which deserves a future investigation.\nD VISUALIZATION OF SOME OOD DATA\nWe provide some random samples of the OOD data used for evaluation in Figure 6. Obviously, these samples are pretty realistic and challenging.\nE VISUALIZATION OF THE LEARNED POSTERIOR\nWe plot the parameter posterior of the first convolutional kernel in ResNet-50 architecture learned by BayesAdapter on ImageNet. The results are depicted in Figure 7. The learned posterior variance seems to be disordered, unlike the mean. We leave more explanations as future work." } ]
2020
null
SP:8359aea398860c827e9751215f55d399b2c9cfc0
[ "This paper proposes WordsWorth score (WW score), a score to represent the importance of the word obtained from the trained model. Then, the score is applied to the greedy attack proposed by (Yang et al., 2018). In detail, the greedy attack first tries to search for the most important $k$ words in a text, and then it searches for values to replace the selected $k$ words. This paper uses the WW score to select the $k$ words in the first step." ]
Black box attacks on traditional deep learning models trained for text classification target important words in a piece of text, in order to change the model prediction. Current approaches towards highlighting important features are time consuming and require a large number of model queries. We present a simple yet novel method to calculate word importance scores, based on model predictions on single words. These scores, which we call WordsWorth scores, need to be calculated only once for the training vocabulary. They can be used to speed up any attack method that requires word importance, with negligible loss of attack performance. We run experiments on a number of datasets trained on word-level CNNs and LSTMs, for sentiment analysis and topic classification, and compare to state-of-the-art baselines. Our results show the effectiveness of our method in attacking these models with success rates that are close to the original baselines. We argue that global importance scores act as a very good proxy for word importance in a local context because words are a highly informative form of data. This aligns with the manner in which humans interpret language, with individual words having well-defined meaning and powerful connotations. We further show that these scores can be used as a debugging tool to interpret a trained model by highlighting relevant words for each class. Additionally, we demonstrate the effect of overtraining on word importance, compare the robustness of CNNs and LSTMs, and explain the transferability of adversarial examples across a CNN and an LSTM using these scores. We highlight the fact that neural networks make highly informative predictions on single words.
[ { "affiliations": [], "name": "WORDSWORTH SCORES" }, { "affiliations": [], "name": "ATTACKING CNNS" } ]
[ { "authors": [ "Moustafa Alzantot", "Yash Sharma", "Ahmed Elgohary", "Bo-Jhang Ho", "Mani Srivastava", "Kai-Wei Chang" ], "title": "Generating natural language adversarial examples", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Samuel Barham", "Soheil Feizi" ], "title": "Interpretable adversarial training for text", "venue": "CoRR, abs/1905.12864,", "year": 2019 }, { "authors": [ "Brandon Carter", "Jonas Mueller", "Siddhartha Jain", "David Gifford" ], "title": "What made you do this? understanding black-box decisions with sufficient input subsets, 2018", "venue": null, "year": 2018 }, { "authors": [ "Shi Feng", "Eric Wallace", "Alvin Grissom II", "Mohit Iyyer", "Pedro Rodriguez", "Jordan Boyd-Graber" ], "title": "Pathologies of neural models make interpretations difficult", "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Ji Gao", "Jack Lanchantin", "Mary Lou Soffa", "Yanjun Qi" ], "title": "Black-box generation of adversarial text sequences to evade deep learning classifiers. 2018 IEEE Security and Privacy Workshops (SPW), May 2018", "venue": "doi: 10.1109/spw.2018.00016. URL http://dx.doi.org/10.1109/SPW", "year": 2018 }, { "authors": [ "Zhitao Gong", "Wenlu Wang", "Bo Li", "Dawn Song", "Wei-Shinn Ku" ], "title": "Adversarial texts with gradient methods", "venue": null, "year": 2018 }, { "authors": [ "Yu-Lun Hsieh", "Minhao Cheng", "Da-Cheng Juan", "Wei Wei", "Wen-Lian Hsu", "Cho-Jui Hsieh" ], "title": "On the robustness of self-attentive models", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Di Jin", "Zhijing Jin", "Joey Tianyi Zhou", "Peter Szolovits" ], "title": "Is BERT really robust? natural language attack on text classification and entailment", "venue": "CoRR, abs/1907.11932,", "year": 2019 }, { "authors": [ "Ákos Kádár", "Grzegorz Chrupała", "Afra Alishahi" ], "title": "Representation of linguistic form and function in recurrent neural networks", "venue": "Computational Linguistics,", "year": 2017 }, { "authors": [ "Volodymyr Kuleshov", "Shantanu Thakoor", "Tingfung Lau", "Stefano Ermon" ], "title": "Adversarial examples for natural language classification problems, 2018", "venue": "URL https://openreview.net/ forum?id=r1QZ3zbAZ", "year": 2018 }, { "authors": [ "Qi Lei", "Lingfei Wu", "Pin-Yu Chen", "Alex Dimakis", "Inderjit S. Dhillon", "Michael J. Witbrock" ], "title": "Discrete adversarial attacks and submodular optimization with applications to text classification", "venue": "Proceedings of Machine Learning and Systems", "year": 2019 }, { "authors": [ "Jinfeng Li", "Shouling Ji", "Tianyu Du", "Bo Li", "Ting Wang" ], "title": "Textbugger: Generating adversarial text against real-world applications", "venue": "Proceedings 2019 Network and Distributed System Security Symposium,", "year": 2019 }, { "authors": [ "Jiwei Li", "Will Monroe", "Dan Jurafsky" ], "title": "Understanding neural networks through representation erasure", "venue": "CoRR, abs/1612.08220,", "year": 2016 }, { "authors": [ "Andrew L. Maas", "Raymond E. Daly", "Peter T. Pham", "Dan Huang", "Andrew Y. 
Ng", "Christopher Potts" ], "title": "Learning word vectors for sentiment analysis", "venue": "In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies,", "year": 2011 }, { "authors": [ "Dong Nguyen" ], "title": "Comparing automatic and human evaluation of local explanations for text classification", "venue": "In Proceedings of the", "year": 2018 }, { "authors": [ "Anibal Pedraza", "Gloria Bueno" ], "title": "Robustness to adversarial examples can be improved with overfitting", "venue": "International Journal of Machine Learning and Cybernetics,", "year": 2020 }, { "authors": [ "Nicolas Papernot", "Patrick D. McDaniel", "Ian J. Goodfellow" ], "title": "Transferability in machine learning: from phenomena to black-box attacks using adversarial samples", "venue": "CoRR, abs/1605.07277,", "year": 2016 }, { "authors": [ "Shuhuai Ren", "Yihe Deng", "Kun He", "Wanxiang Che" ], "title": "Generating natural language adversarial examples through probability weighted word saliency", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Suranjana Samanta", "Sameep Mehta" ], "title": "Towards crafting text adversarial samples", "venue": "CoRR, abs/1707.02812,", "year": 2017 }, { "authors": [ "Jincheng Xu", "Qingfeng Du" ], "title": "On the interpretation of convolutional neural networks for text classification", "venue": "ECAI 2020 - 24th European Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Puyudi Yang", "Jianbo Chen", "Cho-Jui Hsieh", "Jane-Ling Wang", "Michael I. Jordan" ], "title": "Greedy attack and gumbel attack: Generating adversarial examples for discrete data", "venue": "CoRR, abs/1805.12316,", "year": 2018 }, { "authors": [ "Yuan Zang", "Fanchao Qi", "Chenghao Yang", "Zhiyuan Liu", "Meng Zhang", "Qun Liu", "Maosong Sun" ], "title": "Word-level textual adversarial attacking as combinatorial optimization, 2019", "venue": null, "year": 2019 } ]
[ { "heading": null, "text": "Black box attacks on traditional deep learning models trained for text classification target important words in a piece of text, in order to change model prediction. Current approaches towards highlighting important features are time consuming and require large number of model queries. We present a simple yet novel method to calculate word importance scores, based on model predictions on single words. These scores, which we call WordsWorth scores, need to be calculated only once for the training vocabulary. They can be used to speed up any attack method that requires word importance, with negligible loss of attack performance. We run experiments on a number of datasets trained on word-level CNNs and LSTMs, for sentiment analysis and topic classification and compare to state-of-the-art baselines. Our results show the effectiveness of our method in attacking these models with success rates that are close to the original baselines. We argue that global importance scores act as a very good proxy for word importance in a local context because words are a highly informative form of data. This aligns with the manner in which humans interpret language, with individual words having well-defined meaning and powerful connotations. We further show that these scores can be used as a debugging tool to interpret a trained model by highlighting relevant words for each class. Additionally, we demonstrate the effect of overtraining on word importance, compare the robustness of CNNs and LSTMs, and explain the transferability of adversarial examples across a CNN and an LSTM using these scores. We highlight the fact that neural networks make highly informative predictions on single words." }, { "heading": "1 INTRODUCTION", "text": "Deep learning models are vulnerable to carefully crafted adversarial examples. The goal of such an attack is to fool a classifier into giving incorrect prediction while the perturbed input appears normal to human observers. The probelm is important from the point of view of robustness as well as interpretability. Thoroughly analyzing different kinds of vulnerabilities in neural networks would help us in creating robust models for deployment in the real world, in addition to throwing some light on the internal working of these models. In this work, we consider text classification, where finding important words in a body of text is the first step towards malicious modification. For this problem, we propose a novel method for calculating word importance. After training a model, we calculate importance scores over the entire training vocabulary, word by word. We further use these importance scores for black box attacks and demonstrate that the attack success rate is comparable to the original methods, particularly for CNNs. Since these scores are global and calculated over the training vocabulary, they can also be used as a tool to interpret a trained model. They provide a measure for comparing different architectures and models beyond training and validation accuracy. Over a single training dataset, we can compare a small CNN to a large CNN, a CNN to an LSTM, or the word importance distribution of one class against another, as we outline in our experiments section. The motivation for our particular algorithm comes from the fact that in a piece of text, most of the time, words and phrases have a strong influence on their own. 
This gives us a rationale for evaluating a model on single words, in direct contrast to the leave-one-out technique, which involves deleting a word from a document and measuring its importance by the change in model prediction on this modified input. Further, we expect a well-trained network to treat a word approximately the same, irrespective of its location in the input, when the surrounding words are removed. Thus a particular word can occur at any position in a 200-word document and its importance will be roughly the same. We expect a well-trained model to exhibit this behaviour, and our experiments confirm this. In summary, our contributions are as follows:

• We propose a simple and efficient method for calculating word importance for attacking traditional deep learning models in the black box setting.

• We argue that these scores can act as a tool for model interpretation and outline a number of use cases in this context." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 ADVERSARIAL ATTACKS ON NLP MODELS", "text": "The idea of perturbation, whether random or malicious, is rather simple in the image domain, where salt-and-pepper noise added to images can be enough to fool models. This kind of noise is hard for humans to detect. However, since text data is discrete, perturbations in text are difficult to quantify. Besides, people easily notice errors in computer-generated text. This places additional constraints on what counts as a successful NLP attack, where a successful attack is one that forces the model to give an incorrect prediction while a human would make the correct prediction on the input. We limit ourselves to text classification problems, using sentiment analysis and topic classification as examples. We only consider attack scenarios in which specific words in the input are replaced by valid words from the dictionary. Thus we are not considering attacks in which extra information is appended to the input data, or where word replacements purposefully introduce spelling errors. The former take an entirely different approach; the latter introduce errors and do not preserve semantics. In addition, training a neural network to be robust to spelling errors would stop these attacks. Further, we limit ourselves to black box attacks, where the attacker has no information about model architectures and parameters." }, { "heading": "2.2 FIND AND REPLACE ATTACKS ON TEXT CLASSIFICATION", "text": "Most attacks on text classification solve the problem in two parts: by locating important words in the input, and by finding suitable replacements for these words. We only consider attacks where substitutions are valid words picked from a dictionary, to avoid introducing grammatical errors, and we ignore cases where, for example, spelling errors are introduced into important words." }, { "heading": "2.2.1 WHITE BOX ATTACKS", "text": "In the white-box setting, where an attacker has full knowledge of the model architecture, gradients serve as a good proxy for word importance. Gong et al. (2018) use gradient-based methods to locate important words. Samanta & Mehta (2017) use gradients to calculate word importance, with linguistic constraints on the substitution words. Lei et al. (2019) carry out joint word and sentence attacks, generating sentence paraphrases in the first stage and resorting to greedy word substitutions if the first stage fails. Again, important words are located by the magnitude of the gradient of the word embedding."
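A minimal sketch of this gradient-based scoring, assuming white-box access to a PyTorch model; `model_from_emb`, a hook that runs the network from the embedding tensor onwards, is our assumption and not part of any cited method:

```python
import torch
import torch.nn.functional as F

def gradient_saliency(model_from_emb, embedding, token_ids, label):
    # White-box word importance: the norm of the loss gradient w.r.t. each
    # word's embedding vector.
    emb = embedding(token_ids.unsqueeze(0)).detach().requires_grad_(True)
    loss = F.cross_entropy(model_from_emb(emb), torch.tensor([label]))
    loss.backward()
    return emb.grad.norm(dim=-1).squeeze(0)  # one score per input word
```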
}, { "heading": "2.2.2 BLACK BOX ATTACKS", "text": "In the black box scenario, where gradients are not available, saliency maps are calculated for words through different methods. Yang et al. (2018) provide a greedy algorithm which we will outline in detail in the next section. Li et al. (2016) propose masking each feature with zero padding, using the decrease in the predicted probability as the score of the feature or word, and masking the top-k features as unknown. Alzantot et al. (2018) and Kuleshov et al. (2018) propose variations of genetic algorithms. Kuleshov et al. (2018) replace words one by one until the classifier is misdirected while observing a bound on the\nnumber of perturbed features. They run each new iteration on the modified input. For substitution, they used post processed GloVe to find pool of suitable words. They also compute ’thought vectors’ for sentences and ensure that these are preserved. Alzantot et al. (2018) select words by random sampling, where probability of each word being selected is proportional to the number of suitable neighbours for replacement. They use Google 1 billion words language model to ensure that replacements match the context provided by the rest of the input. Ren et al. (2019) propose a saliency-based greedy algorithm, calculated by deleting words during the search phase and select substitutions from WordNet. Another similar attack model is Jin et al. (2019), which has extra semantic similarity checking when searching adversarial examples, and calculates word importance by deleting words. Zang et al. (2019) propose a particle swarm optimization algorithm for the search problem. Gao et al. (2018) define different scoring functions where they look at prediction before and after removing a particular word from a subset of input, and perform character level modifications in the second stage. Li et al. (2019) use the sentence probability directly but once again, when ranking words, they try masking words in a sentence. A common thread among all search methods for black box attacks is erasure or omission, where the effect of a word is observed by comparing the classifier output probability for original input to that for input with this particular word removed or replaced by zero." }, { "heading": "2.3 INTERPRETABILITY IN MACHINE LEARNING THROUGH ERASURE", "text": "Li et al. (2016) is a pioneering body of work in the domain of interpretability that highlights the importance of interpreting networks by erasing parts of various layers. This Leave-One-Out method is followed by most interpretation algorithms. For a particular word, they calculate importance score as the average of prediction difference due to erasing this word from all test examples. Feng et al. (2018) gradually remove unimportant input words so that only the important ones are left at the end. Barham & Feizi (2019) propose sparse projected gradient descent to generate adversarial examples to improve interpretability. Nguyen (2018) looks at different methods of local explanations for labels, which include LIME, random feature deletion and first derivative saliency. Kádár et al. (2017) measure salience of a word by removing it and noting the change in prediction. Jin et al. (2019) mention deleting a particular word to calculate its importance score. Ren et al. (2019) use word saliency which is the change in the classifier output if a word is set to unknown. Carter et al. (2018) find sufficient input subsets while calculating the feature importance by masking words. 
For calculating word score matrices, Xu & Du (2020) propose a method which involves masking words. We want to highlight the aspect that all the dominant techniques for interpretation use the leave-one-out method for calculating word importance. WordsWorth scores provide a reliable way of calculating feature importance, as shown by attack success rates. Thus, they can be reliably used to interpret a model after it has been trained. When these scores show that a particular word is important or unimportant for predicting a particular class, we can be sure that this is how the model behaves." }, { "heading": "3 GREEDY ALGORITHM FOR BLACK BOX ATTACKS", "text": "The greedy algorithm mentioned in Yang et al. (2018) consists of two steps: finding the most important words in a text, and finding the most distracting replacements for these words, with some constraint. For an attack where k features are allowed to be perturbed, the top k important words are picked first, and then replaced one by one. In the first step, greedy finds the most important words by calculating importance scores for each word in the input using the leave-one-out technique. The score of a word is the difference in prediction probability for the original input and for the input with the word removed. The second step of the algorithm involves finding suitable replacements for these words. Throughout this paper we will use their greedy algorithm as a baseline for comparison, since it achieves the highest success rate among all black box methods (Hsieh et al., 2019). Greedy uses the pretrained GloVe embeddings and limits the search in the second step to within a prespecified distance, to preserve semantics. However, it should be noted that GloVe embeddings do not always provide semantics-preserving replacements, and a post-processed form of embeddings would work better, such as the ones used by Kuleshov et al. (2018). In our experiments, we use 50-dimensional GloVe embeddings to find replacements for important words. We limit our search to the ten nearest neighbours for each word." }, { "heading": "4 WORDSWORTH SCORES FOR FEATURE IMPORTANCE", "text": "For determining the importance of individual words in a text document, we propose WordsWorth scores, which are the prediction scores of each individual word in the vocabulary, from the trained classifier. Since the CNNs and LSTMs on which we experiment have a fixed input size (limited to 200 words throughout the experiments), for calculating these scores, the integer representation of the word (from the tokenizer) is padded with zeros and fed to the classifier. This is equivalent to evaluating the classifier on a piece of text where the text consists of a single word.
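Before the formal algorithm, here is a minimal sketch of this single-word evaluation, assuming a Keras-style classifier with a fixed input length d and a tokenizer that reserves id 0 for padding; the helper name is ours.

```python
import numpy as np

def wordsworth_scores(model, vocab_size, d):
    """WordsWorth scores: evaluate the trained classifier on inputs containing
    a single vocabulary word, zero-padded to length d.
    Returns an array of shape (vocab_size, num_classes)."""
    batch = np.zeros((vocab_size, d), dtype=np.int64)
    batch[:, -1] = np.arange(1, vocab_size + 1)  # word ids 1..vocab_size; 0 is padding
    return model.predict(batch)                  # one batched pass over the vocabulary
```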
The algorithm for greedy attack using WordsWorth scores is given below.\n\nAlgorithm 1 (Step 1): Calculate WW scores over V, the training vocabulary\nInput: F, a trained CNN or LSTM\nInput: p, the number of classes in the data\nInput: d, the size of the classifier input\nInput: V, the training vocabulary of m words\nOutput: WW ∈ R^{m×p}, WordsWorth scores over the training vocabulary\n1: for w ∈ V do\n2: define x = (0_0, 0_1, 0_2, ..., 0_{d−2}, w)\n3: WW(w) = F(x)\n4: end for\n\nAlgorithm 2 (Step 2): Greedily replace the top k words to maximize the incorrect class probability\nInput: F, a trained CNN or LSTM\nInput: X ∈ R^d, the text input to be modified\nInput: k, the maximum number of features to be perturbed\nInput: D ∈ R^{m×10}, the nearest-neighbour dictionary with 10 neighbours for each training vocabulary word\nInput: WW ∈ R^{m×p}, WordsWorth scores over the training vocabulary\nOutput: X′, the maliciously modified input with at most k words modified\n1: pick indices i_1, i_2, ..., i_k such that WW(X_{i_1}) ≥ WW(X_{i_2}) ≥ ... ≥ WW(X_{i_k})\n2: initialize X′ = X\n3: for j ∈ {i_1, i_2, ..., i_k} do\n4: w_j = X′_j; D_j = the 10 nearest neighbours of w_j\n5: for w ∈ D_j do\n6: define the candidate X′_w by replacing position j of X′ with w and keeping all other positions unchanged\n7: end for\n8: X′ = argmax over the candidates X′_w of |F(X′_w) − F(X′)|\n9: end for" }, { "heading": "5 EXPERIMENTS", "text": "Comparison with two other black box attacks: Here we present the performance of del one (Li et al., 2016) and greedy (Yang et al., 2018), along with their modified versions, where word importance has been computed through WordsWorth scores. We call the modified versions del one ww and greedy ww, respectively. We also show the AUC scores for the original data, named original, to serve as a baseline." }, { "heading": "5.1 SENTIMENT ANALYSIS: IMDB REVIEWS", "text": "" }, { "heading": "5.1.1 DATASET AND MODEL ARCHITECTURE", "text": "We use the IMDB dataset (Maas et al., 2011), which consists of 25000 training reviews and 25000 test reviews of variable length. Each review in the training set has a positive/negative label attached\nto it. The training vocabulary size is 5000 and we truncate each review to at most 200 words. We use a simple CNN as the starting point of our experiments, with a 32-dimensional embedding layer, 50 filters and 100 units in a dense layer. The input layer uses word2vec embeddings that are learned during training. ReLU activation is used. The network is trained on 25000 training examples. We use the Adam optimizer with the default learning rate of 0.001 and use early stopping. Validation accuracy is 88.78%. We pick 300 examples at random from the test dataset and plot the ROC AUC values versus the number of features perturbed for different algorithms. The results for the CNN in figure 1 (left) show that the modified versions of both algorithms have a performance that is comparable to that of the original versions. The distance between greedy and greedy ww is larger than that between del one and del one ww. This implies that if simply deleting words is the strategy, WordsWorth scores are almost as effective as manually deleting each word one by one and finding the one that contributes most to the model prediction." }, { "heading": "5.1.2 RUNTIME COMPARISON WITH BASELINES", "text": "WordsWorth scores can be calculated over the vocabulary learnt during training once the classifier has been trained. At test time, model evaluation can be replaced by a simple lookup; a sketch of the resulting attack is given below. Thus, for a 5000-word vocabulary, WW score calculation takes 5000 model predictions.
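To make the lookup-based attack concrete, here is a sketch of the substitution step (Algorithm 2) using precomputed WordsWorth scores and a precomputed nearest-neighbour dictionary built from GloVe; the `predict_proba` interface and helper names are illustrative, and the sketch shows the untargeted variant that minimizes the correct-class probability.

```python
def greedy_ww_attack(predict_proba, x, ww_scores, neighbours, k, true_class):
    """Replace the k highest-scoring words; ranking is a score lookup,
    so model calls are spent only on evaluating candidate substitutions."""
    positions = sorted(range(len(x)),
                       key=lambda i: ww_scores[x[i]][true_class],
                       reverse=True)[:k]
    x_adv = list(x)
    for j in positions:
        base = predict_proba(x_adv)[true_class]
        best_word, best_drop = x_adv[j], 0.0
        for w in neighbours.get(x_adv[j], []):   # e.g., 10 GloVe neighbours per word
            trial = list(x_adv)
            trial[j] = w
            drop = base - predict_proba(trial)[true_class]
            if drop > best_drop:
                best_word, best_drop = w, drop
        x_adv[j] = best_word
    return x_adv
```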
On the other hand, with 200-word reviews on average, the original baselines (greedy as well as del one) need 5000 evaluations to locate important words just for 25 text examples, because they involve deleting each word in a review to calculate its importance only for this particular review. Thus, if a word appears in multiple reviews, its importance has to be calculated separately in the context of each review, for greedy and del one. If attacks are carried out in bulk, WordsWorth evaluations are essentially free after the first 25 reviews. This amounts to a considerable reduction in computation time and resources. Additionally, WordsWorth score computations use a sparse input, which is more suitable for low-power platforms as compared to greedy and del one." }, { "heading": "5.1.3 DO GREEDY AND GREEDY WW FIND THE SAME WORDS TO BE IMPORTANT?", "text": "During this experiment, when we compared the top ten words found by greedy and greedy ww for each test example, 7.3 words were the same on average. When we looked at the top 5 words, 3.5 were the same on average. This strengthens the idea that both algorithms choose quite similar sets of words for each instance." }, { "heading": "5.1.4 LSTM", "text": "We repeat the experiment on an LSTM with 100 examples chosen at random and report the results in figure 1 (right). Here, similar trends can be observed, with del one ww performing close to del one and greedy ww performing close to greedy. However, the difference here is larger as compared to the CNN, which could be due to the LSTM learning a more robust representation." }, { "heading": "5.2 SENTIMENT ANALYSIS: YELP REVIEWS", "text": "The Yelp reviews dataset consists of positive and negative reviews. We train a CNN with 32 input units, 32 filters and 64 hidden units with ReLU activation. We use 83200 training examples and 15000 validation examples. The CNN has 89.96% training accuracy and 93.74% validation accuracy. We use the Adam optimizer with the default learning rate of 0.001 and use early stopping. The input layer uses word2vec embeddings that are learned during training. We carry out attacks on 500 random test examples and report results in figure 2 (left), showing the accuracy of the classifier on all 500 examples as the number of perturbed features increases. We have added replace random and delete random as two additional baselines. Replace random replaces k features chosen at random, whereas delete random deletes k random features." }, { "heading": "5.3 TOPIC CLASSIFICATION: AG NEWS", "text": "The AG news dataset consists of news articles from 4 categories. We train a CNN with 32 input units, 32 filters and 64 hidden units with ReLU activation. It has 96000 training examples and 24000 validation examples. We train for 2 epochs, reaching 96.4% training accuracy and 94.8% validation accuracy. We use the Adam optimizer with the default learning rate of 0.001 and use early stopping. The input layer uses word2vec embeddings that are learned during training. We carry out attacks on 500 random test examples and report results in figure 2 (right), showing the accuracy of the classifier on all 500 examples as the number of perturbed features increases. The results on this multiclass dataset confirm that WW scores are a good proxy for local word importance. The attacks here are untargeted, with the objective being to minimize the correct class probability. Targeted attacks can also be similarly launched using these scores. A sketch of the CNN configuration used across these experiments follows below."
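For reference, a sketch of the small text CNN described in the sections above, assuming Keras; the embedding size, filter count and dense width follow the text, while the kernel size and pooling choice are our assumptions, so this is an illustration rather than the exact training code.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_text_cnn(vocab_size=5000, seq_len=200, emb_dim=32,
                   filters=32, dense_units=64, num_classes=2):
    """Embedding -> 1-D convolution -> global max pooling -> dense classifier."""
    model = keras.Sequential([
        layers.Embedding(vocab_size + 1, emb_dim, input_length=seq_len),
        layers.Conv1D(filters, kernel_size=3, activation="relu"),  # kernel size assumed
        layers.GlobalMaxPooling1D(),                                # pooling choice assumed
        layers.Dense(dense_units, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model
```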
}, { "heading": "5.4 ADDITIONAL EXPERIMENTS ON IMDB REVIEWS", "text": "In this section we describe a number of other experiments we ran on the IMDB reviews dataset." }, { "heading": "5.4.1 IS GREEDY A LOWER BOUND FOR ATTACK SUCCESS?", "text": "We carried out further experiments by creating a new algorithm that evaluates greedy and greedy ww for each test example and chooses the best result of both. If greedy and greedy ww were finding different types of vulnerabilities, we would have expected the algorithm to perform better than both. In fact, the algorithm did no better than greedy, and thus greedy appears to be a bound for greedy ww. Recall that in greedy, feature deletion is followed by feature insertion, so it does not follow directly that evaluation of input with a feature deleted should perform better than evaluation with everything except the feature deleted. We hypothesize that the success of greedy attacks is partially explained by WordsWorth scores. Most of the times, greedy is just picking the words with the highest global importance and finding replacements. In some cases it optimizes further, which explains its improved performance over\ngreedy ww. We would like to point out that a surprisingly high fraction of successful greedy attacks is explained by our single word scores. This suggests that most of the time, the impact of a word on the prediction is independent of its context." }, { "heading": "5.4.2 THE CASE OF SMALL ARCHITECTURES", "text": "To test the algorithm with smaller networks, we train a very small CNN (8 dimensional embedding layer,8 filters and 16 units in a dense layer) and run our experiments on it. As with all other experiments, we use the Adam optmizer with default learning rate of 0.001 and use early stopping. The input layer uses word2vec embeddings that are learned during training. The test accuracy is the same as that for the larger CNN, but the model appears to be holding up better to greedy ww attacks, as shown in figure 3(left). Compare the results to figure 1 (left), for a large CNN. This shows that text classification problems can be easily learned by relatively small CNNS, particularly when the number of classes is small.\nThe Pearson correlation between WordsWorth scores for this model and our main CNN is 0.83. The relatively poor performance of the bigger CNN could be due to overfitting. Since the task of binary classification is rather simple, the smaller network could be learning more robust and meaningful representations. However, contradictory hypotheses exist for images, such as Oscar Deniz & Bueno (2020). Here, we highlight the fact that robustness to attacks, as well as score comparison, could be one interesting way to compare small vs big and deep vs shallow models." }, { "heading": "5.4.3 TRANSFERABILITY AND SECURITY", "text": "The phenomenon of transferability is well documented in adversarial attacks on deep models, where adversarial examples generated for one trained model are often successful in attacking another model (Papernot et al., 2016). To demonstrate the phenomenon of transferability, we attack an LSTM with greedy and del one, and with greedy ww cnn and del one ww cnn where the WordsWorth scores have been calculated through a CNN, and the second step in adversarial search is evaluated directly on the LSTM. The correlation between the CNN and LSTM WW scores came out to be 0.88. Results of the attacks are shown in figure 3(right).\nThere is some drop in performance but still a noticeable degree of success. 
The close alignment of greedy and greedy ww cnn shows that the importance scores calculated through the CNN are valid for the LSTM too, even though directly using LSTM scores gives better performance. Compare this to figure 1 (right), where the LSTM was attacked with WW scores from the LSTM itself. We argue that this close, non-random alignment in figure 3 (right) explains the phenomenon of transferability in general. Features that are important for one architecture are important for another architecture too, when both models have been trained on the same dataset. Our argument here has two parts: that scores from the CNN and scores from the LSTM have very high correlation (0.88), and scores from the CNN can be used to attack the LSTM (which is the transferability\nphenomenon) with a reasonable success rate, and that the former explains the latter. Additionally, this highlights the aspect that for attacking a black box model, an adversary can train a small model locally and use it to highlight the vulnerable points of a piece of text, while using the black box model to find substitutes, since the latter requires far fewer model evaluations than the former." }, { "heading": "6 INTERPRETING NEURAL NETWORKS THROUGH WORDSWORTH SCORES", "text": "In this section we show how to use WordsWorth scores for interpreting a model." }, { "heading": "6.1 IMDB REVIEWS: LOCATING IMPORTANT WORDS", "text": "For the IMDB dataset, we computed the WordsWorth scores over our entire vocabulary (limited to the 5000 top words) for the CNN as well as the LSTM. The top ten important words for the CNN are given in table 2. In this manner, the model designer can directly find the top-ranked words associated with each sentiment after training and examine errors in training. For the CNN, the mean WW score is 0.559 and the standard deviation is 0.0530. We also include a snapshot of the scores for the entire vocabulary for the CNN in figure 4." }, { "heading": "6.2 AG NEWS", "text": "We computed the WordsWorth scores over our entire vocabulary (limited to the 20000 top words) for the trained CNN. The top ten important words for each category are given in table 3. Here, ’martha’ is the 9th most important word for the category ’Business’, and this potentially represents a generalization error, one that a model designer might want to investigate. Using score-based insights to actually improve generalization is a direction for future research." }, { "heading": "6.2.1 AG NEWS WORD IMPORTANCE SCORE DISTRIBUTION", "text": "We also plot the scores for each class for AG News in figures 6 and 7. Different categories have different distributions associated with them. This is an interesting fact and could point to differences in writing style for each category, or to a difference in word distribution within each category." }, { "heading": "6.3 OVERTRAINING AND WORD IMPORTANCE SCORES", "text": "We overtrain a CNN on the IMDB reviews dataset and calculate word importance scores over the training vocabulary after every epoch. A few snapshots are in figure 5. Stats related to training are shown in table 1. Overtraining has a noticeable effect on individual word scores, with the score distribution becoming more homogeneous as the epochs progress. Again, investigating how word scores evolve can serve as an additional diagnostic for a model designer, alongside training and validation accuracy."
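One way to monitor this during training is a callback that recomputes the scores after every epoch; the Keras sketch below assumes the single-word evaluation from Section 4, and the mean/standard-deviation summary is just one plausible statistic a designer might track.

```python
import numpy as np
from tensorflow import keras

class WWScoreMonitor(keras.callbacks.Callback):
    """Recompute WordsWorth scores after each epoch and log their spread,
    as a cheap overtraining diagnostic alongside validation accuracy."""
    def __init__(self, vocab_size, seq_len):
        super().__init__()
        self.vocab_size, self.seq_len = vocab_size, seq_len
        self.history = []

    def on_epoch_end(self, epoch, logs=None):
        batch = np.zeros((self.vocab_size, self.seq_len), dtype=np.int64)
        batch[:, -1] = np.arange(1, self.vocab_size + 1)  # one word per row
        scores = self.model.predict(batch, verbose=0)
        self.history.append((epoch, float(scores.mean()), float(scores.std())))
        print(f"epoch {epoch}: WW mean={scores.mean():.3f}, std={scores.std():.3f}")
```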
}, { "heading": "7 CONCLUSION", "text": "We consider the problem of quickly finding important words in a text to perturb in order to maximize the efficacy of black box attacks on deep NLP models, in the context of text classification. For this problem we present WordsWorth, a feature ranking algorithm that performs comparably well to the state of the art approaches, particularly when only a small number of feature perturbations are allowed, while being orders of magnitude faster by virtue of being essentially a lookup on training vocabulary. We also use these scores as a tool model interpretation, compare different architectures and give a metric for evaluating performance beyond training and validation accuracy. We also explain the phenomenon of transferability observed in text adversarial attacks and show that black box attacks can yield valuable information about the training dataset. All in all, we argue that text generated by humans is a highly compact and informative representation of data and the way neural networks interpret language aligns with human understanding. Overall, we provide a method for evaluating importance in parallel with word erasure techniques. Combining the two techniques would yield even richer insights into the workings of models. Seen another way, WordsWorth attacks uncover a particular kind of vulnerability in deep models. Our work is the first step in designing a rule based algorithm to attack deep models that deal with text, and the next one would be to look at complex interactions. By aligning the performance of rule based algorithms with empirical methods currently popular in deep learning, we can improve our understanding of these otherwise blackbox models." }, { "heading": "4 0.1678 0.9376 0.3187 0.8725", "text": "" }, { "heading": "A APPENDIX", "text": "" } ]
2020
null
SP:2788722ffb82bb4ee15189b47e16d178eccecf3e
[ "This paper proposes to restart the momentum parameter in SGD (with Nesterov's momentum) according to some carefully chosen schedules in training deep neural network, which is named as SRSGD. Two different restarting schedules are proposed: linear schedule and exponential schedule. The strong point of this paper is its extensive experimental evaluations, which justify that SRSGD significantly improves the convergence speed and generalization over standard momentum SGD. The empirical analysis also sheds some light on the parameter tuning and interpretation of SRSGD." ]
Stochastic gradient descent (SGD) algorithms, with constant momentum and its variants such as Adam, are the optimization methods of choice for training deep neural networks (DNNs). There is great interest in speeding up the convergence of these methods due to their high computational expense. Nesterov accelerated gradient (NAG) with a time-varying momentum, denoted as NAG below, improves the convergence rate of gradient descent (GD) for convex optimization using a specially designed momentum; however, it accumulates error when an inexact gradient is used (such as in SGD), slowing convergence at best and diverging at worst. In this paper, we propose scheduled restart SGD (SRSGD), a new NAG-style scheme for training DNNs. SRSGD replaces the constant momentum in SGD by the increasing momentum in NAG but stabilizes the iterations by resetting the momentum to zero according to a schedule. Using a variety of models and benchmarks for image classification, we demonstrate that, in training DNNs, SRSGD significantly improves convergence and generalization; for instance, in training ResNet-200 for ImageNet classification, SRSGD achieves an error rate of 20.93% vs. the benchmark of 22.13%. These improvements become more significant as the network grows deeper. Furthermore, on both CIFAR and ImageNet, SRSGD reaches similar or even better error rates with significantly fewer training epochs compared to the SGD baseline.
[]
[ { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Mahmoud Assran", "Michael Rabbat" ], "title": "On the convergence of nesterov’s accelerated gradient method in stochastic settings", "venue": "arXiv preprint arXiv:2002.12414,", "year": 2020 }, { "authors": [ "Necdet Serhat Aybat", "Alireza Fallah", "Mert Gurbuzbalaban", "Asuman Ozdaglar" ], "title": "Robust accelerated gradient methods for smooth strongly convex functions", "venue": "arXiv preprint arXiv:1805.10579,", "year": 2018 }, { "authors": [ "Necdet Serhat Aybat", "Alireza Fallah", "Mert Gurbuzbalaban", "Asuman Ozdaglar" ], "title": "A universally optimal multistage accelerated stochastic gradient method", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Amir Beck", "Marc Teboulle" ], "title": "A fast iterative shrinkage-thresholding algorithm for linear inverse problems", "venue": "SIAM Journal on Imaging Sciences,", "year": 2009 }, { "authors": [ "Yoshua Bengio", "Nicolas Boulanger-Lewandowski", "Razvan Pascanu" ], "title": "Advances in optimizing recurrent networks", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2013 }, { "authors": [ "Léon Bottou", "Frank E Curtis", "Jorge Nocedal" ], "title": "Optimization methods for large-scale machine learning", "venue": "Siam Review,", "year": 2018 }, { "authors": [ "Sébastien Bubeck" ], "title": "Convex optimization: Algorithms and complexity", "venue": "arXiv preprint arXiv:1405.4980,", "year": 2014 }, { "authors": [ "John Chen", "Anastasios Kyrillidis" ], "title": "Decaying momentum helps neural network training", "venue": "arXiv preprint arXiv:1910.04952,", "year": 2019 }, { "authors": [ "Michael B Cohen", "Jelena Diakonikolas", "Lorenzo Orecchia" ], "title": "On acceleration with noise-corrupted gradients", "venue": "arXiv preprint arXiv:1805.12591,", "year": 2018 }, { "authors": [ "Olivier Devolder", "François Glineur", "Yurii Nesterov" ], "title": "First-order methods of smooth convex optimization with inexact oracle", "venue": "Mathematical Programming,", "year": 2014 }, { "authors": [ "Timothy Dozat" ], "title": "Incorporating Nesterov momentum into Adam", "venue": null, "year": 2016 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Robert M Freund", "Haihao Lu" ], "title": "New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure", "venue": "Mathematical Programming,", "year": 2018 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan" ], "title": "Stochastic first-and zeroth-order methods for nonconvex stochastic programming", "venue": "SIAM Journal on Optimization,", "year": 2013 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan" ], "title": "Accelerated gradient methods for nonconvex nonlinear and stochastic programming", "venue": "Mathematical Programming,", "year": 2016 }, { "authors": [ "Pontus Giselsson", "Stephen Boyd" ], "title": "Monotonicity and restart in fast gradient methods", "venue": "In 53rd IEEE Conference on Decision and Control,", "year": 2014 }, { "authors": [ "Gabriel Goh" ], "title": 
"Why momentum really works", "venue": "Distill, 2(4):e6,", "year": 2017 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of Wasserstein GANs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Moritz Hardt" ], "title": "Robustness versus acceleration", "venue": "http://blog.mrtz.org/2014/08/18/ robustness-versus-acceleration.html,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual networks. https://github", "venue": "com/KaimingHe/deep-residual-networks,", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "W Ronny Huang", "Zeyad Emam", "Micah Goldblum", "Liam Fowl", "Justin K Terry", "Furong Huang", "Tom Goldstein" ], "title": "Understanding generalization through visualizations", "venue": null, "year": 1906 }, { "authors": [ "Anatoli Iouditski", "Yuri Nesterov" ], "title": "Primal-dual subgradient methods for minimizing uniformly convex functions", "venue": "arXiv preprint arXiv:1401.1792,", "year": 2014 }, { "authors": [ "Prateek Jain", "Sham M Kakade", "Rahul Kidambi", "Praneeth Netrapalli", "Aaron Sidford" ], "title": "Accelerating stochastic gradient descent for least squares regression", "venue": "In Conference On Learning Theory,", "year": 2018 }, { "authors": [ "Chi Jin", "Praneeth Netrapalli", "Michael I Jordan" ], "title": "Accelerated gradient descent escapes saddle points faster than gradient descent", "venue": "arXiv preprint arXiv:1711.10456,", "year": 2017 }, { "authors": [ "Rahul Kidambi", "Praneeth Netrapalli", "Prateek Jain", "Sham Kakade" ], "title": "On the insufficiency of existing momentum schemes for stochastic optimization", "venue": "Information Theory and Applications Workshop (ITA),", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Andrei Kulunchakov", "Julien Mairal" ], "title": "A generic acceleration framework for stochastic composite optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Guanghui Lan" ], "title": "An optimal method for stochastic composite optimization", "venue": "Mathematical Programming,", "year": 2012 }, { "authors": [ "Quoc V Le", "Navdeep Jaitly", "Geoffrey E Hinton" ], "title": "A simple way to initialize recurrent networks of rectified linear units", "venue": "arXiv preprint arXiv:1504.00941,", "year": 2015 }, { "authors": [ "Qihang Lin", "Lin Xiao" ], "title": "An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization", "venue": "In International Conference on Machine 
Learning,", "year": 2014 }, { "authors": [ "Chaoyue Liu", "Mikhail Belkin" ], "title": "Accelerating sgd with momentum for over-parameterized learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Liyuan Liu", "Haoming Jiang", "Pengcheng He", "Weizhu Chen", "Xiaodong Liu", "Jianfeng Gao", "Jiawei Han" ], "title": "On the variance of the adaptive learning rate and beyond", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "arXiv preprint arXiv:1608.03983,", "year": 2016 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Fixing weight decay regularization in adam", "venue": null, "year": 2018 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-SNE", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Boris S Mordukhovich" ], "title": "Variational analysis and generalized differentiation I: Basic theory, volume 330", "venue": "Springer Science & Business Media,", "year": 2006 }, { "authors": [ "Arkaddii S Nemirovskii", "Yu E Nesterov" ], "title": "Optimal methods of smooth convex minimization", "venue": "USSR Computational Mathematics and Mathematical Physics,", "year": 1985 }, { "authors": [ "Yu Nesterov" ], "title": "Gradient methods for minimizing composite functions", "venue": "Mathematical Programming,", "year": 2013 }, { "authors": [ "Yurii Nesterov" ], "title": "Introductory lectures on convex programming volume", "venue": "i: Basic course", "year": 1998 }, { "authors": [ "Yurii E Nesterov" ], "title": "A method for solving the convex programming problem with convergence rate o (1/kˆ 2)", "venue": "In Dokl. Akad. Nauk Sssr,", "year": 1983 }, { "authors": [ "Brendan O’donoghue", "Emmanuel Candes" ], "title": "Adaptive restart for accelerated gradient schemes", "venue": "Foundations of Computational Mathematics,", "year": 2015 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Boris T Polyak" ], "title": "Some methods of speeding up the convergence of iteration methods", "venue": "USSR Computational Mathematics and Mathematical Physics,", "year": 1964 }, { "authors": [ "Sashank J Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the convergence of adam and beyond", "venue": "arXiv preprint arXiv:1904.09237,", "year": 2019 }, { "authors": [ "James Renegar" ], "title": "Efficient first-order methods for linear programming and semidefinite programming", "venue": "arXiv preprint arXiv:1409.5832,", "year": 2014 }, { "authors": [ "R Tyrrell Rockafellar" ], "title": "Convex analysis", "venue": "Number 28. Princeton university press,", "year": 1970 }, { "authors": [ "R Tyrrell Rockafellar", "Roger J-B Wets" ], "title": "Variational analysis, volume 317", "venue": "Springer Science & Business Media,", "year": 2009 }, { "authors": [ "Vincent Roulet", "Alexandre" ], "title": "d’Aspremont. 
Sharpness, restart and acceleration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Vincent Roulet", "Nicolas Boumal", "Alexandre" ], "title": "d’Aspremont. Computational complexity versus statistical performance on sparse recovery problems", "venue": "arXiv preprint arXiv:1506.03295,", "year": 2015 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Weijie Su", "Stephen Boyd", "Emmanuel" ], "title": "Candes. A differential equation for modeling nesterov’s accelerated gradient method: Theory and insights", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Ilya Sutskever", "James Martens", "George Dahl", "Geoffrey Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "T. Tieleman", "G. Hinton" ], "title": "Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural Networks for Machine Learning,", "year": 2012 }, { "authors": [ "Svante Wold", "Kim Esbensen", "Paul Geladi" ], "title": "Principal component analysis", "venue": "Chemometrics and Intelligent Laboratory Systems,", "year": 1987 }, { "authors": [ "Matthew D Zeiler" ], "title": "Adadelta: an adaptive learning rate method", "venue": "arXiv preprint arXiv:1212.5701,", "year": 2012 }, { "authors": [ "Sixin Zhang", "Anna E Choromanska", "Yann LeCun" ], "title": "Deep learning with elastic averaging sgd", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Martin Zinkevich", "Markus Weimer", "Lihong Li", "Alex J Smola" ], "title": "Parallelized stochastic gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 } ]
[ { "heading": "1 INTRODUCTION", "text": "Training many machine learning (ML) models reduces to solving the following finite-sum optimization problem\nmin w f(w) := min w\n1\nN\nN∑\ni=1\nfi(w) := min w\n1\nN\nN∑\ni=1\nL(g(xi,w), yi), w ∈ Rd, (1)\nwhere {xi, yi}Ni=1 are the training samples and L is the loss function, e.g., cross-entropy loss for a classification task, that measure the discrepancy between the ground-truth label yi and the prediction by the model g(·,w), parametrized by w. The problem (1) is known as empirical risk minimization (ERM). In many applications, f(w) is non-convex, and g(·,w) is chosen among deep neural networks (DNNs) due to their preeminent performance across various tasks. These deep models are heavily overparametrized and require large amounts of training data. Thus, both N and the dimension of w can scale up to millions or even billions. These complications pose serious computational challenges.\nOne of the simplest algorithms to solve (1) is gradient descent (GD), which updates w according to:\nwk+1 = wk − sk 1\nN\nN∑\ni=1\n∇fi(wk), (2)\nwhere sk > 0 is the step size at the k-th iteration. Computing ∇f(wk) on the entire training set is memory intensive and often prohibitive for devices with limited random access memory (RAM) such as graphics processing units (GPUs) used for deep learning (DL). In practice, we sample a subset of the training set, of size m with m N , to approximate ∇f(wk) by the mini-batch gradient 1/m ∑m j=1∇fij (wk), resulting in the (mini-batch)-stochastic gradient descent (SGD). SGD and its\nUnder review as a conference paper at ICLR 2021\naccelerated variants are among the most used optimization algorithms in ML. These gradient-based algorithms have low computational complexity, and they are easy to parallelize, making them suitable for large scale and high dimensional problems (Zinkevich et al., 2010; Zhang et al., 2015).\nNevertheless, GD and SGD have issues with slow convergence, especially when the problem is ill-conditioned. There are two common techniques to accelerate GD and SGD: adaptive step size (Duchi et al., 2011; Hinton et al.; Zeiler, 2012) and momentum (Polyak, 1964). The integration of both adaptive step size and momentum with SGD leads to Adam (Kingma & Ba, 2014), one of the most used optimizers for training DNNs. Many recent developments have improved Adam (Reddi et al., 2019; Dozat, 2016; Loshchilov & Hutter, 2018; Liu et al., 2020). GD with constant momentum leverages the previous step to accelerate GD according to:\nvk+1 = wk − sk∇f(wk); wk+1 = vk+1 + µ(vk+1 − vk), (3) where µ > 0 is a constant. A similar acceleration can be achieved by the heavy-ball (HB) method (Polyak, 1964). The momentum update in both (3) and HB have the same convergence rate of O(1/k) as that of GD for convex smooth optimization. A breakthrough due to Nesterov (1983; 2018) replaces µ with (k − 1)/(k + 2), which is known as the Nesterov accelerated gradient (NAG) with time-varying momentum. For simplicity, we denote this method as NAG below. NAG accelerates the convergence rate to O(1/k2), which is optimal for convex and smooth loss functions (Nesterov, 1983; 2018). NAG can also speed up the process of escaping from saddle points (Jin et al., 2017). In practice, NAG momentum can accelerate GD for nonconvex optimization, especially when the underlying problem is poorly conditioned (Goh, 2017). However, NAG accumulates error and causes instability when the gradient is inexact (Devolder et al., 2014; Assran & Rabbat, 2020). 
In many DL applications, constant momentum achieves state-of-the-art results, for instance, in training DNNs for image classification. Since NAG momentum achieves a much better convergence rate than constant momentum with the exact gradient for general convex optimization, we consider the following question:\nCan we leverage NAG with a time-varying momentum parameter to accelerate SGD in training DNNs and improve the test accuracy of the trained models?\nContributions. We answer the above question by proposing the first algorithm that integrates scheduled restart NAG momentum with plain SGD. Here, we restart the momentum, which is orthogonal to the learning rate restart (Loshchilov & Hutter, 2016). We name the resulting algorithm scheduled restart SGD (SRSGD). Theoretically, we prove the error accumulation of Nesterov accelerated SGD (NASGD) and the convergence of SRSGD. The major practical benefits of SRSGD are fourfold:\n• SRSGD remarkably speeds up DNN training. For image classification, SRSGD significantly reduces the number of training epochs while preserving or even improving the network’s accuracy. In particular, on CIFAR10/100, the number of training epochs is reduced by half with SRSGD, while on ImageNet the reduction in training epochs is also remarkable.\n• DNNs trained by SRSGD generalize significantly better than the current benchmark optimizers. The improvement becomes more significant as the network grows deeper, as shown in Fig. 1.\n• SRSGD reduces overfitting in training very deep networks such as ResNet-200 for ImageNet classification, enabling the accuracy to keep increasing with depth.\n• SRSGD is straightforward to implement and only requires changes in a few lines of the SGD code. There is also no additional computational or memory overhead.\nWe focus on image classification with DNNs, for which SGD with constant momentum is the method of choice.\nRelated Work. Momentum has long been used to accelerate SGD. SGD with scheduled momentum and a good initialization can handle the curvature issues in training DNNs and enable the trained models to generalize well (Sutskever et al., 2013). Kingma & Ba (2014) and Dozat (2016) integrated momentum with adaptive step size to accelerate SGD. In this work, we study the time-varying momentum version of NAG with restart for stochastic optimization. Adaptive and scheduled restart have been used to accelerate NAG with the exact gradient (Nemirovskii & Nesterov, 1985; Nesterov, 2013; Iouditski & Nesterov, 2014; Lin & Xiao, 2014; Renegar, 2014; Freund & Lu, 2018; Roulet et al., 2015; O’donoghue & Candes, 2015; Giselsson & Boyd, 2014; Su et al., 2014). These studies of restart NAG momentum are for convex optimization with the exact gradient. Restart techniques have also been used for stochastic optimization (Kulunchakov & Mairal, 2019). In particular, Aybat et al. (2019) developed a multistage variant of NAG with momentum restart between stages. Our work focuses on developing NAG-based optimization for training DNNs. Many efforts have also been devoted to studying the non-acceleration issues of SGD with HB and NAG momentum (Kidambi et al., 2018; Liu & Belkin, 2020), as well as accelerating first-order algorithms with noise-corrupted gradients (Cohen et al., 2018; Aybat et al., 2018; Lan, 2012). Ghadimi & Lan (2013; 2016) provide analyses of general stochastic gradient-based optimization algorithms.\nOrganization.
In Section 2, we review and discuss momentum for accelerating GD for convex smooth optimization. In Section 3, we present the SRSGD algorithm and its theoretical guarantees. In Section 4, we verify the efficacy of the proposed SRSGD in training DNNs for image classification on CIFAR and ImageNet. In Section 4.3, we perform an empirical analysis of SRSGD. We end with some concluding remarks. Technical proofs, some experimental details, and more results in training LSTMs (Hochreiter & Schmidhuber, 1997) and WGANs (Arjovsky et al., 2017; Gulrajani et al., 2017) are provided in the Appendix.\nNotation. We denote scalars and vectors by lower case and lower case bold face letters, respectively, and matrices by upper case bold face letters. For a vector x = (x_1, · · · , x_d) ∈ R^d, we denote its ℓ_p norm (p ≥ 1) by ‖x‖_p = (∑_{i=1}^d |x_i|^p)^{1/p}. For a matrix A, we use ‖A‖_p to denote the norm induced by the vector ℓ_p norm. Given two sequences {a_n} and {b_n}, we write a_n = O(b_n) if there exists a positive constant C s.t. a_n ≤ C b_n. We denote the interval from a (excluded) to b (included) as (a, b]. For a function f(w) : R^d → R, we denote its gradient as ∇f(w) and its Hessian as ∇²f(w)." }, { "heading": "2 REVIEW: MOMENTUM IN GRADIENT DESCENT", "text": "GD. GD (2) is a popular approach to solve (1), which dates back to Cauchy (1847). If f(w) is convex and L-smooth (i.e., ‖∇²f(w)‖_2 ≤ L), then GD converges with rate O(1/k) by letting s_k ≡ 1/L (we use this s_k in all the discussion below), which is independent of the dimension of w.\nHB. HB (4) (Polyak, 1964) accelerates GD by using historical information, which gives\nw_{k+1} = w_k − s_k ∇f(w_k) + µ(w_k − w_{k−1}), µ > 0. (4)\nWe can also accelerate GD by using the Nesterov/lookahead momentum, which leads to (3). Both (3) and (4) have a convergence rate of O(1/k) for convex smooth optimization. Recently, several variants of (3) have been proposed for DL, e.g., (Sutskever et al., 2013) and (Bengio et al., 2013).\nNAG. NAG (Nesterov, 1983; 2018; Beck & Teboulle, 2009) replaces µ with (t_k − 1)/t_{k+1}, where t_{k+1} = (1 + √(1 + 4t_k²))/2 with t_0 = 1. NAG iterates as follows:\nv_{k+1} = w_k − s_k ∇f(w_k); w_{k+1} = v_{k+1} + ((t_k − 1)/t_{k+1})(v_{k+1} − v_k). (5)\nNAG achieves a convergence rate O(1/k^2) with the step size s_k = 1/L.\nRemark 1. Su et al. (2014) showed that (k − 1)/(k + 2) is the asymptotic limit of (t_k − 1)/t_{k+1}. In the following presentation of NAG with restart, for the ease of notation, we will replace the momentum coefficient (t_k − 1)/t_{k+1} with (k − 1)/(k + 2).\nAdaptive Restart NAG (ARNAG). The sequences {f(w_k) − f(w*)}, where w* is the minimum of f(w), generated by GD and GD with constant momentum (GD + Momentum, which follows (3)) converge monotonically to zero. However, the sequence generated by NAG oscillates, as illustrated in Fig. 2 (a) when f(w) is a quadratic function. O’donoghue & Candes (2015) proposed ARNAG (6), which restarts the time-varying momentum of NAG according to the change of function values, to alleviate this oscillatory phenomenon. ARNAG iterates as follows:\nv_{k+1} = w_k − s_k ∇f(w_k); w_{k+1} = v_{k+1} + ((m(k) − 1)/(m(k) + 2))(v_{k+1} − v_k), (6)\nwhere m(1) = 1; m(k + 1) = m(k) + 1 if f(w_{k+1}) ≤ f(w_k), and m(k + 1) = 1 otherwise.\nScheduled Restart NAG (SRNAG). SR is another strategy to restart the time-varying momentum of NAG. We first divide the total iterations (0, T] (integers only) into a few intervals {I_i}_{i=1}^m with I_i = (T_{i−1}, T_i], such that (0, T] = ∪_{i=1}^m I_i. In each I_i, we restart the momentum after every F_i iterations.
The update rule is then given by:\nv_{k+1} = w_k − s_k ∇f(w_k); w_{k+1} = v_{k+1} + ((k mod F_i)/((k mod F_i) + 3))(v_{k+1} − v_k). (7)\nBoth AR and SR accelerate NAG to linear convergence for convex problems with the Polyak-Lojasiewicz (PL) condition (Roulet & d’Aspremont, 2017).\nCase Study – Quadratic Function. Consider the following quadratic optimization (Hardt, 2014)\nmin_x f(x) = (1/2) xᵀLx − xᵀb, (8)\nwhere L ∈ R^{d×d} is the Laplacian of a cycle graph, and b is a d-dimensional vector whose first entry is 1 and all the other entries are 0. Note that f(x) is convex with Lipschitz constant 4. In particular, we set d = 1K (1K := 10^3). We run T = 50K iterations with step size 1/4. In SRNAG, we restart, i.e., we set the momentum to 0, after every 1K iterations. Fig. 2 (a) shows that GD + Momentum as in (3) converges faster than GD, while NAG speeds up GD + Momentum dramatically and converges to the minimum in an oscillatory fashion. Both AR and SR accelerate NAG significantly." }, { "heading": "3 ALGORITHM PROPOSED: SCHEDULED RESTART SGD (SRSGD)", "text": "Computing the gradient for ERM (1) can be computationally costly and memory intensive, especially when the training set is large. In many applications, such as training DNNs, SGD is used. In this section, we first prove that the error bound of SGD with NAG momentum cannot be bounded by a convergent sequence, and then we formulate our new SRSGD as a solution to accelerate the convergence of SGD using the NAG momentum." }, { "heading": "3.1 UNCONTROLLED BOUND OF NESTEROV ACCELERATED SGD (NASGD)", "text": "Replacing ∇f(w_k) := (1/N) ∑_{i=1}^N ∇f_i(w_k) in (5) with the mini-batch gradient (1/m) ∑_{j=1}^m ∇f_{i_j}(w_k) leads to an uncontrolled error bound. Theorem 1 formulates this observation for NASGD.\nTheorem 1 (Uncontrolled Bound of NASGD). Let f(w) be a convex and L-smooth function with ‖∇f(w)‖ ≤ R, where R > 0 is a constant. The sequence {w_k}_{k≥0} generated by (5), with a stochastic gradient of bounded variance (Bubeck, 2014; Bottou et al., 2018)¹ and using any constant step size s_k ≡ s ≤ 1/L, satisfies\nE(f(w_k) − f(w*)) = O(k), (9)\nwhere w* is the minimum of f, and the expectation is taken over the generation of the stochastic gradient.\n¹We leave the analysis under the other assumptions (Jain et al., 2018) as future work.\nOne idea to prove Theorem 1 is by leveraging the established results in Lan (2012). We will provide a new proof of Theorem 1 in Appendix A. The proof shows that the error bound is uncontrolled because the time-varying momentum gets close to 1 as the iteration count increases. To remedy this, we can restart the momentum in order to guarantee that the time-varying momentum with restart stays below a constant strictly less than 1. Devolder et al. (2014) proved a similar error bound for the δ-inexact gradient, and we provide a brief review of NAG with a δ-inexact gradient in Appendix B. As far as we know, there is no lower bound of E(f(w_k) − f(w*)) available even for the δ-inexact gradient, and we leave the lower bound estimation as an open problem.\nWe consider three different inexact gradients: gradients corrupted by Gaussian noise with constant and with decaying variance for the quadratic optimization (8), and training a logistic regression model for MNIST (LeCun & Cortes, 2010) classification. The detailed settings and discussion are provided in Appendix B. We denote SGD with NAG momentum as NASGD and NASGD with AR and SR as ARSGD and SRSGD, respectively. The results shown in Fig. 2 (b) and (c) (iteration vs.
the optimality gap for the quadratic optimization (8)) and Fig. 3 (a) (iteration vs. loss for training logistic regression) confirm Theorem 1. For these cases, SR improves the performance of NAG with inexact gradients. Moreover, when an inexact gradient is used, ARNAG/ARSGD performs almost the same as GD/SGD asymptotically because ARNAG/ARSGD restarts too often and almost degenerates to GD/SGD." }, { "heading": "3.2 SRSGD AND ITS CONVERGENCE", "text": "For ERM (1), SRSGD replaces ∇f(w) in (7) with the stochastic gradient with batch size m and gives\nv_{k+1} = w_k − s_k (1/m) ∑_{j=1}^m ∇f_{i_j}(w_k); w_{k+1} = v_{k+1} + ((k mod F_i)/((k mod F_i) + 3))(v_{k+1} − v_k), (10)\nwhere F_i is the restart frequency used in the interval I_i. We implemented SRSGD in both PyTorch (Paszke et al., 2019) and Keras (Chollet et al., 2015) by changing just a few lines of code on top of the existing implementation of the SGD optimizer. We provide a snippet of SRSGD code in Appendix J (PyTorch) and K (Keras). We formulate the convergence of SRSGD for general convex and nonconvex problems in Theorem 2 and provide its proof in Appendix C.\nTheorem 2 (Convergence of SRSGD). Suppose f(w) is L-smooth. Consider the sequence {w_k}_{k≥0} generated by (10) with a stochastic gradient that is bounded and has bounded variance, and consider any restart frequency F using any constant step size s_k := s ≤ 1/L. Assume that ∑_{k∈A} (Ef(w_{k+1}) − Ef(w_k)) = R̄ < +∞ with R̄ being a constant and the set A := {k ∈ Z_+ | Ef(w_{k+1}) ≥ Ef(w_k)}; then we have\nmin_{1≤k≤K} { E‖∇f(w_k)‖_2^2 } = O(s + 1/(sK)). (11)\nIf f(w) is further convex and ∑_{k∈B} (Ef(w_{k+1}) − Ef(w_k)) = R̂ < +∞ with R̂ being a constant and the set B := {k ∈ Z_+ | E‖w_{k+1} − w*‖_2 ≥ E‖w_k − w*‖_2}, then\nmin_{1≤k≤K} { E(f(w_k) − f(w*)) } = O(s + 1/(sK)), (12)\nwhere w* is the minimum of f. To obtain an ε error (∀ε > 0), we set s = O(ε) and K = O(1/ε²).\nTheorem 2 relies on the assumption that ∑_{k∈A or B} (Ef(w_{k+1}) − Ef(w_k)) is bounded, and we provide an empirical verification in Appendix C.1. We leave open how to establish the convergence result for SRSGD without this assumption." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "We evaluate SRSGD on a variety of benchmarks for image classification, including CIFAR10, CIFAR100, and ImageNet. In all experiments, we show the advantage of SRSGD over the widely used and well-calibrated SGD baselines with a constant momentum of 0.9 and a decreasing learning rate at certain epochs, which we denote as SGD. We also compare SRSGD with the well-calibrated SGD in which we switch the momentum to the Nesterov momentum of 0.9, and we denote this optimizer as SGD + NM. We fine-tune the SGD and SGD + NM baselines to obtain the best validation performance, and we then adopt the same set of parameters for training with SRSGD. In the SRSGD experiments, we tune the restart frequencies on small DNNs for each task based on the validation performance and apply the calibrated restart frequencies to large DNNs for the same task. Note that ARSGD is impractical for training on large-scale datasets since it requires computing the loss over the whole training set at each iteration, which is very computationally inefficient. Alternatively, ARSGD can estimate the loss and restart using mini-batches, but then ARSGD restarts too often and degenerates to SGD without momentum, as we mentioned in Section 3. Thus, we do not compare with ARSGD in our CIFAR and ImageNet experiments. The details about hyper-parameter calibration can be found in Appendix D.4.
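To make update (10) concrete, here is a minimal NumPy sketch of one SRSGD step; the authors' own PyTorch and Keras snippets are in Appendices J and K, so this version is purely illustrative.

```python
import numpy as np

def srsgd_step(w, v, stoch_grad, s, k, restart_freq):
    """One SRSGD update (10): an SGD step on v, followed by a NAG-style
    extrapolation whose coefficient resets to zero every restart_freq iterations."""
    v_next = w - s * stoch_grad(w)     # mini-batch gradient step
    t = k % restart_freq               # momentum restarts when t == 0
    coeff = t / (t + 3.0)
    w_next = v_next + coeff * (v_next - v)
    return w_next, v_next
```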
We provide the detailed description of datasets and experimental settings in Appendix D. Additional experimental results in training LSTMs (Hochreiter & Schmidhuber, 1997) and WGANs (Arjovsky et al., 2017; Gulrajani et al., 2017) with SRSGD, as well as the comparison between SRSGD and SGD + NM on the ImageNet classification task, are provided in Appendix E. We also note that in all the following experiments, the training loss will blow up if we apply NASGD without restart. This further confirms the stabilizing effect of scheduled restart in training DNNs." }, { "heading": "4.1 CIFAR10 AND CIFAR100", "text": "We summarize our results for CIFAR in Tables 1 and 2. We also explore two different restarting frequency schedules for SRSGD: a linear and an exponential schedule. These schedules are governed by two parameters: the initial restarting frequency F_1 and the growth rate r. In both scheduling schemes, the restarting frequency at the 1st learning rate stage is set to F_1 during training. Then the restarting frequency at the (k + 1)-th learning rate stage is determined by:\nF_{k+1} = F_1 × r^k (exponential schedule) or F_{k+1} = F_1 × (1 + (r − 1) × k) (linear schedule).\nWe search F_1 and r using the method outlined in Appendix D.4. For CIFAR10, (F_1 = 40, r = 1.25) and (F_1 = 30, r = 2) are good initial restarting frequencies and growth rates for the exponential and linear schedules, respectively. For CIFAR100, those values are (F_1 = 45, r = 1.5) for the exponential schedule and (F_1 = 50, r = 2) for the linear schedule.\nImprovement in Accuracy Increases with Depth. We observe that the linear restart schedule yields better test error on CIFAR than the exponential schedule for most of the models, except for Pre-ResNet-470 and Pre-ResNet-1001 on CIFAR100 (see Tables 1 and 2). SRSGD with either the linear or the exponential restart schedule outperforms SGD. Furthermore, the advantage of SRSGD over SGD is more significant for deeper networks. This observation holds strictly when using the linear schedule (see Fig. 1) and is generally true when using the exponential schedule, with only a few exceptions.\nFaster Convergence Reduces the Training Time by Half. SRSGD also converges faster than SGD. This result is consistent with our MNIST case study in Section 3 and indeed expected, since SRSGD can avoid the error accumulation when there is an inexact oracle. For CIFAR, Fig. 3 (b) shows that SRSGD yields smaller training loss than SGD during the training. Interestingly, SRSGD converges quickly to good loss values in the 2nd and 3rd stages. This suggests that the model can be trained with SRSGD in many fewer epochs compared to SGD while achieving a similar error rate.\nResults in Table 3 confirm the hypothesis above. We train Pre-ResNet models with SRSGD in only 100 epochs, decreasing the learning rate by a factor of 10 at the 80th, 90th, and 95th epoch while using the same linear schedule for the restarting frequency as before with (F_1 = 30, r = 2) for CIFAR10 and (F_1 = 50, r = 2) for CIFAR100. We compare the test error of the trained models with those trained by the SGD baseline in 200 epochs. We observe that SRSGD training consistently yields lower test errors than SGD except for the case of Pre-ResNet-110, even though the number of training epochs of our method is only half of the number of training epochs required by SGD.
For Pre-ResNet-110, SRSGD needs 110 epochs with the learning rate decreased at the 80th, 90th, and 100th epoch to achieve the same error rate as the 200-epoch SGD training on CIFAR10. On CIFAR100, SRSGD training for Pre-ResNet-110 needs 140 epochs with the learning rate decreased at the 80th, 100th and 120th epoch to outperform the 200-epoch SGD. A comparison with SGD short training is provided in Appendix F.2.\nComparison with Adam and RMSProp. SRSGD outperforms not only SGD with momentum but also other popular optimizers including Adam and RMSProp (Tieleman & Hinton, 2012) for image classification tasks. In fact, for image classification tasks, Adam and RMSProp yield worse performance than the baseline SGD with momentum (Chen & Kyrillidis, 2019). Table 4 compares SRSGD with Adam and RMSProp on CIFAR10.
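The restarting-frequency schedules used in these experiments are easy to state in code; below is a sketch with F_1 and r as in the CIFAR settings, where the function name and usage example are ours.

```python
def restart_frequency(stage, f1, r, schedule="linear"):
    """Restarting frequency at the (stage + 1)-th learning rate stage (stage = 0, 1, ...):
    F_{k+1} = F_1 * r**k (exponential) or F_1 * (1 + (r - 1) * k) (linear)."""
    if schedule == "exponential":
        return f1 * r ** stage
    return f1 * (1 + (r - 1) * stage)

# CIFAR10 linear schedule (F_1 = 30, r = 2) gives frequencies 30, 60, 90 across stages.
cifar10_freqs = [restart_frequency(k, 30, 2) for k in range(3)]
```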
As in the CIFAR experiments, we note that when training on ImageNet, SRSGD converges faster than SGD at the first and last learning rate while quickly reaching a good loss value at the second learning rate (see Fig. 3 (c)). This observation suggests that ResNets can be trained with SRSGD in fewer epochs while still achieving comparable error rates to the same models trained by the SGD baseline using all 90 epochs. We summarize the results in Table 6. On ImageNet, we note that SRSGD helps reduce the number of training epochs for very deep networks (ResNet-101, 152, 200). For smaller networks like ResNet-50, training with fewer epochs slightly decreases the accuracy." }, { "heading": "4.3 EMPIRICAL ANALYSIS", "text": "SRSGD Helps Reduce the Training Time. We find that SRSGD training using fewer epochs yields comparable error rates to both the SGD baseline and the SRSGD full training with 200 epochs on CIFAR. We conduct an ablation study to understand the impact of reducing the number of epochs\n2By overfitting, we mean that the model achieves low training error but high test error.\nUnder review as a conference paper at ICLR 2021\non the final error rate when training with SRSGD on CIFAR10 and ImageNet. In the CIFAR10 experiments, we vary the number of epoch reduction from 15 to 90 while in the ImageNet experiments, we vary the number of epoch reduction from 10 to 30. We summarize our results in Fig. 4, and provide detailed results in Appendix F. For CIFAR10, we can train with 30 fewer epochs while still maintaining a comparable error rate to the full SRSGD training, and with a better error rate than the SGD baseline trained in full 200 epochs. For ImageNet, SRSGD training with fewer epochs decreases the accuracy but still obtains comparable results to the 90-epoch SGD baseline.\nImpact of Restarting Frequency. We examine the impact of restarting frequency on the network training. We choose a case study of training a Pre-ResNet-290 on CIFAR10 using SRSGD with a linear schedule scheme for the restarting frequency. We fix the growth rate r = 2 and vary the initial restarting frequency F1 from 1 to 80. As shown in Fig. 5, SRSGD with a large F1, e.g. F1 = 80, approximates NASGD (yellow). We also show the training loss and test accuracy of NASGD in red. As discussed in Section 3, it suffers from error accumulation due to stochastic gradients and converges slowly or even diverges. SRSGD with small F1, e.g. F1 = 1, approximates SGD without momentum (green). It converges faster initially but reaches a worse local minimum (i.e. larger loss). Typical SRSGD (blue) converges faster than NASGD and to a better local minimum than both NASGD and SGD without momentum. It also achieves the best test error. We provide more empirical analysis results in Appendix F, G and H. The impact of the growth rate r is studied in Appendix G.2." }, { "heading": "5 CONCLUSIONS", "text": "We propose the Scheduled Restart SGD (SRSGD), with two major changes from the widely used SGD with constant momentum. First, we replace the momentum in SGD with the iteration-dependent momentum that used in Nesterov accelerated gradient (NAG). Second, we restart the NAG momentum according to a schedule to prevent error accumulation when the stochastic gradient is used. For image classification, SRSGD can significantly improve the accuracy of the trained DNNs. Also, compared to the SGD baseline, SRSGD requires fewer training epochs to reach the same trained model’s accuracy. 
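For reference, the update resulting from the two changes just summarized can be written in a few lines of NumPy, following the formula stated with the PyTorch implementation in Appendix J; srsgd_step is our illustrative name, not part of the released code:

import numpy as np

def srsgd_step(p, v_prev, grad, lr, t, restart_freq):
    mu = (t - 1.0) / (t + 2.0)             # iteration-dependent NAG momentum
    v = p - lr * grad                      # v_{t+1} = p_t - lr * g_t
    p_new = v + mu * (v - v_prev)          # p_{t+1} = v_{t+1} + mu_t * (v_{t+1} - v_t)
    t = 1 if t >= restart_freq else t + 1  # scheduled restart of the momentum
    return p_new, v, t

# One step on a toy quadratic f(p) = 0.5 * ||p||^2, whose gradient is p.
p = np.ones(3); v = p.copy(); t = 1
p, v, t = srsgd_step(p, v, grad=p, lr=0.1, t=t, restart_freq=40)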
There are numerous avenues for future work: 1) deriving the optimal restart scheduling and the corresponding convergence rate of SRSGD and 2) integrating the scheduled restart NAG momentum with adaptive learning rate algorithms, e.g., Adam (Kingma & Ba, 2014).\nUnder review as a conference paper at ICLR 2021" }, { "heading": "Part", "text": "" }, { "heading": "Appendices", "text": "The appendices are structured as follows. In Section A, we prove Theorem 1. In Section B, we review an error accumulation result of the Nesterov accelerated gradient with δ-inexact gradient. In Section C, we prove Theorem 2. In Section D, we provide some experimental details; in particular, the calibration of restarting hyperparameters. In Section E, we compare SRSGD with benchmark optimization algorithms on some other tasks, including training LSTM and Wasserstein GAN. In Section F, we provide detailed experimental settings in studying the effects of reducing the number of epoch in training deep neural networks with SRSGD, and we provide some more experimental results. In Section G and H, we further study the effects of restarting frequency and training with less epochs by using SRSGD. In Section I, we visualize the optimization trajectory of SRSGD and compare it with benchmark methods. A snippet of our implementation of SRSGD in PyTorch and Keras are available in Section J and K, respectively." }, { "heading": "Table of Contents", "text": "Appendix 14" }, { "heading": "A Uncontrolled Bound of NASGD 15", "text": "A.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 A.2 Uncontrolled Bound of NASGD: Analysis . . . . . . . . . . . . . . . . . . . . 16" }, { "heading": "B NAG with δ-Inexact Oracle & Experimental Settings in Section 3.1 19", "text": "" }, { "heading": "C Convergence of SRSGD 20", "text": "C.1 Numerical Verification of the assumptions in Theorem 2 . . . . . . . . . . . . . 22" }, { "heading": "D Datasets and Implementation Details 23", "text": "D.1 CIFAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 D.2 ImageNet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 D.3 Training ImageNet in Fewer Number of Epochs: . . . . . . . . . . . . . . . . . 23 D.4 Details on Restarting Hyper-parameters Search . . . . . . . . . . . . . . . . . . 23" }, { "heading": "E SRSGD vs. SGD and SGD + NM on ImageNet Classification and Other Tasks 24", "text": "E.1 Comparing with SGD with Nesterov Momentum on ImageNet Classification . . 24 E.2 Long Short-Term Memory (LSTM) Training for Pixel-by-Pixel MNIST . . . . . 24 E.3 Wasserstein Generative Adversarial Networks (WGAN) Training on MNIST . . 25" }, { "heading": "F Error Rate vs. Reduction in Training Epochs 27", "text": "F.1 Implementation Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 F.2 Short Training on CIFAR10/CIFAR100 Using SGD . . . . . . . . . . . . . . . 28 F.3 Additional Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . 28\nG Impact of Restarting Frequency for ImageNet and CIFAR100 29 G.1 Implementation Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 G.2 Impact of the Growth Rate r . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 G.3 Additional Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . 
31" }, { "heading": "H Full Training with Less Epochs at the Intermediate Learning Rates 32", "text": "I Visualization of SRSGD’s trajectory 33" }, { "heading": "J SRSGD Implementation in Pytorch 35", "text": "K SRSGD Implementation in Keras 36\nUnder review as a conference paper at ICLR 2021" }, { "heading": "A UNCONTROLLED BOUND OF NASGD", "text": "Consider the following optimization problem\nmin w f(w), (13)\nwhere f(w) is L-smooth and convex.\nStart from wk, GD update, with step size 1r , can be obtained based on the minimization of the function\nQr(v,w k) := 〈v −wk,∇f(wk)〉+ r\n2 ‖v −wk‖22. (14)\nWith direct computation, we can get that\nQr(v k+1,wk)−minQr(v,wk) = ‖gk −∇f(wk)‖2 2r ,\nwhere gk := 1m ∑m j=1∇fij (wk). We assume the variance is bounded, which gives The stochastic gradient rule,Rs, satisfies E[Qr(vk+1,wk)−minQr(v,wk)|χk] ≤ δ, with δ being a constant and χk being the sigma algebra generated by w1,w2, · · · ,wk, i.e.,\nχk := σ(w1,w2, · · · ,wk).\nNASGD can be reformulated as\nvk+1 ≈ arg min v Qr(v,w k) with ruleRs,\nwk+1 = vk+1 + tk − 1 tk+1 (vk+1 − vk), (15)\nwhere t0 = 1 and tk+1 = (1 + √ 1 + 4t2k)/2." }, { "heading": "A.1 PRELIMINARIES", "text": "To proceed, we introduce several definitions and some useful properties in variational and convex analysis. More detailed background can be found at Mordukhovich (2006); Nesterov (1998); Rockafellar & Wets (2009); Rockafellar (1970).\nLet f be a convex function, we say that f is L-smooth (gradient Lipschitz) if f is differentiable and\n‖∇f(v)−∇f(w)‖2 ≤ L‖v −w‖2, and we say f is ν-strongly convex if for any w,v ∈ dom(f)\nf(w) ≥ f(v) + 〈∇f(v),w − v〉+ ν 2 ‖w − v‖22.\nBelow of this subsection, we list several basic but useful lemmas, the proof can be found in Nesterov (1998). Lemma 1. If f is ν-strongly convex, then for any v ∈ dom(f) we have\nf(v)− f(v∗) ≥ ν 2 ‖v − v∗‖22, (16)\nwhere v∗ is the minimizer of f . Lemma 2. If f is L-smooth, for any w,v ∈ dom(f),\nf(w) ≤ f(v) + 〈∇f(v),w − v〉+ L 2 ‖w − v‖22.\nUnder review as a conference paper at ICLR 2021" }, { "heading": "A.2 UNCONTROLLED BOUND OF NASGD: ANALYSIS", "text": "In this part, we denote ṽk+1 := arg min\nv Qr(v,w\nk). (17)\nLemma 3. If the constant r > 0, then\nE ( ‖vk+1 − ṽk+1‖22|χk ) ≤ 2δ\nr . (18)\nProof. Note that Qr(v,wk) is strongly convex with constant r, and ṽk+1 in (17) is the minimizer of Qr(v,w k). With Lemma 1 we have\nQr(v k+1,wk)−Qr(ṽk+1,wk) ≥\nr 2 ‖vk+1 − ṽk+1‖22. (19)\nNotice that\nE [ Qr(v k+1,wk)−Qr(ṽk+1,wk) ] = E [ Qr(v\nk+1,wk)−min v Qr(v,w\nk) ] ≤ δ.\nThe inequality (18) can be established by combining the above two inequalities.\nLemma 4. If the constant satisfy r > L, then we have\nE ( f(ṽk+1) + r\n2 ‖ṽk+1 −wk‖22 − (f(vk+1) +\nr 2 ‖vk+1 −wk‖22)\n) (20)\n≥ −τδ − r − L 2 E[‖wk − ṽk+1‖22],\nwhere τ = L 2\nr(r−L) + 1.\nProof. The convexity of f gives us\n0 ≤ 〈∇f(vk+1),vk+1 − ṽk+1〉+ f(ṽk+1)− f(vk+1). 
(21)\nFrom the definition of the stochastic gradient ruleRs, we have −δ ≤ E ( Qr(ṽ k+1,wk)−Qr(vk+1,wk) )\n(22)\n= E [ 〈ṽk+1 −wk,∇f(wk)〉+ r\n2 ‖ṽk+1 −wk‖22\n] −\nE [ 〈vk+1 −wk,∇f(wk)〉+ r\n2 ‖vk+1 −wk‖22\n] .\nWith (21) and (22), we have\n−δ ≤ ( f(ṽk+1) + r\n2 ‖ṽk+1 −wk‖22\n) − ( f(vk+1) + r\n2 ‖vk+1 −wk‖22\n) + (23)\nE〈∇f(wk)−∇f(ṽk+1), ṽk+1 − vk+1〉.\nWith the Schwarz inequality 〈a, b〉 ≤ ‖a‖ 2 2\n2µ + µ 2 ‖b‖22 with µ = L\n2\nr−L , a = ∇f(vk+1)−∇f(ṽk+1) and b = wk − ṽk+1,\n〈∇f(wk)−∇f(ṽk+1), ṽk+1 − vk+1〉 (24)\n≤ (r − L) 2L2 ‖∇f(wk)−∇f(ṽk+1)‖22 + L2 2(r − L)‖v k+1 − ṽk+1‖22\n≤ (r − L) 2 ‖wk − ṽk+1‖22 +\nL2\n2(r − L)‖v k+1 − ṽk+1‖22.\nCombining (23) and (24), we have\n−δ ≤ E ( f(ṽk+1) + r\n2 ‖ṽk+1 −wk‖22\n) − E ( f(vk+1) + r\n2 ‖vk+1 −wk‖22\n) (25)\n+ L2\n2(r − L)E‖v k+1 − ṽk+1‖22 + r − L 2 E‖wk − ṽk+1‖22.\nBy rearrangement of the above inequality (25) and using Lemma 3, we obtain the result.\nUnder review as a conference paper at ICLR 2021\nLemma 5. If the constants satisfy r > L, then we have the following bounds\nE ( f(vk)− f(vk+1) ) ≥ r\n2 E‖wk − vk+1‖22 + rE〈wk − vk, ṽk+1 −wk〉 − τδ, (26)\nE ( f(v∗)− f(vk+1) ) ≥ r\n2 E‖wk − vk+1‖22 + rE〈wk − v∗, ṽk+1 −wk〉 − τδ, (27)\nwhere τ := L 2 r(r−L) + 1 and v ∗ is the minimum .\nProof. With Lemma 2, we have\n− f(ṽk+1) ≥ −f(wk)− 〈ṽk+1 −wk,∇f(wk)〉 − L 2 ‖ṽk+1 −wk‖22. (28)\nUsing the convexity of f , we have\nf(vk)− f(wk) ≥ 〈vk −wk,∇f(wk)〉, i.e.,\nf(vk) ≥ f(wk) + 〈vk −wk,∇f(wk)〉. (29)\nAccording to the definition of ṽk+1 in (14), i.e.,\nṽk+1 = arg min v Qr(v,w k) = arg min v 〈v −wk,∇f(wk)〉+ r 2 ‖v −wk‖22,\nand the optimization condition gives\nṽk+1 = wk − 1 r ∇f(wk). (30)\nSubstituting (30) into (29), we obtain\nf(vk) ≥ f(wk) + 〈vk −wk, r(wk − ṽk+1)〉. (31)\nDirect summation of (28) and (31) gives\nf(vk)− f(ṽk+1) ≥ ( r − L\n2\n) ‖ṽk+1 −wk‖22 + r〈wk − vk, ṽk+1 −wk〉. (32)\nSumming (32) and (20), we obtain the inequality (26)\nE [ f(vk)− f(vk+1) ] ≥ r\n2 E‖wk − vk+1‖22 + rE〈wk − vk, ṽk+1 −wk〉 − τδ. (33)\nOn the other hand, with the convexity of f , we have\nf(v∗)− f(wk) ≥ 〈v∗ −wk,∇f(wk)〉 = 〈v∗ −wk, r(wk − ṽk+1)〉. (34) The summation of (28) and (34) results in\nf(v∗)− f(ṽk+1) ≥ ( r − L\n2\n) ‖wk − ṽk+1‖22 + r〈wk − v∗, ṽk+1 −wk〉. (35)\nSumming (35) and (20), we obtain\nE ( f(v∗)− f(vk+1) ) ≥ r\n2 E‖wk − vk+1‖22 + rE〈wk − v∗, ṽk+1 −wk〉 − τδ, (36)\nwhich is the same as (27).\nTheorem 3 (Uncontrolled Bound of NASGD (Theorem 1 with detailed bounded)). Let the constant r satisfy r < L and the sequence {vk}k≥0 be generated by NASGD with stochastic gradient that has bounded variance. By using any constant step size sk ≡ s ≤ 1/L, then we have\nE[f(vk)−min v f(v)] ≤ (2τδ r +R2) 4k 3 . (37)\nUnder review as a conference paper at ICLR 2021\nProof. We denote F k := E(f(vk)− f(v∗)).\nBy (26)× (tk − 1) + (27), we have\n2[(tk − 1)F k − tkF k+1] r ≥ tkE‖vk+1 −wk‖22 (38)\n+ 2E〈ṽk+1 −wk, tkwk − (tk − 1)vk − v∗〉 − 2τtkδ\nr .\nWith t2k−1 = t 2 k − tk, (38)× tk yields\n2[t2k−1F k − t2kF k+1] r ≥ E‖tkvk+1 − tkwk‖22 (39)\n+ 2tkE〈ṽk+1 −wk, tkwk − (tk − 1)vk − v∗〉 − 2τt2kδ\nr\nSubstituting a = tkvk+1 − (tk − 1)vk − v∗ and b = tkwk − (tk − 1)vk − v∗ into identity\n‖a− b‖22 + 2〈a− b, b〉 = ‖a‖22 − ‖b‖22. 
(40) It follows that\nE‖tkvk+1 − tkwk‖22 + 2tkE〈ṽk+1 −wk, tkwk − (tk − 1)vk − v∗〉 (41) = E‖tkvk+1 − tkwk‖22 + 2tkE〈vk+1 −wk, tkwk − (tk − 1)vk − v∗〉\n+2tkE〈ṽk+1 − vk+1, tkwk − (tk − 1)vk − v∗〉 =\n(40) E‖tkvk+1 − (tk − 1)vk − v∗‖22 − ‖tkwk − (tk − 1)vk − v∗‖22 +2tkE〈ṽk+1 − vk+1, tkwk − (tk − 1)vk − v∗〉 = E‖tkvk+1 − (tk − 1)vk − v∗‖22 − E‖tk−1vk − (tk−1 − 1)vk−1 − v∗‖22 + 2tkE〈ṽk+1 − vk+1, tk−1vk − (tk−1 − 1)vk−1 − v∗〉.\nIn the third identity, we used the fact tkwk = tkvk + (tk−1 − 1)(vk − vk−1). If we denote uk = E‖tk−1vk − (tk−1 − 1)vk−1 − v∗‖22, (39) can be rewritten as\n2t2kF k+1 r + uk+1 ≤ 2t 2 k−1F k r + uk + 2τt2kδ r (42)\n+ 2tkE〈vk+1 − ṽk+1, tk−1vk − (tk−1 − 1)vk−1 − v∗〉\n≤ 2t 2 kF k\nr + uk +\n2τt2kδ\nr + t2k−1R 2,\nwhere we used\n2tkE〈vk+1 − ṽk+1, tk−1vk − (tk−1 − 1)vk−1 − v∗〉 ≤ t2kE‖vk+1 − ṽk+1‖22 + E‖tk−1vk − (tk−1vk − (tk−1 − 1)vk−1 − v∗)‖22 = 2t2kδ/r + t 2 k−1R 2.\nDenoting\nξk := 2t2k−1F k\nr + uk,\nthen, we have\nξk+1 ≤ ξ0 + ( 2τδ\nr +R2)\nk∑\ni=1\nt2i = ( 2τδ\nr +R2)\nk3\n3 . (43)\nWith the fact, ξk ≥ 2t 2 k−1F k r ≥ k2F k/4, we then proved the result.\nUnder review as a conference paper at ICLR 2021\nB NAG WITH δ-INEXACT ORACLE & EXPERIMENTAL SETTINGS IN SECTION 3.1\nIn Devolder et al. (2014), the authors defines δ-inexact gradient oracle for convex smooth optimization as follows: Definition 1 (δ-Inexact Oracle). Devolder et al. (2014) For a convex L-smooth function f : Rd → R. For ∀w ∈ Rd and exact first-order oracle returns a pair (f(w),∇f(w)) ∈ R × Rd so that for ∀v ∈ Rd we have\n0 ≤ f(v)− ( f(w) + 〈∇f(w),v −w〉 ) ≤ L\n2 ‖w − v‖22.\nA δ-inexact oracle returns a pair ( fδ(w),∇fδ(w) ) ∈ R× Rd so that ∀v ∈ Rd we have\n0 ≤ f(v)− ( fδ(w) + 〈∇fδ(w),v −w〉 ) ≤ L\n2 ‖w − v‖22 + δ.\nWe have the following convergence results of GD and NAG under a δ-Inexact Oracle for convex smooth optimization. Theorem 4. Devolder et al. (2014)3 Consider\nmin f(w), w ∈ Rd, where f(w) is convex and L-smooth with w∗ being the minimum. Given access to δ-inexact oracle, GD with step size 1/L returns a point wk after k steps so that\nf(wk)− f(w∗) = O ( L\nk\n) + δ.\nOn the other hand, NAG, with step size 1/L returns\nf(wk)− f(w∗) = O ( L\nk2\n) +O(kδ).\nTheorem 4 says that NAG may not robust to a δ-inexact gradient. In the following, we will study the numerical behavior of a variety of first-order algorithms for convex smooth optimizations with the following different inexact gradients.\nConstant Variance Gaussian Noise: We consider the inexact oracle where the true gradient is contaminated with a Gaussian noise N (0, 0.0012). We run 50K iterations of different algorithms. For SRNAG, we restart after every 200 iterations. Fig. 2 (b) shows the iteration vs. optimal gap, f(xk)− f(x∗), with x∗ being the minimum. NAG with the inexact gradient due to constant variance noise does not converge. GD performs almost the same as ARNAG asymptotically, because ARNAG restarts too often and almost degenerates into GD. GD with constant momentum outperforms the three schemes above, and SRNAG slightly outperforms GD with constant momentum.\nDecaying Variance Gaussian Noise: Again, consider minimizing (8) with the same experimental setting as before except that ∇f(x) is now contaminated with a decaying Gaussian noise N (0, ( 0.1bt/100c+1 )2). For SRNAG, we restart every 200 iterations in the first 10k iterations, and restart every 400 iterations in the remaining 40K iterations. Fig. 2 (c) shows the iteration vs. optimal gap by different schemes. ARNAG still performs almost the same as GD. The path of NAG is oscillatory. 
GD with constant momentum again outperforms the previous three schemes. Here SRNAG significantly outperforms all the other schemes.\nLogisitic Regression for MNIST Classification: We apply the above schemes with stochastic gradient to train a logistic regression model for MNIST classification LeCun & Cortes (2010). We consider five different schemes, namely, SGD, SGD + (constant) momentum, NASGD, ASGD, and SRSGD. In ARSGD, we perform restart based on the loss value of the mini-batch training data. In SRSGD, we restart the NAG momentum after every 10 iterations. We train the logistic regression model with a `2 weight decay of 10−4 by running 20 epochs using different schemes with batch size of 128. The step sizes for all the schemes are set to 0.01. Fig. 3 (a) plots the training loss vs. iteration. In this case, NASGD does not converge, and SGD with momentum does not speed up SGD. ARSGD’s performance is on par with SGD’s. Again, SRSGD gives the best performance with the smallest training loss among these five schemes.\n3We adopt the result from Hardt (2014).\nUnder review as a conference paper at ICLR 2021" }, { "heading": "C CONVERGENCE OF SRSGD", "text": "We prove the convergence of Nesterov accelerated SGD with scheduled restart, i.e., the convergence of SRSGD. We denote that θk := tk−1tk+1 in the Nesterov iteration and θ̂ k is its use in the restart version, i.e., SRSGD. For any restart frequency F (positive integer), we have θ̂k = θk−bk/Fc∗F . In the restart version, we can see that θ̂k ≤ θF =: θ̄ < 1. Lemma 6. Let the constant satisfies r > L and the sequence {vk}k≥0 be generated by the SRSGD with restart frequency F (any positive integer), we have\nk∑\ni=1\n‖vi − vi−1‖22 ≤ r2kR2\n(1− θ̄)2 , (44)\nwhere θ̄ := θF < 1 and R := supx{‖∇f(x)‖2}.\nProof. It holds that\n‖vk+1 −wk‖2 = ‖vk+1 − vk + vk −wk‖2 (45) ≥ ‖vk+1 − vk‖2 − ‖vk −wk‖2 ≥ ‖vk+1 − vk‖2 − θ̄‖vk − vk−1‖2.\nThus,\n‖vk+1 −wk‖22 ≥ ( ‖vk+1 − vk‖2 − θ̄‖vk − vk−1‖2 )2 (46)\n= ‖vk+1 − vk‖22 − 2θ̄‖vk − vk−1‖2‖vk − vk−1‖2 + θ̄2‖vk − vk−1‖22 ≥ (1− θ̄)‖vk+1 − vk‖22 − θ̄(1− θ̄)‖vk+1 − vk‖22.\nSumming (46) from k = 1 to K, we get\n(1− θ̄)2 K∑\nk=1\n‖vk − vk−1‖22 ≤ K∑\nk=1\n‖vk+1 −wk‖22 ≤ r2KR2. (47)\nIn the following, we denote\nA := {k ∈ Z+|Ef(vk) ≥ Ef(vk−1)}. Theorem 5 (Convergence of SRSGD). (Theorem 2 with detailed bound) Suppose f(w) is L-smooth. Consider the sequence {wk}k≥0 generated by (10) with stochastic gradient that is bounded and has bound variance. Using any restart frequency F and any constant step size sk := s ≤ 1/L. Assume that ∑ k∈A ( Ef(wk+1)− Ef(wk) ) = R̄ < +∞, then we have\nmin 1≤k≤K\n{ E‖∇f(wk)‖22 } ≤ rR 2\n(1− θ̄)2 L(1 + θ̃) 2 + rLR2 2 + θ̃R̃ rK . (48)\nIf f(w) is further convex and the set B := {k ∈ Z+|E‖wk+1 −w∗‖2 ≥ E‖wk −w∗‖2} obeys∑ k∈B ( Ef(wk+1)− Ef(wk) ) = R̂ < +∞, then\nmin 1≤k≤K\n{ E ( f(wk)− f(w∗) )} ≤ ‖w 0 −w∗‖2 + R̂ 2γk + γR2 2 , (49)\nwhere w∗ is the minimum of f . To obtain (∀ > 0) error, we set s = O( ) and K = O(1/ 2).\nProof. Firstly, we show the convergence of SRSGD for nonconvex optimization. L-smoothness of f , i.e., Lipschitz gradient continuity, gives us\nf(vk+1) ≤ f(wk) + 〈∇f(wk),vk+1 −wk〉+ L 2 ‖vk+1 −wk‖22. (50)\nUnder review as a conference paper at ICLR 2021\nTaking expectation, we get\nEf(vk+1) ≤ Ef(wk)− rE‖∇f(wk)‖22 + r2LR2\n2 . (51)\nOn the other hand, we have\nf(wk) ≤ f(vk) + θ̂k〈∇f(vk),vk − vk−1〉+ L(θ̂ k)2\n2 ‖vk − vk−1‖22. 
(52)\nThen, we have\nEf(vk+1) ≤ Ef(vk) + θ̂kE〈∇f(vk),vk − vk−1〉 (53)\n+ L(θ̂k)2\n2 E‖vk − vk−1‖22 − rE‖∇f(wk)‖22 +\nr2LR2\n2 .\nWe also have\nθ̂k〈∇f(vk),vk − vk−1〉 ≤ θ̂k ( f(vk)− f(vk−1) + L\n2 ‖vk − vk−1‖22\n) . (54)\nWe then get that\nEf(vk+1) ≤ Ef(vk) + θ̂k ( Ef(vk)− Ef(vk−1) ) − rE‖∇f(wk)‖22 +Ak, (55)\nwhere\nAk := E L 2 ‖vk − vk−1‖22 +\nL(θ̂k)2\n2 E‖vk − vk−1‖22 +\nr2LR2\n2 .\nSumming the inequality gives us\nEf(vK+1) ≤ Ef(v0) + θ̃ ∑\nk∈A\n( Ef(vk)− Ef(vk−1) ) (56)\n− r K∑\nk=1\nE‖∇f(wk)‖22 + K∑\nk=1\nAk.\nIt is easy to see that θ̃ ∑\nk∈A\n( Ef(vk)− Ef(vk−1) ) = θ̃R̃.\nWe get the result by using Lemma 6\nSecondly, we prove the convergence of SRSGD for convex optimization. Let w∗ be the minimizer of f . We have\nE‖vk+1 −w∗‖22 = E‖wk − γ∇f(wk)−w∗‖22 (57) = E‖wk −w∗‖22 − 2γE〈∇f(wk),wk −w∗〉+ γ2E‖∇f(wk)‖22 ≤ E‖wk − x∗‖22 − 2γE〈∇f(wk),wk −w∗〉+ γ2R2.\nWe can also derive\nE‖wk −w∗‖2 = E‖vk + θ̂k(vk − vk−1)−w∗‖22 = E‖vk −w∗‖22 + 2θ̂kE〈vk − vk−1,vk −w∗〉+ (θ̂k)2E‖vk − vk−1‖22 = E‖vk −w∗‖22 + θ̂kE ( ‖vk −w∗‖22 + ‖vk−1 − vk‖22 − ‖vk−1 −w∗‖22 )\n+ (θ̂)2E‖vk − vk−1‖22 = E‖vk −w∗‖22 + θ̂kE ( ‖vk −w∗‖22 − ‖vk−1 −w∗‖22 ) + 2(θ̂k)2E‖vk − vk−1‖22,\nwhere we used the following identity\n(a− b)T (a− b) = 1 2 [‖a− d‖22 − ‖a− c‖22 + ‖b− c‖22 − ‖b− d‖22].\nUnder review as a conference paper at ICLR 2021\nThen, we have\nE‖vk+1 −w∗‖22 ≤ E‖vk −w∗‖22 − 2γE〈∇f(wk),wk −w∗〉+ 2(θ̂k)2E‖vk − vk−1‖22 (58) + r2R2 + θ̂kE(‖vk −w∗‖22 − ‖vk−1 −w∗‖22).\nWe then get that\n2γE ( f(wk)− f(w∗) ) ≤ E‖vk −w∗‖22 − E‖vk+1 −w∗‖22 (59) + θ̂k ( E‖vk −w∗‖22 − E‖vk−1 −w∗‖22 ) + r2R2.\nSumming the inequality gives us the desired convergence result for convex optimization." }, { "heading": "C.1 NUMERICAL VERIFICATION OF THE ASSUMPTIONS IN THEOREM 2", "text": "In this part, we numerically verify the assumptions in Theorem 2. In particular, we apply SRSGD with learning rate 0.1 to train LeNet 4 for MNIST classification (we test on MNIST due to extremely large computational cost). We conduct numerical verification as follows: starting from a given point w0, we randomly sample 469 mini-batches (note in total we have 469 batches in the training data) with batch size 128 and compute the stochastic gradient using each mini-batch. Next, we advance to the next step with each of these 469 stochastic gradients and get the approximated Ef(w1). We randomly choose one of these 469 positions as the updated weights of our model. By iterating the above procedure, we can get w1,w2, · · · and Ef(w1),Ef(w2), · · · and we use these values to verify our assumptions in Theorem 2. We set restart frequencies to be 20, 40, and 80, respectively. Figure 6 top panels plot k vs. the cardinality of the set A := {k ∈ Z+|Ef(wk+1) ≥ Ef(wk)}, and Figure 6 bottom panels plot k vs. ∑ k∈A ( Ef(wk+1)− Ef(wk) ) . Figure 6 shows that ∑ k∈A ( Ef(wk+1)− Ef(wk) ) converges to a constant R̄ < +∞. We also noticed that when the training gets plateaued, E(f(wk)) still oscillates, but the magnitude of the oscillation diminishes as iterations goes, which is consistent with our plots that the cardinality of A increases linearly, but R̄ converges to a finite number. These numerical results show that our assumption in Theorem 2 is reasonable.\n4We used the PyTorch implementation of LeNet at https://github.com/activatedgeek/LeNet-5.\nUnder review as a conference paper at ICLR 2021" }, { "heading": "D DATASETS AND IMPLEMENTATION DETAILS", "text": "" }, { "heading": "D.1 CIFAR", "text": "The CIFAR10 and CIFAR100 datasets Krizhevsky et al. 
(2009) consist of 50K training images and 10K test images from 10 and 100 classes, respectively. Both training and test data are color images of size 32× 32. We run our CIFAR experiments on Pre-ResNet-110, 290, 470, 650, and 1001 with 5 different seeds He et al. (2016b). We train each model for 200 epochs with batch size of 128 and initial learning rate of 0.1, which is decayed by a factor of 10 at the 80th, 120th, and 160th epoch. The weight decay rate is 5× 10−5 and the momentum for the SGD baseline is 0.9. Random cropping and random horizontal flipping are applied to training data. Our code is modified based on the Pytorch classification project Yang (2017),5 which was also used by Liu et al. Liu et al. (2020). We provide the restarting frequencies for the exponential and linear scheme for CIFAR10 and CIFAR100 in Table 7 below. Using the same notation as in the main text, we denote Fi as the restarting frequency at the i-th learning rate.\nThe ImageNet dataset contains roughly 1.28 million training color images and 50K validation color images from 1000 classes Russakovsky et al. (2015). We run our ImageNet experiments on ResNet50, 101, 152, and 200 with 5 different seeds. Following He et al. (2016a;b), we train each model for 90 epochs with a batch size of 256 and decrease the learning rate by a factor of 10 at the 30th and 60th epoch. The initial learning rate is 0.1, the momentum is 0.9, and the weight decay rate is 1× 10−5. Random 224× 224 cropping and random horizontal flipping are applied to training data. We use the official Pytorch ResNet implementation Paszke et al. (2019),6 and run our experiments on 8 Nvidia V100 GPUs. We report single-crop top-1 and top-5 errors of our models. In our experiments, we set F1 = 40 at the 1st learning rate, F2 = 80 at the 2nd learning rate, and F3 is linearly decayed from 80 to 1 at the 3rd learning rate (see Table 8)." }, { "heading": "D.3 TRAINING IMAGENET IN FEWER NUMBER OF EPOCHS:", "text": "Table 9 contains the learning rate and restarting frequency schedule for our experiments on training ImageNet in fewer number of epochs, i.e. the reported results in Table 6 in the main text. Other settings are the same as in the full-training ImageNet experiments described in Section D.2 above.\nAdditional Implementation Details: Implementation details for the ablation study of error rate vs. reduction in epochs and the ablation study of impact of restarting frequency are provided in Section F and G below." }, { "heading": "D.4 DETAILS ON RESTARTING HYPER-PARAMETERS SEARCH", "text": "In our CIFAR10 and CIFAR100 experiments, for both linear and exponential schedule, we conduct hyperparameter searches over the restarting frequencies using our smallest model, Pre-ResNet-110,\n5Implementation available at https://github.com/bearpaw/pytorch-classification 6Implementation available at https://github.com/pytorch/examples/tree/master/imagenet\nUnder review as a conference paper at ICLR 2021\nmaking choices based on final validation performance. The same chosen restarting frequencies are applied for all models including Pre-ResNet-110, 290, 470, 650, and 1001. In particular, we use 10,000 images from the original training set as a validation set. This validation set contains 1,000 and 100 images from each class for CIFAR10 and CIFAR100, respectively. We first train Pre-ResNet-110 on the remaining 40,000 training images and use the performance on the validation set averaged over 5 random seeds to select the initial restarting frequency F1 and the growth rate r. 
Both F1 and r are selected using grid search from the sets of {20, 25, 30, 35, 40, 45, 50} and {1, 1.25, 1.5, 1.75, 2}, respectively. We then train all models including Pre-ResNet-110, 290, 470, 650, and 1001 on all 50,000 training images using the selected values of F1 and r and report the results on the test set which contains 10,000 test images. The reported test performance is averaged over 5 random seeds. We also use the same selected values of F1 and r for our short training experiments in Section 4.3.\nFor ImageNet experiments, we use linear scheduling and sweep over the initial restarting frequency F1 and the growth rate r in the set of {20, 30, 40, 50, 60} and {1, 1.25, 1.5, 1.75, 2}, respectively. We select the values of F1 = 40 and r = 2 which have the highest final validation accuracy averaged over 5 random seeds. Same as in CIFAR10 and CIFAR100 experiments, we select F1 and r using our smallest model, ResNet-50, and apply the same selected hyperparameter values for all models including ResNet-50, 101, 152, and 200. We also use the same selected values of F1 and r for our short training experiments in Section 4.3. However, for ResNet-50, we observe that F1 = 60 and r = 1.75 yields better performance in short training. All reported results are averaged over 5 random seeds." }, { "heading": "E SRSGD VS. SGD AND SGD + NM ON IMAGENET CLASSIFICATION AND OTHER TASKS", "text": "" }, { "heading": "E.1 COMPARING WITH SGD WITH NESTEROV MOMENTUM ON IMAGENET CLASSIFICATION", "text": "In this section, we compare SRSGD with SGD with Nesterov constant momentum (SGD + NM) in training ResNets for ImageNet classification. All hyper-parameters of SGD with constant Nesterov momentum used in our experiments are the same as those of SGD described in section D.2. We list the results in Table 10. Again, SRSGD remarkably outperforms SGD + NM in training ResNets for ImageNet classification, and as the network goes deeper the improvement becomes more significant." }, { "heading": "E.2 LONG SHORT-TERM MEMORY (LSTM) TRAINING FOR PIXEL-BY-PIXEL MNIST", "text": "In this task, we examine the advantage of SRSGD over SGD and SGD with Nesterov Momentum in training recurrent neural networks. In our experiments, we use an LSTM with different numbers of hidden units (128, 256, and 512) to classify samples from the well-known MNIST dataset LeCun & Cortes (2010). We follow the implementation of Le et al. (2015) and feed each pixel of the image into the RNN sequentially. In addition, we choose a random permutation of 28× 28 = 784 elements at the beginning of the experiment. This fixed permutation is applied to training and testing sequences. This task is known as permuted MNIST classification, which has become standard to measure the performance of RNNs and their ability to capture long term dependencies.\nUnder review as a conference paper at ICLR 2021\nImplementation and Training Details: For the LSTM model, we initialize the forget bias to 1 and other biases to 0. All weights matrices are initialized orthogonally except for the hidden-to-hidden weight matrices, which are initialized to be identity matrices. We train each model for 350 epochs with the initial learning rate of 0.01. The learning rate was reduced by a factor of 10 at epoch 200 and 300. The momentum is set to 0.9 for SGD with standard and Nesterov constant momentum. The restart schedule for SRSGD is set to 90, 30, 90 . The restart schedule changes at epoch 200 and 300. 
In all experiments, we use batch size 128 and the gradients are clipped so that their L2 norm is at most 1. Our code is based on the code from the exponential RNN's Github.7

Results: Our experiments corroborate the superiority of SRSGD over the two baselines. SRSGD yields much smaller test error and converges faster than SGD with standard and Nesterov constant momentum across all settings with different numbers of LSTM hidden units. We summarize our results in Table 11 and Figure 7.

Table 11: Test errors (%) on Permuted MNIST of LSTMs trained with SGD, SGD + NM and SRSGD. In all experiments, we use the initial learning rate of 0.01, which is reduced by a factor of 10 at epoch 200 and 300. All models are trained for 350 epochs. The momentum for SGD and SGD + NM is set to 0.9. The restart schedule in SRSGD is set to 90, 30, and 90.

Network  No. Hidden Units  SGD            SGD + NM       SRSGD          Improvement over SGD/SGD + NM
LSTM     128               10.10 ± 0.57   9.75 ± 0.69    8.61 ± 0.30    1.49/1.14
LSTM     256               10.42 ± 0.63   10.09 ± 0.61   9.03 ± 0.23    1.39/1.06
LSTM     512               10.04 ± 0.35   9.55 ± 1.09    8.49 ± 1.59    1.55/1.06

7Implementation available at https://github.com/Lezcano/expRNN" }, { "heading": "Test errors (%) on Permuted MNIST", "text": "Figure 7: Training loss vs. training iterations of LSTM trained with SGD (red), SGD + NM (green), and SRSGD (blue) for PMNIST classification tasks." }, { "heading": "E.3 WASSERSTEIN GENERATIVE ADVERSARIAL NETWORKS (WGAN) TRAINING ON MNIST", "text": "We investigate the advantage of SRSGD over SGD with standard and Nesterov momentum in training deep generative models. In our experiments, we train a WGAN with gradient penalty Gulrajani et al. (2017) on MNIST. We evaluate our models using the discriminator's loss, i.e. the Earth Moving distance estimate, since in WGAN lower discriminator loss and better sample quality are correlated Arjovsky et al. (2017).

Implementation and Training Details: The detailed implementations of our generator and discriminator are given below. For the generator, we set latent_dim to 100 and d to 32. For the discriminator, we set d to 32. We train each model for 350 epochs with the initial learning rate of 0.01. The learning rate was reduced by a factor of 10 at epoch 200 and 300. The momentum is set to 0.9 for SGD with standard and Nesterov constant momentum. The restart schedule for SRSGD is set to 60, 120, 180. The restart schedule changes at epoch 200 and 300. In all experiments, we use batch size 64. Our code is based on the code from the Pytorch WGAN-GP Github.8

import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim, d=32):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector -> 4x4 feature map
            nn.ConvTranspose2d(latent_dim, d * 8, 4, 1, 0),
            nn.BatchNorm2d(d * 8),
            nn.ReLU(True),

            nn.ConvTranspose2d(d * 8, d * 4, 4, 2, 1),
            nn.BatchNorm2d(d * 4),
            nn.ReLU(True),

            nn.ConvTranspose2d(d * 4, d * 2, 4, 2, 1),
            nn.BatchNorm2d(d * 2),
            nn.ReLU(True),

            # final upsampling to a single-channel image in [-1, 1]
            nn.ConvTranspose2d(d * 2, 1, 4, 2, 1),
            nn.Tanh()
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self, d=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, d, 4, 2, 1),
            nn.InstanceNorm2d(d),
            nn.LeakyReLU(0.2),

            nn.Conv2d(d, d * 2, 4, 2, 1),
            nn.InstanceNorm2d(d * 2),
            nn.LeakyReLU(0.2),

            nn.Conv2d(d * 2, d * 4, 4, 2, 1),
            nn.InstanceNorm2d(d * 4),
            nn.LeakyReLU(0.2),

            # single scalar critic output per image
            nn.Conv2d(d * 4, 1, 4, 1, 0),
        )

    def forward(self, x):
        outputs = self.net(x)
        return outputs.squeeze()

8Implementation available at https://github.com/arturml/pytorch-wgan-gp

Results: Our SRSGD is still better than both the baselines. SRSGD achieves smaller discriminator loss, i.e. Earth Moving distance estimate, and converges faster than SGD with standard and Nesterov constant momentum. We summarize our results in Table 12 and Figure 8. We also demonstrate the digits generated by the trained WGAN in Figure 9.
By visual evaluation, we observe that samples generated by the WGAN trained with SRSGD look slightly better than those generated by the WGAN trained with SGD with standard and Nesterov constant momentum.

Table 12: Discriminator loss (i.e. Earth Moving distance estimate) of the WGAN with gradient penalty trained on MNIST with SGD, SGD + NM and SRSGD. In all experiments, we use the initial learning rate of 0.01, which is reduced by a factor of 10 at epoch 200 and 300. All models are trained for 350 epochs. The momentum for SGD and SGD + NM is set to 0.9. The restart schedule in SRSGD is set to 60, 120, and 180.

Figure 8: Earth Moving distance estimate (i.e. discriminator loss) vs. training epochs of WGAN with gradient penalty trained with SGD (red), SGD + NM (green), and SRSGD (blue) on MNIST.

Figure 9: MNIST digits generated by WGAN trained with gradient penalty by SGD (left), SGD + NM (middle), and SRSGD (right)." }, { "heading": "F ERROR RATE VS. REDUCTION IN TRAINING EPOCHS", "text": "F.1 IMPLEMENTATION DETAILS

CIFAR10 (Figure 4, left, in the main text) and CIFAR100 (Figure 11 in this Appendix): Except for the learning rate schedule, we use the same settings described in Section D.1 above and Section 4.1 in the main text. Table 13 contains the learning rate schedule for each number of epoch reduction in Figure 4 (left) in the main text and Figure 11 below.

ImageNet (Figure 4, right, in the main text): Except for the total number of training epochs, other settings are similar to the experiments for training ImageNet in fewer epochs described in Section D.3.
In particular, the learning rate and restarting frequency schedule still follow those in Table 9 above. We examine different numbers of training epochs: 90 (0 epoch reduction), 80 (10 epochs reduction), 75 (15 epochs reduction), 70 (20 epochs reduction), 65 (25 epochs reduction), and 60 (30 epochs reduction).\nUnder review as a conference paper at ICLR 2021" }, { "heading": "F.2 SHORT TRAINING ON CIFAR10/CIFAR100 USING SGD", "text": "For better comparison between SRSGD training using fewer epochs and SGD full training, we also conduct experiments with SGD training using fewer epochs on CIFAR10 and CIFAR100. Table 14 and 15 compares SRSGD short training using 100 epoch, SGD short training using 100 epochs, SRSGD full training using 200 epochs, and SGD full training using 200 epochs for Pre-ResNet-110, 290, and 470 on CIFAR10 and CIFAR100, respectively. The learning rate schedule for SGD short training using 100 epochs is the same as the learning rate schedule for SRSGD short training using 100 epoch given in Section 4 and in Table 13 above. In particular, for both SGD and SRSGD training using 100 epochs, we decrease the learning rate by a factor of 10 at the 80th, 90th, and 95th epoch. We observe that SGD short training has the worst performance compared to the others while SRSGD short training yields either comparable or even better results than SGD full training." }, { "heading": "F.3 ADDITIONAL EXPERIMENTAL RESULTS", "text": "Figure 10 shows error rate vs. reduction in epochs for all models trained on CIFAR10 and ImageNet. It is a more complete version of Figure 4 in the main text. Table 16 and Table 17 provide detailed test errors vs. number of training epoch reduction reported in Figure 4 and Figure 10 . We also conduct an additional ablation study of error rate vs. reduction in epochs for CIFAR100 and include the results in Figure 11 and Table 18 below.\nUnder review as a conference paper at ICLR 2021\nG IMPACT OF RESTARTING FREQUENCY FOR IMAGENET AND CIFAR100\nG.1 IMPLEMENTATION DETAILS\nFor the CIFAR10 experiments on Pre-ResNet-290 in Figure 5 in the main text, as well as the CIFAR100 and ImageNet experiments in Figure 14 and 15 in this Appendix, we vary the initial restarting frequency F1. Other settings are the same as described in Section D above.\nUnder review as a conference paper at ICLR 2021\nCIFAR100\nG.2 IMPACT OF THE GROWTH RATE r\nWe do an ablation study for the growth rate r to understand its impact on the behavior of SRSGD. We choose a case study of training a Pre-ResNet-110 on CIFAR10 using SRSGD with a linear schedule scheme for the restarting frequency. We fix the initial restarting frequency F1 = 30 and vary the growth rate r. We choose r from the set of {0.7, 1.0, 2.0, 10.0}. These values of r represent four different scenarios. When r = 0.7, the restarting frequency decreases every time the learning rate is reduced by a factor of 10. When r = 1.0, the restarting frequency stays constant during the training. When r = 2.0, the restarting frequency increases every time the learning rate is reduced by a factor of 10. Finally, when r = 10.0, it is similar to when r = 2.0, but the restarting frequency increases much faster and to larger values. Figure 12 summarizes the results of our ablation study. We observe that for CIFAR10, decreasing the restarting frequency or keeping it constant during training yields worse results than increasing the restarting frequency. 
However, increasing the restarting frequency too much also diminishes the performance of SRSGD." }, { "heading": "G.3 ADDITIONAL EXPERIMENTAL RESULTS", "text": "To complete our study on the impact of restarting frequency in Section 5.2 in the main text, we examine the cases of CIFAR100 and ImageNet in this section. We summarize our results in Figures 14 and 15 below. Also, Figure 13 is a more detailed version of Figure 5 in the main text. (Figures 13, 14, and 15 cover CIFAR10, CIFAR100, and ImageNet, respectively.)" }, { "heading": "H FULL TRAINING WITH LESS EPOCHS AT THE INTERMEDIATE LEARNING RATES", "text": "We explore SRSGD full training (200 epochs on CIFAR and 90 epochs on ImageNet) with fewer epochs at the intermediate learning rates and report the results in Tables 19, 20, and 21 and Figures 16, 17, and 18 below. The settings and implementation details here are similar to those in Section F, but using all 200 epochs for the CIFAR experiments and 90 epochs for the ImageNet experiments.

I VISUALIZATION OF SRSGD'S TRAJECTORY

Here we visualize the training trajectory through bad minima of SRSGD, SGD with constant momentum, and SGD. In particular, we train a neural net classifier on swiss roll data as in Huang et al. (2019) and find bad minima along its training. Each red dot in Figure 19 represents the trained model after every 10 epochs of training. From each red dot, we search for nearby bad local minima, which are the blue dots. Those bad local minima achieve good training error but bad test error. We plot the trained models and bad local minima using PCA Wold et al. (1987) and t-SNE Maaten & Hinton (2008) embeddings. The blue color bar is for the test accuracy of bad local minima; the red color bar is for the number of training epochs." }, { "heading": "PCA Embedding of the Trajectory", "text": "(CONTINUED NEXT PAGE)" }, { "heading": "J SRSGD IMPLEMENTATION IN PYTORCH", "text": "import torch
from .optimizer import Optimizer, required


class SRSGD(Optimizer):
    """Scheduled Restart SGD.
    Args:
        params (iterable): iterable of parameters to optimize or dicts
            defining parameter groups.
        lr (float): learning rate.
        weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
        iter_count (integer): count the iterations mod 200
    Example:
        >>> optimizer = torch.optim.SRSGD(model.parameters(), lr=0.1,
        ...                               weight_decay=5e-4, iter_count=1)
        >>> optimizer.zero_grad()
        >>> loss_fn(model(input), target).backward()
        >>> optimizer.step()
        >>> iter_count = optimizer.update_iter()
    Formula:
        v_{t+1} = p_t - lr * g_t
        p_{t+1} = v_{t+1} + (iter_count)/(iter_count+3) * (v_{t+1} - v_t)
    """
    def __init__(self, params, lr=required, weight_decay=0.,
                 iter_count=1, restarting_iter=100):
        if lr is not required and lr < 0.0:
            raise ValueError("Invalid learning rate: {}".format(lr))
        if weight_decay < 0.0:
            raise ValueError("Invalid weight_decay value: {}".format(weight_decay))
        if iter_count < 1:
            raise ValueError("Invalid iter_count: {}".format(iter_count))
        if restarting_iter < 1:
            raise ValueError("Invalid iter_total: {}".format(restarting_iter))

        defaults = dict(lr=lr, weight_decay=weight_decay,
                        iter_count=iter_count, restarting_iter=restarting_iter)
        super(SRSGD, self).__init__(params, defaults)

    def __setstate__(self, state):
        super(SRSGD, self).__setstate__(state)

    def update_iter(self):
        # Advance the restart counter of the first parameter group and
        # reset it to 1 once the restarting frequency is reached.
        idx = 1
        for group in self.param_groups:
            if idx == 1:
                group['iter_count'] += 1
                if group['iter_count'] >= group['restarting_iter']:
                    group['iter_count'] = 1
            idx += 1
        return group['iter_count'], group['restarting_iter']

    def step(self, closure=None):
        """Perform a single optimization step.
        Arguments:
            closure (callable, optional): A closure that
                reevaluates the model and returns the loss.
        """
        loss = None
        if closure is not None:
            loss = closure()

        for group in self.param_groups:
            weight_decay = group['weight_decay']
            # NAG-style iteration-dependent momentum, restarted by update_iter().
            momentum = (group['iter_count'] - 1.) / (group['iter_count'] + 2.)
            for p in group['params']:
                if p.grad is None:
                    continue
                d_p = p.grad.data
                if weight_decay != 0:
                    d_p.add_(weight_decay, p.data)

                param_state = self.state[p]

                if 'momentum_buffer' not in param_state:
                    buf0 = param_state['momentum_buffer'] = torch.clone(p.data).detach()
                else:
                    buf0 = param_state['momentum_buffer']

                # v_{t+1} = p_t - lr * g_t; p_{t+1} = v_{t+1} + momentum * (v_{t+1} - v_t)
                buf1 = p.data - group['lr'] * d_p
                p.data = buf1 + momentum * (buf1 - buf0)
                param_state['momentum_buffer'] = buf1

        iter_count, iter_total = self.update_iter()

        return loss" }, { "heading": "K SRSGD IMPLEMENTATION IN KERAS", "text": "import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.optimizers import Optimizer
from keras.legacy import interfaces
if K.backend() == 'tensorflow':
    import tensorflow as tf


class SRSGD(Optimizer):
    """Scheduled Restart Stochastic gradient descent optimizer.
    Includes support for Nesterov momentum and learning rate decay.
    # Arguments
        learning_rate: float >= 0. Learning rate.
    """

    def __init__(self, learning_rate=0.01, iter_count=1,
                 restarting_iter=40, **kwargs):
        learning_rate = kwargs.pop('lr', learning_rate)
        self.initial_decay = kwargs.pop('decay', 0.0)
        super(SRSGD, self).__init__(**kwargs)
        with K.name_scope(self.__class__.__name__):
            self.iterations = K.variable(0, dtype='int64', name='iterations')
            self.learning_rate = K.variable(learning_rate, name='learning_rate')
            self.decay = K.variable(self.initial_decay, name='decay')
            # for srsgd
            self.iter_count = K.variable(iter_count, dtype='int64', name='iter_count')
            self.restarting_iter = K.variable(restarting_iter, dtype='int64',
                                              name='restarting_iter')

    @interfaces.legacy_get_updates_support
    @K.symbolic
    def get_updates(self, loss, params):
        grads = self.get_gradients(loss, params)
        self.updates = [K.update_add(self.iterations, 1)]

        # NAG-style iteration-dependent momentum, restarted below.
        momentum = (K.cast(self.iter_count, dtype=K.dtype(self.decay)) - 1.) / \
                   (K.cast(self.iter_count, dtype=K.dtype(self.decay)) + 2.)

        lr = self.learning_rate
        if self.initial_decay > 0:
            lr = lr * (1. / (1. + self.decay * K.cast(self.iterations,
                                                      K.dtype(self.decay))))
        # momentum
        shapes = [K.int_shape(p) for p in params]
        moments = [K.variable(value=K.get_value(p), dtype=K.dtype(self.decay),
                              name='moment_' + str(i))
                   for (i, p) in enumerate(params)]

        self.weights = [self.iterations] + moments + [self.iter_count]
        for p, g, m in zip(params, grads, moments):
            v = p - lr * g
            new_p = v + momentum * (v - m)
            self.updates.append(K.update(m, v))

            # Apply constraints.
            if getattr(p, 'constraint', None) is not None:
                new_p = p.constraint(new_p)

            self.updates.append(K.update(p, new_p))

        # Scheduled restart: reset iter_count once restarting_iter is reached.
        condition = K.all(K.less(self.iter_count, self.restarting_iter))
        new_iter_count = K.switch(condition, self.iter_count + 1,
                                  self.iter_count - self.restarting_iter + 1)
        self.updates.append(K.update(self.iter_count, new_iter_count))

        return self.updates

    def get_config(self):
        config = {'learning_rate': float(K.get_value(self.learning_rate)),
                  'decay': float(K.get_value(self.decay)),
                  'iter_count': int(K.get_value(self.iter_count)),
                  'restarting_iter': int(K.get_value(self.restarting_iter))}
        base_config = super(SRSGD, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
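A hypothetical usage sketch (our own, not from the paper's released code) of the PyTorch SRSGD optimizer above, growing the restarting frequency at each learning rate decay as in the linear schedule with F1 = 30 and r = 2 used for CIFAR10; model, criterion, and train_loader are assumed to be defined elsewhere:

optimizer = SRSGD(model.parameters(), lr=0.1, weight_decay=5e-4,
                  iter_count=1, restarting_iter=30)
for epoch in range(200):
    if epoch in (80, 120, 160):  # decay the learning rate and grow F_k
        for group in optimizer.param_groups:
            group['lr'] *= 0.1
            group['restarting_iter'] += 30  # linear schedule: 30, 60, 90, 120
    for inputs, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()  # step() already advances the restart counter via update_iter()" } ]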
2020
null
SP:2c21ee98d8ae42925da9d69e11cc2584e7e9dce8
[ "+ This paper studies the single-path one-shot super-network predictions and ranking correlation throughout an entire search space, as all stand-alone model results are known in advance. This is a crucial step in NAS. As we know, inaccurate architecture rating is the cause of ineffective NAS in almost all existing NAS methods. It makes nearly all previous NAS methods not better the random architecture selection (suggested by two ICLR 2020 papers and many ICLR 2021 submissions). Therefore, analyzing the architecture rating problem is of most importance in NAS. This paper takes a deep insight into the architecture rating problem, which provides a timely metric for evaluating NAS's effectiveness. (+)" ]
Recently presented benchmarks for Neural Architecture Search (NAS) provide the results of training thousands of different architectures in a specific search space, thus enabling the fair and rapid comparison of different methods. Based on these results, we quantify the ranking correlations of single-path architecture search methods in different search space subsets and under several training variations; studying their impact on the expected search results. The experiments support the few-shot approach and Linear Transformers, provide evidence against disabling cell topology sharing during the training phase or using regularization and other common training additions in the NAS-Bench-201 search space. Additionally, we find that super-network size and path sampling strategies require further research to be understood better.
[]
[ { "authors": [ "Paul Tucker", "Vincent Vanhoucke", "Vijay Vasudevan", "Fernanda Viégas", "Oriol Vinyals", "Pete Warden", "Martin Wattenberg", "Martin Wicke", "Yuan Yu", "Xiaoqiang Zheng" ], "title": "TensorFlow: LargeScale Machine Learning on Heterogeneous Systems. In 12th {USENIX} symposium on operating systems design and implementation ({OSDI", "venue": null, "year": 2016 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Xiangxiang Chu", "Bo Zhang", "Jixiang Li", "Qingyuan Li", "Ruijun Xu" ], "title": "SCARLET-NAS: Bridging the gap Between Scalability and Fairness in Neural Architecture Search", "venue": "arXiv preprint arXiv:1908.06022,", "year": 2019 }, { "authors": [ "Xiangxiang Chu", "Bo Zhang", "Ruijun Xu", "Jixiang Li" ], "title": "FairNAS: Rethinking Evaluation Fairness of Weight Sharing Neural Architecture Search", "venue": "arXiv preprint arXiv:1907.01845,", "year": 2019 }, { "authors": [ "Ekin D. Cubuk", "Barret Zoph", "Dandelion Mane", "Vijay Vasudevan", "Quoc V. Le" ], "title": "AutoAugment: Learning Augmentation Policies from Data, 2018", "venue": "URL http://arxiv.org/abs/1805", "year": 2018 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "Searching for A Robust Neural Architecture in Four GPU Hours", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 1910 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Zichao Guo", "Xiangyu Zhang", "Haoyuan Mu", "Wen Heng", "Zechun Liu", "Yichen Wei", "Jian Sun" ], "title": "Single Path One-Shot Neural Architecture Search with Uniform Sampling", "venue": "In European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Shoukang Hu", "Sirui Xie", "Hehui Zheng", "Chunxiao Liu", "Jianping Shi", "Xunying Liu", "Dahua Lin" ], "title": "DSNAS: Direct Neural Architecture Search without Parameter Retraining", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2002 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "CIFAR-10 (Canadian Institute for Advanced Research)", "venue": "In European Conference on Computer Vision,", "year": 2009 }, { "authors": [ "Changlin Li", "Jiefeng Peng", "Liuchun Yuan", "Guangrun Wang", "Xiaodan Liang", "Liang Lin", "Xiaojun Chang" ], "title": "Blockwisely Supervised Neural Architecture Search with Knowledge Distillation", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Chenxi Liu", "Barret Zoph", "Maxim Neumann", "Jonathon Shlens", "Wei Hua", "Li-Jia Li", "Li Fei-Fei", "Alan Yuille", "Jonathan Huang", "Kevin Murphy" ], "title": "Progressive Neural Architecture 
Search", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "DARTS: Differentiable Architecture Search, 2018b", "venue": "URL http://arxiv.org/abs/1806.09055", "year": 2018 }, { "authors": [ "Joseph Mellor", "Jack Turner", "Amos Storkey", "Elliot J. Crowley" ], "title": "Neural Architecture Search without Training, 2020", "venue": "URL http://arxiv.org/abs/2006.04647", "year": 2006 }, { "authors": [ "Hieu Pham", "Melody Y. Guan", "Barret Zoph", "Quoc V. Le", "Jeff Dean" ], "title": "Efficient Neural Architecture Search via Parameter Sharing, 2018", "venue": "URL http://arxiv.org/abs/1802.03268", "year": 2018 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized Evolution for Image Classifier Architecture Search, 2018", "venue": "URL http://arxiv.org/abs/1802.01548", "year": 2018 }, { "authors": [ "Christian Sciuto", "Kaicheng Yu", "Martin Jaggi", "Claudiu Musat", "Mathieu Salzmann" ], "title": "Evaluating the Search", "venue": "Phase of Neural Architecture Search. CoRR,", "year": 2019 }, { "authors": [ "Colin White", "Sam Nolen", "Yash Savani" ], "title": "Local Search is State of the Art for Neural Architecture", "venue": "Search Benchmarks,", "year": 2020 }, { "authors": [ "Shen Yan", "Biyi Fang", "Faen Zhang", "Yu Zheng", "Xiao Zeng", "Mi Zhang", "Hui Xu" ], "title": "HM-NAS: Efficient Neural Architecture Search via Hierarchical Masking", "venue": "In Proceedings of the IEEE International Conference on Computer Vision Workshops,", "year": 1909 }, { "authors": [ "Chris Ying", "Aaron Klein", "Eric Christiansen", "Esteban Real", "Kevin Murphy", "Frank Hutter" ], "title": "Nasbench-101: Towards reproducible neural architecture search", "venue": "In International Conference on Machine Learning,", "year": 1902 }, { "authors": [ "Yiyang Zhao", "Linnan Wang", "Yuandong Tian", "Rodrigo Fonseca", "Tian Guo" ], "title": "Few-shot Neural Architecture Search", "venue": "arXiv preprint arXiv:2006.06863,", "year": 2020 }, { "authors": [ "Barret Zoph", "Quoc V. Le" ], "title": "Neural Architecture Search with Reinforcement Learning. 2016", "venue": "URL http://arxiv.org/abs/1611.01578", "year": 2016 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The development and study of algorithms that automatically design neural networks, Neural Architecture Search (NAS), has become a significant influence in recent years; owed to the promise of creating better models with less human effort and in shorter time.\nWhereas the first generations of algorithms required training thousands of networks in thousands of GPU hours using reinforcement learning (Zoph & Le (2016); Zoph et al. (2018)), greedy progressive optimization (Liu et al. (2018a)), regularized evolution (Real et al. (2018)) and more, the invention of weight sharing during search (Pham et al. (2018)) reduced the computation cost to few GPU hours, and thus made NAS accessible to a much wider audience.\nWhile this also enables gradient based NAS (Liu et al. (2018b)), the necessity to compare operations against each other leads to an increased memory requirement. The issue is commonly alleviated by training a small search network consisting of cells with a shared topology, later scaling the resulting architecture up by adding more cells and increasing the number of channels. Although the standalone network is often trained from scratch, reusing the search network weights can increase both training speed and final accuracy (Yan et al. (2019); Hu et al. (2020)). More recent gradient based methods require to have only one path in memory (Dong & Yang (2019); Cai et al. (2019); Hu et al. (2020)) and can even be applied directly to huge data sets.\nHowever, the aforementioned weight sharing methods only yield a single result, require manually fine-tuning the loss function when there are multiple objectives, and can not guarantee results within constraints (e.g. latency, FLOPs). The single-path one-shot approach seeks to combine the best of both worlds, requiring only one additional step in the search phase (Guo et al. (2020)): Firstly a full sized weight-sharing model (super-network) is fully trained by randomly choosing one of the available operations at each layer in every training step. Then, as specific architectures can be evaluated by choosing the model’s operations accordingly, a hyper-parameter optimization method can be used to find combinations of operations maximizing the super-network accuracy. If the rankings of the architectures by their respective super-network accuracy and by their stand-alone model retraining results are consistent, the quality of the discovered candidates is high.\nHowever, since the single-path method’s search spaces are often gigantic and the network training costly (see e.g. Guo et al. (2020); Chu et al. (2019b;a)), a study of the ranking correlation is usually limited to a handful of architectures. In this work we study the single-path one-shot super-network predictions and ranking correlation throughout an entire search space, as all stand-alone model re-\nsults are known in advance. This enables us to quantify the effects of several super-network training variations and search space subsets, to gain further insights on the popular single-path one-shot method itself.\nWe briefly list the closest related work in Section 2 and introduce the measurement metric, benchmark dataset, super-network training and experiment design in Section 3. We then systematically evaluate several variations in the single-path one-shot approach with a novel method, computing the ranking correlation of the trained super-networks with the ground-truth top-N best architectures. 
Experiments on search space subsets in Section 4.1 once again demonstrate that the ranking becomes more difficult as the search space increases in size, and that the operations that make the ranking especially hard are Zero and Pool. Section 4.2 evaluates Linear Transformers (Chu et al. (2019a)), which we find to perform very well in specific search space subsets, and otherwise even to be harmful. Furthermore, some commonly used training variations such as learning rate warmup, gradient clipping, data augmentation and regularization are evaluated in Section 4.3, where we find that none of these provides a measurable improvement. We further test disabling cell topology sharing only during training time and find that training the network in the same way as evaluating it is more effective. We finally list some grains of salt in Section 5 and conclude the paper with Section 6." }, { "heading": "2 RELATED WORK", "text": "A high-quality architecture ranking prediction is the foundation of any NAS algorithm. In this paper we explore the effects of several super-network training variations on the ranking prediction of the aforementioned single-path one-shot approach (Guo et al. (2020)). Recent efforts have shown improvements by strictly fair operation sampling in the super-network training phase (Chu et al. (2019b)) and by adding a linear 1×1 convolution to skip connections, improving training stability (Chu et al. (2019a)). Other works divide the search space, exploring multiple models with different operation-subsets (Zhao et al. (2020)), or one model with several smaller blocks that use a trained teacher as a guiding signal (Li et al. (2020b)).
Due to the often gigantic search spaces and the inherent randomness of network training and hyper-parameter optimization algorithms, the reproducibility of NAS methods has become a major concern. NAS benchmarks attempt to alleviate this issue by providing statistics (e.g. validation loss, accuracy and latency) of several thousand different networks on multiple data sets (Ying et al. (2019); Dong & Yang (2020)), providing the ground-truth training results that we use for our evaluation." }, { "heading": "3 METHOD", "text": "" }, { "heading": "3.1 METRIC", "text": "As we correlate the super-network accuracy prediction and the benchmark results, but are only interested in a correct ranking, we need a ranking correlation metric. We choose Kendall's Tau ($\tau$, KT), a commonly used ranking metric (Sciuto et al. (2019); Chu et al. (2019b)) that counts how often all pairs of observations $(x_i, y_i)$ and $(x_j, y_j)$
1. are concordant, agreeing on a sorting order ($x_i < x_j$ and $y_i < y_j$; or $x_i > x_j$ and $y_i > y_j$)
2. are discordant, disagreeing on a sorting order ($x_i < x_j$ and $y_i > y_j$; or $x_i > x_j$ and $y_i < y_j$)
3. are neither
and is then calculated as their difference, normalized by the number of possible different pairs:
$$\tau = \frac{N_{\mathrm{concordant}} - N_{\mathrm{discordant}}}{\binom{n}{2}}$$
$\tau$ ranges from -1 in perfect disagreement to +1 in perfect agreement, and is around zero for independent X and Y.
A small selection of experiments that use additional metrics can be found in Appendix D." }, { "heading": "3.2 NAS-BENCH-201", "text": "NAS-Bench-201 (Dong & Yang (2020)) is a tabular benchmark, which contains training and evaluation statistics of 15625 different architectures on the common vision data sets CIFAR10, CIFAR100 (Krizhevsky et al. (2009)) and a reduced variant of ImageNet (Deng et al. (2009)). 
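For reference, the following is a direct implementation sketch of the Kendall's Tau definition from Section 3.1 above; `scipy.stats.kendalltau` computes the same quantity, up to tie handling.

```python
# Direct implementation of the Kendall's Tau definition from Section 3.1.
from itertools import combinations

def kendall_tau(xs, ys):
    assert len(xs) == len(ys)
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(xs, ys), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1          # pair agrees on the sorting order
        elif s < 0:
            discordant += 1          # pair disagrees on the sorting order
        # s == 0: tied pair, counted as neither
    n = len(xs)
    return (concordant - discordant) / (n * (n - 1) / 2)

bench_acc = [0.93, 0.91, 0.90, 0.88, 0.85]       # ground-truth accuracies
supernet_acc = [0.61, 0.64, 0.59, 0.55, 0.50]    # super-network predictions
print(kendall_tau(bench_acc, supernet_acc))      # 0.8: mostly consistent
```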
The models differ in the design of the cell, a building block that is stacked several times to create a network. Within the cell, as visualized in Figure 1, at six specific positions (orange edges), one of five operations (Zero, Skip, 1×1 Convolution, 3×3 Convolution, 3×3 Average Pooling) is chosen ($5^6 = 15625$). The inputs of each node, such as the cell output (rightmost node), are averaged.
[Figure 1: cell structure and macro architecture: a 3×3 convolution stem, three stages of N cells each, ResNet reduction blocks between the stages, and a head with global average pooling and softmax.]
As we are only interested in the final accuracy of each architecture, we average the benchmark test results over all seeds and the last three epochs of training. As the models' rankings are quite consistent across all data sets (Dong & Yang (2020)), we focus on the CIFAR-10-Valid accuracy. Further results are provided in the supplementary material, see Appendix B.
Since discrepancies of model rankings for the top performing architectures became apparent (Dong & Yang (2020)), we measure the accuracy of the trained super-networks according to the top-N (10, 25, 50, 150, 250, 500) benchmark architectures, as well as up to 1000 randomly sampled ones. If a reduced search space (due to masking operations, $3^6 = 729$) contains fewer than 1000 different topologies, it is fully evaluated." }, { "heading": "3.3 TRAINING", "text": "In our experiments we train various NAS-Bench-201 networks. Small variants have 2 cells per stage (a total of 8 cells, with 3 stages and 2 fixed cells for spatial reduction) and 32 channels in the first cell, which is roughly similar to common topology sharing methods. Medium-sized networks have 4 cells per stage and start with 64 channels.
All models were subject to the same training schedule. We used CIFAR10 as training set (Krizhevsky et al. (2009)), of which we withheld 5000 images for validation. The batch size is 256, we used SGD with momentum of 0.9 and learning rate of 0.025, which was cosine annealed to 1e-5 over 250 epochs. All results are averaged over five independent runs with different seeds. Further details are listed in Appendix A." }, { "heading": "3.4 EXPERIMENT DESIGN", "text": "All of the following experiments are structured the same way: the top-N network architectures (ordered by top-1 accuracy, measured in NAS-Bench-201) are selected, and an over-complete super-network predicts their respective accuracy values, as seen in Figure 2. If an operation is not available to the super-network, the top-N networks are also taken from the bench results without that operation.
Variations to the search space and the super-network (structure or training process) affect the ranking correlation $\tau$ between the bench results and the super-network predictions. In the case of Figure 2, removing the Zero operation from the search space improves $\tau$.
To make the figures more compact, the exact benchmark and prediction values are ignored in the further figures; only the average prediction accuracy and $\tau$ depending on N will be shown (see Appendix B for further detailed figures), as seen in e.g. Figure 4. We also add the additional metric $\tau_a$, which describes the ranking correlation of the average prediction accuracy depending on N. More formally, $\tau_a$ is computed as described in Section 3.1 on the series of measurements $[(10, A_{10}), (25, A_{25}), (50, A_{50}), \ldots]$, where $A_N$ is the accuracy of super-network $M$ with topology $T_i$ and weights $\theta_s$ on the validation data $D_{valid}$, averaged over the top-N topologies and multiple seeds:
$$A_N = \sum_{s \in \mathrm{seeds}} \frac{1}{|\mathrm{seeds}|} \sum_{i=1}^{N} \frac{1}{N} \mathrm{Acc}(M, \theta_s, T_i, D_{valid})$$
As we increase N (10, 25, 50, ...), 
$A_N$ should monotonically decrease (e.g. 0.7, 0.65, 0.6, ...), so that $\tau_a = -1$ is the case where the super-network estimates match the bench results best." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 SEARCH SPACE SUBSETS", "text": "The search space itself plays a significant role for NAS algorithms, not only due to its size or the availability of good models. By preventing specific operations from being used (masking) during training, validation and in the benchmark results, we can compare different subsets of the search space.
A visual example of the importance can be found in Figure 2, where for an increasing number of top-N benchmark networks (columns), the ranking correlation to the super-network predictions is steadily improving. While there is a large number of networks wrongly predicted as useless (10% accuracy, right column) in the top row, masking the Zero operation (bottom row) significantly reduces this portion and thus improves the ranking correlation $\tau$ (KT).
To get a deeper understanding of the search space subsets, we take a closer look at the NAS-Bench-201 rankings, sorted by their top-1 accuracy. Specifically, in Figure 3 we count how many of the NAS-Bench top-N networks use each available operation and how often it is used, across all benchmark results and the five largest subsets (each operation is masked once).
The 3×3 Convolution is arguably the most important operation. Unless it is masked, every single top-500 Bench network makes use of it. On average, it even makes up roughly half of the operation choices, in every other subset. This is hardly surprising, since it adds significantly more capacity to the network than the 1×1 Convolution and especially Zero, Skip, or Pool. The order of operation importance, as implied by their usage, continues with Skip, 1×1 Convolution, Zero, and finally Average Pooling. The wide usage of Skip operations was to be expected, as they are known to make deep networks more easily trainable (He et al. (2016)); however, they are not present in every network. Perhaps the most surprising is the low importance of Average Pooling, even lower than Zero. It appears that all the benefits of Pool are already covered by the 3×3 Convolution, so that using the unnecessary operation now decreases the network accuracy.
Two subsets behave notably differently than the full search space. Firstly, in the absence of skip connections (top right), it appears that Average Pooling is used as a substitute. And secondly, in the absence of the 3×3 Convolution (bottom center), Average Poolings and especially 1×1 Convolutions have to make up for the missing capacity and spatial operations.
We now train single-path super-networks in several search space subsets and visualize the results in Figure 4. Ideally, the super-network validation accuracy is highest for the top Bench networks, enabling NAS methods to reliably find them, and the ranking correlation $\tau$ within the top-N bench networks is always significantly greater than zero, thus increasing the expected quality of the selected architecture. Neither is the common case. The baseline for small networks (top left, red) has the same averaged prediction accuracy for the top 10 as for the top 500 networks, resulting in $\tau_a \approx 0$, where the predicted accuracy and the bench accuracy have no statistical correlation. However, in some search space subsets, the single-path method works significantly better. 
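To make the experiment design concrete, the following is a minimal sketch of the top-N evaluation and the $\tau_a$ computation from Section 3.4. The callables `bench_top` and `supernet_accuracy` are hypothetical stand-ins for the NAS-Bench-201 lookup and the trained super-network evaluation, respectively.

```python
# Sketch of the top-N evaluation: average the super-network's predicted
# accuracy over the top-N benchmark architectures and several seeds, then
# compute tau_a over the series [(10, A_10), (25, A_25), ...].
from scipy.stats import kendalltau

def a_n(n, seeds, bench_top, supernet_accuracy):
    topologies = bench_top(n)   # top-n architectures by benchmark accuracy
    total = sum(supernet_accuracy(seed, t)
                for seed in seeds for t in topologies)
    return total / (len(seeds) * n)

def tau_a(seeds, bench_top, supernet_accuracy,
          sizes=(10, 25, 50, 150, 250, 500)):
    accs = [a_n(n, seeds, bench_top, supernet_accuracy) for n in sizes]
    tau, _ = kendalltau(sizes, accs)
    return tau   # -1 when A_N strictly decreases with N, the desired behavior
```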
By masking out each operation individually (Figure 4, left column), we find the most harmful operations to be Zero (green, top left) and Pool (purple, bottom left): masking either yields $\tau_a = -1$. These are also the least important operations according to Figure 3. Masking the Convolutions, thus increasing the relative amount of unparameterized operations, is harmful.
Masking Skip (blue, left) is the most harmful to $\tau_a$ ($=1$). As seen in Figure 4, the top-N networks have a worse average predicted accuracy than the top-M (for N<M) networks, and sometimes even fall below the random sample, which is the worst possible behavior for guiding a search. Interestingly, $\tau$ may still improve within the predictions for the top-N architectures.
We further mask a second operation in addition to Zero (center column) and Pool (right column) in the remaining columns of Figure 4. On small networks, the masking combinations of Zero+Pool and arguably Zero+Skip perform even better, while masking Pool in almost any combination is harmful.
It is quite obvious that medium-sized super-networks require additional care. The super-networks in several search space subsets fail to generalize at all, even though they learn the training set. In the other spaces, they still behave differently. This may be beneficial, such as in the baseline (left, red), but is more often harmful. Even worse, the averaged predicted accuracy of top-N networks in several subsets is lower than that of a random subset of networks, despite the often improved $\tau$ values. Finally, as seen in Figure 4, the standard deviation over the super-networks that do generalize is notably greater than for their small-sized counterparts." }, { "heading": "4.2 LINEAR TRANSFORMERS", "text": "Many common search spaces for single-path methods contain exclusively Convolution operations (or blocks of such). Adding a Skip operation is useful in theory, enabling the discovery of smaller networks, but was also found to impact the stability of the super-network training. After all, in a sequential super-network, the operations at any layer may directly receive the output of any previous layer due to the variable effective depth. However, replacing the Skip operation with a linear 1×1 Convolution (Linear Transformer, LT) during the search phase was found to stabilize the training (Chu et al. (2019a)). All Linear Transformers are removed after the search, resulting in a standalone network with the same capacity.
Figure 5 visualizes the results of super-networks that have Linear Transformers added to their Skip or Skip+Pool operations. We also mask Zero (center) and Zero+Pool (right) to observe the super-network in search spaces with fewer operations that are neither Convolution nor Skip. It is noteworthy that in the absence of the Pool operation (right), both variations of the standard super-network are in fact equal. This is also apparent in the plot, although the randomness of non-deterministic training can still be seen.
We find it an interesting observation that, unless the search space contains exclusively Skip or Convolutions (the context in which the Transformers have been proposed), the Transformers are always harmful to the ranking correlation $\tau_a$. The super-networks seem to overestimate the benefit of Skip and Pool operations, which, again, often causes the best networks to be estimated as below average. However, in a fitting search space, even the medium-sized super-networks generalize very successfully, have a very low standard deviation and reliably improve $\tau_a$.
Finally, are Linear Transformers for a Pool operation useful? 
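As context for this question, the sketch below shows how Linear Transformers could be wired into a NAS-Bench-201-style candidate list during the search phase. The operations are simplified (no ReLU-Conv-BN blocks as in the actual benchmark); everything here is illustrative only.

```python
# Sketch of Linear Transformers: during search, Skip (and optionally Pool)
# gains a linear 1x1 convolution; after the search, the 1x1 convolutions are
# removed again, restoring the plain operations for the stand-alone network.
import torch.nn as nn

class Zero(nn.Module):
    def forward(self, x):
        return x * 0.0

def candidate_ops(channels, lt_skip=True, lt_pool=False):
    skip = nn.Identity()
    pool = nn.AvgPool2d(3, stride=1, padding=1)
    if lt_skip:   # Skip -> linear 1x1 convolution (Linear Transformer)
        skip = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
    if lt_pool:   # optionally append a linear 1x1 convolution to Pool as well
        pool = nn.Sequential(pool, nn.Conv2d(channels, channels, 1, bias=False))
    return nn.ModuleList([
        Zero(),                                                        # Zero
        skip,                                                          # Skip
        nn.Sequential(nn.Conv2d(channels, channels, 1), nn.ReLU()),    # 1x1
        nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                      nn.ReLU()),                                      # 3x3
        pool,                                                          # Pool
    ])
```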
Since the search spaces that suit single-path networks best do not contain any Pool operation, such transformers are generally not necessary. When a Pool operation is present, as in Figure 5 left and center-left, we find no empirical evidence that they are beneficial. Although the additional transformers seem to stabilize training, as seen by the lower standard deviation, they also worsen the $\tau_a$ problem." }, { "heading": "4.3 SHARING, SAMPLING, WARM-UP, AND REGULARIZATION", "text": "Finally, we group four further variations of super-network training in Figure 6 and compare them with the baseline (red).
Topology sharing (green): As sharing cell topologies does not impact the resource costs of single-path training, it is generally not used. However, topologies are shared in the NAS-Bench-201 case, raising the question of whether sharing should already be enforced during the super-network training (our default case), or only for the evaluation.
As seen in Figure 6, disabling the sharing during training for small super-networks (top row) is generally not beneficial over the baseline, as $\tau_a$ is generally worse and $\tau$ almost the same. However, it enables the medium-sized networks in multiple spaces to make any useful predictions at all.
Uniform sampling (blue): Our default baseline strategy of randomly selecting the paths during training is strictly fair, so that every |O| steps, every operation o ∈ O is sampled exactly once; we compare this with the alternative of uniform random sampling.
Interestingly, the absolute validation accuracy value is increased by uniform sampling. However, this is not relevant, as only the correct ranking matters. We find that, on small super-networks, as measured by $\tau_a$, the strictly fair baseline performs equal to or better than the uniform random sampling strategy. Additionally, we see a trend of $\tau$ being slightly in favor of strict fairness, at almost every data point. However, once again, a seemingly inferior method variation enables training the medium-sized super-networks to make above-chance predictions on the validation set. We hypothesize this to be a downside of the strictly fair weight update schedule, in which an update is only performed every |O| steps (the number of operations) over the accumulated gradients, including for the weights of the stem and output layer, which may result in destructively large steps.
Learning rate warm-up (yellow) and gradient clipping (cyan): To see whether simple learning rate tweaking already solves the aforementioned issue, we add a warm-up phase, linearly increasing the learning rate over 5 epochs to the default starting value of 0.025. It does not; in fact, the effects are detrimental in some search space subsets. On the other hand, clipping the gradients to an L2 norm of at most 5, as is common practice in e.g. DARTS (Liu et al. (2018b)), has no notable effect. Further training variations that may solve the issue with fairly little effort are excluding the last layer and stem weights from the |O|-step update schedule (but losing strict fairness) or lowering their learning rate. However, a much closer look seems preferable, to ascertain the root cause and study its implications in greater detail.
Regularization (purple): Finally, the super-network is only minimally regularized by default (only input shifting, horizontal flipping, and normalization of the data), so we add the CIFAR-10 AutoAugment augmentation policies (Cubuk et al. (2018)) and label smoothing of α = 0.1.
Interestingly, this is also detrimental. 
As seen in the top left and center-right plots, $\tau_a$ decreases below the baseline values, as top-ranking architectures are underestimated. The effect is worse with relatively more unparameterized operations available, indicating that the topology estimation is biased in favor of the regularized Convolutions. The effect on medium-sized super-networks cannot be properly measured, as none of these super-networks should be used to rank architectures." }, { "heading": "5 GRAINS OF SALT", "text": "As in any empirical study, some grains of salt remain. First and foremost, the limited sample sizes in our experiments and the benchmark are a typical concern.
We find it disappointing that, aside from limited search spaces, no experiment displayed high values of $\tau$, even though the top-N network groups may be sorted correctly ($\tau_a \approx -1$). This indicates that a number of good networks are always wrongly estimated, quite likely due to some of the available operations, or that the single-path one-shot approach is simply not suitable for the given search space or network architecture. While the accuracy difference between the best and 10th-best Bench networks is only roughly 0.013%, masking Skip at least increases that difference to roughly 0.027%, possibly also making a correct ranking easier. Considering these marginal differences, it is very likely that the Bench baseline is also not perfectly correct.
Next, existing NAS benchmarks may be too small or biased. Other research has discovered surprisingly simple methods that achieve state-of-the-art results (e.g. Mellor et al. (2020); White et al. (2020)), but suffer from an often significantly reduced performance in larger search spaces. This is hardly surprising, considering that the best architectures consist of mostly 3×3 Convolutions (see Figure 3).
And finally, our experiments use single-path one-shot methods, which are commonly employed in search spaces of only Convolution and Skip operations. They are our approach of choice due to their current popularity in the NAS field and the comparably cheap evaluation of many network topologies, which also enables us to study more variations of the baseline method." }, { "heading": "6 CONCLUSIONS", "text": "Some search space subsets are easier to rank. In this specific case, the removal of the Zero and Pool operations keeps the majority of the top-N networks while also improving how well a single-path network can rank them.
Linear Transformers are useful when there are no other operations besides Convolutions or Skip, and enable medium-sized super-networks to be used at all. However, they introduce systematic ranking problems in other search space subsets, limiting their general use. We find no evidence that Pool operations with transformers are beneficial.
Disabling cell topology sharing during the super-network training decreases the ranking correlation $\tau_a$; the network should be trained the same way as it is evaluated.
Strictly fair randomness is generally advantageous, but requires further research to be understood better. Especially several medium-sized super-networks were unable to generalize, in contrast to those trained with uniform sampling, which we believe to be due to the weight updates of the last layer and the stem. Simply adding learning rate warm-up or gradient clipping is insufficient to fix this issue.
Strong regularization during the super-network training was found to be detrimental. 
This is most likely an issue of the regularization only benefitting Convolutions, biasing the topology estimation, and may not be a problem in entirely different search spaces that use fewer to no unparameterized operations.
Whether an increased super-network size is helpful is tricky to evaluate. In a few search spaces, e.g. as seen in Figure 4, the increased network size improved $\tau_a$; however, the generally low validation accuracy (usually < 20%, on 10 classes) and its huge variance make such networks too unreliable. Even worse, the super-networks may fail to generalize at all depending on the search space. In specific cases this can be alleviated with Linear Transformers, and possibly through a better understanding of path sampling. We present additional figures in Appendix D.
Due to the limited space, we have only shown the results for the CIFAR-10-valid Bench accuracy values. However, we provide all Tensorboard (Abadi et al. (2016)) files and the code to parse and generate plots in the supplementary material." }, { "heading": "A TRAINING SETUP", "text": "A.1 ENVIRONMENT
Each super-network was trained and evaluated on a single Nvidia GTX 1080 Ti GPU, using driver version 440.64, CUDA version 10.0.130 and CuDNN version 7605 in our Slurm cluster. The code is run in a Singularity container using Ubuntu 18.04 with Python 3.6.9. We used PyTorch in version 1.5.1 and nas-bench in version 1.3; further details can be found in the provided sysinfo.txt.
A.2 NETWORK
Aside from the deliberate variations in super-network training and the seeds, all of them were trained and evaluated in the same way, as listed in Table 1. Unless a detail is mentioned there, we are confident of not using it (e.g. we use no regularization or gradient clipping by default). The full list of arguments of each training job can be found in the respective log task.txt, see Section B.2." }, { "heading": "B PROVIDED DATA AND CODE", "text": "Please see the supplementary material for the following data and code. Due to the amount of Tensorboard files, a 7zip compression is necessary to be below the allowed 100MB limit.
B.1 BENCH201
Since the original NAS-Bench-201 contains far more information than we need for the evaluation and requires impractically many resources (25+ GB RAM), we have a reduced version that averages results over seeds and contains only the required stats.
The data file (nasbench201 1.1 mini.pt) and the code to use it (code/bench.py) are provided.
B.2 RUN DATA
The relevant logs and Tensorboard files of every run (slurm job) are provided in the run data folder, grouped by experiments. The code/parse runs.py script is used to extract desired metrics from these files and average them across the jobs that used different seeds. Running the script generates a text output (csv format) that is used in plots.py.
B.3 PLOTTING
Running plots.py will use the previously generated csv text, containing the mean, standard deviation, etc., and generate the plots from the paper." }, { "heading": "C FURTHER FIGURES", "text": "Due to space limitations, we could not add further plots to the paper. The remaining ones for the evaluation on CIFAR-10-valid are found below.
If you are interested in evaluation results on the other data sets, please take a look at Appendix B." }, { "heading": "D ADDITIONAL METRICS AND WIDE-CHANNEL SUPER-NETWORKS", "text": "In addition to Kendall's Tau, we now also provide the Spearman Correlation Coefficient (SCC) and the Pearson Correlation Coefficient (PCC) (Li et al. 
(2020a)) for a selection of the experiments, and additional experiments on small but wide super-networks, starting with 96 (instead of 32) channels." } ]
2020
null
SP:fd7c0858a0f642af7bfe4340bbbd8c598a4f5e32
[ "Authors propose a new method for formulating set prediction tasks. They propose to use a noisy energy-based model with langevin mcmc + noisy startup as their model. The can approximate the gradient of the likelihood function by computing the enery of ground truth pairs and energy of synthesized pairs where the target is sampled from the model distribution." ]
Set prediction is about learning to predict a collection of unordered variables with unknown interrelations. Training such models with set losses imposes the structure of a metric space over sets. We focus on stochastic and underdefined cases, where an incorrectly chosen loss function leads to implausible predictions. Example tasks include conditional point-cloud reconstruction and predicting future states of molecules. In this paper, we propose an alternative to training via set losses by viewing learning as conditional density estimation. Our learning framework fits deep energy-based models and approximates the intractable likelihood with gradient-guided sampling. Furthermore, we propose a stochastically augmented prediction algorithm that enables multiple predictions, reflecting the possible variations in the target set. We empirically demonstrate on a variety of datasets the capability to learn multi-modal densities and produce different plausible predictions. Our approach is competitive with previous set prediction models on standard benchmarks. More importantly, it extends the family of addressable tasks beyond those that have unambiguous predictions.
[ { "affiliations": [], "name": "David W. Zhang" }, { "affiliations": [], "name": "Gertjan J. Burghouts" }, { "affiliations": [], "name": "Cees G. M. Snoek" } ]
[ { "authors": [ "David Belanger", "Andrew McCallum" ], "title": "Structured prediction energy networks", "venue": "In Proceedings of the 33rd International Conference on International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "David Belanger", "Bishan Yang", "Andrew McCallum" ], "title": "End-to-end learning for structured prediction energy networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Nicolas Carion", "Francisco Massa", "Gabriel Synnaeve", "Nicolas Usunier", "Alexander Kirillov", "Sergey Zagoruyko" ], "title": "End-to-end object detection with transformers", "venue": "arXiv preprint arXiv:2005.12872,", "year": 2020 }, { "authors": [ "George Cybenko" ], "title": "Approximation by superpositions of a sigmoidal function", "venue": "Mathematics of Control, Signals and Systems,", "year": 1989 }, { "authors": [ "Justin Domke" ], "title": "Generic methods for optimization-based modeling", "venue": "In Artificial Intelligence and Statistics, pp", "year": 2012 }, { "authors": [ "Yilun Du", "Igor Mordatch" ], "title": "Implicit generation and modeling with energy based models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Haoqiang Fan", "Hao Su", "Leonidas J Guibas" ], "title": "A point set generation network for 3d object reconstruction from a single image", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Stuart Geman", "Donald Geman" ], "title": "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 1984 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Will Grathwohl", "Kuan-Chieh Wang", "Joern-Henrik Jacobsen", "David Duvenaud", "Mohammad Norouzi", "Kevin Swersky" ], "title": "Your classifier is secretly an energy based model and you should treat it like one", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Georges Ifrah" ], "title": "The Universal History of Numbers", "venue": "Harvill London,", "year": 2000 }, { "authors": [ "Stelzner Karl", "Kristian Kersting", "Adam R Kosiorek" ], "title": "Generative adversarial set transformers", "venue": "Workshop on Object-Oriented Learning at ICML,", "year": 2020 }, { "authors": [ "Adam R Kosiorek", "Hyunjik Kim", "Danilo J Rezende" ], "title": "Conditional set generation with transformers", "venue": "Workshop on Object-Oriented Learning at ICML,", "year": 2020 }, { "authors": [ "Harold W Kuhn" ], "title": "The Hungarian method for the assignment problem", "venue": "Naval Research Logistics Quarterly,", "year": 1955 }, { "authors": [ "Yann LeCun", "Sumit Chopra", "Raia Hadsell", "Marc’ Aurelio Ranzato", "Fu Jie Huang" ], "title": "A tutorial on energy-based learning", "venue": "Predicting Structured Data,", "year": 2006 }, { "authors": [ "Yann LeCun", "Corinna Cortes", 
"Christopher JC Burges. Mnist handwritten digit database." ], "title": "URL http://yann", "venue": "lecun. com/exdb/mnist, 7:23, 2010.", "year": 2010 }, { "authors": [ "Juho Lee", "Yoonho Lee", "Jungtaek Kim", "Adam Kosiorek", "Seungjin Choi", "Yee Whye Teh" ], "title": "Set transformer: A framework for attention-based permutation-invariant neural networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Francesco Locatello", "Dirk Weissenborn", "Thomas Unterthiner", "Aravindh Mahendran", "Georg Heigold", "Jakob Uszkoreit", "Alexey Dosovitskiy", "Thomas Kipf" ], "title": "Object-centric learning with slot attention", "venue": "arXiv preprint arXiv:2006.15055,", "year": 2020 }, { "authors": [ "Igor Mordatch" ], "title": "Concept learning with energy-based models", "venue": "arXiv preprint arXiv:1811.02486,", "year": 2018 }, { "authors": [ "Radford M Neal" ], "title": "Probabilistic inference using Markov chain Monte Carlo methods", "venue": "Department of Computer Science,", "year": 1993 }, { "authors": [ "Radford M Neal" ], "title": "MCMC using Hamiltonian dynamics. Handbook of markov chain monte carlo", "venue": null, "year": 2011 }, { "authors": [ "Jiquan Ngiam", "Zhenghao Chen", "Pang W Koh", "Andrew Y Ng" ], "title": "Learning deep energy models", "venue": "In Proceedings of the 28th International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "Erik Nijkamp", "Mitch Hill", "Tian Han", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "On the anatomy of MCMC-based maximum likelihood learning of energy-based models", "venue": null, "year": 2020 }, { "authors": [ "Frank Noé", "Alexandre Tkatchenko", "Klaus-Robert Müller", "Cecilia Clementi" ], "title": "Machine learning for molecular simulation", "venue": "Annual Review of Physical Chemistry,", "year": 2020 }, { "authors": [ "Charles R Qi", "Hao Su", "Kaichun Mo", "Leonidas J Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Hamid Rezatofighi", "Roman Kaskman", "Farbod T Motlagh", "Qinfeng Shi", "Anton Milan", "Daniel Cremers", "Laura Leal-Taixé", "Ian Reid" ], "title": "Learn to predict sets using feed-forward neural networks", "venue": null, "year": 2001 }, { "authors": [ "S Hamid Rezatofighi", "Vijay Kumar BG", "Anton Milan", "Ehsan Abbasnejad", "Anthony Dick", "Ian Reid" ], "title": "Deepsetnet: Predicting sets with deep neural networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Adam Santoro", "David Raposo", "David G Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter Battaglia", "Timothy Lillicrap" ], "title": "A simple neural network module for relational reasoning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Samy Bengio", "Manjunath 
Kudlur" ], "title": "Order matters: Sequence to sequence for sets", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Xinlong Wang", "Tete Xiao", "Yuning Jiang", "Shuai Shao", "Jian Sun", "Chunhua Shen" ], "title": "Repulsion loss: Detecting pedestrians in a crowd", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jianwen Xie", "Yang Lu", "Song-Chun Zhu", "Yingnian Wu" ], "title": "A theory of generative convnet", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Jianwen Xie", "Yang Lu", "Ruiqi Gao", "Song-Chun Zhu", "Ying Nian Wu" ], "title": "Cooperative training of descriptor and generator networks", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "Shuangfei Zhai", "Yu Cheng", "Weining Lu", "Zhongfei Zhang" ], "title": "Deep structured energy based models for anomaly detection", "venue": "In Proceedings of the 33rd International Conference on International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Yan Zhang", "Jonathon Hare", "Adam Prugel-Bennett" ], "title": "Deep set prediction networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yan Zhang", "Jonathon Hare", "Adam Prügel-Bennett" ], "title": "Deep set prediction networks", "venue": "arXiv preprint arXiv:1906.06565v6,", "year": 2020 }, { "authors": [ "Yan Zhang", "Jonathon Hare", "Adam Prügel-Bennett" ], "title": "FSPool: Learning set representations with featurewise sort pooling", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Hair", "Male", "No Beard", "Wearing Hat", "Wearing Necktie" ], "title": "Notably, the training data does not explicitly supervise for attributes and only exposes a single valid subset detection for each training instance. Experimental setup Each image in the input set is represented by a fixed 128-dimensional feature vector, extracted from the penultimate layer of a ResNet-34 (He et al., 2016", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "This paper strives for set prediction. Making multiple predictions with intricate interactions is essential in a variety of applications. Examples include predicting the set of attributes given an image (Rezatofighi et al., 2020), detecting all pedestrians in video footage (Wang et al., 2018) or predicting the future state for a group of molecules (Noé et al., 2020). Because of their unordered nature, sets constitute a challenge for both the choice of machine learning model and training objective. Models that violate permutation invariance suffer from lower performance, due to the additional difficulty of needing to learn it. Similarly, loss functions should be indifferent to permutations in both the ground-truth and predictions. Additional ambiguity in the target set exacerbates the problem of defining a suitable set loss. We propose Deep Energy-based Set Prediction (DESP) to address the permutation symmetries in both the model and loss function, with a focus on situations where multiple plausible predictions exist. DESP respects the permutation symmetry, by training a permutation invariant energy-based model with a likelihood-based objective.\nIn the literature, assignment-based set distances are applied as loss functions (Zhang et al., 2019; Kosiorek et al., 2020). Examples include the Chamfer loss (Fan et al., 2017) and the Hungarian loss (Kuhn, 1955). Both compare individual elements in the predicted set to their assigned groundtruth counterpart and vice-versa. While they guarantee permutation invariance, they also introduce a structure over sets, in the form of a metric space. Choosing the wrong set distance can result in implausible predictions, due to interpolations in the set space for underdefined problems. For example, Fan et al. (2017) observe different set distances to lead to trade-offs between fine-grained shape reconstruction and compactness, for 3d reconstruction from RGB images. As an additional shortcoming, optimizing for a set loss during training poses a limitation on the family of learnable data distributions. More specifically, conditional multi-modal distributions over sets cannot be learned by minimizing an assignment-based set loss during training. To overcome the challenges of imposed structure and multi-modal distributions, we propose to view set prediction as a conditional density\nestimation problem, where P (Y |x) denotes the distribution for the target set Y given observed features x.\nIn this work we focus on distributions taking the form of deep energy-based models (Ngiam et al., 2011; Zhai et al., 2016; Belanger & McCallum, 2016):\nPθ(Y |x) = 1\nZ(x;θ) exp (−Eθ(x,Y )), (1)\nwith Z as the partition function and Eθ the energy function with parameters θ. The expressiveness of neural networks (Cybenko, 1989) allows for learning multi-modal densities Pθ(Y |x). This sets the approach apart from forward-processing models, that either require conditional independence assumptions (Rezatofighi et al., 2017), or an order on the predictions, when applying the chain rule (Vinyals et al., 2016). Energy-based prediction is regarded as a non-linear combinatorial optimization problem (LeCun et al., 2006):\nŶ = argmin Y Eθ(x,Y ), (2)\nwhich is typically approximated by gradient descent for deep energy-based models (Belanger & McCallum, 2016; Belanger et al., 2017). We replace the deterministic gradient descent with a stochastically augmented prediction algorithm, to account for multiple plausible predictions. 
We show that our stochastic version outperforms standard gradient descent for set prediction tasks.
Our main contribution is DESP, a training and prediction framework for set prediction, that removes the limitations imposed by assignment-based set losses. Sampling plays a key role in DESP. For training, sampling approximates the intractable model gradients, while during prediction, sampling introduces stochasticity. We show the generality of our framework by adapting recently proposed permutation invariant neural networks as set prediction deep energy-based models. We demonstrate that our approach (i) learns multi-modal distributions over sets, (ii) makes multiple plausible predictions, (iii) generalizes over different deep energy-based model architectures and (iv) is competitive even in non-stochastic settings, without requiring problem-specific loss-engineering." }, { "heading": "2 DEEP ENERGY BASED SET PREDICTION", "text": "" }, { "heading": "2.1 TRAINING", "text": "Our goal is to train a deep energy-based model for set prediction, such that all plausible sets are captured by the model. Regression models with a target in the $\mathbb{R}^d$ space, that are trained with a root mean-square error (RMSE) loss, implicitly assume a Gaussian distribution over the target. Analogous to the RMSE, assignment-based set losses assume a uni-modal distribution over the set space. Training with the negative log-likelihood (NLL) circumvents the issues of assignment-based set losses. Notably, NLL does not necessitate explicit element-wise comparisons, but treats the set holistically. We reformulate the NLL for the training data distribution $P_D$ as:
$$\mathbb{E}_{(x,Y)\sim P_D}[-\log(P_\theta(Y|x))] = \mathbb{E}_{(x,Y)\sim P_D}[E_\theta(x,Y)] + \mathbb{E}_{x\sim P_D}[\log(Z(x;\theta))]. \quad (3)$$
The gradient of the left summand is approximated by sampling a mini-batch of n tuples $\{(x_i, Y_i^+)\}_{i=0..n}$ from the training set. The gradient of the right summand is approximated by solely sampling input features $\{x_i\}_{i=0..m}$. Directly evaluating $\frac{\partial}{\partial\theta}\log(Z(x;\theta))$ is intractable; instead we approximate the gradient by sampling $\{Y_j^-\}_{j=0..k}$ from the model distribution:
$$\frac{\partial}{\partial\theta}\log(Z(x;\theta)) = -\mathbb{E}_{Y\sim P_\theta}\left[\frac{\partial}{\partial\theta}E_\theta(x,Y)\right] \approx -\sum_{j=0}^{k}\frac{\partial}{\partial\theta}E_\theta(x,Y_j^-). \quad (4)$$
The resulting approximate NLL objective is equivalent to contrasting the energy value for real and synthesized targets, with the former being minimized and the latter maximized. The objective is reminiscent of the discriminator's loss in generative adversarial networks (Goodfellow et al., 2014), where a real sample is contrasted to a sample synthesized by the generator network. In practice, setting k=1 suffices.
The Langevin MCMC algorithm allows for efficient sampling from high-dimensional spaces (Geman & Geman, 1984; Neal et al., 2011). Access to the derivative of the unnormalized density function provides sufficient information for sampling. We apply the following modified transition function and keep only the last sample:
$$Y^{(t+1)} = Y^{(t)} - \frac{\partial E_\theta(x, Y^{(t)})}{\partial Y} + \epsilon\, U^{(t)}, \quad (5)$$
with $U^{(t)} \sim \mathcal{N}(0, I)$, $\epsilon > 0$, $Y^{(0)} \sim \mathcal{N}(0, I)$ a sample from a fixed initial distribution and $Y^{(T)}$ the final sample. The proper formulation of the Langevin MCMC algorithm multiplies the gradient in Equation 5 by a step-size factor and further requires a Metropolis-Hastings acceptance step (Neal, 1993). We forgo both of these components in favor of increased efficiency, but at the cost of forfeiting theoretical guarantees for desirable properties such as not being trapped in a subset of the sampling space, i.e., ergodicity. 
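The following is a minimal sketch of one training step as described above, combining the modified Langevin transitions of Equation 5 with the contrastive gradient of Equations 3 and 4 (with k=1). The noise scale `eps`, the step count and the data-noise level are illustrative hyper-parameters, not values prescribed by the text; `energy` is assumed to return a scalar.

```python
# Sketch of DESP training: sample Y- with the (biased) Langevin transitions,
# then contrast the energies of the real and the synthesized set.
import torch

def langevin_sample(energy, x, set_size, dim, steps=30, eps=0.05):
    y = torch.randn(set_size, dim)                  # Y^(0) ~ N(0, I)
    for _ in range(steps):
        y = y.detach().requires_grad_(True)
        (grad,) = torch.autograd.grad(energy(x, y), y)
        y = y - grad + eps * torch.randn_like(y)    # Equation 5, no MH step
    return y.detach()                               # keep only the last sample

def training_step(energy, optimizer, x, y_real, data_noise=0.01):
    y_real = y_real + data_noise * torch.randn_like(y_real)  # smooth P_D
    y_fake = langevin_sample(energy, x, y_real.shape[0], y_real.shape[1])
    loss = energy(x, y_real) - energy(x, y_fake)    # approximate NLL gradient
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```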
Discarding all but the last sample $Y^{(T)}$ of each chain constitutes an atypical usage that undermines the usual importance of ergodicity. Notably, this weakens the hard-to-meet requirement for the sampler to mix between multiple modes in a single MCMC chain, making it sufficient for independently sampled chains to find different local modes. Although the fixed cutoff at T and the missing Metropolis-Hastings update result in a biased sampler, previous works have demonstrated the feasibility of training generative models on images with similar Langevin MCMC methods (Xie et al., 2016; 2018; Nijkamp et al., 2020; Du & Mordatch, 2019; Grathwohl et al., 2019).
The model density from Equation 1 approaches the data distribution $P_D$ while training, leading to an increased ability to distinguish synthesized sets $Y^-$ from real sets $Y^+$. This in turn pushes the samples $Y^-$ closer to the ground-truth, making it harder for the model to discriminate between real and fake. In practice, it is necessary to smooth out the data distribution. Otherwise, the deep energy-based model would be required to fit a distribution with zero density everywhere except the training examples. Any gradient-based sampling and prediction algorithm would be rendered useless. Additional Gaussian-distributed noise on the data samples $Y^+$ alleviates this issue and facilitates stable training." }, { "heading": "2.2 PREDICTION", "text": "Prediction from an energy-based viewpoint corresponds to finding the set with the lowest energy value. One approach addresses this intractable optimization problem by approximating a local minimum via gradient descent (Belanger & McCallum, 2016; Belanger et al., 2017). Learning a multi-modal distribution is clearly not sufficient, as the deterministic gradient descent algorithm would not be able to cover all possible sets. This would make the learning process pointless, except for a single local minimum in the energy function. We propose to augment the gradient descent optimizer with additional Gaussian noise during the first S steps:
$$Y^{(t+1)} = Y^{(t)} - \frac{\partial}{\partial Y} E_\theta(x,Y^{(t)}) + \epsilon\, U^{(t)}, \quad \text{for } t \le S, \quad (6a)$$
$$Y^{(t+1)} = Y^{(t)} - \frac{\partial}{\partial Y} E_\theta(x,Y^{(t)}), \quad \text{for } S < t \le T. \quad (6b)$$
For simplicity we choose the same maximum number of steps T, both for training and prediction. One interpretation of the prediction procedure is: 1. draw a Langevin MCMC sample $Y^{(S)}$ based on the energy $E_\theta$, and 2. refine the sample via gradient descent, such that $Y^{(T)}$ is a local minimum of $E_\theta$ that is close to $Y^{(S)}$. Note that the partial derivative $\frac{\partial}{\partial Y} E_\theta(x,Y^{(t)})$ is not stochastic and can be computed independently of a mini-batch. Thus the sole source of randomness lies with the addition of U, resulting in a prediction procedure that allows for different predictions given the same observation.
From the set prediction point of view, the noise term addresses an optimization problem that is specific to set functions. Commonly used set neural networks (Zaheer et al., 2017) require permutation invariant pooling operators. Examples include sum or mean pooling. Both of these result in identical partial gradients for identical elements:
$$\frac{\partial}{\partial y_i} E_\theta(x,Y) = \frac{\partial}{\partial y_j} E_\theta(x,Y), \quad (7)$$
where $y_i$ and $y_j$ are two different elements in Y with identical value, i.e., $y_i = y_j$. Although we consider set, not multi-set, prediction, in practice the set Y needs to be stored as a tensor of numbers with limited precision. For the purpose of successfully sampling Y from $E_\theta$, we restrict the parameters $\theta$ to energy functions with numerically stable derivatives. 
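To summarize the prediction procedure, here is a sketch of the stochastically augmented optimizer of Equations 6a/6b; S, T and the noise scale are illustrative hyper-parameters.

```python
# Sketch of stochastic prediction: noisy gradient steps for t <= S (Eq. 6a),
# plain gradient descent afterwards (Eq. 6b). Independent runs can land in
# different local minima, giving multiple plausible predicted sets.
import torch

def predict_stochastic(energy, x, set_size, dim, S=20, T=30, eps=0.05):
    y = torch.randn(set_size, dim)
    for t in range(T):
        y = y.detach().requires_grad_(True)
        (grad,) = torch.autograd.grad(energy(x, y), y)
        noise = eps * torch.randn_like(y) if t < S else 0.0
        y = y - grad + noise
    return y.detach()

# Multiple predictions for the same observation x:
# predictions = [predict_stochastic(energy, x, 8, 2) for _ in range(5)]
```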
Specifically, the difference in the gradients of two elements in Y is limited by the difference between the same two elements. This poses the additional difficulty for the optimizer of separating different elements that are too close, next to the original task of moving the elements to the correct positions. It is reasonable to assume several elements in close vicinity for problems where the set size is much larger than the number of features. The independently sampled noise term helps disambiguate such proximal elements and speeds up the optimization procedure.
A naive alternative would be to solely initialize the set $Y^{(0)}$ with the constraint of a minimal distance between elements. While this approach addresses the problem at step t=0, it is ignored in the subsequent steps t > 0, where two elements may have collapsed. Our proposed prediction procedure adds independently sampled noise at several steps, thus removing some of the responsibility for separating elements from the gradient-based optimizer." }, { "heading": "3 SET ENERGY", "text": "The energy-based viewpoint constitutes an immediate advantage for incorporating symmetry into the neural network architecture. Neural networks that are permutation invariant with respect to the input can be straightforwardly adapted for our purpose. Permutation invariant energy functions have the advantage of being able to define densities directly on sets. Set densities do not require normalization over all possible permutations, as two sequences that are equivalent up to permutation are also equivalent in the sample space. In this section we formulate two different energy functions based on recently proposed permutation invariant neural network architectures that are both compatible with our training and prediction framework.
Deep Sets. DeepSets (Zaheer et al., 2017) first applies an MLP on each element, followed by a permutation invariant aggregator and a second MLP. This model is shown to be a universal approximator of continuous permutation invariant functions (Zaheer et al., 2017; Qi et al., 2017). For the set prediction setting, we adopt the following energy function:
$$E_{DS}(x,Y) = f\Big(\bigoplus_{y\in Y} g([h(x); y])\Big), \quad (8)$$
with f, g denoting MLPs, h a neural network acting on the input, $[\cdot\,;\cdot]$ the concatenation operator and $\oplus$ a permutation invariant aggregator. We treat both the observed features and the prediction as input to the neural network, resulting in an energy function that is permutation invariant with respect to the target Y.
Set Encoder. An alternative set neural network is studied by Zhang et al. (2019). They propose to separately map the observed features and the prediction into a shared latent space. In their case, the distance in the latent space is minimized as a part of the loss function during training. We re-interpret this loss as the energy function:
$$E_{SE}(x,Y) = L_\delta(g(Y) - h(x)), \quad (9)$$
with g denoting a permutation invariant neural network, h a neural network acting on the observed features and $L_\delta$ the Huber loss. A minimal energy is reached when both x and Y map to the same point in the latent space. This energy function stands in contrast to $E_{DS}$, where the observed features directly interact with individual elements in the predicted set." }, { "heading": "4 RELATED WORK", "text": "Our framework is closely related to the works of Belanger & McCallum (2016) and Mordatch (2018), which also take on an energy-based viewpoint. 
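For concreteness, the sketch below is a minimal instance of the $E_{DS}$ energy from Equation 8, with sum pooling in place of a learned aggregator and illustrative layer dimensions.

```python
# Minimal instance of E_DS (Equation 8): concatenate the input features h(x)
# to every element, apply an element-wise MLP g, pool with a permutation
# invariant aggregator (sum), and map to a scalar energy with f.
import torch
import torch.nn as nn

class DeepSetsEnergy(nn.Module):
    def __init__(self, x_dim, y_dim, hidden=128):
        super().__init__()
        self.h = nn.Linear(x_dim, hidden)                   # input encoder
        self.g = nn.Sequential(nn.Linear(hidden + y_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden))   # per-element MLP
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))        # set -> scalar

    def forward(self, x, Y):
        # x: (x_dim,), Y: (set_size, y_dim)
        hx = self.h(x).expand(Y.shape[0], -1)               # broadcast h(x)
        per_element = self.g(torch.cat([hx, Y], dim=-1))
        pooled = per_element.sum(dim=0)                     # invariant pooling
        return self.f(pooled).squeeze()                     # scalar energy
```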
However, Belanger & McCallum (2016) and Mordatch (2018) obtain predictions by minimizing the energy via (deterministic) gradient descent and require memory-intensive backpropagation through the unrolled inner optimization during training (Domke, 2012; Belanger et al., 2017). Similarly, the deep set prediction network (DSPN) (Zhang et al., 2019) applies the bi-level optimization scheme (Domke, 2012) for learning. Instead of an energy function, DSPN minimizes the distance between the input and the predicted set in a shared latent space. Our energy-based viewpoint does not require a latent vector space bottleneck between input and prediction, resulting in a broader choice of models. In addition, our prediction algorithm handles multi-modal distributions through additional stochasticity.
Most prior set prediction approaches rely on fixed (Fan et al., 2017) or learned orders (Vinyals et al., 2016). They run into the problem, as identified by Zhang et al. (2020b; 2019), that small changes in the set space may require large changes in the neural network outputs, leading to lower performance. Other approaches require the assumption of independent and identically distributed set elements (Rezatofighi et al., 2017; 2020). Some very recent works (Kosiorek et al., 2020; Carion et al., 2020; Locatello et al., 2020; Karl et al., 2020) respect the permutation symmetry in the model, by applying the Transformer (Vaswani et al., 2017; Lee et al., 2019) without position embedding and a non-autoregressive decoder. Nonetheless, the work of Karl et al. (2020) is limited to set generation. Both Carion et al. (2020) and Locatello et al. (2020) rely on the Hungarian loss as a permutation invariant objective function. Kosiorek et al. (2020) deploy the Chamfer loss augmented with an additional set cardinality objective. By casting learning as conditional density estimation, we forgo the necessity of task-specific loss-engineering." }, { "heading": "5 EXPERIMENTS", "text": "The experiments answer two overarching questions: 1. Can our density estimation perspective improve over discriminative training via assignment-based losses? and 2. Can our stochastic prediction algorithm yield multiple plausible sets for multi-modal densities? The experiments also demonstrate the applicability of our approach to a variety of energy functions and a range of set prediction tasks: point-cloud generation, set auto-encoding, object detection and anomaly detection. Code is available at: https://github.com/davzha/DESP.
We investigate the effectiveness of our approach by comparing against Chamfer and Hungarian loss-based training, with predictions formed by deterministic gradient descent. The Chamfer loss assigns every element in the prediction $\hat{Y} = \{\hat{y}_i\}_{i=1..k}$ to the closest element in the ground-truth Y and vice-versa:
$$L_C(\hat{Y}, Y) = \sum_i \min_j d(\hat{y}_i, y_j) + \sum_j \min_i d(\hat{y}_i, y_j), \quad (10)$$
where d is a vector distance, instantiated as the Huber loss in the subsequent experiments. The Hungarian loss is computed by solving the linear assignment problem between the two sets:
$$L_H(\hat{Y}, Y) = \min_{\pi \in S_k} \sum_i d(\hat{y}_i, y_{\pi(i)}), \quad (11)$$
where $S_k$ is the set of all permutations on sets of size k. We refer to Appendix A for further details on assignment-based set losses; a minimal implementation sketch of both losses is shown below." }, { "heading": "5.1 COMPUTATIONAL COMPLEXITY ANALYSIS", "text": "DESP offers non-trivial computation cost trade-offs when we compare it to a baseline trained via assignment-based set losses. We identify three main factors that are crucial and specific to our analysis: 1. the number of transition steps T; 2. 
Complexity of the set neural network and 3. Complexity of the loss function. Similar to baselines that form predictions with an inner optimization (Zhang et al., 2019; Belanger & McCallum, 2016), DESP’s training and inference time scale linearly with T . Though, in practice DESP requires a larger T to achieve reliable sampling quality, potentially resulting in longer training and inference times. The complexity of the set neural network is crucial for determining the computation cost on large set sizes c. By choosing a set neural network with time and memory complexity in O(c), such as DeepSets (Zaheer et al., 2017), DESP can accommodate large set sizes. In comparison to the baselines, DESP avoids the additional computational burden imposed by an assignment-based set loss, which is in O(c2) for the Chamfer loss and in O(c3) for the Hungarian loss." }, { "heading": "5.2 GENERATION OF POLYGONS AND DIGITS", "text": "Considering set prediction as density estimation can be critical. To illustrate this, we point out fundamental limitations of set loss based training that become apparent when multiple plausible sets exist. In terms of probability densities, each plausible set translates to a local maxima. To study different types of randomness in isolation, we create two synthetic datasets:\n• Polygons Generate the set of vertices of a convex n-sided regular polygon, given the size x=n. This task is inherently underdefined; any valid set of vertices can be arbitrarily translated or rotated and remain valid. We limit the scope of permissible answers slightly, by fixing a constant center and radius. Each sample from this dataset has a uniformly randomized angle.\n• Digits Generate a point-cloud, taking the shape of the digit given by x. Point-clouds are sets of low dimensional vectors, describing the spatial arrangement of one or more objects. We limit the digits to x ∈ {one, seven}, because they are similar and each has different forms, following the most common writing styles (Ifrah, 2000). The shape determines the area from which the set of pointsY are sampled. The number of points varies with different shapes, facilitating evaluation of different set sizes and spatial arrangements.\nIn both datasets the observed feature is kept simple, as our focus lies on predicting sets with nontrivial interrelations. Each example in the dataset consists of a discrete input and a set of 2d vectors as the target, as illustrated in Figure 1. More examples can be found in Appendix B. Both datasets share the notion that several correct sets are permissible, such as an offset in rotation for polygons. The difference between the two datasets lies in the relation that connects different plausible predictions.\nModel We use the energy function EDS defined in Equation 8 for both datasets. A 3-layer MLP forms the set equivariant part, followed by a permutation invariant pooling operation and a second 3-layer MLP. We choose FSPool (Zhang et al., 2020b) over more simple aggregators such as sum or mean, as it exhibits much faster convergence rates in preliminary experiments. To accommodate different cardinalities, we zero-pad all sets to a fixed maximum size, similar to Zhang et al. (2019). By ensuring that all non-padding elements are unequal to the zero vector, padding can simply be filtered out from the predictions by setting a threshold around a small area around zero.\nResults We report the Chamfer loss for the Digits dataset and Hungarian loss for the Polygons dataset in Table 1. 
The metrics are chosen in a way that aligns with a qualitative assessment (Figure 1) of the performance for each dataset respectively. While the baseline with the Chamfer loss objective performs better on Digits, the Hungarian baseline outperforms the former on Polygons. This result reveals a trade-off when picking set loss functions as training objectives for different types of datasets. Our framework improves over both baselines on both datasets, but more importantly, we do not observe a similar performance discrepancy between the two datasets.\nWe confirm in Figure 1 that both baselines handle the multi-modal data distribution, by interpolating between different target sets. The set loss choice can lead to implausible predictions. While in this case both datasets are designed to be simple for transparency reasons, the choice of set loss becomes a non-trivial trade-off for more complex datasets. Our approach prevents implicit interpolation and consequently does not incur the same trade-off cost. In contrast to purely deterministic optimizers, which always converge to the same local minimum, our stochastic version finds multiple energy minima. Figure 4 and Figure 5 in the appendix demonstrate the ability to cover different modes, where several predictions result in differently rotated polygons and distinctly shaped digits. Each prediction represents an independently sampled trajectory of transitions described in Equation 6. This experiment is tailored towards the special case, when there exist multiple plausible target sets and exemplifies both the short-comings of training with assignment-based set losses and the ability of our approach to predict multiple sets. Whether the results in this simplified experiment will also reflect the superiority of the proposed approach on a real-world problem remains to be tested." }, { "heading": "5.3 POINT-CLOUD AUTO-ENCODING", "text": "Point-cloud auto-encoding maps a variable sized point-cloud to a single fixed-size vector, with the requirement of being able to reconstruct the original point-cloud solely from that vector. Following the setup from Zhang et al. (2019), we convert MNIST (LeCun et al., 2010) into point-clouds, by thresholding pixel values and normalizing the coordinates of the remaining points to lie in [0, 1]. We compare against two variations of DSPN (Zhang et al., 2019): 1. Chamfer and 2. Hungarian loss based training. For a fair comparison, we use the energy function ESE defined in Equation 9, with the same padding scheme and hyper-parameters as Zhang et al. (2019). Both baselines average over all intermediate set losses, based on intermediate prediction steps. The padding scheme consists of zero-padding sets to a fixed maximum set size and adding a presence variable for each element, which indicates if the element is part of the set or not. Furthermore, we compare against C-DSPN and TSPN (Kosiorek et al., 2020) on their set size estimation task. They optimize for set size rootmean-squared error (RMSE), in combination with the Chamfer loss. We evaluate our approach based on a single prediction per example.\nResults Table 2 shows that our approach outperforms C-DSPN and TSPN (Kosiorek et al., 2020) on set size RMSE. We conjecture that this is caused by a conflict between the two objectives: 1. Set size RMSE and 2. Chamfer loss, under limited capacity. While the former requires a cardinality aware representation, the latter does not benefit from a precise cardinality estimation at all. 
In contrast, our approach does not treat set size as a variable separate from the constituents of the set. Table 3 shows that our approach outperforms both DSPN (Zhang et al., 2019) baselines, even when comparing against the same metric that is used for training the baselines. We explain the increased performance by an ambiguity during reconstruction, induced from the bottleneck in auto-encoding.\nAs we have observed in the previous experiment, the baselines handle underdefined problems by interpolation in the set space, leading to potentially unrealistic reconstructions. The results indicate that our approach is beneficial even when the underlying data is not explicitly multi-modal. Training the DSPN model with Hungarian loss, instead of Chamfer loss, deteriorates training stability and the reconstructed shape, but captures the set size and point density more faithfully. Augmenting the loss function with set size RMSE alleviates some of the issues with set size, but leads to decreases in shape reconstruction performance (Kosiorek et al., 2020). Our approach does not require any loss-engineering and performs well on all metrics.\nEffect of stochastic prediction We study the impact of the proportion of stochastic steps ST , as defined in Equation 6, on the reconstruction performance in Figure 2. Over all runs, the most common minimum energy is approximately at ST=0.8. All results reported for our method apply this 0.8 ratio during prediction. Notably, adding stochastic steps improves the energy optimization, in comparison to the fully deterministic gradient descent (ST=0). Furthermore, the high correlation between the energy value and the performance leads us to the conclusions that DESP learns more than a simple sampler and that optimization improvements result in increased performance. Our approach is able to produce different plausible predictions at no performance cost." }, { "heading": "5.4 OBJECT SET PREDICTION ON CLEVR", "text": "In terms of set prediction, object detection tasks the model with learning to predict the set of object locations in an image. Following the previous set prediction literature (Zhang et al., 2019; Kosiorek et al., 2020), we benchmark our method on the CLEVR dataset. While our specific contribution addresses stochastic and underdefined set prediction problems, our method is in principle not limited to those cases. We adopt the same neural network architecture, hyper-parameters and padding scheme as Zhang et al. (2019), to facilitate a fair comparison. The padding scheme is the same as in the previous experiment. The Relation Network (Santoro et al., 2017) in combination with FSPool (Zhang et al., 2020b) takes on the role of the set encoder for ESE, described in Equation 9. We compare against different variations of Chamfer and Hungarian loss based training. Our approach is evaluated based on a single prediction per image.\nResults Performance, as seen in Table 4, is measured in average precision (AP) for various intersection-over-union (IoU) thresholds. Similar to the previous experiments, we observe a large discrepancy between training with Chamfer and Hungarian loss. While the Chamfer loss based\ntraining generally outperforms the Hungarian loss for set auto-encoding, the reverse appears to be true for object detection. In comparison, our approach performs consistently on both tasks for all metrics, indicating suitability for general set prediction tasks, beyond multi-modal problems." 
}, { "heading": "5.5 SUBSET ANOMALY DETECTION", "text": "The objective here is to discover all anomalous faces in a set of images. We re-purpose CelebA (Liu et al., 2015) for subset anomaly detection, by training on randomly sampled sets of size 5 with at least 3 images constituting the inliers, by possessing two or more shared attributes. The set energy function is solely supervised by outlier subset detections, without direct access to attribute values. The challenge during inference lies with implicitly ascertaining the shared attributes, while simultaneously detecting the outliers, including the case where none are present. We examine our method specifically on ambiguous cases, constructed such that different attribute combinations may be considered distinctive for the inliers. Zaheer et al. (2017) consider a similar task, but assume exactly one outlier. Their method can be extended to subsets of variable sizes, by replacing the softmax with a sigmoid (Zaheer et al., 2017), yielding an F1 score of 0.63 on our task. Nonetheless, such an approach is limited to predicting element-wise probabilities, which ignores dependencies between individual predictions. Our approach of learning probabilities over sets is able to address this challenge, as demonstrated in Figure 3. Given the same set, our method produces multiple valid subset predictions, reflecting an implicit inference of different attribute pairs. This advantage allows DESP to considerably outperform the baseline with an F1 score of 0.76. Further details can be found in Appendix C." }, { "heading": "6 CONCLUSION", "text": "We introduced a new training & prediction framework for set prediction, based on a probabilistic formulation of the task. Our approach addresses the crucial problem of stochastic or underdefined set prediction tasks, where training with assignment-based set losses performs unfavourably. We demonstrated the ability of Deep Energy based Set Prediction (DESP) to learn and predict multiple plausible sets on synthetic data. On non-stochastic benchmarks our method is comparable to previous works, showcasing broad applicability to general set prediction tasks. Finally, we exemplify on the new task of subset anomaly detection the capacity to address tasks beyond those with unambiguous predictions." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work is part of the research programme Perspectief EDL with project number P16-25 project 3, which is financed by the Dutch Research Council (NWO) domain Applied and Engineering Sciences (TTW)." }, { "heading": "A ASSIGNMENT-BASED SET LOSS", "text": "Assignment-based set losses compare the predicted set Ŷ ={ŷ1, . . . , ŷk} with the ground-truth set Y ={y1, . . . ,yl} by element-wise assignments:\nLA(Ŷ ,Y ) = ∑ i d(ŷi,yπ(i)) + ∑ j d(ŷσ(j),yj), (12)\nwhere d is a vector distance function and π : {k} 7→ {l}, σ : {k} 7→ {l} are the assignment functions, which map from one sets’ indices to the other. Due to the orderless nature of sets, it is not obvious which element ought to be compared with which. 
Different assignment-based set losses differ mainly in the assignment strategies, reflected in the choices for π and σ.\nThe Chamfer loss assigns every element in Ŷ to the closest element in Y and vice-versa and can be defined by π(i)= argminj d(ŷi,yj) and σ(j)= argmini d(ŷi,yj), resulting in:\nLC(Ŷ ,Y ) = ∑ i min j d(ŷi,yj) + ∑ j min i d(ŷi,yj) (13)\nFor the Hungarian loss π and σ constitute the inverse functions of each other, thus requiring equal set sizes n=m:\nLH(Ŷ ,Y ) = 1\n2 ( min π∈Sk ∑ i d(ŷi,yπ(i)) + min σ∈Sk ∑ i d(ŷσ(j),yj) ) (14)\n= min π∈Sk ∑ i d(ŷi,yπ(i)), (15)\nwhere Sk is the set of all permutations on sets of size k.\nThe differences in assignment strategies result in different metric spaces on sets, as illustrated in subsection 5.2. Both the Chamfer and the Hungarian loss exhibit distinct advantages and disadvantages. While the asymptotic compute cost for the Chamfer loss scales in O(kl) with a set sizes k, l, computing the Hungarian loss is much more expensive with a complexity in O(k3). The lack of one-to-one assignments for the Chamfer loss, puts it at a disadvantage when comparing multi-sets or sets with multiple similar (up to numerical precision) elements. On the other hand, the strict requirement for bijective assignments for the Hungarian loss disqualifies it when comparing sets of different sizes, i.e., k 6=l." }, { "heading": "B MULTI-MODAL PREDICTIONS", "text": "Both Figure 4 and Figure 5 display evidence for the ability of Deep Energy based Set Prediction (DESP) to learn and predict multiple sets for the same input. This ability is important, when we consider datasets with multi-modal target distributions, such as the varying rotation angle for Polygons (Figure 4a) or different writing styles for Digits (Figure 5a). The datasets are described in subsection 5.2." }, { "heading": "C SUBSET ANOMALY DETECTION", "text": "Dataset preparation Individual training instances are sampled from CelebA (Liu et al., 2015) with the following procedure: 1. Sample two attributes a, b 2. Sample 3-5 images that all have attributes a and b 3. Fill the remaining slots with images that do not have both a and b or skip if there are already 5 images. The result of the sampling procedure is an orderless set of 5 images, where the samples from the 2nd and 3rd step constitute the inliers and outliers, respectively. We qualify any subset of images as valid inliers, if they share at least two attributes, which are not possessed simultaneously by any outlier. In order to limit the amount of valid outlier subsets, we restrict the attributes to the following list: Bald, Bangs, Blond Hair, Double Chin, Eyeglasses, Goatee, Gray Hair, Male, No Beard, Wearing Hat, Wearing Necktie. Notably, the training data does not explicitly supervise for attributes and only exposes a single valid subset detection for each training instance.\nExperimental setup Each image in the input set is represented by a fixed 128-dimensional feature vector, extracted from the penultimate layer of a ResNet-34 (He et al., 2016) that is optimized for facial attribute detection. We forgo any image augmentation during training and treat the extracted features as a highly informative, albeit flawed, representation of the image. The representation has an accuracy of roughly ∼90%, as measured by a logistic regression model, for individual attributes and constitutes a source of uncertainty. 
In addition to each feature vector, we introduce indicator variables oi ∈ {−1,+1} that constitute the targets and signify if an image is an outlier or not. In order to apply our framework to the discrete targets, we optimize and sample over a convex relaxation of the target domain: oi ∈ [−1,+1]. We apply an instance of the energy function EDS (Equation 8) on the set of feature vectors, concatenated with the outlier indicator variables. Both g and f are instantiated as 2-layer MLPs with 256 hidden dimensions. FSPool (Zhang et al., 2020b) is applied as the permutation invariant aggregator. As part of finalizing the predictions, the outlier variables oi are rounded towards −1 or +1. As the baseline, we employ a 4-layer permutation equivariant DeepSets (Zaheer et al., 2017). We use the same extracted features as in our DESP setup as inputs and match the number of parameters. The model is trained with a binary cross-entropy loss that acts on the scalar outputs of the network.\nEvaluation The performance is measured on the CelebA test partition images (Liu et al., 2015). Each test example is associated with the full collection of valid subsets, exclusively for evaluation purposes. We consider two distinct test setups: 1. Test instances are sampled the same way as during training, resulting in both unambiguous and ambiguous instances, and 2. Only ambiguous instances are used. The first case results in an average number of valid target subsets of ∼1.7, including both ambiguous and unambiguous instances. The second case has an average number of valid target subsets of ∼2.3, with a minimum of 2 valid subset targets. We measure the proportion of correct predictions per test example as a frequency weighted precision. The frequency of each predicted subset corresponds to the number of appearances across all individual predictions. Recall measures the proportion of all valid subsets that the model manages to predict per test example. We approximate precision and recall in this experiment with 10 predictions, by leveraging the ability of DESP to output multiple subsets. The F1 score is based on the average precision and recall.\nResults The performance is reported in Table 5. When solely evaluating on ambiguous cases, the baseline exhibits an F1 performance drop of ∼26%, as opposed to only ∼13% for our method. This highlights the advantage of learning distributions over sets in combination with the ability to produce multiple predictions. Figure 8 and Figure 7 showcase a positive and negative example prediction of our model, respectively. These examples highlight the difficulty of simultaneously determining the shared commonality between the inliers and ascertaining a discrepancy of the outliers.\nWe examine the effect of varying proportions of stochastic steps in the prediction procedure in Figure 6. Similar to what we observe in the other experiments, ST=0.8 offers the best F1 score. For the fully deterministic case, every predicted set is identical to one another, which is reflected in the low recall score at ST=0.\nTable 5: Subset Anomaly Detection Performances for the two test setups: 1. Unambiguous + Ambiguous and 2. Ambiguous only. 
Our method outperforms the baseline, which we derived from DeepSets (Zaheer et al., 2017), on all metrics.\nAmbiguous Unambiguous + Ambiguous\nPrecision ↑ Recall ↑ F1 ↑ Precision ↑ Recall ↑ F1 ↑ Baseline 0.75±0.02 0.34±0.01 0.47±0.01 0.73±0.02 0.56±0.01 0.63±0.01 This paper 0.76±0.02 0.58±0.01 0.66±0.00 0.74±0.02 0.78±0.01 0.76±0.01\n0.0 0.2 0.4 0.6 0.8 1.0 S T\n0.66\n0.68\n0.70\n0.72\n0.74\n0.76\n0.78\n0.80\n0.82 F1 Precision Recall\nFigure 6: Anomaly detection ablation Test results on subset anomaly detection for different values of ST (Equation 6). Inclusion of stochasticity in the prediction procedure significantly improves the recall performance compared to the fully deterministic case (ST=0). The F1 score is highest at ST=0.8, representing the best trade-off point between recall and precision.\n(a) necktie and eyeglasses\n(b) male and necktie\n(c) hat and necktie\n(d) hat and eyeglasses\n(e) necktie and no beard\nFigure 7: Subset anomaly detection negative example (a)-(d) Four correct outlier subset detections, marked by blue dash-dotted frames, predicted by the model. The subset (e), marked by red dash-dotted frames, constitutes and error, because it is a valid prediction, that is missed by the model. Multiple subset possibilities showcase how challenging the subset anomaly detection task is." } ]
2,021
CONDITIONAL DENSITY ESTIMATION
SP:18fb9d26da8c96c91e9787d3b539c483f9fe4871
[ "The paper proposes the algorithm LearnRep that uses gradient descent methods to learn Lie algebras from structure constants, before obtaining the corresponding group representation through the exponential map. The algorithm is tested on SO(3), SO(2, 1), and SO(3, 1). In addition to this, the paper proposes SpaceTimeNet, a Poincaré-equivariant neural network architecture, and applies this architecture to an object-tracking task involving MNIST digits moving uniformly through space." ]
Recent work has constructed neural networks that are equivariant to continuous symmetry groups such as 2D and 3D rotations. This is accomplished using explicit group representations to derive the equivariant kernels and nonlinearities. We present two contributions motivated by frontier applications of equivariance beyond rotations and translations. First, we relax the requirement for explicit Lie group representations, presenting a novel algorithm that finds irreducible representations of noncommutative Lie groups given only the structure constants of the associated Lie algebra. Second, we demonstrate that Lorentz-equivariance is a useful prior for object-tracking tasks and construct the first object-tracking model equivariant to the Poincaré group.
[]
[ { "authors": [ "Brandon Anderson", "Truong Son Hy", "Risi Kondor" ], "title": "Cormorant: Covariant molecular neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Alexander Bogatskiy", "Brandon Anderson", "Jan T Offermann", "Marwah Roussi", "David W Miller", "Risi Kondor" ], "title": "Lorentz group equivariant neural network for particle physics", "venue": "arXiv preprint arXiv:2006.04780,", "year": 2020 }, { "authors": [ "Miranda CN Cheng", "Vassilis Anagiannis", "Maurice Weiler", "Pim de Haan", "Taco S Cohen", "Max Welling" ], "title": "Covariance in physics and convolutional neural networks", "venue": null, "year": 1906 }, { "authors": [ "Taco Cohen", "Max Welling" ], "title": "Learning the irreducible representations of commutative lie groups", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Taco Cohen", "Mario Geiger", "Jonas K ̈ohler", "Pim de Haan", "K.T. Sch ̈utt", "Benjamin K. Miller" ], "title": "URL https://github.com/AMLab-Amsterdam/lie_learn/ releases/tag/v1.0_b", "venue": "Lie learn,", "year": 2020 }, { "authors": [ "Taco S. Cohen", "Mario Geiger", "Jonas Köhler", "Max Welling" ], "title": "Spherical CNNs", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Taco S Cohen", "Mario Geiger", "Maurice Weiler" ], "title": "A general theory of equivariant cnns on homogeneous spaces", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "M De Montigny", "J Niederle", "AG Nikitin" ], "title": "Galilei invariant theories: I. constructions of indecomposable finite-dimensional representations of the homogeneous galilei group: directly and via contractions", "venue": "Journal of Physics A: Mathematical and General,", "year": 2006 }, { "authors": [ "Stephan Eismann", "Raphael JL Townshend", "Nathaniel Thomas", "Milind Jagota", "Bowen Jing", "Ron Dror" ], "title": "Hierarchical, rotation-equivariant neural networks to predict the structure of protein complexes", "venue": null, "year": 2006 }, { "authors": [ "Carlos Esteves", "Christine Allen-Blanchette", "Xiaowei Zhou", "Kostas Daniilidis" ], "title": "Polar transformer networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Richard P Feynman", "Robert B Leighton", "Matthew Sands" ], "title": "The Feynman lectures on physics, Vol. I: The new millennium edition: mainly mechanics, radiation, and heat, volume 1", "venue": "Basic books,", "year": 2011 }, { "authors": [ "Marc Finzi", "Samuel Stanton", "Pavel Izmailov", "Andrew Gordon Wilson" ], "title": "Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data", "venue": "arXiv preprint arXiv:2002.12880,", "year": 2020 }, { "authors": [ "Fabian B Fuchs", "Daniel E Worrall", "Volker Fischer", "Max Welling" ], "title": "Se (3)-transformers: 3d roto-translation equivariant attention networks", "venue": "arXiv preprint arXiv:2006.10503,", "year": 2020 }, { "authors": [ "Liyao Gao", "Yifan Du", "Hongshan Li", "Guang Lin" ], "title": "Roteqnet: Rotation-equivariant network for fluid systems with symmetric high-order tensors", "venue": "arXiv preprint arXiv:2005.04286,", "year": 2020 }, { "authors": [ "D.J. Griffiths", "P.D.J. Griffiths" ], "title": "Introduction to Quantum Mechanics. 
Pearson international edition", "venue": "URL https://books.google.com/ books?id=z4fwAAAAMAAJ", "year": 2005 }, { "authors": [ "D Gurarie" ], "title": "Symmetries and laplacians. introduction to harmonic analysis, group representations and applications", "venue": "North-Holland mathematics studies,", "year": 1992 }, { "authors": [ "Max Jaderberg", "Karen Simonyan", "Andrew Zisserman", "koray kavukcuoglu" ], "title": "Spatial transformer networks", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "J Robert Johansson", "Paul D Nation", "Franco Nori" ], "title": "Qutip 2: A python framework for the dynamics of open quantum systems", "venue": "Computer Physics Communications,", "year": 2013 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Risi Kondor" ], "title": "N-body networks: a covariant hierarchical neural network architecture for learning atomic potentials", "venue": "arXiv preprint arXiv:1803.01588,", "year": 2018 }, { "authors": [ "Risi Kondor", "Zhen Lin", "Shubhendu Trivedi" ], "title": "Clebsch–gordan nets: a fully fourier space spherical convolutional neural network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yann LeCun", "Bernhard Boser", "John S Denker", "Donnie Henderson", "Richard E Howard", "Wayne Hubbard", "Lawrence D Jackel" ], "title": "Backpropagation applied to handwritten zip code recognition", "venue": "Neural computation,", "year": 1989 }, { "authors": [ "Jean-Marc Levy-Leblond" ], "title": "Galilei group and galilean invariance", "venue": "In Group theory and its applications,", "year": 1971 }, { "authors": [ "J Niederle", "AG Nikitin" ], "title": "Construction and classification of indecomposable finite-dimensional representations of the homogeneous galilei group", "venue": "Czechoslovak Journal of Physics,", "year": 2006 }, { "authors": [ "Garrick Orchard", "Ajinkya Jayawant", "Gregory K Cohen", "Nitish Thakor" ], "title": "Converting static image datasets to spiking neuromorphic datasets using saccades", "venue": "Frontiers in neuroscience,", "year": 2015 }, { "authors": [ "Didier Pinchon", "Philip E Hoggan" ], "title": "Rotation matrices for real spherical harmonics: general rotations of atomic orbitals in space-fixed axes", "venue": "Journal of Physics A: Mathematical and Theoretical,", "year": 2007 }, { "authors": [ "Rajesh PN Rao", "Daniel L Ruderman" ], "title": "Learning lie groups for invariant visual perception", "venue": "In Advances in neural information processing systems,", "year": 1999 }, { "authors": [ "Kai Sheng Tai", "Peter Bailis", "Gregory Valiant" ], "title": "Equivariant transformer networks", "venue": "arXiv preprint arXiv:1901.11399,", "year": 2019 }, { "authors": [ "Nathaniel Thomas", "Tess Smidt", "Steven Kearnes", "Lusann Yang", "Li Li", "Kai Kohlhoff", "Patrick Riley" ], "title": "Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds", "venue": "arXiv preprint arXiv:1802.08219,", "year": 2018 }, { "authors": [ "Marc AA Van Leeuwen", "Arjeh Marcel Cohen", "Bert Lisser" ], "title": "Lie: A package for lie group computations", "venue": null, "year": 1992 }, { "authors": [ "Maurice Weiler", "Gabriele Cesa" ], "title": "General e (2)-equivariant steerable cnns", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Maurice 
Weiler", "Mario Geiger", "Max Welling", "Wouter Boomsma", "Taco Cohen" ], "title": "3d steerable cnns: Learning rotationally equivariant features in volumetric data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Steven Weinberg" ], "title": "The quantum theory of fields. Vol. 1: Foundations", "venue": null, "year": 1995 }, { "authors": [ "Alex Zihao Zhu", "Ziyun Wang", "Kostas Daniilidis" ], "title": "Motion equivariant networks for event cameras", "venue": "Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "−iLy Jz" ], "title": "Lz, it may be easily checked thatKx,Ky, Jz satisfy the applicable commutation relations from equation 3. This reflects the physical intuition that time behaves like an imaginary dimension of space. The final Lie algebra for which we require explicit representation matrix formulas is so(3", "venue": null, "year": 1995 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many tasks in machine learning exactly or approximately obey a continuous symmetry such as 2D rotations. An ML model is said to be equivariant to such a symmetry if the model respects it automatically (without training). Equivariant models have been applied to tasks ranging from computer vision to molecular chemistry, leading to a generalization of equivariance techniques beyond 2D rotations to other symmetries such as 3D rotations. This is enabled by known mathematical results about each new set of symmetries. Specifically, explicit group representation matrices for each new symmetry group are required. For many important symmetries, formulae are readily available to produce these representations. For other symmetries we are not so lucky, and the representations may be difficult to find explicitly. In the worst cases, the classification of the group representations is an open problem in mathematics. For example, in the important case of the homogeneous Galilean group, which we define in section 2, the classification of the finite dimensional representations is a so-called “wild algebraic problem” for which we have only partial solutions (De Montigny et al., 2006; Niederle & Nikitin, 2006; Levy-Leblond, 1971).\nTo construct an equivariant network without prior knowledge of the group representations, novel approaches are needed. In this work, we propose an algorithm LearnRep that finds the representation matrices with high precision. We validate that LearnRep succeeds for the Poincaré group, a set of symmetries governing phenomena from particle physics to object tracking. We further validate LearnRep on two additional sets of symmetries where formulae are known. We apply the Poincaré group representations obtained by LearnRep to construct SpacetimeNet, a Poincaré-equivariant object-tracking model. As far as we are aware, LearnRep is the first automated solver which can find explicit representation matrices for sets of symmetries which form noncompact, noncommutative Lie groups Further, SpacetimeNet is the first object-tracking model with a rigorous guarantee of Poincaré group equivariance." }, { "heading": "1.1 GROUP REPRESENTATIONS AND EQUIVARIANT MACHINE LEARNING", "text": "Group theory provides the mathematical framework for describing symmetries and building equivariant ML models. Informally, a symmetry group G is a set of invertible transformations α, β ∈ G which can be composed together using a product operation αβ. We are interested in continuous symmetries for which G is a Lie group. In prior constructions of Lie group-equivariant models, group representations are required. For a group G, an n−dimensional (real) group representation ρ : G→ Rn×n is a mapping from each element α ∈ G to an n× n-dimensional matrix ρ(α), such that for any two elements α, β ∈ G, we have ρ(α)ρ(β) = ρ(αβ).\nTwo parallel techniques have been developed for implementing Lie group equivariant neural networks. The first approach was described in general by Cohen et al. (2019). For the latter approach taken by Thomas et al. (2018); Anderson et al. (2019); Bogatskiy et al. (2020), convolutions and nonlinearities are performed directly on the irreducible representations of the group, which we define in section 2.4. A common thread in these works has been to utilize existing formulas derived for the matrix elements of these irreducible representations. However, these formulas are only available for specific Lie groups where the representation theory is well-understood. 
A more convenient approach for extending equivariance to novel Lie groups would utilize an automated computational technique to obtain the required representations. The primary contribution of this work is such a technique." }, { "heading": "1.2 CONTRIBUTIONS", "text": "In this work, we automate the generation of explicit group representation matrices of Lie groups using an algorithm called LearnRep. LearnRep poses an optimization problem defined by the Lie algebra associated with a Lie group, whose solutions are the representations of the algebra. A penalty term is used to prevent the formation of trivial representations. Gradient descent of the resulting loss function produces nontrivial representations upon convergence. We apply LearnRep to three noncommutative Lie groups for which the finite-dimensional representations are well-understood, allowing us to verify that the representations produced are irreducible by computing their Clebsch-Gordan coefficients and applying Schur’s Lemma.\nOne of the Lie groups where LearnRep performs well is the Lorentz group of special relativity. Prior work has applied Lorentz-equivariant models to particle physics. In this work we explain that the Lorentz group along with the larger Poincaré group also governs everyday object-tracking tasks. We construct a Poincaré-equivariant neural network architecture called SpacetimeNet and demonstrate that it can learn to solve a 3D object-tracking task subject to “motion equivariance,” where the inputs are a time series of points in space.\nIn summary, our contributions are:\n• LearnRep, an algorithm which can find irreducible representations of a noncompact and noncommutative Lie group.\n• SpacetimeNet, a Poincaré group-equivariant neural network applied to object-tracking tasks.\nOur work contributes towards a general framework and toolset for building neural networks equivariant to novel Lie groups, and motivates further study of Lorentz equivariance for object tracking." }, { "heading": "1.3 ORGANIZATION", "text": "We summarize all necessary background and terminology in section 2. We describe the LearnRep algorithm in section 3.1 and SpacetimeNet in section 3.2. We summarize related work in section 4. We present our experimental results in section 5: our experiments in learning irreducible Lie group representations with LearnRep in section 5.1 and the performance of our Poincaré-equivariant SpacetimeNet model on a 3D object tracking task in section 5.2." }, { "heading": "2 TECHNICAL BACKGROUND", "text": "We explain the most crucial concepts here and defer to Appendix A.1 for a derivation of the representation theory of the Lorentz group.\n2.1 SYMMETRY GROUPS SO(n) AND SO(m,n)\nA 3D rotation may be defined as a matrix A :∈ R3×3 which satisfies the following properties, in which 〈~u,~v〉 = ∑3 i=1 uivi:\n(i) detA = 1 (ii) ∀~u,~v ∈ R3, 〈A~u,A~v〉 = 〈~u,~v〉; these imply the set of 3D rotations forms a group under matrix multiplication and this group is denoted SO(3). This definition directly generalizes to the n−dimensional rotation group SO(n). For\nn ≥ 3, the group SO(n) is noncommutative, meaning there are elements A,B ∈ SO(n) such that AB 6= BA. Allowing for rotations and translations of n dimensional space gives the n−dimensional special Euclidean group SE(n). SO(n) is generalized by a family of groups denoted SO(m,n), with SO(n) = SO(n, 0). For integers m,n ≥ 0, we define 〈~u,~v〉m,n = ∑m i=1 uivi − ∑m+n i=m+1 uivi. 
The group SO(m,n) is the set of matrices A ∈ R(m+n)×(m+n) satisfying (i-ii) below: (i) detA = 1 (ii) ∀~u,~v ∈ Rm+n, 〈A~u,A~v〉m,n = 〈~u,~v〉m,n;\nthese imply that SO(m,n) is also a group under matrix multiplication. While the matrices in SO(n) can be seen to form a compact manifold for any n, the elements of SO(m,n) form a noncompact manifold whenever n,m ≥ 1. For this reason SO(n) and SO(m,n) are called compact and noncompact Lie groups respectively. The representations of compact Lie groups are fairly well understood, see Bump (2004); Cartan (1930).\n2.2 ACTION OF SO(m,n) ON SPACETIME\nWe now explain the physical relevance of the groups SO(m,n) by reviewing spacetime. We refer to Feynman et al. (2011) (ch. 15) for a pedagogical overview. Two observers who are moving at different velocities will in general disagree on the coordinates {(ti, ~ui)} ⊂ R4 of some events in spacetime. Newton and Galileo proposed that they could reconcile their coordinates by applying a spatial rotation and translation (i.e., an element of SE(3)), a temporal translation (synchronizing their clocks), and finally applying a transformation of the following form:\nti 7→ ti ~ui 7→ ~ui + ~vti, (1) in which ~v is the relative velocity of the observers. The transformation equation 1 is called a Galilean boost. The set of all Galilean boosts along with 3D rotations forms the homogeneous Galilean group denoted HG(1, 3). Einstein argued that equation 1 must be corrected by adding terms dependent on ||~v||2/c, in which c is the speed of light and ||~v||2 is the `2 norm of ~v. The resulting coordinate transformation is called a Lorentz boost, and an example of its effect is shown in figure 1. The set of 3D rotations along with Lorentz boosts is exactly the group SO(3, 1). In the case of 2 spatial dimensions, the group is SO(2, 1). Including spacetime translations along with the Lorentz group SO(n, 1) gives the larger Poincaré group Pn with n spatial dimensions. The Poincaré group P3 is the group of coordinate transformations between different observers in special relativity.\nConsider an object tracking task with input data consisting of a spacetime point cloud with n dimensions of space and 1 of time, and corresponding outputs consisting of object class along with location and velocity vectors. A perfectly accurate object tracking model must respect the action of Pn on the input. That is, given the spacetime points in any observer’s coordinate system, the perfect model must give the correct outputs in that coordinate system. Therefore the model should be Pn-equivariant. For low velocities the symmetries of the homogeneous Galilean groups HG(n, 1) provide a good approximation to SO(n, 1) symmetries, so Galilean-equivariance may be sufficient for some tasks. Unfortunately the representations of HG(n, 1) are not entirely understood De Montigny et al. (2006); Niederle & Nikitin (2006); Levy-Leblond (1971)." }, { "heading": "2.3 LIE GROUPS AND LIE ALGEBRAS", "text": "Here we give an intuitive summary of Lie groups and Lie algebras, deferring to Bump (2004) for a rigorous technical background. A Lie group G gives rise to a Lie algebra A as its tangent space at the identity. 
This is a vector space V along with a bilinear product called the Lie bracket: [a, b] which must behave like1 the commutator for an associative ring R with multiplication operation ×R:\n[a, b] = a×R b− b×R a The Lie algebra for SO(3), denoted so(3), has a basis {J1, J2, J3} satisfying\n[Ji, Jj ] = ijkJk, (2)\nin which ijk ∈ {±1, 0} is the totally antisymmetric Levi-Civita symbol.2 Intuitively, the Lie bracket shows how group elements near the identity fail to commute. For example, the matrices Rx, Ry, Rz\n1Specifically, the Lie bracket must satisfy the Jacobi identity and [a, a] = 0. 2The symbol ijk simply expresses in equation 2 that [J1, J2] = J3, [J2, J3] = J1, [J3, J1] = J2.\nfor rotations about the x and y axes by a small angle θ satisfy RxRy −RyRx = Rz +O(θ2); more generally the Lie bracket of equation 2 is satisfied to first order in θ. The Lia algebra so(3, 1) of the Lorentz Group SO(3, 1) also satisfies equation 2 for the generators J1, J2, J3 of its subalgebra isomorphic to so(3). It has 3 additional generators denoted K1,K2,K3, which satisfy:\n[Ji,Kj ] = ijkKk [Ki,Kj ] = − ijkJk (3)\nThese Ki correspond to the Lorentz boosts in the same way that the Ji correspond to the rotations. In general, if A is a t-dimensional Lie algebra with generators T1, ..., Tt such that\n[Ti, Tj ] = t∑ k=1 AijkTk, (4)\nwe call the tensor Aijk the structure constants of A. For connected matrix Lie groups such as SO(m,n), the structure constants Aijk are easily obtained. For example, one may apply the matrix logarithm to several elements of the group to obtain elements of the algebra, then find a complete basis for the algebra and write the commutator of all basis elements in this basis." }, { "heading": "2.4 GROUP REPRESENTATIONS AND THE TENSOR PRODUCT", "text": "Let G be a Lie group and ρ : G→ Rn×n be a representation of G as defined in section 1.1. Then ρ defines a group action on Rn: given a vector ~u ∈ Rn and a group element α ∈ G, we can define\nα ∗ρ ~u := ρ(α)~u\nusing the matrix product. We then say that ρ is irreducible if it leaves no nontrivial subspace invariant – for every subspace V ⊂ Rn with 0 < dimV < n, there exists α ∈ G,~v ∈ V such that α ?ρ ~v /∈ V . Given two G-representations ρ1 : G→ Rn1×n1 , ρ2 : G→ Rn2×n2 , we define their tensor product as ρ1 ⊗ ρ2 : G→ Rn1n2×n1n2 (ρ1 ⊗ ρ2)(α) = ρ1(α)⊗ ρ2(α), in which ⊗ on the right hand side denotes the usual tensor product of matrices. It is easy to check that ρ1 ⊗ ρ2 is also a representation of G using the fact that for matrices A1, A2 ∈ Rn1×n1 and B1, B2 ∈ Rn2×n2 ,\n(A1 ⊗B1)(A2 ⊗B2) = (A1A2)⊗ (B1B2).\nFor ρ1, ρ2 as above we also define their direct sum as (ρ1 ⊕ ρ2)(α) = ( ρ1(α)\nρ2(α)\n) .\nFor two groups H,G we say that H is isomorphic to G and write H ∼= G if there exists a bijection f : H → G such that f(αβ) = f(α)f(β). For ρ1, ρ2 as above, their images ρi(G) form groups and we say that ρ1 and ρ2 are isomorphic and write ρ1 ∼= ρ2 if these groups are isomorphic, i.e. ρ1(G) ∼= ρ2(G). Some familiar representations of SO(3) act on scalars ∈ R, vectors ∈ R3, and tensors (e.g., the Cauchy stress tensor) – these representations are all nonisomorphic.\nFor many Lie groups such as SO(n, 1) and SO(n), a property called complete reducibility guarantees that any representation is either irreducible, or isomorphic to a direct sum of irreducible representations. For such groups it suffices to identify the irreducible representations to understand all other representations and construct equivariant models." 
}, { "heading": "2.5 CLEBSCH-GORDAN COEFFICIENTS AND TENSOR-PRODUCT NONLINEARITIES", "text": "Clebsch-Gordan Coefficients: Let G be a completely reducible Lie group, and let ρ1, ρ2, ρ3 be irreducible G-representations on the vector spaces Rn1 ,Rn2 ,Rn3 . Consider the tensor product representation ρ1 ⊗ ρ2. Since G is completely reducible, there exists a set S of irreducible representations such that ρ1 ⊗ ρ2 ∼= ⊕ ρ∈S ρ. Suppose that ρ3 ∈ S. Then there exists a matrix C ∈ Rn3×(n1n2) which projects the space of the n3-dimensional group representation ρ3 from the tensor product space Rn1 ⊗ Rn2 . That is,\n∀(α, ~u,~v) ∈ G× Rn1 × Rn2 , C(ρ1(α)⊗ ρ2(α))(~u⊗ ~v) = ρ3(α)C(~u⊗ ~v) ⇒ C(ρ1(α)⊗ ρ2(α)) = ρ3(α)C. (5)\nThe matrices C satisfying equation 5 for various ρ3 are called the Clebsch-Gordan coefficients. In equation 5 there are n1n2n3 linear constraints on C, and therefore this is a well-posed homogeneous linear program (LP) for C. The entries of C may be found numerically by sampling several distinct α ∈ G and concatenating the linear constraints (equation 5) to form the final LP. The solutions for C form a linear subspace of Rn3×(n1n2) given by the nullspace of some matrix we denote C[ρ1, ρ2, ρ3]. Tensor Product Nonlinearities: Tensor product nonlinearities, including norm nonlinearities, use the Clebsch-Gordan coefficients defined above to compute equivariant quadratic functions of multiple G-representations within the G-equivariant model. This was demonstrated for the case of SE(3) by Thomas et al. (2018); Kondor et al. (2018) and for SO(3, 1) by Bogatskiy et al. (2020)." }, { "heading": "3 METHODS", "text": "" }, { "heading": "3.1 LEARNING LIE GROUP REPRESENTATIONS", "text": "For a matrix M ∈ Rn×n we denote its Frobenius and L1 norms by |M |2F = ∑ 1≤i,j≤n |Mij |2, |M |1 = ∑ 1≤i,j≤n |Mij |.\nThe approach of LearnRep is to first learn a Lie algebra representation and then obtain its corresponding group representation through the matrix exponential. Fix a t-dimensional Lie algebra A with structure constants Aijk as defined in equation 4. Fix a positive integer n as the dimension of the representation of A. Then let the matrices T1, ..., Tt ∈ Rn×n be optimization variables, and define the following loss function on the Ti:\nL[T1, ..., Tt] = max (\n1, max 1≤i≤t\n1\n|Ti|2F ) ︸ ︷︷ ︸\nN [Ti]−1\n× ∑\n1≤i≤j≤t ∣∣∣∣∣[Ti, Tj ]−∑ k AijkTk ∣∣∣∣∣ 1 . (6)\nThis is the magnitude of violation of the structure constants of A, multiplied by a norm penalty term N [Ti]\n−1 (this penalty is plotted separately in figure 2). The purpose of the norm penalty is to avoid convergence to a solution where Ti = 0n×n for any i, which will act trivially when restricted to the nontrivial subgroup {etTi : t ∈ R}. We pose the optimization problem:\nmin T1,...,Tt∈Rn×n\nL[T1, ..., Tt].\nThe generators were initialized with entries from the standard normal distribution. Gradient descent was performed in PyTorch with the Adam optimizer (Kingma & Ba, 2014) with initial learning rate 0.1. The learning rate decreased exponentially when loss plateaued. The results are shown in figure 2." }, { "heading": "3.1.1 VERIFYING IRREDUCIBILITY OF LEARNED REPRESENTATIONS", "text": "Suppose we have converged to T1, . . . Tt such that L[Ti] = 0. Then the T1, ..., Tt are a nonzero n-dimensional representation of the Lie algebra A. The groups considered here are covered by the exponential map applied to their Lie algebras, so for each α ∈ G there exist b1, . . . 
, bt ∈ R such that\nρ(α) = exp [ t∑ i=1 biTi ] ,\nwhere ρ is any n−dimensional representation of G and exp is the matrix exponential. This ρ : G 7→ Rn×n is then a representation of the Lie group. Throughout this section, ρ denotes this representation. In general ρ may leave some nontrivial subspace invariant. In this case it is reducible and splits as the direct sum of lower-dimensional irreducible representations ρi as explained in 2.4:\nρ ∼= ρ1 ⊕ . . .⊕ ρ`.\nRecall that any representation may be obtained as such a direct sum of irreducible representations with dimensions n1, . . . , n` satisfying n = ∑` i=1 ni. If n is set to the minimum dimension of a nontrivial irreducible representation, the only permissible partitions of n have ` = 1 and ` = n – as the latter representation is trivial, equation 6 diverges, so LearnRep can only converge to an irreducible n dimensional representation.3 It is important to verify that the learned ρ is indeed irreducible with ` = 1. To validate that ρ is irreducible, LearnRep computes its tensor product structure and compares with the expected structure. Specifically, it computes the Clebsch-Gordan coefficients for the direct-sum decomposition of the tensor product of the learned representation ρ with several other known representations ρ1, ..., ρr. section 2.5 defines these coefficients and explains how they are computed from the nullspace of the matrix C = C[ρ, ρ1, ρ2], in which ρ2 appears in the decomposition of ρ⊗ ρ1. Let ρ1, ρ2 denote two other known representations, and consider the Clebsch-Gordan coefficients C such that Cρ ⊗ ρ1 = ρ3C. The dimension of the nullspace of C indicates the number of unique nonzero matrices C of Clebsch-Gordan coefficients. The singular values of C are denoted SV1(C) ≤ ... ≤ SV`(C). The ratio\nr(C) := SV2(C)/SV1(C) (7)\ndiverges only if the nullspace is one dimensional which therefore corresponds to a unique solution for C. The number of expected solutions is known (e.g., it may be computed using the same technique from the formulae for the irreducible representations). Therefore if r(C) diverges for exactly the choices of ρ1, ρ2 where the theory indicates that unique nonzero Clebsch-Gordan coefficients exist, then this is consistent with our having learned an irreducible representation of the group G.\nClearly the tensor product with the trivial representation ρ1 = 1 is ρ ⊗ 1 = ρ. In this case, the permissible C correspond to G−linear maps Rn → Rn2 . By a result of Schur (1905) (Schur’s Lemma), the only such (nonzero) maps are isomorphisms. Therefore a divergent value of r(C) when ρ1 = 1 indicates that ρ ∼= ρ2. This is shown in the top row of figure 3 and discussed further in section 5.1." }, { "heading": "3.1.2 STOPPING CONDITION", "text": "Similar to (Rao & Ruderman, 1999), LearnRep restarts gradient descent several times starting from random initialization points. A restart is triggered if loss plateaus and the learning rate is smaller than the loss by a factor of at most 10−4. The tensor product structure is computed upon convergence to loss under 10−9, a restart is triggered if the divergences of r(C) do not agree with the theoretical prediction, indicating a reducible representation." }, { "heading": "3.2 SPACETIMENET ARCHITECTURE", "text": "We obtain all Clebsch-Gordan coefficients through the procedure explained in section 2.5. We place them in a tensor: Cg,qr,ls,mt. 
This notation corresponds to taking the tensor product of an element of the lth group representation space indexed by s with an element of the mth group representation space indexed by t, and projecting it onto the qth group representation space indexed by r. The space\n3This applies to our experiments learning SO(3) representations, with n = 3.\nof possible Clebsch-Gordan coefficients can be multidimensional.4 We use an index g to carry the dimension within the space of Clebsch-Gordan coefficients.\nThe trainable weights in SpacetimeNet are complex-valued filter weights denoted fkqg and channelmixing weights denoted W kqcgd. Each layer builds a collection of equivariant convolutional filters F kxijqr from the geometry of the point cloud. Let q\n′ denote the index of the group representation in which the points are embedded. Let Xxir denote the point coordinates, in which x indexes the batch dimension, i indexes the points, and r indexes the q′ group representation space. Define the (globally) translation-invariant quantity ∆Xxijr := Xxjr −Xxir. The equivariant filters at layer k are:\nF kxijqr = δqq′∆Xxijr + ∑ s,t,g Cg,qr,q′s,q′tf k qg∆Xxijs∆Xxijt. (8)\nThe forward pass consists of tensor product nonlinearities between equivariant filters and activations. The input and activations for the kth layer of the network are defined on a tensor V kximct, where x is the batch dimension, i indexes the points, m is the group representation index, c is the channel index, t indexes the group representation space. Our mixing weights are then defined for the kth layer as W kqcgd with layer update rule:\nV k+1xiqcr = ∑\ng,l,s,m,t,d,j\nCg,qr,ls,mtF k xijlsV k xjmdtW k qcgd. (9)\nA proof that SpacetimeNet is Pn-equivariant is given in Appendix A.2." }, { "heading": "4 RELATED WORK", "text": "" }, { "heading": "4.1 LEARNING LIE GROUP REPRESENTATIONS", "text": "Several authors have investigated automated means of identifying Lie group representations. (Rao & Ruderman, 1999) used gradient descent with several starting points to find the Lie group generators, given many examples of data which had been transformed by the group. Applying the technique requires knowledge of how the group acts on a representation space. Here we know the Lie algebra structure but we do not know how to compute its representations. Tai et al. (2019) gave a closed-form solution for the canonical coordinates for Lie groups. But their formula only applies for Abelian oneparameter Lie groups, excluding SO(3),SO(2, 1), and SO(3, 1). Cohen & Welling (2014) devised a probabilistic model to learn representations of compact, commutative Lie groups from pairs of images related by group transformations. In the present work we demonstrate a new approach to handle noncompact and noncommutative groups such as SO(3),SO(2, 1), and SO(3, 1). Computer algebra software such as the LiE package developed by Van Leeuwen et al. (1992) automates some calculations related to completely reducible Lie groups. Unfortunately this limits us when considering novel Lie groups where the representation theory is less well-understood." }, { "heading": "4.2 EQUIVARIANT NEURAL NETWORKS", "text": "Beginning with the success of (approximately) translation-equivariant CNNs introduced by LeCun et al. (1989) for image recognition, a line of work has extended equivariance to additional continuous symmetry groups. 
Most relevant are the architectures for groups SE(2) (Worrall et al., 2017; Weiler & Cesa, 2019), SE(3) (Weiler et al., 2018; Cohen et al., 2019; Kondor et al., 2018; Thomas et al., 2018; Cohen et al., 2018; Kondor, 2018; Gao et al., 2020; Anderson et al., 2019; Fuchs et al., 2020; Eismann et al., 2020), and the group of Galilean boosts (Zhu et al., 2019).\nThe work by Thomas et al. (2018); Kondor et al. (2018); Anderson et al. (2019); Bogatskiy et al. (2020) used Clebsch-Gordan coefficients in their equivariant neural networks. Weiler et al. (2018), generalized by Cohen et al. (2019) showed all equivariant linear maps are convolutions whose kernels satisfy some linear constraints. In our work we obtain Clebsch-Gordan coefficients from similar linear constraints (equation 5) and use them to show that the learned representations are irreducible. We also use them in SpacetimeNet. Griffiths & Griffiths (2005) provide an introductory exposition of Clebsch-Gordan coefficients and Gurarie (1992) provides a more general exposition.\n4This is common if a group representation is itself obtained via tensor product.\nOne of the first constructions that addressed spatiotemporal symmetries was by Zhu et al. (2019). They introduce motion-equivariant networks to handle linear optical flow of an observer moving at a fixed speed. They use a canonical coordinate system in which optical flow manifests as a translation, as described for general one dimensional Lie groups by Tai et al. (2019). This allows them to use the translation equivariance of CNNs to produce Galilean boost-equivariance. However, this gives up equivariance to translation in the original coordinate system. To maintain approximate translation-equivariance, the authors apply a spatial transformer network (Jaderberg et al., 2015) to predict a landmark position in each example. This is similar to the work of Esteves et al. (2018), which achieved equivariance to 2D rotation and scale, and approximate equivariance to translation.\nThe first mention of Poincaré-equivariant networks appears to be a work by Cheng et al. (2019) on the link between covariance in ML and physics. Concurrently to our work, Bogatskiy et al. (2020) constructed a Lorentz-equivariant model which operated on irreducible representations of the Lorentz group, derived similarly to Appendix A.1. This work also made use of the Clebsch-Gordan coefficients, and the model was applied to experimental particle physics rather than object-tracking. Another work by Finzi et al. (2020) concurrent to our own proposed a framework for building models equivariant to arbitrary Lie groups. This work also made use of the exponential and logarithm maps between Lie algebra and group. It does not provide a technique for identifying the Lie algebra representations. Our ideas complement this line of work by providing an algorithm (LearnRep) that solves for the representations numerically." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 CONVERGENCE OF LEARNREP TO IRREDUCIBLE REPRESENTATIONS", "text": "We apply LearnRep to SO(3),SO(2, 1), and SO(3, 1) to learn 3, 3, and 4 dimensional irreducible representations respectively. The loss function converges arbitrarily close to 0 with the penalty term bounded above by a constant. We exponentiate the resulting algebra representation matrices to obtain group representations and calculate the tensor product structure as described in section 3.1.1 The details of this calculation are in Appendix A.4 and shown in figure 3. 
The results indicate that the learned representations are irreducible representations of the associated Lie algebras to within numerical error of about $10^{-6}$. Schur's Lemma in the special case of the tensor product with the trivial representation indicates the isomorphism class of each learned group representation." }, { "heading": "5.2 POINCARÉ-EQUIVARIANT OBJECT-TRACKING NETWORKS", "text": "We created MNIST-Live, a benchmark dataset of spacetime point clouds sampled from digits from the MNIST dataset moving uniformly through space. Each sample consists of 64 points with uniformly random times $t \in [-1/2, 1/2]$, and spatial coordinates sampled from a 2D probability density function proportional to the pixel intensity. Using instances of the 0 and 9 classes, we train on examples with zero velocity and evaluate on examples with random velocity and orientation. This dataset is analogous to data from an event camera (see (Orchard et al., 2015)) or LIDAR system. We train 3-layer SO(2,1)- and SO(3,1)-equivariant SpacetimeNet models with 3 channels and batch size 16 on 4096 MNIST-Live examples and evaluate on a dev set of 124 examples. We obtain dev accuracy of $80 \pm 5\%$ as shown in figure 4 of the Appendix." }, { "heading": "5.3 CONCLUSION", "text": "We envision many applications of Poincaré-equivariant deep neural networks beyond the physics of particles and plasmas. SpacetimeNet can identify and track simple objects as they move through 3D space. This suggests that Lorentz-equivariance is a useful prior for object-tracking tasks. With a treatment of bandlimiting and resampling as in Worrall et al. (2017); Weiler et al. (2018), our work could be extended to build Poincaré-equivariant networks for volumetric data. More broadly, understanding the representations of noncompact and noncommutative Lie groups may enable the construction of networks equivariant to new sets of symmetries such as the Galilean group. Since the representation theory of these groups is not entirely understood, automated techniques such as LearnRep could play a beneficial role." }, { "heading": "A APPENDIX", "text": "A.1 ANALYTIC DERIVATION OF LORENTZ GROUP REPRESENTATIONS\nTo compare our learned group representations with those obtained through prior methods, we require analytical formulae for the Lie algebra representations for the algebras $\mathfrak{so}(3)$, $\mathfrak{so}(3,1)$, and $\mathfrak{so}(2,1)$. The case of $\mathfrak{so}(3)$ has a well-known solution (see Griffiths & Griffiths (2005)). If complex matrices are permissible, the library QuTiP (Johansson et al., 2013) has a function "jmat" that readily gives the representation matrices. A formula to obtain real-valued representation matrices is given in Pinchon & Hoggan (2007) and a software implementation is available at Cohen et al. (2020). The three-dimensional Lie algebra $\mathfrak{so}(2,1) = \mathrm{span}\{K_x, K_y, J_z\}$ has structure constants given by equation 3. In fact, these three generators $K_x, K_y, J_z$ may be rescaled so that they satisfy equation 2 instead. This is due to the isomorphism $\mathfrak{so}(3) \cong \mathfrak{so}(2,1)$ (over the complex numbers). Specifically, letting $\{L_x, L_y, L_z\}$ denote a Lie algebra representation of $\mathfrak{so}(3)$ and defining\n$$K_x := -iL_x, \qquad K_y := -iL_y, \qquad J_z := L_z,$$\nit may be easily checked that $K_x, K_y, J_z$ satisfy the applicable commutation relations from equation 3. This reflects the physical intuition that time behaves like an imaginary dimension of space.\nThe final Lie algebra for which we require explicit representation matrix formulas is $\mathfrak{so}(3,1)$.
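Before turning to so(3,1), a quick numerical check of the rescaling argument above is possible. Starting from Hermitian spin-1 matrices with [L_i, L_j] = i ε_{ijk} L_k, the substitution K_x = −iL_x, K_y = −iL_y, J_z = L_z yields [K_x, K_y] = −iJ_z, [J_z, K_x] = iK_y, [J_z, K_y] = −iK_x; taking these as the so(2,1) relations of equations 2 and 3 (which are not reproduced in this excerpt) is our assumption.

```python
import numpy as np

def comm(A, B):
    return A @ B - B @ A

# Hermitian spin-1 so(3) matrices built from the real rotation generators.
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0
Lx, Ly, Lz = (1j * (-eps[a]) for a in range(3))   # satisfy [L_i, L_j] = i eps_{ijk} L_k

Kx, Ky, Jz = -1j * Lx, -1j * Ly, Lz               # the rescaling from the text

assert np.allclose(comm(Lx, Ly), 1j * Lz)         # sanity check on the so(3) input
assert np.allclose(comm(Kx, Ky), -1j * Jz)        # two "boosts" close into a rotation
assert np.allclose(comm(Jz, Kx), 1j * Ky)
assert np.allclose(comm(Jz, Ky), -1j * Kx)
print("so(2,1) commutation relations verified")
```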
Following Weinberg (1995), we define new generators $A_i, B_i$ as\n$$A_i := \tfrac{1}{2}(J_i + iK_i), \qquad B_i := \tfrac{1}{2}(J_i - iK_i), \qquad (10)$$\nand we see that the $\mathfrak{so}(3,1)$ commutators of equations 2 and 3 become\n$$[A_i, A_j] = i\epsilon_{ijk}A_k, \qquad [B_i, B_j] = i\epsilon_{ijk}B_k, \qquad [A_i, B_j] = 0. \qquad (11)$$\nTherefore $\mathfrak{so}(3,1) \cong \mathfrak{so}(3) \oplus \mathfrak{so}(3)$, and the irreducible algebra representations of $\mathfrak{so}(3,1)$ may be obtained as the direct sum of two irreducible algebra representations of $\mathfrak{so}(3)$.\nA.2 PROOF THAT SPACETIMENET IS POINCARÉ-EQUIVARIANT\nConsider an arbitrary Poincaré group transformation $\alpha \in P_n$, and write $\alpha = \beta t$ in which $\beta \in SO(n,1)$ and $t$ is a translation. Suppose we apply this $\alpha$ to the inputs of equation 8 through the representations indexed by $q$: $\rho_q(\alpha)_{st}$, in which $s, t$ index the representation matrices. Then since the translation $t$ leaves $\Delta X$ invariant, the resulting filters will be\n$$F^k_{xijqr} = \delta_{qq'}\sum_{r'}\rho_{q'}(\beta)_{rr'}\Delta X_{xijr'} + \sum_{s,t,g} C_{g,qr,q's,q't}\, f^k_{qg} \sum_{s',t'} \rho_{q'}(\beta)_{ss'}\Delta X_{xijs'}\,\rho_{q'}(\beta)_{tt'}\Delta X_{xijt'}$$\n$$= \delta_{qq'}\sum_{r'}\rho_{q'}(\beta)_{rr'}\Delta X_{xijr'} + \sum_{g,s',t'}\Big(\sum_{s,t} C_{g,qr,q's,q't}\,\rho_{q'}(\beta)_{ss'}\rho_{q'}(\beta)_{tt'}\Big) f^k_{qg}\,\Delta X_{xijs'}\Delta X_{xijt'}$$\n$$= \delta_{qq'}\sum_{r'}\rho_{q'}(\beta)_{rr'}\Delta X_{xijr'} + \sum_{s,t,g,r'}\big(\rho_q(\beta)_{rr'}\, C_{g,qr',q's,q't}\big) f^k_{qg}\,\Delta X_{xijs}\Delta X_{xijt}$$\n$$= \sum_{r'}\rho_q(\beta)_{rr'}\Big(\delta_{qq'}\Delta X_{xijr'} + \sum_{s,t,g} C_{g,qr',q's,q't}\, f^k_{qg}\,\Delta X_{xijs}\Delta X_{xijt}\Big) = \sum_{r'}\rho_q(\beta)_{rr'}\, F^k_{xijqr'},$$\nwhere we have used equation 5. The network will be equivariant if each layer update is equivariant. Recall the layer update rule of equation 9:\n$$V^{k+1}_{xiqcr} = \sum_{g,l,s,m,t,d,j} C_{g,qr,ls,mt}\, F^k_{xijls}\, V^k_{xjmdt}\, W^k_{qcgd}.$$\nSuppose for the same transformation $\alpha = \beta t$ above, that $V^k$ and $\Delta X$ are transformed by $\alpha$. Then because the activations associated with each point are representations of $SO(n,1)$, they are invariant to the global translation $t$ of the point cloud and we have\n$$V^{k+1}_{xiqcr} = \sum_{g,l,s,m,t,d,j} C_{g,qr,ls,mt}\sum_{s'}\rho_l(\beta)_{ss'}\, F^k_{xijls'}\sum_{t'}\rho_m(\beta)_{tt'}\, V^k_{xjmdt'}\, W^k_{qcgd}$$\n$$= \sum_{s',t'}\sum_{g,l,s,m,t,d,j}\big(C_{g,qr,ls,mt}\,\rho_l(\beta)_{ss'}\rho_m(\beta)_{tt'}\big)\, F^k_{xijls'}\, V^k_{xjmdt'}\, W^k_{qcgd}$$\n$$= \sum_{g,l,s,m,t,d,j,r'}\big(\rho_q(\beta)_{rr'}\, C_{g,qr',ls,mt}\big)\, F^k_{xijls}\, V^k_{xjmdt}\, W^k_{qcgd} = \sum_{r'}\rho_q(\beta)_{rr'}\, V^{k+1}_{xiqcr'},$$\nwhere again we applied equation 5.\nA.3 EQUIVARIANT CONVOLUTIONS\nConsider data on a point cloud consisting of a finite set of spacetime points $\{\vec{x}_i\} \subset \mathbb{R}^4$, a representation $\rho_0 : SO(3,1) \to \mathbb{R}^{4\times 4}$ of the Lorentz group defining its action upon spacetime, and feature maps $\{\vec{u}_i\} \subset \mathbb{R}^m$, $\{\vec{v}_i\} \subset \mathbb{R}^n$ associated with representations $\rho_u : SO(3,1) \to \mathbb{R}^{m\times m}$ and $\rho_v : SO(3,1) \to \mathbb{R}^{n\times n}$. A convolution of this feature map can be written as\n$$\vec{u}'_i = \sum_j \kappa(\vec{x}_j - \vec{x}_i)\,\vec{u}_j,$$\nin which $\kappa : \mathbb{R}^4 \to \mathbb{R}^{n\times m}$, a matrix-valued function of spacetime, is the filter kernel. $P_3$-equivariance dictates that for any $\alpha \in SO(3,1)$,\n$$\rho_v(\alpha)\sum_j \kappa(\vec{x}_j - \vec{x}_i)\,\vec{u}_j = \sum_j \kappa\big(\rho_0(\alpha)(\vec{x}_j - \vec{x}_i)\big)\,\rho_u(\alpha)\,\vec{u}_j \;\Rightarrow\; \kappa(\Delta\vec{x}) = \rho_v(\alpha^{-1})\,\kappa\big(\rho_0(\alpha)\Delta\vec{x}\big)\,\rho_u(\alpha). \qquad (12)$$\nTherefore a single kernel matrix in $\mathbb{R}^{n\times m}$ may be learned for each coset of spacetime under the action of $SO(3,1)$. The cosets are indexed by the invariant $t^2 - x^2 - y^2 - z^2$. The kernel may then be obtained at an arbitrary point $\vec{x} \in \mathbb{R}^4$ from equation 12 by computing an $\alpha$ that relates it to the coset representative $\vec{x}_0$: $\vec{x} = \rho_0(\alpha)\vec{x}_0$. A natural choice of coset representatives for $SO(3,1)$ acting upon $\mathbb{R}^4$ is the set of points $\{(t,0,0,0) : t \in \mathbb{R}^+\} \cup \{(0,x,0,0) : x \in \mathbb{R}^+\} \cup \{(t,ct,0,0) : t \in \mathbb{R}^+\}$.\nA.4 TENSOR PRODUCT STRUCTURE OF LEARNED SO(3), SO(2,1), SO(3,1) GROUP REPRESENTATIONS\nWe quantify the uniqueness of each set of Clebsch-Gordan coefficients in terms of the diagnostic ratio $r(C)$ defined in equation 7.
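Continuing in the same spirit, here is a small numerical sanity check of the decomposition in equations 10 and 11 above, using the defining 4-dimensional vector representation of so(3,1) on coordinates (t, x, y, z). The particular matrix conventions (Hermitian rotations J_i and anti-Hermitian boosts K_i, built so that [J_i, J_j] = iε_{ijk}J_k, [K_i, K_j] = −iε_{ijk}J_k, [J_i, K_j] = iε_{ijk}K_k) are our assumptions.

```python
import numpy as np

def comm(A, B):
    return A @ B - B @ A

eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

J, K = [], []
for a in range(3):
    R = np.zeros((4, 4), dtype=complex)    # rotation generator acting on (x, y, z)
    R[1:, 1:] = -eps[a]
    Bst = np.zeros((4, 4), dtype=complex)  # boost generator mixing t with spatial axis a
    Bst[0, a + 1] = Bst[a + 1, 0] = 1.0
    J.append(1j * R)
    K.append(1j * Bst)

A = [(J[i] + 1j * K[i]) / 2 for i in range(3)]   # equation 10
B = [(J[i] - 1j * K[i]) / 2 for i in range(3)]

for i in range(3):
    for j in range(3):
        assert np.allclose(comm(A[i], A[j]), 1j * sum(eps[i, j, k] * A[k] for k in range(3)))
        assert np.allclose(comm(B[i], B[j]), 1j * sum(eps[i, j, k] * B[k] for k in range(3)))
        assert np.allclose(comm(A[i], B[j]), np.zeros((4, 4)))   # the two copies commute
print("so(3,1) decomposes as so(3) ⊕ so(3)")
```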
Recall that the value of $r$ becomes large only if there is a nondegenerate nullspace corresponding to a unique set of Clebsch-Gordan coefficients. For SO(3) and SO(2,1), the irreducible group representations are labeled by an integer which is sometimes called the spin. We label learned group representations with a primed integer ($i'$). For the case of SO(3,1) the irreducible group representations are obtained from two irreducible group representations of $\mathfrak{so}(3)$ as explained in section A.1, and we label these representations with both spins, i.e. $(s_1, s_2)$. We again label the learned group representations of SO(3,1) with primed spins, i.e. $(s'_1, s'_2)$. The tensor product structures of the representations are shown in figure 3.\nWe have produced a software library titled Lie Algebraic Networks (LAN) built on PyTorch, which derives all Clebsch-Gordan coefficients and computes the forward pass of Lie group equivariant neural networks. LAN also deals with Lie algebra representations, allowing for operations such as taking the tensor product of multiple group representations. Figure 5 demonstrates the LAN library. Starting from several representations for a Lie algebra, LAN can automatically construct a neural network equivariant to the associated Lie group with the desired number of layers and channels. We present our experimental results training SO(2,1)- and SO(3,1)-equivariant object-tracking networks in section 5.2.\nA.5 SUPPLEMENTARY FIGURES" } ]
2,020
null
SP:f21bf18198261a5400f8aa437e305ea60b7695ac
[ "The paper introduces a geometric variational autoencoder for capturing protein structural ensembles, disentangling intrinsic and extrinsic geometry into separate latent spaces. The model is shown to accurately reconstruct protein structure, and the difference between the intrinsic and extrinsic latent spaces are explored. Finally, the model is tested in a transfer-learning setting, where it displays encouraging results." ]
Understanding the protein conformational landscape is critical, as protein function, as well as modulations thereof due to ligand binding or changes in environment, are intimately connected with structural variations. This work focuses on learning a generative neural network on a simulated ensemble of protein structures obtained using molecular simulation to characterize the distinct structural fluctuations of a protein bound to various drug molecules. Specifically, we use a geometric autoencoder framework to learn separate latent space encodings of the intrinsic and extrinsic geometries of the system. For this purpose, the proposed Protein Geometric AutoEncoder (ProGAE) model is trained on the length of the alpha-carbon pseudobonds and the orientation of the backbone bonds of the protein. Using ProGAE latent embeddings, we reconstruct and generate the conformational ensemble of a protein at or near the experimental resolution. Empowered by the disentangled latent space learning, the intrinsic latent embedding helps in geometric error correction, whereas the extrinsic latent embedding is successfully used for classification or property prediction of different drugs bound to a specific protein. Additionally, ProGAE can be transferred to the structures of a different state of the same protein or to a completely different protein of a different size, where only the dense layer decoding from the latent representation needs to be retrained. Results show that our geometric learning-based method enjoys both accuracy and efficiency for generating complex structural variations, charting the path toward scalable and improved approaches for analyzing and enhancing molecular simulations.
[]
[ { "authors": [ "Debsindhu Bhowmik", "Shang Gao", "Michael T Young", "Arvind Ramanathan" ], "title": "Deep clustering of protein folding simulations", "venue": "BMC bioinformatics,", "year": 2018 }, { "authors": [ "Luigi Bonati", "Yue-Yu Zhang", "Michele Parrinello" ], "title": "Neural networks-based variationally enhanced sampling", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Ricky TQ Chen", "Xuechen Li", "Roger B Grosse", "David K Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Wei Chen", "Andrew L Ferguson" ], "title": "Molecular enhanced sampling with autoencoders: On-thefly collective variable discovery and accelerated free energy landscape exploration", "venue": "Journal of computational chemistry,", "year": 2018 }, { "authors": [ "Wei Chen", "Hythem Sidky", "Andrew L Ferguson" ], "title": "Nonlinear discovery of slow molecular modes using state-free reversible vampnets", "venue": "The Journal of chemical physics,", "year": 2019 }, { "authors": [ "Zhengdao Chen", "Jianyu Zhang", "Martin Arjovsky", "Léon Bottou" ], "title": "Symplectic recurrent neural networks, 2020", "venue": null, "year": 2020 }, { "authors": [ "Luca Cosmo", "Antonio Norelli", "Oshri Halimi", "Ron Kimmel", "Emanuele Rodolà" ], "title": "Limp: Learning latent shape representations with metric preservation priors", "venue": "arXiv preprint arXiv:2003.12283,", "year": 2020 }, { "authors": [ "D.E. Shaw Research" ], "title": "Molecular dynamics simulations related to sars-cov-2", "venue": "http://www. deshawresearch.com/resources_sarscov2.html,", "year": 2020 }, { "authors": [ "Matteo T Degiacomi" ], "title": "Coupling molecular dynamics and deep learning to mine protein", "venue": "conformational space. 
Structure,", "year": 2019 }, { "authors": [ "Manfredo P Do Carmo" ], "title": "Differential geometry of curves and surfaces: revised and updated second edition", "venue": null, "year": 2016 }, { "authors": [ "P Gainza", "F Sverrisson", "F Monti", "E Rodola", "MM Bronstein", "BE Correia" ], "title": "Deciphering interaction fingerprints from protein molecular surfaces", "venue": "bioRxiv, pp", "year": 2019 }, { "authors": [ "Ross Girshick" ], "title": "Fast r-cnn", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Liyu Gong", "Qiang Cheng" ], "title": "Exploiting edge features for graph neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Jordan Graves", "Jacob Byerly", "Eduardo Priego", "Naren Makkapati", "S Vince Parish", "Brenda Medellin", "Monica Berrondo" ], "title": "A review of deep learning methods for antibodies", "venue": null, "year": 2020 }, { "authors": [ "Xiaojie Guo", "Sivani Tadepalli", "Liang Zhao", "Amarda Shehu" ], "title": "Generating tertiary protein structures via an interpretative variational autoencoder", "venue": "arXiv preprint arXiv:2004.07119,", "year": 2020 }, { "authors": [ "David R Hardoon", "Sandor Szedmak", "John Shawe-Taylor" ], "title": "Canonical correlation analysis: An overview with application to learning methods", "venue": "Neural computation,", "year": 2004 }, { "authors": [ "Pedro Hermosilla", "Marco Schäfer", "Matěj Lang", "Gloria Fackelmann", "Pere Pau Vázquez", "Barbora Kozlı́ková", "Michael Krone", "Tobias Ritschel", "Timo Ropinski" ], "title": "Proteinn: Intrinsic-extrinsic convolution and pooling for scalable deep protein", "venue": null, "year": 2020 }, { "authors": [ "John Ingraham", "Vikas Garg", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Generative models for graphbased protein design", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "John Ingraham", "Adam J Riesselman", "Chris Sander", "Debora S Marks" ], "title": "Learning protein structure with a differentiable simulator", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Mariusz Jaskolski", "Miroslaw Gilski", "Zbigniew Dauter", "Alexander Wlodawer" ], "title": "Stereochemical restraints revisited: how accurate are refinement targets and how much should protein structures be allowed to deviate from them", "venue": "Acta Crystallographica Section D: Biological Crystallography,", "year": 2007 }, { "authors": [ "Bowen Jing", "Stephan Eismann", "Patricia Suriana", "Raphael J.L. 
Townshend", "Ron Dror" ], "title": "Learning from protein structure with geometric vector perceptrons, 2020", "venue": null, "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Johannes Klicpera", "Janek Groß", "Stephan Günnemann" ], "title": "Directional message passing for molecular graphs", "venue": "arXiv preprint arXiv:2003.03123,", "year": 2020 }, { "authors": [ "Hyungro Lee", "Heng Ma", "Matteo Turilli", "Debsindhu Bhowmik", "Shantenu Jha", "Arvind Ramanathan" ], "title": "Deepdrivemd: Deep-learning driven adaptive molecular simulations for protein", "venue": null, "year": 2019 }, { "authors": [ "Andreas Mardt", "Luca Pasquali", "Hao Wu", "Frank Noé" ], "title": "Vampnets for deep learning of molecular kinetics", "venue": "Nature Communications,", "year": 2018 }, { "authors": [ "Frank Noé", "Simon Olsson", "Jonas Köhler", "Hao Wu" ], "title": "Boltzmann generators: Sampling equilibrium states of many-body systems with deep", "venue": "learning. Science,", "year": 2019 }, { "authors": [ "Frank Noé", "Gianni De Fabritiis", "Cecilia Clementi" ], "title": "Machine learning for protein folding and dynamics", "venue": "Current Opinion in Structural Biology,", "year": 2020 }, { "authors": [ "Frank Noé", "Alexandre Tkatchenko", "Klaus-Robert Müller", "Cecilia Clementi" ], "title": "Machine learning for molecular simulation", "venue": "Annual review of physical chemistry,", "year": 2020 }, { "authors": [ "Venkata K. Ramaswamy", "Chris G. Willcocks", "Matteo T. Degiacomi" ], "title": "Learning protein conformational space by enforcing physics with convolutions and latent interpolations, 2019", "venue": null, "year": 2019 }, { "authors": [ "Venkata K. Ramaswamy", "Chris G. Willcocks", "Matteo T. Degiacomi" ], "title": "Learning protein conformational space by enforcing physics with convolutions and latent interpolations, 2020", "venue": null, "year": 2020 }, { "authors": [ "João Marcelo Lamim Ribeiro", "Pablo Bravo", "Yihang Wang", "Pratyush Tiwary" ], "title": "Reweighted autoencoded variational bayes for enhanced sampling (rave)", "venue": "The Journal of chemical physics,", "year": 2018 }, { "authors": [ "N. Joseph Tatro", "Stefan C. Schonsheck", "Rongjie Lai" ], "title": "Unsupervised geometric disentanglement for surfaces via cfan-vae", "venue": null, "year": 2020 }, { "authors": [ "Sun-Ting Tsai", "En-Jui Kuo", "Pratyush Tiwary" ], "title": "Learning molecular dynamics with simple language model built upon long short-term memory neural network, 2020", "venue": null, "year": 2020 }, { "authors": [ "Yasemin Bozkurt Varolgüneş", "Tristan Bereau", "Joseph F Rudzinski" ], "title": "Interpretable embeddings from molecular simulations using gaussian mixture variational autoencoders", "venue": "Machine Learning: Science and Technology,", "year": 2020 }, { "authors": [ "Wujie Wang", "Simon Axelrod", "Rafael Gómez-Bombarelli" ], "title": "Differentiable molecular simulations for control and learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Wayne Wu", "Kaidi Cao", "Cheng Li", "Chen Qian", "Chen Change Loy" ], "title": "Disentangling content and style via unsupervised geometry distillation, 2019", "venue": null, "year": 2019 }, { "authors": [ "Jie Yang", "Kaichun Mo", "Yu-Kun Lai", "Leonidas J. 
Guibas", "Lin Gao" ], "title": "Dsm-net: Disentangled structured mesh net for controllable generation of fine geometry, 2020", "venue": null, "year": 2020 }, { "authors": [ "Jun Zhang", "Yi Isaac Yang", "Frank Noé" ], "title": "Targeted adversarial learning optimized sampling", "venue": "The journal of physical chemistry letters,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "The complex and time-consuming calculations in molecular simulations have been significantly impacted by the application of machine learning techniques in recent years. In particular, deep learning has been applied to analysis and simulation of molecular trajectories to address diverse problems, such as estimating free energy surfaces, defining optimal reaction coordinates, constructing Markov State Models, and enhancing molecular sampling. For a comprehensive review of deep learning methods for analyzing and enhancing molecular simulations, see (Noé et al., 2020a) and (Noé et al., 2020b).\nSpecifically, there has been interest in modeling the underlying conformational space of proteins by using deep generative models, e.g. (Ramaswamy et al., 2020) and (Bhowmik et al., 2018; Guo et al., 2020; Varolgüneş et al., 2020). This line of work has mainly attempted to respect the domain geometry by using convolutional AEs on features extracted from 3D structures. In parallel, learning directly from 3D structure has recently developed into an exciting and promising application area for deep learning. In this work, we learn the protein conformational space from a set of protein simulations using geometric deep learning. We also investigate how the geometry of a protein itself can assist learning and improve latent conformational space interpretability. Namely, we consider the influence of intrinsic and extrinsic geometry, where intrinsic geometry is independent of 3D embedding and extrinsic is not. Intrinsic geometric protein properties can be thought to be robust to conformation. To this end, we propose a Protein Geometric Autoencoder model, named ProGAE, to separately encode intrinsic and extrinsic protein geometries.\nThe main contributions of this work are summarized:\n• Inspired by recent unsupervised geometric disentanglement learning works (Tatro et al., 2020; Wu et al., 2019; Yang et al., 2020), we propose a novel geometric autoencoder named ProGAE that directly learns from 3D protein structures via separately encoding intrinsic and extrinsic geometries into disjoint latent spaces used to generate protein structures.\n• We further propose a novel formulation, in which network intrinsic input is taken as the Cα-Cα pseudo-bond distances, and the extrinsic input is the backbone bond orientations.\n• Analysis shows that the learned extrinsic geometric latent space can be used for drug classification and drug property prediction, where the drug is bound to the given protein.\n• We find that the intrinsic geometric latent space, even with small variation in the intrinsic input signal, is important for reducing geometric errors in reconstructed proteins.\n• We also demonstrate that the learned ProGAE can be transferred to a trajectory of the protein in a different state or a trajectory of a different protein all-together." }, { "heading": "1.1 RELATED WORK", "text": "Recently, a body of work has used deep learning to learn from protein structures (Graves et al., 2020; Jing et al., 2020; Klicpera et al., 2020). For example, Gainza et al. (2019) uses geometric deep learning to predict docking sites for protein interactions. Ingraham et al. (2019a) solves the inverse folding problem using a graph transformer on the protein backbone. Degiacomi (2019) uses an AE to generate candidate proteins for docking. Hermosilla et al. 
(2020) leverages the notion of intrinsic and extrinsic geometry to define an architecture for a fold classification task.\nAdditionally, there has been focus on directly learning the temporal aspects of molecular dynamics from simulation trajectories, which is not directly related to the current work. Please see Appendix A.1 for a detailed discussion.\nThere is an existing body of recent works that use AE-based approaches for either analyzing and/or generating structures from the latent space (Bhowmik et al., 2018; Guo et al., 2020; Ramaswamy et al., 2020; Varolgüneş et al., 2020), which are most closely related to this work. (Bhowmik et al., 2018) and (Guo et al., 2020) aim at learning from and generating protein contact maps, while ProGAE directly deals with 3D structures. Therefore a direct comparison of ProGAE with these methods is not possible. Ramaswamy et al. (2019) uses a 1D CNN autoencoder trained on backbone coordinates and uses a loss objective comprised of geometric MSE error and physics-based (bond length, bond angle, etc.) error. Due to the unavailability of code or a pre-trained model, we were unable to perform a direct comparison. Varolgüneş et al. (2020) uses a VAE with a Gaussian Mixture Prior for performing clustering of high-dimensional input configurations in the learned latent space. While the method works well on toy models and a standard Alanine Dipeptide benchmark, its performance drops as the size of the protein system grows to 15 amino acids, which is approximately an order of magnitude smaller than the protein systems studied here. Also, their approach is likely not going to scale well to larger systems due to the use of fully-connected layers in the encoder.\nThese mentioned works have not considered explicit disentangling of intrinsic and extrinsic geometries. To our knowledge, this work is the first to propose an autoencoder for the unsupervised modeling of the geometric disentanglement of protein conformational space captured in molecular simulations. This representation provides better interpretability of the latent space in terms of physico-chemical and geometric attributes, results in more geometrically accurate protein conformations, and scales and transfers well to larger protein systems." }, { "heading": "2 PROGAE FOR PROTEIN CONFORMATIONAL SPACE", "text": "First, we introduce the input signal for our novel geometric autoencoder, ProGAE. We then discuss how ProGAE utilizes this signal to generate the conformational space of a protein.\nGeometric Features of Protein as Network Input ProGAE functions by separately encoding intrinsic and extrinsic geometry with the goal of achieving better latent space interpretability. We clarify these geometric notions. Mathematically, we can consider a manifold (i.e. surface) independent of its embedding in Euclidean space. Properties that do not depend on this embedding are known as intrinsic geometric properties, while properties that do are referred to as extrinsic. As an example, given two atoms of a protein, the intrinsic distance between them is the minimum sum of bond lengths in the bond path connecting them, whereas the extrinsic distance is their Euclidean distance in $\mathbb{R}^3$. For an in-depth review of geometry, we refer the reader to (Do Carmo, 2016).\nAs we will train ProGAE to learn the conformational space of a given protein, the protein primary structure is implicit. Then in treating it as a geometric object, we view the protein at the level of its backbone, which specifies its shape.
Given primary structure, reconstructing the protein backbone is sufficient for reconstructing the entire protein. Of importance in the backbone are the Cα atoms, which are the centers of amino acids in the protein. Then a coarse-level description of the backbone is the Cα atoms connected linearly in terms of the protein sequence. This is known as the trace of the protein. We will use the backbone and trace as domains on which to define our signals.\nBoth the protein backbone and its trace can be viewed as polygonal chains in Euclidean space. They are depicted in Figure 1 with their geometric features as network input. We can see that a polygonal chain can be determined up to translation given both the length and orientation of its line segments. Then it follows that the protein backbone can be determined given the length and orientation of its bonds. Here the length of these bonds is intrinsic while the orientation is extrinsic. Thus, to decouple the intrinsic and extrinsic geometry, we can consider encoding these signals.\nThe lengths of covalent bonds undergo very little change during a simulation performed using an empirical force-field, like the simulations considered in this work. A standard deviation of less than 0.059Å from target bond lengths is common in PDB structures (Jaskolski et al., 2007). To this end, we instead consider intrinsic geometry at a coarse level, so that the resulting signal has more variability. Specifically, we use the length of the Cα-Cα pseudobonds in the trace as a representative of the intrinsic protein geometry, whereas backbone bond orientations capture extrinsic geometry.\nWe model the backbone by the graph $G_b = (V_b, E_b)$ and the backbone trace by the graph $G_t = (V_t, E_t)$. Then our intrinsic and extrinsic signals, $\mathrm{Int} : E_t \to \mathbb{R}$ and $\mathrm{Ext} : E_b \to \mathbb{R}^3$, are defined:\n$$\mathrm{Int}(E_{ij}) = \|E_{ij}\|_2, \; E_{ij} \in E_t, \qquad \mathrm{Ext}(E_{ij}) = \mathrm{sgn}(j - i)\,\frac{E_{ij}}{\|E_{ij}\|}, \; E_{ij} \in E_b. \qquad (1)$$\nNetwork Architecture With the network inputs defined, we discuss the architecture of ProGAE. The core idea is to create an intrinsic latent space $L_I \subset \mathbb{R}^{n_i}$ and an extrinsic latent space $L_E \subset \mathbb{R}^{n_e}$ via separately encoding the intrinsic and extrinsic signals. Consequently, our network contains two encoders, $\mathrm{Enc}_i$ and $\mathrm{Enc}_e$, where:\n$$\mathrm{Enc}_i \circ \mathrm{Int}(E_t) \in L_I, \qquad \mathrm{Enc}_e \circ \mathrm{Ext}(E_b) \in L_E. \qquad (2)$$\nWe then jointly decode these latent vectors to recover the coordinates of the atoms in the protein backbone. Thus, we formally define the decoder:\n$$\mathrm{Dec} : L_I \times L_E \to \mathbb{R}^{|V_b| \times 3}. \qquad (3)$$\nThis high-level structure of ProGAE is depicted in Figure 1. We provide additional details on the encoders and decoders. As these edge-based signals are defined on a geometric domain, it is sensible to learn feature representations using geometric convolution that respects the geometry of the data. The intrinsic encoder is simple, as the signal is defined on the backbone trace, which corresponds to a set of discrete curves. Here each curve corresponds to a protein fragment. Then we define $\mathrm{Enc}_i$ to be a series of 1D convolutions operating on each protein fragment. Each convolution is taken to have a kernel size of 3 and a stride of 2, and is followed with batch normalization layers and ReLU.\nIn contrast, the extrinsic encoder operates on the backbone, which we associate with a graph. So the layers of graph attention networks (GATs) introduced in (Veličković et al., 2017) are a natural tool to use, albeit with some modification.
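A minimal sketch of computing the two input signals of equation 1 from raw backbone coordinates is given below; the toy coordinates, the chain connectivity (consecutive atoms bonded), and the choice of every third atom as a Cα are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
backbone = np.cumsum(rng.normal(scale=0.8, size=(12, 3)), axis=0)  # toy backbone coordinates
ca = backbone[::3]                                # assume every 3rd backbone atom is a C-alpha

# Intrinsic signal (equation 1): lengths of consecutive C-alpha pseudobonds on the trace.
intrinsic = np.linalg.norm(ca[1:] - ca[:-1], axis=1)               # Int : E_t -> R

# Extrinsic signal (equation 1): unit orientations of the backbone bonds,
# with sgn(j - i) = +1 since edges are taken in sequence order (j = i + 1).
bonds = backbone[1:] - backbone[:-1]
extrinsic = bonds / np.linalg.norm(bonds, axis=1, keepdims=True)   # Ext : E_b -> R^3

print(intrinsic.shape, extrinsic.shape)           # (3,) pseudobonds, (11, 3) bond orientations
```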
Since the input signal is defined only on the edges of the graph, $E_b$, we define a signal on the graph vertices, $V_b$, as the average value of its incident edges,\n$$f_0(v_i) := \frac{1}{|\{j : E_{ij} \in E_b\}|}\sum_{j : E_{ij} \in E_b} \mathrm{Ext}(E_{ij}), \qquad v_i \in V_b. \qquad (4)$$\nThen the first layer of the extrinsic encoder uses the edge-convolution operator of (Gong & Cheng, 2019) to map this graph signal to a signal defined exclusively on the graph vertices, $V_b$. The rest of the encoder contains successive graph attention layers with sparsity defined by a given neighborhood radius. At each layer, the signal is downsampled by a factor of two based on farthest point sampling. Given $L$ layers, this defines a sequence of graphs, $\{G_{b,i}\}_{i=0}^L$, with increasing decimation. As with $\mathrm{Enc}_i$, each layer is followed with batch normalization and ReLU. Summarily, for $l = 1, 2, \ldots, L$,\n$$f_l = \sigma \circ \mathrm{BN} \circ \mathrm{GAT}(d_{l-1}) \text{ where } d_{l-1} = \mathrm{DS}(f_{l-1}; 2), \qquad f_1(v_i) := \mathrm{GAT}(f_0(V_b), \mathrm{Ext}(E_b)). \qquad (5)$$\nGlobal average pooling is applied to the encoder outputs to introduce invariance to the size of $V_t$ and $V_b$. Dense layers then map each result to their respective latent spaces, $L_I$ and $L_E$. The Tanh function is applied to bound the latent space. This produces the intrinsic and extrinsic latent codes, $z_i$ and $z_e$.\nThe latent code $z$ is taken as the concatenation of the two latent codes, $[z_i, z_e]$. A dense layer maps $z$ to a signal defined on the most decimated backbone graph, $G_{b,L}$. The structure of the decoder, $\mathrm{Dec}$, is then analogous to $\mathrm{Enc}_e$, though the convolutions are transposed. The output of $\mathrm{Dec}$ is the point cloud, $\hat{P}$, corresponding to the predicted coordinates of the backbone atoms, $V_b \approx P$.\nLoss Function The first term in the loss function is a basic reconstruction loss, where $P$ and $\hat{P}$ are taken to be the true and predicted coordinates of the protein backbone atoms. Namely, we evaluate their difference using Smooth-L1 loss. This loss is defined, with $\delta = 2$, as\n$$\mathrm{SmoothL1}(\mathbf{x}, \mathbf{y}) := \sum_{i=1}^{\#\mathbf{x}} z_i, \text{ where } z_i = \min\left(\frac{\delta^2}{2}(x_i - y_i)^2,\; \delta|x_i - y_i| - \frac{1}{2}\right). \qquad (6)$$\nThis loss function modifies L2 loss to be more robust to outliers (Girshick, 2015).\nAs the reconstruction loss depends on the embedding of the protein in Euclidean space, it may not best measure if intrinsic geometry is faithfully reconstructed. To address this, we consider two encoded proteins with latent codes, $[z_{i,1}, z_{e,1}]$ and $[z_{i,2}, z_{e,2}]$. Then we form a new latent variable,\n$$\hat{z}_i = (1 - \beta)z_{i,1} + \beta z_{i,2}, \qquad \hat{z}_e = z_{e,1}, \qquad \beta \sim U[0, 1]. \qquad (7)$$\nEach of these latent variables decodes to some point cloud $\hat{P}$. We let $\mathrm{Int}(\hat{E}_{t,\beta})$, $\mathrm{Int}(\hat{E}_{t,1})$, and $\mathrm{Int}(\hat{E}_{t,2})$ be the lengths of the Cα-Cα pseudobonds of the generated proteins from the interpolated latent code and the two given latent codes. We then introduce a bond length penalty given by,\n$$R(\hat{P}_1, \hat{P}_2) = \mathbb{E}_\beta\,\big\|\mathrm{Int}(\hat{E}_{t,\beta}) - \big((1 - \beta)\,\mathrm{Int}(\hat{E}_{t,1}) + \beta\,\mathrm{Int}(\hat{E}_{t,2})\big)\big\|_1, \qquad \beta \sim U[0, 1]. \qquad (8)$$\nThis penalty can be viewed as promoting faithful reconstruction of the pseudobond length between Cα atoms, as well as a smooth interpolation of these lengths along paths in $L_I$, that is independent of $L_E$. This penalty is analogous to the metric preservation regularizer introduced in (Cosmo et al., 2020) for 3D meshes. Thus, the loss function $\mathcal{L}$ for ProGAE is,\n$$\mathcal{L}\big((\hat{P}_1, \hat{P}_2), (P_1, P_2)\big) := \sum_{i=1}^2 \mathrm{SmoothL1}(\hat{P}_i, P_i) + \lambda_R\, R(\hat{P}_1, \hat{P}_2). \qquad (9)$$" }, { "heading": "3 EXPERIMENTAL SETUP", "text": "In this section, we describe the setup of our numerical experiments that confirm the usefulness of ProGAE in generating the protein conformational space.
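A sketch of the training objective in equations 6 to 9 follows. The two branches of equation 6 meet tangentially at |x_i − y_i| = 1/δ, so we implement the usual piecewise (Huber-style) reading, quadratic inside and linear outside; the single-sample Monte Carlo estimate of the expectation in equation 8 and the value of λ_R are our assumptions.

```python
import numpy as np

def smooth_l1(x, y, delta=2.0):
    # Equation 6: quadratic for small residuals, linear for large (branches touch at 1/delta).
    d = np.abs(x - y)
    return np.sum(np.where(d <= 1.0 / delta, 0.5 * delta**2 * d**2, delta * d - 0.5))

def bond_length_penalty(int_beta, int_1, int_2, beta):
    # One Monte Carlo sample of equation 8: pseudobond lengths of the decoded
    # interpolant should interpolate linearly between the endpoint length profiles.
    return np.sum(np.abs(int_beta - ((1.0 - beta) * int_1 + beta * int_2)))

def progae_loss(P_hat, P, int_beta, int_1, int_2, beta, lam_R=1.0):
    # Equation 9: reconstruction of both backbones plus the weighted interpolation penalty.
    rec = sum(smooth_l1(ph, p) for ph, p in zip(P_hat, P))
    return rec + lam_R * bond_length_penalty(int_beta, int_1, int_2, beta)

rng = np.random.default_rng(0)
P = [rng.normal(size=(100, 3)) for _ in range(2)]
P_hat = [p + 0.01 * rng.normal(size=p.shape) for p in P]
ints = [rng.uniform(3.7, 4.0, size=30) for _ in range(3)]
print(progae_loss(P_hat, P, ints[0], ints[1], ints[2], beta=rng.uniform()))
```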
For each dataset, we train three models, each from a different random seed, and report both mean and standard deviation in our results. Details on the choice of network hyperparameters can be found in Appendix A.3.\nDatasets Datasets used in this work are atomistic simulation trajectories (D.E. Shaw Research, 2020), available at http://www.deshawresearch.com/resources_sarscov2.html. The two main datasets we use are simulations of proteins in the presence of FDA-approved or under-investigation molecules, as we aim to test the performance of ProGAE on capturing drug-induced structural variations. These datasets are: (1) 50 independent trajectories, each simulating the SARS-CoV-2 trimeric spike protein (S protein) in the presence of a distinct drug for 2µs. The simulation is limited to 3 receptor binding domains (RBDs) of the protein, as well as a short region needed for the system to maintain a trimer assembly; (2) 75 independent trajectories, each simulating the ectodomain protein of human ACE2 (hACE2) in the presence of a distinct drug for 2µs.\nThe backbones of the S protein and the hACE2 protein contain 3,690 atoms and 2,386 atoms, respectively. The time resolution is 1,200 ps. We use the first 70% of frames from each trajectory to form the training set. The next 10% and the last 20% of frames form the validation and test sets. The train and test sets are intentionally kept temporally disjoint to better assess generalization.\nFor transfer learning, we also consider two trajectories of the entire S protein containing 13,455 backbone atoms. One trajectory is initiated from a closed state, while the other from a partially open state. We use the first 2.5 µs of these 10 µs simulations, corresponding to 2,001 frames with a resolution of 1,200 ps. Additionally, we utilize the first 10 µs of a 100 µs simulation of the main Protease of SARS-CoV-2, a sequence of 10,001 frames with a 1,000 ps resolution." }, { "heading": "4 RESULTS", "text": "Structure Reconstruction Figure 2 displays the ability of ProGAE to accurately reconstruct protein conformations. The backbones are visible with atom-wise error in Figures 6a and 6b in the appendix. From the visualized atom-wise L2 reconstruction error, it is clear that our network can capture and reconstruct notable conformational changes of a protein. Figures 8a and 8b in the appendix display these reconstructions with color denoting fragment instead of L2 error for clarity. Consistent with the low RMSD error, visually the reconstructed structures appear consistent with ground truths, with larger RMSDs observed in the flexible loop and turn regions.\nTable 1 contains performance metrics of ProGAE on training and test sets. Generalization is measured by the L2 reconstruction error of the backbone atom coordinates, as well as RMSD (root mean square distance) after alignment. For hACE2, we achieve sub-Angstrom performance on the test set. In either case, the RMSD of the reconstruction is within the experimental resolution of the associated PDB files; 6VXX/6VW1 for the S protein and 6VW1 for hACE2. Additionally, the average error in the length of the pseudobonds is also sub-Angstrom. Thus, it is evident that ProGAE is able to reconstruct proteins within meaningful resolution.\nUtility of the Extrinsic Latent Space With the reconstruction capabilities of ProGAE verified, we consider the benefit of having separate intrinsic and extrinsic latent spaces.
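As an aside on reproducibility, the temporal split described above is straightforward; the frame count below (a 2 µs trajectory at 1,200 ps per frame) is only indicative.

```python
import numpy as np

def temporal_split(frames, train=0.7, val=0.1):
    # Keep frames in simulation order so that train/val/test stay temporally disjoint.
    n = len(frames)
    a, b = int(train * n), int((train + val) * n)
    return frames[:a], frames[a:b], frames[b:]

frames = np.arange(1667)                 # roughly 2 microseconds at 1,200 ps per frame
train, val, test = temporal_split(frames)
print(len(train), len(val), len(test))   # 1166 167 334
```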
First, we explore the statistical relationship between the learned intrinsic latent space and the extrinsic latent space. Canonical correlation analysis (CCA) is a natural approach to assess if a linear relationship exists (Hardoon et al., 2004). We include background on CCA in Appendix A.2.\nTable 2 includes the leading correlation between the intrinsic and extrinsic latent spaces for each dataset. Note that this correlation is very low, implying that there is a negligible linear relationship between the intrinsic and extrinsic latent spaces. Learning a disentangled latent representation is often desired for better interpretability, which is typically measured using a generalization of mutual information such as total correlation as in (Chen et al., 2018). While structural conditions may prevent the intrinsic and extrinsic embeddings from being completely independent, Table 2 indicates absence of a linear relationship between intrinsic and extrinsic latent vectors in our learned model, confirming a notable level of disentanglement that has been explicitly encoded in our model architecture.\nAs stated earlier, each simulation trajectory in the dataset corresponds to the S or hACE2 protein bound to a specific drug. Then it is natural to investigate if this distinct drug information is encoded in the two disentangled latent spaces. Table 2 contains the performance of a linear classifier trained on the different latent spaces to classify the drug present in each frame. It is clear that the drug molecule can be almost perfectly classified in the extrinsic latent space, while such classification is random in the intrinsic latent space. Figures 3a and 3b display the embeddings of the test set in the latent spaces, projected to the first two canonical components. In these figures, color denotes the identity of the drug that the protein is bound to. Even in the 2D projection of the extrinsic latent space, clustering by the drug identity is apparent, which is not the case for the intrinsic embedding.\nNext, we consider if this linear separation is chemically meaningful. We train a linear regression model on the extrinsic latent space to predict physico-chemical properties of a drug binding to a protein. Table 3 displays the performance of the model at predicting the properties of molecular weight, hydrogen bond donor count, and topological polar surface area. For comparison to our latent embedding, we train a linear regression model on the first $n_e$ principal component scores of the PCA of the extrinsic signal on each element of the test dataset. The latent regression outperforms that of PCA, indicating that the extrinsic latent embedding captures more physico-chemical information about the bound drug. We believe this linear regression is appropriate as it prevents overfitting.\nUtility of the Intrinsic Latent Space Having confirmed the utility of a separate extrinsic latent space, we weigh the benefits of including the intrinsic latent space in the model. This is nontrivial, as the mean pseudo-bond length is 3.86Å with a deviation of 0.06. We find the inclusion of the intrinsic latent space improves the geometric validity of the reconstructed protein as seen in the following ablation study. We trained a model that only encodes the extrinsic signal to reconstruct the protein. While it was comparable in performance regarding L2 error, we found this extrinsic-only model resulted in a higher percentage of erroneous bonds. This is shown in Table 4.
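As a companion to the Table 2 analysis above, here is a small sketch of the leading canonical correlation between two latent embeddings (the criterion for erroneous bonds referenced in Table 4 is defined just below). The synthetic latent codes and the regularizer are placeholders.

```python
import numpy as np

def leading_canonical_correlation(Z1, Z2, reg=1e-8):
    # Classical CCA: whiten each block, then take the largest singular value of
    # the cross-covariance between the whitened variables.
    Z1 = Z1 - Z1.mean(0)
    Z2 = Z2 - Z2.mean(0)
    n = len(Z1)
    C11 = Z1.T @ Z1 / n + reg * np.eye(Z1.shape[1])
    C22 = Z2.T @ Z2 / n + reg * np.eye(Z2.shape[1])
    C12 = Z1.T @ Z2 / n
    W1 = np.linalg.inv(np.linalg.cholesky(C11))
    W2 = np.linalg.inv(np.linalg.cholesky(C22))
    return np.linalg.svd(W1 @ C12 @ W2.T, compute_uv=False)[0]

rng = np.random.default_rng(0)
z_int = rng.normal(size=(5000, 16))   # stand-in intrinsic latent codes
z_ext = rng.normal(size=(5000, 32))   # stand-in extrinsic latent codes
print(leading_canonical_correlation(z_int, z_ext))   # near 0 for independent codes
```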
Here we define an erroneous bond as one whose length deviates by more than 10% from the minimum of the ground truth distribution, as such deviations will result in steric clashes. We also trained a model on only the intrinsic signal, but did not analyze the model further due to poor performance (> 3.0Å). These results are found within the appendix in Table 6.\nGeometry of the Generated Structural Ensemble As a generative model, it is important to consider if paths in the latent space meet our expectations. One way by which we measure this is how the pairwise distance matrix between non-bonded atoms in a generated protein changes as the latent variables change. Specifically, we are interested in the distance between non-bonded atoms.\nIn Figures 4a and 4b, we plot the norm of the difference between these distance matrices for the generated protein and that of the protein at its initiation. We sample the latent space along two principal components, while setting others to 0. So in Figures 4a and 4b, we see three different cross sections of this space. It is immediately apparent that a change in the extrinsic latent code dominates changes in the non-bonded distance. This is a desired behavior of a disentangled geometric autoencoder, where large-scale changes are captured in the extrinsic latent space, while small-scale changes in local distance are controlled by the intrinsic latent space.\nWe also evaluate the performance of linear interpolations in the learned latent space. Given two protein conformations from different trajectories (i.e. in the context of two different drugs), we generate a path between them by generating the linear interpolation of their latent codes. This provides a path of structural variation that does not exist in the training data. The results of this interpolation in terms of RMSD are shown in Figure 5. As expected, we see a smooth exchange in the RMSD error of the generated protein from the first protein and from the second protein.\nTransfer Learning – Extension to Different Proteins To check the generalization of ProGAE, we investigate transfer learning to simulations of different proteins. We begin with models trained on the S protein comprised of the 3 RBDs and on hACE2. These results are summarized in Table 5. We transfer learned ProGAE models to trajectories of the closed and partially open state of the entire S protein, as well as the SARS-CoV-2 main Protease, which provides insight into the generalization capability of the convolutional filters that we have learned. As a result, six scenarios of the transfer learning, in addition to three random baselines, are reported in Table 5. When transferring the model trained on the 3 RBDs of the S protein to the S protein in the closed state, we are transferring the model learned on a partial structure to the entire protein that is much larger in size. Model transfer to the S protein in the partially open state deals with a scenario where the conformational state of the protein is notably different (closed vs. partially open). Transferring the model trained on hACE2 to the S protein datasets studies the knowledge transfer to an entirely different protein, but one which hACE2 is known to interact with. Finally, transferring both S protein and hACE2 models to the main Protease simulation allows us to study the transfer of the models to a completely different protein without notable interaction with the source protein.
Given that performing long time-scale simulations of large protein systems at high resolution is computationally expensive, our method appears beneficial, as ProGAE transfers well to non-related proteins of larger size.\nThe only incompatible layer is the dense layer mapping from the latent spaces. To investigate transfer learning, we train just this dense layer for 10 epochs. As a baseline, we train the same layer of a randomly initialized model. In all cases, the transferred model performs better than the baseline. Thus the learned filters generalize to trajectories of completely different protein systems." }, { "heading": "5 CONCLUSION", "text": "In this work we introduce a novel geometric autoencoder named ProGAE for learning meaningful disentangled representations of the protein conformational space. Our model accurately reconstructs protein structure. The autoencoder separately encodes intrinsic and extrinsic geometries to ensure better latent space interpretability. The extrinsic latent space can classify the protein structures with respect to the bound drug molecules, as well as predict the drug properties. The intrinsic latent space assists in improving the validity of the bond geometry in the reconstructions. We also show that the geometric convolutional filters learned in training can be successfully transferred to trajectories of different protein systems, irrespective of system size, conformational state, or presence of protein-protein interaction. These results on learning, predicting, and generating protein conformations suggest that the proposed framework can serve as the first step towards bridging geometric deep learning with molecular simulations." } ]
2,020
PROGAE: A GEOMETRIC AUTOENCODER-BASED GENERATIVE MODEL FOR DISENTANGLING PROTEIN CONFORMATIONAL SPACE
SP:637780028802e048cce8c2a18cbaaa851e915b38
[ "This paper develops an efficient streaming algorithm to approximate the optimal importance sampling weights for variance reduction in finite-sum SGD. The optimal weights are proportional to each sample's gradient norm; this work uses AMS-like moment estimation to sketch gradient norms which take the form of a bounded-degree polynomial, in time linear in the input sparsity and polynomial in the dimension d, iteration count T, and the log of the number of the samples n. A second-order analogue is derived for approximating optimal importance weights for sampling the Hessian. Some experiments are shown with more simplistic importance weight estimators (not the proposed algorithm), to demonstrate the advantage over uniform sampling." ]
We study sampling algorithms for variance reduction methods for stochastic optimization. Although stochastic gradient descent (SGD) is widely used for large scale machine learning, it sometimes experiences slow convergence rates due to the high variance from uniform sampling. In this paper, we introduce an algorithm that approximately samples a gradient from the optimal distribution for a common finite-sum form with n terms, while just making a single pass over the data, using input sparsity time, and Õ (Td) space. Our algorithm can be implemented in big data models such as the streaming and distributed models. Moreover, we show that our algorithm can be generalized to approximately sample Hessians and thus provides variance reduction for second-order methods as well. We demonstrate the efficiency of our algorithm on large-scale datasets.
[]
[ { "authors": [ "Noga Alon", "Yossi Matias", "Mario Szegedy" ], "title": "The space complexity of approximating the frequency moments", "venue": "J. Comput. Syst. Sci.,", "year": 1999 }, { "authors": [ "Alexandr Andoni", "Robert Krauthgamer", "Krzysztof Onak" ], "title": "Streaming algorithms via precision sampling", "venue": "In IEEE 52nd Annual Symposium on Foundations of Computer Science,", "year": 2011 }, { "authors": [ "Guillaume Bouchard", "Théo Trouillon", "Julien Perez", "Adrien Gaidon" ], "title": "Online learning to sample", "venue": "CoRR, abs/1506.09016,", "year": 2015 }, { "authors": [ "Chih-Chung Chang", "Chih-Jen Lin" ], "title": "LIBSVM: A library for support vector machines", "venue": "ACM Transactions on Intelligent Systems and Technology,", "year": 2011 }, { "authors": [ "Moses Charikar", "Kevin C. Chen", "Martin Farach-Colton" ], "title": "Finding frequent items in data streams", "venue": "Theor. Comput. Sci.,", "year": 2004 }, { "authors": [ "Kenneth L. Clarkson", "David P. Woodruff" ], "title": "Low rank approximation and regression in input sparsity time", "venue": "In Symposium on Theory of Computing Conference,", "year": 2013 }, { "authors": [ "Hadi Daneshmand", "Aurélien Lucchi", "Thomas Hofmann" ], "title": "Starting small - learning with adaptive sample sizes", "venue": "In Proceedings of the 33nd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Aaron Defazio", "Francis R. Bach", "Simon Lacoste-Julien" ], "title": "SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives", "venue": "In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Petros Drineas", "Malik Magdon-Ismail", "Michael W. Mahoney", "David P. Woodruff" ], "title": "Fast approximation of matrix coherence and statistical leverage", "venue": "J. Mach. Learn. Res.,", "year": 2012 }, { "authors": [ "Roy Frostig", "Rong Ge", "Sham M. Kakade", "Aaron Sidford" ], "title": "Competing with the empirical risk minimizer in a single pass", "venue": "In Proceedings of The 28th Conference on Learning Theory, COLT,", "year": 2015 }, { "authors": [ "Siddharth Gopal" ], "title": "Adaptive sampling for SGD by exploiting side information", "venue": "In Proceedings of the 33nd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Rie Johnson", "Tong Zhang" ], "title": "Accelerating stochastic gradient descent using predictive variance reduction", "venue": "In Advances in Neural Information Processing Systems 26,", "year": 2013 }, { "authors": [ "Tyler B. Johnson", "Carlos Guestrin" ], "title": "Training deep models faster with robust, approximate importance sampling", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems, NeurIPS,", "year": 2018 }, { "authors": [ "Ellango Jothimurugesan", "Ashraf Tahmasbi", "Phillip B. 
Gibbons", "Srikanta Tirthapura" ], "title": "Variancereduced stochastic gradient descent on streaming data", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems, NeurIPS,", "year": 2018 }, { "authors": [ "Hossein Jowhari", "Mert Saglam", "Gábor Tardos" ], "title": "Tight bounds for lp samplers, finding duplicates in streams, and related problems", "venue": "In Proceedings of the 30th ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems,", "year": 2011 }, { "authors": [ "Angelos Katharopoulos", "François Fleuret" ], "title": "Not all samples are created equal: Deep learning with importance sampling", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Sepideh Mahabadi", "Ilya P. Razenshteyn", "David P. Woodruff", "Samson Zhou" ], "title": "Non-adaptive adaptive sampling on turnstile streams", "venue": "In Proccedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing,", "year": 2020 }, { "authors": [ "Raghu Meka" ], "title": "Cs289ml: Algorithmic machine learning notes, 2017", "venue": "URL https://raghumeka. github.io/CS289ML/gdnotes.pdf", "year": 2017 }, { "authors": [ "Deanna Needell", "Nathan Srebro", "Rachel Ward" ], "title": "Stochastic gradient descent, weighted sampling, and the randomized kaczmarz", "venue": "algorithm. Math. Program.,", "year": 2016 }, { "authors": [ "Jelani Nelson", "Huy L. Nguyen" ], "title": "OSNAP: faster numerical linear algebra algorithms via sparser subspace embeddings", "venue": "In 54th Annual IEEE Symposium on Foundations of Computer Science,", "year": 2013 }, { "authors": [ "Arkadi Nemirovski", "Anatoli B. Juditsky", "Guanghui Lan", "Alexander Shapiro" ], "title": "Robust stochastic approximation approach to stochastic programming", "venue": "SIAM Journal on Optimization,", "year": 2009 }, { "authors": [ "Arkadi Semenovich Nemirovsky", "David Borisovich Yudin" ], "title": "Problem complexity and method efficiency in optimization", "venue": null, "year": 1983 }, { "authors": [ "Xun Qian", "Peter Richtárik", "Robert M. Gower", "Alibek Sailanbayev", "Nicolas Loizou", "Egor" ], "title": "Shulgin. SGD with arbitrary sampling: General analysis and improved rates", "venue": "In Proceedings of the 36th International Conference on Machine Learning, ICML,", "year": 2019 }, { "authors": [ "Sashank J. Reddi", "Ahmed Hefny", "Suvrit Sra", "Barnabás Póczos", "Alexander J. Smola" ], "title": "On variance reduction in stochastic gradient descent and its asynchronous variants", "venue": "In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A stochastic approximation method", "venue": "The annals of mathematical statistics,", "year": 1951 }, { "authors": [ "Farbod Roosta-Khorasani", "Michael W. Mahoney" ], "title": "Sub-sampled newton methods I: globally convergent algorithms", "venue": "CoRR, abs/1601.04737,", "year": 2016 }, { "authors": [ "Farbod Roosta-Khorasani", "Michael W. 
Mahoney" ], "title": "Sub-sampled newton methods II: local convergence rates", "venue": "CoRR, abs/1601.04738,", "year": 2016 }, { "authors": [ "Farbod Roosta-Khorasani", "Kees Van Den Doel", "Uri Ascher" ], "title": "Stochastic algorithms for inverse problems involving pdes and many measurements", "venue": "SIAM Journal on Scientific Computing,", "year": 2014 }, { "authors": [ "Nicolas Le Roux", "Mark Schmidt", "Francis R. Bach" ], "title": "A stochastic gradient method with an exponential convergence rate for finite training sets", "venue": "In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems.,", "year": 2012 }, { "authors": [ "Farnood Salehi", "Patrick Thiran", "L. Elisa Celis" ], "title": "Coordinate descent with bandit sampling", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems, NeurIPS,", "year": 2018 }, { "authors": [ "Sebastian U. Stich", "Anant Raj", "Martin Jaggi" ], "title": "Safe adaptive importance sampling", "venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Peng Xu", "Jiyan Yang", "Farbod Roosta-Khorasani", "Christopher Ré", "Michael W. Mahoney" ], "title": "Subsampled newton methods with non-uniform sampling", "venue": "In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Peng Xu", "Fred Roosta", "Michael W Mahoney" ], "title": "Newton-type methods for non-convex optimization under inexact hessian information", "venue": "Mathematical Programming,", "year": 2019 }, { "authors": [ "Peng Xu", "Fred Roosta", "Michael W. Mahoney" ], "title": "Second-order optimization for non-convex machine learning: an empirical study", "venue": "In Proceedings of the 2020 SIAM International Conference on Data Mining,", "year": 2020 }, { "authors": [ "Peilin Zhao", "Tong Zhang" ], "title": "Stochastic optimization with importance sampling for regularized loss minimization", "venue": "In Proceedings of the 32nd International Conference on Machine Learning ICML,", "year": 2015 }, { "authors": [ "Alon" ], "title": "◦An. The generalization to a L2 polynomial inner product sampler follows immediately. Notably, our data structure can be built simply given access to A, and will still sample from the correct distribution when x is given as a post-processing vector. We first describe in Section A.1.1 some necessary subroutines that our sampler requires. These subroutines are natural generalizations of the well-known frequency moment estimation algorithm", "venue": null, "year": 1999 }, { "authors": [ "Charikar" ], "title": "We then give the L1,2,d sampler in full in Section A.1.2", "venue": null, "year": 2004 } ]
[ { "heading": null, "text": "We study sampling algorithms for variance reduction methods for stochastic optimization. Although stochastic gradient descent (SGD) is widely used for large scale machine learning, it sometimes experiences slow convergence rates due to the high variance from uniform sampling. In this paper, we introduce an algorithm that approximately samples a gradient from the optimal distribution for a common finite-sum form with n terms, while just making a single pass over the data, using input sparsity time, and Õ (Td) space. Our algorithm can be implemented in big data models such as the streaming and distributed models. Moreover, we show that our algorithm can be generalized to approximately sample Hessians and thus provides variance reduction for second-order methods as well. We demonstrate the efficiency of our algorithm on large-scale datasets." }, { "heading": "1 INTRODUCTION", "text": "There has recently been tremendous progress in variance reduction methods for stochastic gradient descent (SGD) methods for the standard convex finite-sum form optimization problem min\nx∈Rd F (x) :=\n1 n ∑n i=1 fi(x), where f1, . . . , fn : Rd → R is a set of convex functions that commonly represent loss functions. Whereas gradient descent (GD) performs the update rule xt+1 = xt − ηt∇F (xt) on the iterative solution xt at iterations t = 1, 2, . . ., SGD (Robbins & Monro, 1951; Nemirovsky & Yudin, 1983; Nemirovski et al., 2009) picks it ∈ [n] in iteration t with probability pit and performs the update rule xt+1 = xt − ηtnpit∇fit(xt), where ∇fit is the gradient (or a subgradient) of fit and ηt is some predetermined learning rate. Effectively, training example it is sampled with probability pit and the model parameters are updated using the selected example.\nThe SGD update rule only requires the computation of a single gradient at each iteration and provides an unbiased estimator to the full gradient, compared to GD, which evaluates n gradients at each iteration and is prohibitively expensive for large n. However, since SGD is often performed with uniform sampling so that the probability pi,t of choosing index i ∈ [n] at iteration t is pi,t = 1n at all times, the variance introduced by the randomness of sampling a specific vector function can be a bottleneck for the convergence rate of the iterative process. Thus the subject of variance reduction beyond uniform sampling has been well-studied in recent years (Roux et al., 2012; Johnson & Zhang, 2013; Defazio et al., 2014; Reddi et al., 2015; Zhao & Zhang, 2015; Daneshmand et al., 2016; Needell et al., 2016; Stich et al., 2017; Johnson & Guestrin, 2018; Katharopoulos & Fleuret, 2018; Salehi et al., 2018; Qian et al., 2019).\nA common technique to reduce variance is importance sampling, where the probabilities pi,t are chosen so that vector functions with larger gradients are more likely to be sampled. Thus for Var(v) := E [ ‖v‖22 ] − ‖E [v]‖22, for a random vector v, then pi,t = 1 n for uniform sampling implies\nσ2t = Var\n( 1\nnpit,t ∇fit\n) = 1\nn2\n( n\nn∑ i=1\n‖∇fi(xt)‖2 − n2 · ‖∇F (xt)‖2 ) ,\nwhereas importance sampling with pi,t = ‖∇fi(xt)‖∑n j=1‖∇fj(xt)‖ gives\nσ2t = Var\n( 1\nnpit,t ∇fit\n) = 1\nn2 ( n∑ i=1 ‖∇fi(xt)‖ )2 − n2 · ‖∇F (xt)‖2 , which is at most 1n2 ( n ∑ ‖∇fi(xt)‖2 − n2 · ‖∇F (xt)‖2 ) , by the Root-Mean Square-Arithmetic Mean Inequality, and can be significantly less. 
Hence the variance at each step is reduced, possibly substantially (see, e.g., Example 1.3 and Example 1.4), by performing importance sampling instead of uniform sampling. In fact, it follows from the Cauchy-Schwarz inequality that the above importance sampling probability distribution is the optimal distribution for variance reduction. However, computing this distribution requires computing all n gradients in each round, which is precisely the cost that SGD is designed to avoid.

Second-Order Methods. Although first-order methods such as SGD are widely used, they sometimes have issues such as sensitivity to the choice of hyperparameters, stagnation at high training errors, and difficulty in escaping saddle points. By considering second-order information such as curvature, second-order optimization methods are known to be robust to several of these issues, such as ill-conditioning. For example, Newton's method can achieve a locally super-linear convergence rate under certain conditions, independent of the problem. Although naïve second-order methods are generally too slow compared to common first-order methods, stochastic Newton-type methods such as Gauss-Newton have been shown to be scalable in the scientific computing community (Roosta-Khorasani et al., 2014; Roosta-Khorasani & Mahoney, 2016a;b; Xu et al., 2019; 2020).

Our Contributions. We give a time-efficient algorithm that provably approximates the optimal importance sampling distribution using a small-space data structure. Remarkably, our data structure can be implemented in big data models such as the streaming model, which takes just a single pass over the data, and the distributed model, which requires just a single round of communication between parties holding each loss function. For ∇F = (1/n) ∑_{i=1}^n ∇fi(x), where each ∇fi = f(〈ai, x〉) · ai for some polynomial f and vector ai ∈ Rd, let nnz(A) be the number of nonzero entries of A := a1 ◦ . . . ◦ an.¹ Thus for T iterations, where d ≪ T ≪ n, GD has runtime Õ(T · nnz(A)) while our algorithm has runtime T · poly(d, log n) + Õ(nnz(A)), where we use Õ(·) to suppress polylogarithmic factors.

¹We use the notation a ◦ b to denote the vertical concatenation [a; b].

Theorem 1.1 Let ∇F = (1/n) ∑_{i=1}^n ∇fi(x), where each ∇fi = f(〈ai, x〉) · ai for some polynomial f and vector ai ∈ Rd, and let nnz(A) be the number of nonzero entries of A := a1 ◦ . . . ◦ an. For d ≪ T ≪ n, there exists an algorithm that performs T steps of SGD and at each step samples a gradient within a constant factor of the optimal probability distribution. The algorithm requires a single pass over A and uses Õ(nnz(A)) pre-processing time and Õ(Td) space.

Theorem 1.1 can be used to immediately obtain improved convergence guarantees for a class of functions whose convergence rate depends on the variance σt², such as µ-smooth functions or strongly convex functions. Recall that SGD offers the following convergence guarantee for smooth functions:

Theorem 1.2 (Nemirovski et al., 2009; Meka, 2017) Let F be a µ-smooth convex function and xopt = argmin F(x). Let σ² be an upper bound on the variance of the unbiased estimator across all iterations, and let x̄k = (x1 + . . . + xk)/k. Let each step-size ηt be η ≤ 1/µ.
Then for SGD with initial position x0,

E[F(x̄k) − F(xopt)] ≤ (1/(2ηk)) ‖x0 − xopt‖₂² + ησ²/2,

so that k = O((1/ε²) (σ² + µ ‖x0 − xopt‖₂²)²) iterations suffice to obtain an ε-approximate optimal value by setting η = 1/√k.

In the convergence guarantees of Theorem 1.2, we obtain a constant-factor approximation to the variance σ = σopt from optimal importance sampling, which can be significantly better than the variance σ = σuniform from uniform sampling in standard SGD. We first show straightforward examples where uniformly sampling an index performs significantly worse than importance sampling. For example, if ∇fi(x) = 〈ai, x〉 · ai, then for A = a1 ◦ . . . ◦ an:

Example 1.3 When the nonzero entries of the input A are concentrated in a small number of vectors ai, uniform sampling will frequently sample gradients that are small and make little progress, whereas importance sampling will rarely do so. In an extreme case, A can contain exactly one nonzero vector ai, and importance sampling will always output the full gradient whereas uniform sampling will only find the nonzero row with probability 1/n.

Example 1.4 It may be that all rows of A have large magnitude, but x is nearly orthogonal to most of the rows of A and heavily in the direction of row ar. Then 〈ai, x〉 · ai is small in magnitude for most i, but 〈ar, x〉 · ar is large, so uniform sampling will often output small gradients while importance sampling will output 〈ar, x〉 · ar with high probability.

Thus Example 1.3 shows that naïve SGD with uniform sampling can suffer up to a multiplicative n-factor loss in the convergence rate of Theorem 1.2 compared to that of SGD with importance sampling, whereas Example 1.4 shows a possible additive n-factor loss.

Unlike a number of previous variance reduction methods, we do not require distributional assumptions (Bouchard et al., 2015; Frostig et al., 2015; Gopal, 2016; Jothimurugesan et al., 2018) or offline access to the data (Roux et al., 2012; Johnson & Zhang, 2013; Defazio et al., 2014; Reddi et al., 2015; Zhao & Zhang, 2015; Daneshmand et al., 2016; Needell et al., 2016; Stich et al., 2017; Johnson & Guestrin, 2018; Katharopoulos & Fleuret, 2018; Salehi et al., 2018; Qian et al., 2019). On the other hand, for applications such as neural nets in which the parameters in the loss function can change, we can use a second-order approximation for a number of iterations, then reread the data to build a new second-order approximation when necessary.

We complement our main theoretical result with empirical evaluations comparing our algorithm to SGD with uniform sampling for logistic regression on the a9a Adult dataset collected by UCI and retrieved from LibSVM (Chang & Lin, 2011). Our evaluations demonstrate that for various step-sizes, our algorithm significantly outperforms uniform sampling across both the number of SGD iterations and, surprisingly, wall-clock time.

We then show that the same framework can be reworked to approximate importance sampling for the Hessian, thereby performing variance reduction for second-order optimization methods. Xu et al. (2016) reduce the bottleneck of many second-order optimization methods to the task of sampling s rows of A = a1 ◦ . . . ◦ an so that a row ai is sampled with probability

‖f(〈ai, x〉) · ai⊤ai‖F² / ∑_{i=1}^n ‖f(〈ai, x〉) · ai⊤ai‖F²,

for some fixed function f, so that the Hessian H has the form H := ∇²F = (1/n) ∑_{i=1}^n f(〈ai, x〉) ai⊤ai.
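As a concrete instance (a standard derivation, not an example given in the paper): for logistic regression with labels bi ∈ {−1, +1} and the logistic function σ, a direct computation puts the Hessian in exactly this outer-product form with f(t) = σ(t)(1 − σ(t)); here f is smooth rather than polynomial, but it can be approximated by a polynomial on any bounded range of the margins zi:

\[
F(x) = \frac{1}{n}\sum_{i=1}^{n}\log\bigl(1 + e^{-b_i\langle a_i, x\rangle}\bigr)
\;\Longrightarrow\;
\nabla^2 F(x) = \frac{1}{n}\sum_{i=1}^{n}\sigma(z_i)\bigl(1-\sigma(z_i)\bigr)\,a_i^{\top}a_i,
\qquad z_i = b_i\langle a_i, x\rangle.
\]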
Xu et al. (2016) show that this finite-sum form arises frequently in machine learning problems such as logistic regression with least squares loss.

Theorem 1.5 Let ∇²F = (1/n) ∑_{i=1}^n ∇²fi(x), where each ∇²fi = f(〈ai, x〉) · ai⊤ai for some polynomial f and vector ai ∈ Rd, and let nnz(A) be the number of nonzero entries of A := a1 ◦ . . . ◦ an. For d ≪ T ≪ n, there exists an algorithm that subsamples T Hessians within a constant factor of the optimal probability distribution. The algorithm requires a single pass over A and uses Õ(nnz(A)) pre-processing time and Õ(Td) space.

2 SGD ALGORITHM

We first introduce a number of algorithms that will be used in our final SGD algorithm, along with their guarantees. We defer all formal proofs to the appendix.

L2 polynomial inner product sketch. For a fixed polynomial f, we first require a constant-factor approximation to ∑_{i=1}^n ‖f(〈ai, x〉) · ai‖₂² for any query x ∈ Rd; we call such an algorithm an L2 polynomial inner product sketch and give one with the following guarantee:

Theorem 2.1 For a fixed ε > 0 and polynomial f, there exists a data structure ESTIMATOR that outputs a (1 + ε)-approximation to ∑_{i=1}^n ‖f(〈ai, x〉) · ai‖₂² for any query x ∈ Rd. The data structure requires a single pass over A = a1 ◦ . . . ◦ an (possibly through turnstile updates²), can be built in Õ(nnz(A) + d/ε²) time and Õ(d/ε²) space, uses query time poly(d, 1/ε, log n), and succeeds with probability 1 − 1/poly(n).

The L2 polynomial inner product sketch ESTIMATOR is a generalization of AMS variants (Alon et al., 1999; Mahabadi et al., 2020) and is simple to implement. For intuition, observe that for d = 1 and the identity function f, the matrix A ∈ Rn×d reduces to a vector of length n, so that estimating ∑_{i=1}^n ‖f(〈ai, x〉) · ai‖₂² is just estimating the squared norm of a vector in sublinear space.

For a degree-p polynomial f, ESTIMATOR generates random sign matrices S0, S1, . . . , Sp with Õ(1/ε²) rows and maintains S0A, . . . , SpA. To estimate ∑_{i=1}^n ‖αq · (〈ai, x〉)^q · ai‖₂² for an integer q ∈ [0, p] and scalar αq on a given query x, ESTIMATOR creates the q-fold tensor Y = y^{⊗q} for each row y of SqA and the (q − 1)-fold tensor X = x^{⊗(q−1)}. Note that X and Y can be refolded into dimensions R^{d^{q−1}} and R^{d×d^{q−1}}, so that YX ∈ Rd and ‖αq · YX‖₂² is an unbiased estimator of ∑_{i=1}^n ‖αq · (〈ai, x〉)^q · ai‖₂². We give this algorithm in full in Algorithm 1. Thus, taking the average over O(1/ε²) instances of the sums of the tensor products for rows y across the sketches S0A, . . . , SpA gives a (1 + ε)-approximation to ∑_{i=1}^n ‖f(〈ai, x〉) · ai‖₂² with constant probability. The success probability can then be boosted to 1 − 1/poly(n) by taking the median of O(log n) such outputs.

Algorithm 1 Basic algorithm ESTIMATOR that outputs a (1 + ε)-approximation to ∑_{i=1}^n ‖(〈ai, x〉)^p · ai‖₂², where x is a post-processing vector

Input: Matrix A = a1 ◦ . . . ◦ an ∈ Rn×d, post-processing vector x ∈ Rd, integer p ≥ 0, constant parameter ε > 0.
Output: (1 + ε)-approximation to ∑_{i=1}^n ‖(〈ai, x〉)^p · ai‖₂².
1: r ← Θ(log n) with a sufficiently large constant.
2: b ← Ω(1/ε²) with a sufficiently large constant.
3: Let T be an r × b table of buckets, where each bucket stores an Rd vector, initialized to the all-zeros vector.
4: Let si ∈ {−1, +1} be 4-wise independent for i ∈ [n].
5: Let hi : [n] → [b] be 4-wise independent for i ∈ [r].
6: Let ui,j be the all-zeros vector for each i ∈ [r], j ∈ [b].
7: for each j = 1 to n do
8: for each i = 1 to r do
9: Add sj aj to the vector in bucket hi(j) of row i.
10: Let vi,j be the vector in row i, bucket j of T for i ∈ [r], j ∈ [b].
11: Process x:
12: for i ∈ [r], j ∈ [b] do
13: ui,j ← v_{i,j}^{⊗p} x^{⊗(p−1)}
14: return median_{i∈[r]} (1/b) ∑_{j∈[b]} ‖ui,j‖₂².

L2 polynomial inner product sampler. Given a matrix A = a1 ◦ . . . ◦ an ∈ Rn×d and a fixed function f, a data structure that takes a query x ∈ Rd and outputs an index i ∈ [n] with probability roughly

‖f(〈ai, x〉) · ai‖₂² / ∑_{i=1}^n ‖f(〈ai, x〉) · ai‖₂²

is called an L2 polynomial inner product sampler. We give such a data structure in Section A.1:

Theorem 2.2 For a fixed ε > 0 and polynomial f, there exists a data structure SAMPLER that takes any query x ∈ Rd and outputs an index i ∈ [n] with probability (1 ± ε) · ‖f(〈ai, x〉) · ai‖₂² / ∑_{i=1}^n ‖f(〈ai, x〉) · ai‖₂² + 1/poly(n), along with a vector u := f(〈ai, x〉) · ai + v, where E[v] = 0 and ‖v‖₂ ≤ ε · ‖f(〈ai, x〉) · ai‖₂. The data structure requires a single pass over A = a1 ◦ . . . ◦ an (possibly through turnstile updates), can be built in Õ(nnz(A) + d/ε²) time and Õ(d/ε²) space, uses query time poly(d, 1/ε, log n), and succeeds with probability 1 − 1/poly(n).

²Turnstile updates are defined as sequential updates to the entries of A.

We remark that T independent instances of SAMPLER provide an oracle for T steps of SGD with importance sampling, but the overall runtime would be T · nnz(A), so it would be just as efficient to run T iterations of GD. The subroutine SAMPLER is significantly more challenging to describe and analyze, so we defer its discussion to Section A.1, though it can be seen as a combination of ESTIMATOR and a generalized CountSketch variant (Charikar et al., 2004; Nelson & Nguyen, 2013; Mahabadi et al., 2020) and is nevertheless relatively straightforward to implement.

Leverage score sampler. Although SAMPLER outputs a (noisy) vector according to the desired probability distribution, we also require an algorithm that automatically does this for indices i ∈ [n] that are likely to be sampled multiple times across the T iterations. Equivalently, we require explicitly storing the rows with high leverage scores, but we defer the formal discussion and algorithmic presentation to Section A.2. For our purposes, the following suffices:

Theorem 2.3 There exists an algorithm LEVERAGE that returns all indices i ∈ [n] such that (1 ± ε) · ‖f(〈ai, x〉) · ai‖₂² / ∑_{i=1}^n ‖f(〈ai, x〉) · ai‖₂² ≥ 1/(200Td) for some x ∈ Rd, along with a vector ui := f(〈ai, x〉) · ai + vi, where ‖vi‖₂ ≤ ε · ‖f(〈ai, x〉) · ai‖₂. The algorithm requires a single pass over A = a1 ◦ . . . ◦ an (possibly through turnstile updates), uses Õ(nnz(A) + d^ω/ε²) runtime (where ω denotes the exponent of square matrix multiplication) and Õ(d/ε²) space, and succeeds with probability 1 − 1/poly(n).

2.1 SGD ALGORITHM AND ANALYSIS

For the finite-sum optimization problem min_{x∈Rd} F(x) := (1/n) ∑_{i=1}^n fi(x), where each ∇fi = f(〈ai, x〉) · ai, recall that we could simply use an instance of SAMPLER as an oracle for SGD with importance sampling. However, naïvely running T SGD steps requires T independent instances, which incurs T · nnz(A) runtime by Theorem 2.2. Thus we use a two-level data structure, first implicitly partitioning the rows of the matrix A = a1 ◦ . . . ◦ an into β := Θ(Td) buckets B1, . . . , Bβ and creating an instance of ESTIMATOR and SAMPLER for each bucket.
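Before describing how these pieces interact, the following is a minimal sketch (our own illustration, under simplifying assumptions) of the ESTIMATOR primitive in the d = 1, identity-f special case noted after Theorem 2.1, where the task reduces to the classical AMS estimate of a squared norm. For brevity, the sign vectors below are fully random rather than 4-wise independent, which sacrifices the sublinear-space guarantee of the streaming version but not the correctness of the estimate:

import numpy as np

def ams_squared_norm(a, r=25, b=64, seed=0):
    # Median-of-means AMS estimator of ||a||_2^2 using r * b counters.
    rng = np.random.default_rng(seed)
    n = a.shape[0]
    estimates = []
    for _ in range(r):
        # In a true streaming implementation these signs come from
        # 4-wise independent hash functions and are never materialized.
        signs = rng.choice([-1.0, 1.0], size=(b, n))
        counters = signs @ a  # one pass: b running sums of s_j * a_j
        # Each counter equals sum_j s_j * a_j, so counter^2 is an
        # unbiased estimate of ||a||_2^2; averaging reduces variance.
        estimates.append(np.mean(counters ** 2))
    return float(np.median(estimates))  # median boosts the success probability

rng = np.random.default_rng(1)
a = rng.normal(size=100_000)
print(np.sum(a ** 2), ams_squared_norm(a))

Algorithm 1 generalizes this one-dimensional estimator by storing bucketed sums of sign-scaled rows sj aj and tensoring the bucket contents with the query x only at post-processing time.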
The idea is that for a given query xt in SGD iteration t ∈ [T ], we first query xt to each of the ESTIMATOR data structures to estimate∑ i∈Bj ‖f(〈ai,x〉) · ai‖ 2 2 for each j ∈ [β]. We then sample index j ∈ [β] among the buckets\nB1, . . . , Bβ with probability roughly ∑ i∈Bj\n‖f(〈ai,xt〉)·ai‖22∑n i=1‖f(〈ai,xt〉)·ai‖ 2 2\n. Once we have sampled index j, it would seem that querying the instance SAMPLER corresponding to Bj simulates SGD, since SAMPLER now performs importance sampling on the rows in Bj , which gives the correct overall probability distribution for each row i ∈ [n]. Moreover, SAMPLER has runtime proportional to the sparsity of Bj , so the total runtime across the β instances of SAMPLER is Õ (nnz(A)). However, an issue arises when the same bucket Bj is sampled multiple times, as we only create a single instance of SAMPLER for each bucket. We avoid this issue by explicitly accounting for the buckets that are likely to be sampled multiple times. Namely, we show that if ‖f(〈ai,xt〉)·ai‖ 2 2∑n\ni=1‖f(〈ai,xt〉)·ai‖ 2 2 < 1 200Td for all t ∈ [T ] and i ∈ [n], then by Bernstein’s inequality, the probability that no bucket Bj is sampled multiple times is at least 99100 . Thus we use LEVERAGE to separate all such rows ai that violate this property from their respective buckets and explicitly track the SGD steps in which these rows are sampled. We give the algorithm in full in Algorithm 2.\nThe key property achieved by Algorithm 2 in partitioning the rows and removing the rows that are likely to be sampled multiple times is that each of the SAMPLER instances are queried at most once.\nLemma 2.4 With probability at least 98100 , each t ∈ [T ] uses a different instance of SAMPLERj .\nProof of Theorem 1.1: Consider Algorithm 2. By Lemma 2.4, each time t ∈ [T ] uses a fresh instance of SAMPLERj , so that independent randomness is used. A possible concern is that each instance ESTIMATORj is not using fresh randomness, but we observe that ESTIMATOR procedures\nAlgorithm 2 Approximate SGD with Importance Sampling\nInput: Matrix A = a1 ◦ . . . ◦ an ∈ Rn×d, parameter T for number of SGD steps. Output: T gradient directions.\n1: Preprocessing Stage: 2: β ← Θ(Td) with a sufficiently large constant. 3: Let h : [n]→ [β] be a uniformly random hash function. 4: Let Bj be the matrix formed by the rows ai of A with h(i) = j, for each j ∈ [β]. 5: Create an instance ESTIMATORj and SAMPLERj for each Bj with j ∈ [β] with = 12 . 6: Run LEVERAGE to find a set L0 of row indices and corresponding (noisy) vectors. 7: Gradient Descent Stage: 8: Randomly pick starting location x0 9: for t = 1 to T do\n10: Let qi be the output of ESTIMATORj on query xt−1 for each i ∈ [β]. 11: Sample j ∈ [β] with probability pj = qj∑\ni∈[β] qi .\n12: if there exists i ∈ L0 with h(i) = j then 13: Use ESTIMATORj , LEVERAGE, and SAMPLERj to sample gradient wt = ∇̂fit(xt) 14: else 15: Use SAMPLERj to sample gradient wt = ∇̂fit(xt) 16: p̂i,t ←\n‖wt‖22∑ j∈[β] qj\n17: xt+1 ← xt − ηtnp̂i,t ·wt\nis only used in sampling a bucket j ∈ [β] as an L2 polynomial inner product sketch; otherwise the sampling uses fresh randomness whereas the sampling is built into each instance of SAMPLERj . By Theorem 2.2, each index i is sampled with probability within a factor 2 of the importance sampling probability distribution. By Theorem 2.1, we have that p̂i,t is within a factor 4 of the probability pi,t induced by optimal importance sampling SGD. Note that wt = ∇̂fi(xt) is an unbiased estimator of∇fi(xt) and ‖wt‖ is a 2-approximation to ‖∇fi(xt)‖ by Theorem 2.2. 
Hence, the variance at each time t ∈ [T ] of Algorithm 2 is within a constant factor of the variance σ2 = ( ∑ ‖∇fi(xt)‖)2 − ‖∇F (xt)‖2 of optimal importance sampling SGD.\nBy Theorem 2.1, Theorem 2.2, and Theorem 2.3, the preprocessing time is Õ (nnz(A)) + T · poly(d, log n) due to the choices of = O (1) and β = Θ(Td), but partitioning the nonzero entries of A across the β buckets. Similarly, the space used by the algorithm is Õ (Td). Once the gradient descent stage of Algorithm 2 begins, it takes poly(d) time in each step t ∈ [T ] to query the β = Θ(Td) instances of SAMPLER and ESTIMATOR, for total time T · poly(d, log n). 2" }, { "heading": "3 SECOND-ORDER OPTIMIZATION", "text": "In this section, we repurpose our data stucture that performs importance sampling for SGD to instead perform importance sampling for second-order optimization. Given a second-order optimization algorithm that requires a sampled Hessian Ht, possibly along with additional inputs such as the current iterate xt and the gradient gt of F , we model the update rule by an oracle O(Ht), suppressing other inputs to the oracle in the notation. For example, the oracle O corresponding to the canonical second-order algorithm Newton’s method can be formulated as\nxt+1 = O(xt) := xt − [Ht]−1gt.\nBy black-boxing the update rule of any second-order optimization algorithm into the oracle, we can focus our attention to the running time of sampling a Hessian with nearly the optimal probability distribution. Thus we prove generalizations of the L2 polynomial inner product sketch, the L2 polynomial inner product sampler, and the leverage score sampler for Hessians.\nTheorem 3.1 For a fixed > 0 and polynomial f , there exists a data structure HESTIMATOR that outputs a (1 + )-approximation to ∑n i=1 ∥∥f(〈ai,x〉) · a>i ai∥∥2F for any query x ∈ Rd. The data\nstructure requires a single pass over A = a1 ◦ . . . ◦ an (possibly through turnstile updates), can be built in Õ ( nnz(A) + d 2 ) time and Õ ( d 2 ) space, uses query time poly ( d, 1 , log n ) , and succeeds with probability 1− 1poly(n) .\nTheorem 3.2 For a fixed > 0 and polynomial f , there exists a data structure HSAMPLER that takes any query x ∈ Rd and outputs an index i ∈ [n] with probability (1± )·‖f(〈ai,x〉)·a>i ai‖2F∑n\ni=1‖f(〈ai,x〉)·a>i ai‖2F +\n1 poly(n) , along with a matrix U := f(〈ai,x〉) · a > i ai + V, where E [V] = 0 and ‖V‖F ≤ ·∥∥f(〈ai,x〉) · a>i ai∥∥F . The data structure requires a single pass over A = a1 ◦ . . . ◦ an (possibly\nthrough turnstile updates), can be built in Õ ( nnz(A) + d 2 ) time and Õ ( d 2 ) space, uses query time\npoly ( d, 1 , log n ) , and succeeds with probability 1− 1poly(n) .\nTheorem 3.3 There exists an algorithm HLEVERAGE that returns all indices i ∈ [n] such that (1± )·‖f(〈ai,x〉)·a>i ai‖2F∑n\ni=1‖f(〈ai,x〉)·a>i ai‖2F ≥ 1200Td for some x ∈ R n, along with a matrix Ui := f(〈ai,x〉)·a>i ai+Vi, where ‖Vi‖F ≤ · ∥∥f(〈ai,x〉) · a>i ai∥∥F . The algorithm uses requires a single pass over A =\na1 ◦ . . . 
◦ an (possibly through turnstile updates), uses Õ ( nnz(A) + d ω 2 ) runtime (where ω denotes\nthe exponent of square matrix multiplication) and Õ ( d 2 ) space, and succeeds with probability 1− 1poly(n) .\nWe remark that HSAMPLER and LEVERAGE are generalizations of ESTIMATOR and SAMPLER that simply return an outer product of a noisy vector rather than the noisy vector itself.\nAs before, observe that we could simply run an instance of HSAMPLER to sample a Hessian through importance sampling, but sampling T Hessians requires T independent instances, significantly increasing the total runtime. We thus use the same two level data structure that partitions the rows of matrix A = a1 ◦ . . . ◦ an into β := Θ(Td) buckets B1, . . . , Bβ . We then create an instance of HESTIMATOR and HSAMPLER for each bucket. For an iterate xt, we sample j ∈ [β] among the\nbuckets B1, . . . , Bβ with probability roughly ∑ i∈Bj‖f(〈ai,xt〉)·a > i ai‖2F∑n\ni=1‖f(〈ai,xt〉)·a>i ai‖2F using HESTIMATOR and then\nquerying HSAMPLERj at xt to sample a Hessian among the indices partitioned into bucket Bj . As before, this argument fails when the same bucket Bj is sampled multiple times, due to dependencies in randomness, but this issue can be avoided by using HLEVERAGE to decrease the probability that each bucket is sampled. We give the algorithm in full in Algorithm 3.\nWe remark that Algorithm 3 can be generalized to handle oracles O corresponding to second-order methods that require batches of subsampled Hessians in each iteration. For example, if we want to run T iterations of a second-order method that requires s subsampled Hessians in each batch, we can simply modify Algorithm 3 to sample s Hessians in each iteration as input to O and thus Ts Hessians in total." }, { "heading": "4 EMPIRICAL EVALUATIONS", "text": "Our primary contribution is the theoretical design of a nearly input sparsity time algorithm that approximates optimal importance sampling SGD. In this section we implement a scaled-down version of our algorithm and compare its performance on large-scale real world datasets to SGD with uniform sampling on logistic regression. We also consider both linear regression and support-vector machines (SVMs) in the supplementary material. Because most rows have roughly uniformly small leverage scores in real-world data, we assume that no bucket contains a row with a significantly large leverage score and thus the implementation of our importance sampling algorithm does not create multiple samplers for any buckets. By similar reasoning, our implementation uniformly samples a number of indices i and estimates ∑n i=1 ‖f(〈ai,x〉) · ai‖ 2 2 by rescaling. Observe that although these simplifications to our algorithm decreases the wall-clock running time and the total space used by our algorithm, they only decrease the quality of our solution for each SGD iteration. We also consider two hybrid SGD sampling algorithms; the first takes the better gradient obtained at each iteration from both uniform sampling and importance sampling while the second performs 25 iterations of\nAlgorithm 3 Second-Order Optimization with Importance Sampling\nInput: Matrix A = a1 ◦ . . . ◦ an ∈ Rn×d, parameter T for number of sampled Hessians, oracle O that performs the update rule. Output: T approximate Hessians. 1: Preprocessing Stage: 2: β ← Θ(Td) with a sufficiently large constant. 3: Let h : [n]→ [β] be a uniformly random hash function. 4: Let Bj be the matrix formed by the rows ai of A with h(i) = j, for each j ∈ [β]. 
5: Create an instance HESTIMATORj and HSAMPLERj for each Bj with j ∈ [β], with ε = 1/2.
6: Run HLEVERAGE to find a set L0 of row indices and corresponding (noisy) outer products.
7: Second-Order Optimization Stage:
8: Randomly pick starting location x0.
9: for t = 1 to T do
10: Let qi be the output of HESTIMATORi on query xt−1 for each i ∈ [β].
11: Sample j ∈ [β] with probability pj = qj / ∑_{i∈[β]} qi.
12: if there exists i ∈ L0 with h(i) = j then
13: Use HESTIMATORj, HLEVERAGE, and HSAMPLERj to sample Hessian Ht.
14: else
15: Use HSAMPLERj to sample Hessian Ht = ∇̂²f_{it}(xt).
16: p̂i,t ← ‖Ht‖F² / ∑_{j∈[β]} qj
17: xt+1 ← O((1/(n p̂i,t)) Ht)

[Figure 1: six panels of line plots; (a)–(c) show Average Objective Value and (d)–(f) show Average Total Time against SGD Iterations, for step sizes 0.1, 0.01, and 0.001.]

Fig. 1: Comparison of objective values and runtimes for importance sampling (in blue squares), uniform sampling (in red triangles), hybrid sampling that chooses the better gradient at each step (in purple circles), and hybrid sampling that performs 25 steps of importance sampling followed by uniform sampling (in teal X's) over various step-sizes for logistic regression on the a9a Adult dataset from UCI, across 250 iterations, averaged over 10 repetitions.

importance sampling before using uniform sampling for the remaining iterations. Surprisingly, our SGD importance sampling implementation not only significantly improves upon SGD with uniform sampling, but is also competitive with the two hybrid algorithms. We do not consider other SGD variants due to either their distributional assumptions or their lack of known flexibility in big data models. The experiments were performed in Python 3.6.9 on an Intel Core i7-8700K 3.70 GHz CPU with 12 cores and 64GB DDR4 memory, using a Nvidia Geforce GTX 1080 Ti 11GB GPU. Our code is publicly available at https://github.com/SGD-adaptive-importance/code.

Logistic Regression. We performed logistic regression on the a9a Adult data set collected by UCI and retrieved from LibSVM (Chang & Lin, 2011). The features correspond to responses from the 1994 Census database, and the prediction task is to determine whether a person makes over 50K USD a year. We trained using a data batch of 32581 points and 123 features and tested the performance on a separate batch of 16281 data points. For each evaluation, we generated 10 random initial positions shared by importance sampling and uniform sampling. We then ran 250 iterations of SGD for each of the four algorithms, creating only 250 buckets for the importance sampling algorithm, and computed the average performance on each iteration across these 10 separate instances. The relative average performance of all algorithms was robust to the step-size. Although uniform sampling used significantly less time overall, our importance sampling SGD algorithm actually had better performance when considering either number of iterations or wall-clock time across all tested step-sizes.
For example, uniform sampling had average objective value 20680 at iteration 250 using 0.0307 seconds with step-size 0.1, but importance sampling had average objective value 12917 at iteration 5 using 0.025 seconds. We give our results for logistic regression in Figure 1. We repeat our experiments in Figure 2 to explicitly compare the objective value of each algorithm with respect to wall-clock time, rather than SGD iterations. Thus our results in Figure 2 empirically demonstrate the advantages of our algorithm across the most natural metrics. For additional experiments, see Section B." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "We have given variance reduction methods for both first-order and second-order stochastic optimization. Our algorithms require a single pass over the data, which may even arrive implicitly in the form of turnstile updates, and use input sparsity time and Õ (Td) space. Our algorithms are also amenable to big data models such as the streaming and distributed models and are supported by empirical evaluations on large-scale datasets. We believe there are many interesting future directions to explore. For example, can we generalize our techniques to show provable guarantees for other SGD variants and accelerated methods? A very large-scale empirical study of these methods would also be quite interesting." }, { "heading": "A DISCUSSION, FULL ALGORITHMS, AND PROOFS", "text": "For the sake of presentation, we consider the case where p = 2; higher dimensions follow from the same approach, using tensor representation instead of matrix representation. Instead of viewing the input matrix A = a1 ◦ . . . ◦ an ∈ Rn×d as a number of rows, we instead view the matrix A = A1 ◦ . . . ◦An ∈ Rnd×d, where each matrix Ai = ai ⊗ ai is the outer product of the row ai with itself." }, { "heading": "A.1 L2 POLYNOMIAL INNER PRODUCT SAMPLER", "text": "For ease of discussion, we describe in this section a data structure that allows sampling an index i ∈ [n] with probability approximately ‖Aix‖1,2,d‖Ax‖1,2,d in linear time and sublinear space, where for a matrix A ∈ Rnd×d, we use ‖Ax‖1,2,d to denote ∑n i=1 ‖Aix‖2, where each Ai ∈ Rd×d and A = A1◦. . .◦An. The generalization to a L2 polynomial inner product sampler follows immediately. Notably, our data structure can be built simply given access to A, and will still sample from the correct distribution when x is given as a post-processing vector. We first describe in Section A.1.1 some necessary subroutines that our sampler requires. These subroutines are natural generalizations of the well-known frequency moment estimation algorithm of Alon et al. (1999) and heavy hitter detection algorithm of Charikar et al. (2004). We then give the L1,2,d sampler in full in Section A.1.2." }, { "heading": "A.1.1 FREQUENCY MOMENT AND HEAVY HITTER GENERALIZATIONS", "text": "We first recall a generalization to the frequency moment estimation algorithm by Alon et al. (1999) that also supports post-processing multiplication by any vector x ∈ Rd.\nLemma A.1 (Mahabadi et al., 2020) Given a constant > 0, there exists a one-pass streaming algorithm AMS that takes updates to entries of a matrix A ∈ Rn×d, as well as query access to postprocessing vectors x ∈ Rd and v ∈ Rd that arrive after the stream, and outputs a quantity F̂ such that (1 − ) ‖Ax− v‖2 ≤ F̂ ≤ (1 + ) ‖Ax− v‖2. 
The algorithm uses O ( d 2 ( log2 n+ log 1δ\n)) bits of space and succeeds with probability at least 1− δ.\nAlgorithm 4 Basic algorithm COUNTSKETCH that outputs heavy submatrices of ‖Ax‖1,2,d, where x is a post-processing vector\nInput: Matrix A ∈ Rnd×d, post-processing vector x ∈ Rd, constant parameter > 0. Output: Slight perturbations of the vector Aix for which ‖Aix‖2 ≥ ‖Ax‖1,2,d.\n1: r ← Θ(log n) with a sufficiently large constant. 2: b← Ω ( 1 2 ) with a sufficiently large constant. 3: Let T be an r × b table of buckets, where each bucket stores an Rd×d matrix, initialized to the zeros matrix. 4: Let si ∈ {−1,+1} be 4-wise independent for i ∈ [n]. 5: Let hi : [n]→ [b] be 4-wise independent for i ∈ [r]. 6: Process A: 7: Let A = A1 ◦ . . . ◦An, where each Ai ∈ Rd×d. 8: for each j = 1 to n do 9: for each i = 1 to r do\n10: Add sjAj to the matrix in bucket hi(j) of row i. 11: Let Mi,j be the matrix in row i, bucket j of T for i ∈ [r], j ∈ [b]. 12: Process x: 13: for i ∈ [r], j ∈ [b] do 14: Mi,j ←Mi,jx 15: On query k ∈ [n], report mediani∈[r]\n∥∥Mi,hi(k)∥∥2. Let A1, . . . ,An ∈ Rd×d and A = A1 ◦ . . . ◦An ∈ Rnd×d. Let x ∈ Rd×1 be a post-processing vector that is revealed only after A has been completely processed. For a given > 0, we say a block Ai with i ∈ [n] is heavy if ‖Aix‖2 ≥ ‖Ax‖1,2,d. We show in Algorithm 4 an algorithm that\nprocesses A into a sublinear space data structure and identifies the heavy blocks of A once x is given. Moreover, for each heavy block Ai, the algorithm outputs a vector y that is a good approximation to Aix. The algorithm is a natural generalization of the CountSketch heavy-hitter algorithm introduced by Charikar et al. (2004).\nFor a vector v ∈ Rnd×1, we use vtail(b) to denote v with the b blocks of d rows of v with the largest `2 norm set to zeros.\nLemma A.2 There exists an algorithm that uses O ( 1 2 d 2 ( log2 n+ log 1δ )) space that outputs\na vector yi for each index i ∈ [n] so that | ‖yi‖2 − ‖Aix‖2 | ≤ ∥∥∥(Ax)tail( 2 2 ) ∥∥∥ 2 ≤\n∥∥∥(Ax)tail( 2 2 ) ∥∥∥ 1,2,d\nwith probability at least 1 − δ. Moreover if Y = y1 ◦ . . . ◦ yn, then∥∥∥(Ax)tail( 2 2 ) ∥∥∥ 2 ≤ ∥∥∥Ax− Ŷ∥∥∥ 2 ≤ 2 ∥∥∥(Ax)tail( 2 2 ) ∥∥∥ 2 with probability at least 1 − δ, where Ŷ = Y −Ytail( 2 2 ) denotes the top 2 2 blocks of Y by `2 norm.\nProof : Fix an index i ∈ [n]. Consider the estimate of ‖Aix‖2 in row α of the CountSketch table T . Then hα(i) is the bucket of T in row α to which Aix hashes. Let E1 be the event that the 2 2 blocks of size d of Ax with the largest `2 norm are not hashed to hα(i). Observe that for b = Ω ( 1 2 ) with sufficiently large constant, E1 occurs with probability at least 112 by a union bound. Let v be the sum of the vectors representing the blocks that are hashed to bucket hα(i) excluding Aix, so that v is the noise for the estimate of Aix in row α. Conditioned on E1, we can bound the expected squared norm of the noise in bucket hα(i) for sufficiently large b by E [ ‖v‖22 ] ≤ 2 9 ∥∥∥(Ax)tail( 2 2 ) ∥∥∥2 2 . Hence we have Var(‖vi‖2) ≤ 2 9 ∥∥∥(Ax)tail( 2 2 ) ∥∥∥2 2 . 
Thus from Jensen’s inequality, Chebyshev’s inequality and conditioning on E1,\nPr [ ‖vi‖2 ≥ ∥∥∥(Ax)tail( 2 2 ) ∥∥∥ 2 ] ≤ 1 4 + 1 12 = 2 3 .\nThe first claim then follows from the observation that ∥∥∥(Ax)tail( 2 2 ) ∥∥∥ 2 ≤ ∥∥∥(Ax)tail( 2 2 ) ∥∥∥ 1,2,d and noting that we can boost the probability of success to 1 − 1poly(n) by repeating for each of the r = Θ(log n) rows and taking the median.\nFinally, observe that ∥∥∥(Ax)tail( 2 2 ) ∥∥∥ 2 ≤ ∥∥∥Ax− Ŷ∥∥∥ 2 , since Ŷ has at most 2 2 nonzero blocks, while\n(Ax)tail( 2 2 ) has all zeros in the 2 2 blocks of Ax with the largest `2 norm. Since Ax− Ŷ alters at most 2 2 rows of Ax, each by at most ∥∥∥(Ax)tail( 2 2 ) ∥∥∥ 2 , then\n∥∥∥Ax− Ŷ∥∥∥ 2 ≤ √√√√2/ 2∑ i=1 ( ∥∥∥(Ax)tail( 2 2 ) ∥∥∥ 2 )2 = 2 ∥∥∥(Ax)tail( 2 2 ) ∥∥∥ 2 .\n2" }, { "heading": "A.1.2 SAMPLING ALGORITHM", "text": "Our approach is similar to `p sampling techniques in Andoni et al. (2011); Jowhari et al. (2011), who consider sampling indices in vectors, and almost verbatim to Mahabadi et al. (2020), who consider sampling rows in matrices given post-processing multiplication by a vector.\nThe high level idea is to note that if ti ∈ [0, 1] is chosen uniformly at random, then\nPr [‖Ai‖1,2,d ti ≥ ‖A‖1,2,d ] = ‖Ai‖1,2,d ‖A‖1,2,d .\nThus if Bi = Aiti and there exists exactly one index i such that ‖Bi‖1,2,d ≥ ‖A‖1,2,d, then the task would reduce to outputting Bj that maximizes ‖Bj‖1,2,d over all j ∈ [n]. In fact, we can show that\nBi is an O ( )-heavy hitter of B with respect to the L1,2,d norm. Hence, we use a generalization of COUNTSKETCH to identify the heavy hitters of B, approximate the maximum index i, and check whether ‖Bi‖1,2,d is at least (an estimate of) ‖A‖1,2,d.\nUnfortunately, this argument might fail due to several reasons. Firstly, there might exists zero or multiple indices i such that ‖Bi‖1,2,d ≥ ‖A‖1,2,d. Then the probability distribution that an index i satisfies ‖Bi‖1,2,d ≥ ‖A‖1,2,d and that ‖Bi‖1,2,d > ‖Bj‖1,2,d for all other j ∈ [n] is not the same as the desired distribution. Fortunately, we show that this only happens with small probability, slightly perturbing the probability of returning each i ∈ [n]. Another possibility is that the error in COUNTSKETCH is large enough to misidentify whether ‖Bi‖1,2,d ≥ ‖A‖1,2,d. Using a statistical test, this case can usually be identified and so the algorithm will be prevented from outputting a sample in this case. Crucially, the probability that the algorithm is aborted by the statistical test is roughly independent of which index achieves the maximum. As a result, the probability of returning each i ∈ [n] is within a (1 ± ) factor of ‖Ai‖1,2,d‖A‖1,2,d when the algorithm does not abort.\nWe show that the probability that algorithm succeeds is Θ( ) so then running O ( log 1 ) instances of the algorithm suffices to output some index from the desired distribution with constant probability, or abort otherwise. Because the underlying data structure is a linear sketch, then it is also robust to post-processing multiplication by any vector x ∈ Rd. Finally, we note that although our presentation refers to the scaling factors ti as independent random variables, our analysis shows that they only need to be O (1)-wise independent and thus we can generate the scaling factors in small space in the streaming model. We give the L1,2,d sampler in Algorithm 5.\nAlgorithm 5 L1,2,d Sampler\nInput: Matrix A ∈ Rnd×d with A = A1 ◦ . . . ◦An, where each Ai ∈ Rd×d, vector x ∈ Rd×1 that arrives after processing A, constant parameter > 0. 
Output: Noisy Aix of Ax sampled roughly proportional to ‖Aix‖2. 1: Pre-processing Stage: 2: b← Ω ( 1 2 ) , r ← Θ(log n) with sufficiently large constants\n3: For i ∈ [n], generate independent scaling factors ti ∈ [0, 1] uniformly at random. 4: Let B be the matrix consisting of matrices Bi = 1tiAi. 5: Let ESTIMATOR and AMS track the L1,2,d norm of Ax and Frobenius norm of Bx, respectively. 6: Let COUNTSKETCH be an r × b table, where each entry is a matrix in Rd×d. 7: for each submatrix Ai do 8: .Process A: 9: Update COUNTSKETCH with Bi = 1tiAi.\n10: Update linear sketch ESTIMATOR with Ai. 11: Update linear sketch AMS with Bi = 1tiAi. 12: Post-process x in AMS, COUNTSKETCH, and ESTIMATOR. .Process x: 13: Sample a submatrix: 14: Use ESTIMATOR to compute F̂ with ‖Ax‖1,2,d ≤ F̂ ≤ 2 ‖Ax‖1,2,d. 15: Extract the 2 2 (noisy) blocks of d rows of Bx with the largest estimated `2 norms by\nCOUNTSKETCH. 16: Let M ∈ Rnd×1 be the 2 2 -block sparse matrix consisting of these top (noisy) block. 17: Use AMS to compute Ŝ with ‖Bx−M‖2 ≤ Ŝ ≤ 2 ‖Bx−M‖2. 18: Let ri be the (noisy) block of d rows in COUNTSKETCH with the largest norm.\n19: if Ŝ > F̂ √\nlog 1 or ‖ri‖2 < 1 F̂ then\n20: Return FAIL. 21: else 22: Return r = tiri.\nWe first show that the probability that Algorithm 5 returns FAIL is independent of which index i ∈ [n] achieves argmaxi∈[n] 1 ti ‖Aix‖2.\nLemma A.3 Let i ∈ [n] and fix a value of ti ∈ [0, 1] uniformly at random. Then conditioned on the value of ti,\nPr [ Ŝ > F̂ √ log 1 ] = O ( ) + 1\npoly(n) .\nProof : We first observe that if we upper bound Ŝ by 4 ∥∥∥(Bx)tail( 2 2 ) ∥∥∥ 2 and lower bound F̂ by\n‖Ax‖1,2,d, then it suffices to show that the probability of 4 ∥∥∥(Bx)tail( 2 2 ) ∥∥∥ 2 > √\nlog 1 ‖Ax‖1,2,d is small. Thus we define E1 as the event that:\n(1) ‖Ax‖1,2,d ≤ F̂ ≤ 2 ‖Ax‖1,2,d\n(2) ‖Bx−M‖F ≤ Ŝ ≤ 2 ‖Bx−M‖F (3) ∥∥∥(Bx)tail( 2 2 ) ∥∥∥ 2 ≤ ‖Bx−M‖F ≤ 2 ∥∥∥(Bx)tail( 2 2 ) ∥∥∥ 2\nNote that by Theorem 2.1, Lemma A.2 and Lemma A.1, E1 holds with high probability. Let U = ‖Ax‖1,2,d. For each block Ajx, we define yj to be the indicator variable for whether the scaled block Bjx is heavy, so that yj = 1 if ‖Bjx‖2 > U and yj = 0 otherwise. We also define zj ∈ [0, 1] as a scaled random variable for whether Bjx is light and how much squared mass it contributes, zj = 1U2 ‖Bjx‖ 2 2 (1−yj). Let Y = ∑ j 6=i yj be the total number of heavy blocks besides\nBix and Z = ∑ j 6=i zj be the total scaled squared mass of the small rows. Let h ∈ Rnd be the vector that contains the heavy blocks so that coordinates (j − 1)d+ 1 through jd of h correspond to Bjx if yj = 1 and they are all zeros otherwise. Hence, h contains at most Y + 1 nonzero blocks and thus\nat most (Y + 1)d nonzero entries. Moreover, U2Z = ‖Bx− h‖22 and ∥∥∥(Bx)tail( 2 2 ) ∥∥∥ 2 ≤ U √ Z unless Y ≥ 2 2 .\nThus if we define E2 to be the event that Y ≥ 2 2 and E3 to be the event thatZ ≥ 1 16U2 log 1 ‖Ax‖ 2 1,2,d, then ¬E2 ∧ ¬E3 implies 4 ∥∥∥(Bx)tail( 2 2 ) ∥∥∥ 2 ≤ √\nlog 1 ‖Ax‖1,2,d, so it suffices to bound the probability of the events E2 and E3 byO ( ). Intuitively, if the number of heavy rows is small (¬E2) and the total contribution of the small rows is small (¬E3), then the tail estimator is small, so the probability of failure due to the tail estimator is small.\nTo analyze E2, note that yj = 1 if and only if 1tj ‖Ajx‖2 > U , so E [yi] = ‖Ajx‖2\nU and thus E [Y ] ≤ 1 since Y = ∑ j 6=i yj and U = ‖Ax‖1,2,d = ∑ j ‖Ajx‖2. 
We also have Var(Y ) ≤ 1 so that Pr [E2] = O ( ) for sufficiently small , by Chebyshev’s inequality.\nTo analyze E3, recall that zj = 1U2 ‖Bjx‖ 2 2 (1 − yj). Thus zj > 0 only if yj = 0 or equivalently, ‖Bjx‖2 ≤ U . Since Bjx = 1 tj Ajx, then zj > 0 only if tj ≥ ‖Ajx‖2 ‖Ax‖1,2,d . Therefore,\nE [zj ] ≤ ∫ ∞ ‖Ajx‖2/‖Ax‖1,2,d zj dtj = ∫ ∞ ‖Ajx‖2/‖Ax‖1,2,d 1 t2j 1 U2 ‖Ajx‖22 dtj ≤ ‖Ajx‖2 ‖Ax‖1,2,d .\nSince Z = ∑ j 6=i zj , then E [Z] ≤ 1 and similarly Var(Z) ≤ 1. Hence by Bernstein’s inequality,\nPr [ Z > 116 log 1 ] = O ( ), so then Pr [E3] = O ( ). Thus Pr [¬E1 ∨ E2 ∨ E3] = O ( ) + 1poly(n) , as desired. 2\nWe now show that Algorithm 5 outputs a noisy approximation to Aix, where i ∈ [n] is drawn from approximately the correct distribution, i.e., the probability of failure does not correlate with the index that achieves the maximum value.\nLemma A.4 For a fixed value of F̂ , the probability that Algorithm 5 outputs (noisy) submatrix Aix is (1±O ( )) ‖Aix‖2\nF̂ + 1poly(n) .\nProof : Let E be the event that ti < ‖AiP‖2 F̂ so that Pr [E ] = ‖Aix‖2 F̂ . Let E1 be the event that COUNTSKETCH, AMS, or ESTIMATOR fails so that Pr [E1] = 1poly(n) by Lemma A.2, Lemma A.1,\nand Theorem 2.1. Let E2 be the event that Ŝ > F̂ √\nlog 1 so that Pr [E2] = O ( ) by Lemma A.3. Let E3 be the event that multiple rows Bjx exceeding the threshold are observed in the CountSketch data structure and E4 be the event that ‖Bix‖2 exceeds the threshold but is not reported due to noise in the CountSketch data structure. Observe that E3 and E4 are essentially two sides of the same coin, where error is incurred due to the inaccuracies of CountSketch.\nTo analyze E3, note that row j 6= i can be reported as exceeding the threshold if ‖Bjx‖2 ≥ 1 F̂ − F̂ √ log 1 , which occurs with probability at most O ( ‖Ajx‖2\nF̂\n) . By a union bound over all\nrows j ∈ [n] with j 6= i, then Pr [E3] = O ( ). To analyze E4, we first condition on ¬E1 and ¬E2, so that ‖Bx−M‖2 ≤ Ŝ ≤ F̂ √ log 1 . Then by\nLemma A.2, the estimate B̂ix for Bix output by the sampler satisfies∣∣∣‖Bix‖2 − ∥∥∥B̂ix∥∥∥ 2 ∣∣∣ ≤ ∥∥∥(Bx)tail( 2 2 ) ∥∥∥ F ≤ ‖Bx−M‖F ≤ Ŝ ≤ F̂ √ log 1 . Hence, E4 can only occur for 1\nF̂ ≤ ‖Bix‖2 ≤\n1\nF̂ + F̂ √ log 1 ,\nwhich occurs with probability at most O ( 2 ) .\nTo put things together, E occurs with probability ‖Aix‖2 F̂\n, in which case the L1,2,d sampler should output Aix. However, this may not happen due to any of the events E1, E2, E3, or E4. Since Pr [E2 ∨ E3 | E ] = O ( ) and Pr [E4] = O ( 2 ) , then we have Pr [E4 | E ] = O ( ). Moreover, Pr [E1] = 1poly(n) so that index i is sampled with probability (1 + O ( )) ‖Aix‖2 F̂ . Finally by\nLemma A.2, ∣∣∣‖Bix‖2 − ∥∥∥B̂ix∥∥∥\n2 ∣∣∣ ≤ F̂√log 1 and ∥∥∥B̂ix∥∥∥ 2 ≥ 1 F̂ . Hence ∥∥∥B̂ix∥∥∥ 2 is a (1 + )\napproximation to ‖Bix‖2 and therefore, ti ∥∥∥B̂ix∥∥∥\n2 is a (1 + ) approximation to ‖Aix‖2. 2\nThus we have the following full guarantees for our L1,2,d sampler.\nTheorem A.5 Given > 0, there exists an algorithm that takes a matrix A ∈ Rnd×d, which can be written as A = A1 ◦ . . . ◦An, where each Ai ∈ Rd×d. After A is processed, the algorithm is given a query vector x ∈ Rd and outputs a (noisy) vector Aix with probability (1±O ( ))\n‖Aix‖2 ‖Ax‖1,2,d\n+ 1\npoly(n) . 
The algorithm uses log 1 δ ·nnz(A)+poly\n( d, 1 , log n ) time,O ( d ( poly ( 1 , log n ) + log 1δ )) bits of space, and succeeds with probability at least 1− δ.\nProof : By Lemma A.4 and Theorem 2.1, then ‖AP‖1,2,d ≤ F̂ ≤ 2 ‖AP‖1,2,d with high probability and so each vector Aix is sampled with probability (1 + )\n‖Aix‖2 ‖Ax‖1,2,d + 1poly(n) , conditioned\non the sampler succeeding. The probability that the sampler succeeds is Θ( ), so the sampler can be repeated O ( 1 log n ) times to obtain probability of success at least 1 − 1poly(n) . Since each\ninstance of AMS, ESTIMATOR, and COUNTSKETCH use log 1δ · nnz(A) + poly ( d, 1 , log n ) time\nand O ( d ( poly ( 1 , log n ) + log 1δ )) bits of space, then the total time and space complexity follow. 2\nBy adding sketches corresponding to different polynomial degrees, Theorem A.5 implies Theorem 2.2 and Theorem 3.2." }, { "heading": "A.2 LEVERAGE SCORE SAMPLER", "text": "Our starting point is the input sparsity time algorithm of (Nelson & Nguyen, 2013) for approximating the leverage scores, which is in turn a modification of (Drineas et al., 2012; Clarkson & Woodruff,\n2013) Given an input matrix A, (Nelson & Nguyen, 2013) randomly samples a sparse matrix Π1 with Õ ( d 2 ) rows and Õ ( 1 ) signs per column, setting the remaining entries to be zero. (Nelson & Nguyen, 2013) maintains Π1A and post-processing, computes R−1 so that Π1AR−1 has orthonormal columns. Previous work of (Drineas et al., 2012) had shown that the squared row norms of AR−1 are (1 + )-approximations to the leverage scores of A. Hence for a JL matrix Π2 that gives (1 + )-approximations to the row norms of AR−1, we can compute A(R−1Π2) and output the row norms of ARΠ2 as the approximate leverage scores for each row. Due to the sparsity of Π1 and Π2, the total runtime is Õ ( 1 2 · nnz(A) ) . Computing R−1 takes additional Õ ( dω 2 ) runtime.\nNow since the squared row norms of AR−1 are (1 + )-approximations to the leverage scores of A, it suffices to take the rows of AR−1 with large squared norms. To that effect, we randomly sample a CountSketch matrix T and maintain TA. Once R−1 is computed, we can post-processing right multiply to obtain TAR−1, similar to Algorithm 5. It follows that any row of TAR−1 that is at least\n1 200Td -heavy (with respect to squared Frobenius norm) has leverage score at least 1 100Td . Thus we can obtain these rows by querying the CountSketch data structure while using space Õ (Td). Due to the sparsity of the CountSketch matrix, the total runtime is Õ ( nnz(A) + d ω\n2\n) . Finally, (Mahabadi\net al., 2020) show that the error guarantee on each reported heavy row required by Theorem 2.3. By reporting the outer products of each of the heavy rows rather than the heavy rows, we obtain Theorem 3.3." }, { "heading": "A.3 APPROXIMATE SGD WITH IMPORTANCE SAMPLING", "text": "Proof of Lemma 2.4: For any t ∈ [T ] and i ∈ [n], ‖Aixt‖22 ≥ 1 100Td ‖Axt‖ 2 F only if there exists a row in Ai whose leverage score is at least 1100Td , since there are d rows in Ai. Algorithm 2 calculates a 2-approximation to each leverage score and maintains T separate instances of the L1,2,d samplers for any matrix containing a row with approximate leverage score at least 1100Td . 
Thus for these indices i ∈ [n], we maintain T separate instances of the L1,2,d samplers for Ai by explicitly maintaining the heavy row.\nOtherwise, for all j ∈ [β] so that h(i) 6= j for any index i ∈ [n] such that ‖Aixt‖22 < 1 100Td ‖Axt‖ 2 F ,\nwe have ∑ i:h(i)=j ‖Aixt‖22 ≤ 1 100T ‖Axt‖2F ,\nwith probability at least 99100 by Bernstein’s inequality and a union bound over j ∈ [β] for β = Θ(Td) with sufficiently high constant. Intuitively, by excluding the hash indices containing “heavy” matrices, the remaining hash indices contain only a small fraction of the mass with high probability. Then the probability that any j ∈ [β] with ∑ i:h(i)=j ‖Aixt‖2 ≤ 1 10T ‖Axt‖1,2,d is sampled more than once is at most 1100T for any t ∈ [T ] provided there is no row in any Ai with h(i) = j whose `2 leverage score is at least 1100Td . Thus, the probability that some bucket j ∈ [β] is sampled twice across T steps is at most β(100T )2 ≤ 1 100 .\nIn summary, we maintain T separate instances of L1,2,d samplers for the heavy matrices and one L1,2,d sampler for each hash index that does not contain a heavy matrix. With probability at least 98 100 , any hash index not containing a heavy matrix is sampled only once, so each time t ∈ [T ] has access to a fresh L1,2,d sampler. 2" }, { "heading": "B EMPIRICAL EVALUATIONS", "text": "We again emphasize that our primary contribution is the theoretical design of a nearly input sparsity time streaming algorithm that simulates the optimal importance sampling distribution for variance reduction in stochastic gradient descent without computing the full gradient. Thus our theory is optimized to minimize the number of SGD iterations without asymptotic wall-clock time penalties; we do not attempt to further optimize wall-clock runtimes. Nevertheless, in this section we implement a scaled-down version of our algorithm and compare its performance across multiple iterations on large-scale real world data sets to SGD with uniform sampling on both linear regression and\nsupport-vector machines (SVMs). Because most rows have roughly uniformly small leverage scores in real-world data, we assume that no bucket contains a row with a significantly large leverage score and thus the implementation of our importance sampling algorithm does not create multiple samplers for any buckets. By similar reasoning, our implementation uniformly samples a number of indices i and estimates ‖Ax‖1,2,d = ∑ j ‖Ajx‖1,2,d by scaling up ‖Aix‖1,2,d. Observe that although these simplifications to our algorithm decreases the wall-clock running time and the total space used by our algorithm, they only decrease the quality of our solution for each SGD iteration. Nevertheless, our implementations significantly improve upon SGD with uniform sampling. The experiments in this section were performed on a Dell Inspiron 15-7579 device with an Intel Core i7-7500U dual core processor, clocked at 2.70 GHz and 2.90 GHz, in contrast to the logistic regression experiments that were performed on a GPU.\nLinear Regression. We performed linear regression on the CIFAR-10 dataset to compare the performance of our importance sampling algorithm to the uniform sampling SGD algorithm. We trained using a data batch of 100000 points and 3072 features and tested the performance on a separate batch of data points. We aggregated the objective values across 10 separate instances. Each instance generated a random starting location as an initial position for both importance sampling and uniform sampling. 
We then ran 40 iterations of SGD for each algorithm and observed the objective value on the test data for each of these iterations. Finally, we computed the average performance on each iteration across these 10 separate instances. As we ran our algorithm for 40 iterations, we created 1600 buckets that partitioned the data values for the importance sampling algorithm.\nThe sampled gradients were generally large in magnitude for both importance sampling and uniform sampling and thus we required small step-size. For step-sizes η = 1× 10−13, η = 5× 10−12, and η = 1 × 10−12, the objective value of the solution output by our importance sampling algorithm quickly and significantly improved over the objective value of the solution output by uniform sampling. Our algorithm performance is much more sensitive to the choice of larger step-sizes, as choices of step-sizes larger than 5 × 10−11 generally caused the importance sampling algorithm to diverge, while the uniform sampling algorithm still slowly converged. We give our results in Figure 3.\nSupport-Vector Machines. We also compared the performance of our importance sampling algorithm to the uniform sampling SGD algorithm using support-vector machines (SVM) on the a9a Adult data set collected by UCI and retrieved from LibSVM (Chang & Lin, 2011). The features correspond to responses from the 1994 Census database and the prediction task is to determine whether a person makes over 50K USD a year. We trained using a data batch of 32581 points and 123 features and tested the performance on a separate batch of 16281 data points. We assume the data is not linearly separable and thus use the hinge loss function so that we aim to minimize 1 n ∑n i=1 max(0, 1− yi(w ·Xi − b)) + λ ‖w‖ 2 2, where X is the data matrix, yi is the corresponding label, and w is the desired maximum-margin hyperplane. For each evaluation, we generated 10 random initial positions shared for both importance sampling and uniform sampling. We then ran 75 iterations of SGD for each algorithm, creating 1125 buckets for the importance sampling algorithm and computed the average performance on each iteration across these 5 separate instances.\nThe sampled gradients were generally smaller than those from linear regression on CIFAR-10 and thus we were able to choose significantly larger step-sizes. Nevertheless, our algorithm performance was sensitive to both the step-size and the regularization parameter. For step-sizes η = 0.25, η = 0.5 and regularization parameters λ = 0, λ = 0.001 and λ = 0.0001, the objective value of the solution output by our importance sampling algorithm quickly and significantly improved over the objective value of the solution output by uniform sampling. We give our results in Figure 5. Our algorithm performance degraded with larger values of λ, as well step-sizes larger than η = 1.\nWe also compared step-size η = 1 and regularization parameters λ = 0, λ = 0.001 and λ = 0.0001 with a hybrid sampling scheme that selects the better gradient between importance sampling and uniform sampling at each step, as well as a hybrid sampling scheme that uses a few steps of importance sampling, followed by uniform sampling in the remaining steps. Our experiments show that the hybrid sampling algorithms perform better at the beginning and thus our importance sampling algorithm may be used in conjunction with existing techniques in offline settings to accelerate SGD. 
Surprisingly, the hybrid sampling algorithms do not necessarily remain better than our importance sampling algorithm thus indicating that even if uniform sampling were run for a significantly larger number of iterations, its performance may not exceed our importance sampling algorithm. We give our results in Figure 6.\nFinally, we compare wall-clock times of each of the aforementioned sampling schemes with step-size η = 1 and regularization 0 across 100 iterations. Our results in Figure 4 show that as expected, uniform sampling has the fastest running time. However, each iteration of importance sampling takes about 15 iterations of uniform sampling, which empirically shows that even using wall-clock times for comparison, rather than total number of SGD iterations, the performance of our importance sampling algorithm still surpasses that of uniform sampling. Moreover, the runtime experiments reveal the main bottleneck of our experiments: each of the 100 iterations took approximately 70 seconds on average after including the evaluation of the objective on each gradient step." } ]
2020
null
SP:f7c98dd7ab57f9ffc12e7d462ac5d2ae04504504
[ "This paper primarily deals with learning Granger-causal relationships in multivariate time series in the nonlinear dynamics setting. The core method uses vector autoregressive modeling with sparsity inducing regularizers (elastic net and smoothness based fused lasso) along with the recently proposed with self-explaining neural networks (for interpretability). The authors also augment the framework by learning Granger-causal structures that are stable on original and time-reversed data. Exhaustive empirical analysis is done with recent GC baselines. Some of my concerns with the paper are the following" ]
Exploratory analysis of time series data can yield a better understanding of complex dynamical systems. Granger causality is a practical framework for analysing interactions in sequential data, applied in a wide range of domains. In this paper, we propose a novel framework for inferring multivariate Granger causality under nonlinear dynamics based on an extension of self-explaining neural networks. This framework is more interpretable than other neural-network-based techniques for inferring Granger causality, since in addition to relational inference, it also allows detecting signs of Granger-causal effects and inspecting their variability over time. In comprehensive experiments on simulated data, we show that our framework performs on par with several powerful baseline methods at inferring Granger causality and that it achieves better performance at inferring interaction signs. The results suggest that our framework is a viable and more interpretable alternative to sparse-input neural networks for inferring Granger causality.
[ { "affiliations": [], "name": "Ričards Marcinkevičs" }, { "affiliations": [], "name": "Julia E. Vogt" } ]
[ { "authors": [ "D. Alvarez-Melis", "T. Jaakkola" ], "title": "Towards robust interpretability with self-explaining neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "M.O. Appiah" ], "title": "Investigating the multivariate Granger causality between energy consumption, economic growth and CO2 emissions in Ghana", "venue": "Energy Policy,", "year": 2018 }, { "authors": [ "N. Bacaër" ], "title": "Lotka, Volterra and the predator–prey system (1920–1926)", "venue": "In A Short History of Mathematical Population Dynamics,", "year": 2011 }, { "authors": [ "A. Ben-Hur", "A. Elisseeff", "I. Guyon" ], "title": "A stability based method for discovering structure in clustered data", "venue": "Pacific Symposium on Biocomputing,", "year": 2002 }, { "authors": [ "Y. Benjamini", "Y. Hochberg" ], "title": "Controlling the false discovery rate: A practical and powerful approach to multiple testing", "venue": "Journal of the Royal Statistical Society. Series B (Methodological),", "year": 1995 }, { "authors": [ "K.H. Brodersen", "C.S. Ong", "K.E. Stephan", "J.M. Buhmann" ], "title": "The balanced accuracy and its posterior distribution", "venue": "In 2010 20th International Conference on Pattern Recognition,", "year": 2010 }, { "authors": [ "A.K. Charakopoulos", "G.A. Katsouli", "T.E. Karakasidis" ], "title": "Dynamics and causalities of atmospheric and oceanic data identified by complex networks and Granger causality analysis", "venue": "Physica A: Statistical Mechanics and its Applications,", "year": 2018 }, { "authors": [ "C.W.J. Granger" ], "title": "Investigating causal relations by econometric models and cross-spectral methods", "venue": null, "year": 1969 }, { "authors": [ "S. Haufe", "V.V. Nikulin", "G. Nolte" ], "title": "Alleviating the influence of weak data asymmetries on Granger-Causal analyses", "venue": "In Latent Variable Analysis and Signal Separation,", "year": 2012 }, { "authors": [ "K. Inoue", "A. Doncescu", "H. Nabeshima" ], "title": "Hypothesizing about causal networks with positive and negative effects by meta-level abduction", "venue": "In Inductive Logic Programming,", "year": 2011 }, { "authors": [ "A. Karimi", "M.R. Paul" ], "title": "Extensive chaos in the Lorenz-96 model", "venue": "Chaos: An interdisciplinary journal of nonlinear science,", "year": 2010 }, { "authors": [ "S. Khanna", "V.Y.F. Tan" ], "title": "Economy statistical recurrent units for inferring nonlinear Granger causality", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "T. Kipf", "E. Fetaya", "K.-C. Wang", "M. Welling", "R. Zemel" ], "title": "Neural relational inference for interacting systems", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "M. Kolar", "L. Song", "A. Ahmed", "E.P. Xing" ], "title": "Estimating time-varying networks", "venue": "Annals of Applied Statistics, 4(1):94–123,", "year": 2010 }, { "authors": [ "T. Lange", "M.L. Braun", "V. Roth", "J.M. Buhmann" ], "title": "Stability-based model selection", "venue": "In Advances in Neural Information Processing Systems,", "year": 2003 }, { "authors": [ "E.N. Lorenz" ], "title": "Predictability: a problem partly solved", "venue": "In Seminar on Predictability,", "year": 1995 }, { "authors": [ "S. Löwe", "D. Madras", "R. Zemel", "M. 
Welling" ], "title": "Amortized causal discovery: Learning to infer causal graphs from time-series data, 2020", "venue": null, "year": 2006 }, { "authors": [ "H. Lütkepohl" ], "title": "New Introduction to Multiple Time Series Analysis", "venue": null, "year": 2007 }, { "authors": [ "D. Marinazzo", "M. Pellicoro", "S. Stramaglia" ], "title": "Kernel method for nonlinear Granger causality", "venue": "Physical Review Letters,", "year": 2008 }, { "authors": [ "J.M. McCracken" ], "title": "Exploratory causal analysis with time series data", "venue": "Synthesis Lectures on Data Mining and Knowledge Discovery,", "year": 2016 }, { "authors": [ "N. Meinshausen", "P. Bühlmann" ], "title": "Stability selection", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2010 }, { "authors": [ "A. Montalto", "S. Stramaglia", "L. Faes", "G. Tessitore", "R. Prevete", "D. Marinazzo" ], "title": "Neural networks with non-uniform embedding and explicit validation phase to assess Granger causality", "venue": "Neural Networks,", "year": 2015 }, { "authors": [ "K.P. Murphy", "S. Russell" ], "title": "Dynamic Bayesian networks: representation, inference and learning", "venue": null, "year": 2002 }, { "authors": [ "M. Nauta", "D. Bucur", "C. Seifert" ], "title": "Causal discovery with attention-based convolutional neural networks", "venue": "Machine Learning and Knowledge Extraction,", "year": 2019 }, { "authors": [ "W.B. Nicholson", "D.S. Matteson", "J. Bien" ], "title": "VARX-L: Structured regularization for large vector autoregressions with exogenous variables", "venue": "International Journal of Forecasting,", "year": 2017 }, { "authors": [ "J. Peters", "D. Janzing", "B. Schölkopf" ], "title": "Causal inference on time series using restricted structural equation models", "venue": "In Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "J. Peters", "D. Janzing", "B. Schölkopf" ], "title": "Elements of Causal Inference – Foundations and Learning Algorithms", "venue": null, "year": 2017 }, { "authors": [ "D. Quesada" ], "title": "dbnR: Dynamic bayesian network learning and inference, 2020", "venue": "URL https:// CRAN.R-project.org/package=dbnR. R package (v", "year": 2020 }, { "authors": [ "R R Core Team" ], "title": "A language and environment for statistical computing, 2020", "venue": "URL https: //www.R-project.org/", "year": 2020 }, { "authors": [ "W. Ren", "B. Li", "M. Han" ], "title": "A novel Granger causality method based on HSIC-Lasso for revealing nonlinear relationship between multivariate time series", "venue": "Physica A: Statistical Mechanics and its Applications,", "year": 2020 }, { "authors": [ "M.M. Rinschen", "J. Ivanisevic", "M. Giera", "G. Siuzdak" ], "title": "Identification of bioactive metabolites using activity metabolomics", "venue": "Nature Reviews Molecular Cell Biology,", "year": 2019 }, { "authors": [ "A. Roebroeck", "E. Formisano", "R. Goebel" ], "title": "Mapping directed influence over the brain using Granger causality and fMRI", "venue": null, "year": 2005 }, { "authors": [ "S. Seabold", "J. Perktold" ], "title": "statsmodels: Econometric and statistical modeling with python", "venue": "In 9th Python in Science Conference,", "year": 2010 }, { "authors": [ "S.M. Smith", "K.L. Miller", "G. Salimi-Khorshidi", "M. Webster", "C.F. Beckmann", "T.E. Nichols", "J.D. Ramsey", "M.W. Woolrich" ], "title": "Network modelling methods for FMRI", "venue": null, "year": 2011 }, { "authors": [ "L. Song", "M. Kolar", "E. 
Xing" ], "title": "Time-varying dynamic Bayesian networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2009 }, { "authors": [ "W. Sun", "J. Wang", "Y. Fang" ], "title": "Consistent selection of tuning parameters via variable selection stability", "venue": "Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "A. Tank", "I. Covert", "N. Foti", "A. Shojaie", "E. Fox" ], "title": "Neural Granger causality for nonlinear time series, 2018", "venue": null, "year": 2018 }, { "authors": [ "I. Tsamardinos", "L.E. Brown", "C.F. Aliferis" ], "title": "The max-min hill-climbing Bayesian network structure learning algorithm", "venue": "Machine Learning,", "year": 2006 }, { "authors": [ "Y. Wang", "K. Lin", "Y. Qi", "Q. Lian", "S. Feng", "Z. Wu", "G. Pan" ], "title": "Estimating brain connectivity with varying-length time lags using a recurrent neural network", "venue": "IEEE Transactions on Biomedical Engineering,", "year": 2018 }, { "authors": [ "I. Winkler", "D. Panknin", "D. Bartz", "K.-R. Muller", "S. Haufe" ], "title": "Validity of time reversal for testing Granger causality", "venue": "IEEE Transactions on Signal Processing,", "year": 2016 }, { "authors": [ "T. Wu", "T. Breuel", "M. Skuhersky", "J. Kautz" ], "title": "Discovering nonlinear relations with minimum predictive information regularization, 2020", "venue": null, "year": 2001 }, { "authors": [ "H. Xing-Chen", "Q. Zheng", "T. Lei", "S. Li-Ping" ], "title": "Research on structure learning of dynamic Bayesian networks by particle swarm optimization", "venue": "IEEE Symposium on Artificial Life, pp", "year": 2007 }, { "authors": [ "L.A. Zager", "G.C. Verghese" ], "title": "Graph similarity scoring and matching", "venue": "Applied Mathematics Letters,", "year": 2008 }, { "authors": [ "H. Zou", "T. Hastie" ], "title": "Regularization and variable selection via the elastic net", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2005 }, { "authors": [ "Nicholson" ], "title": "Different penalties induce different sparsity patterns in coefficient matrices", "venue": null, "year": 2017 }, { "authors": [ "Marinazzo" ], "title": "leverage reproducing kernel Hilbert spaces to infer linear Granger causality in an appropriate transformed feature space. Ren et al. (2020) introduce a kernel-based GC inference technique that relies on regularisation – Hilbert–Schmidt independence criterion (HSIC) Lasso GC. Neural Networks with Non-uniform Embedding", "venue": null, "year": 2008 }, { "authors": [ "Wang" ], "title": "2018) extend the NUE by replacing MLPs with LSTMs", "venue": "Neural Granger Causality. Tank et al", "year": 2018 }, { "authors": [ "Nauta" ], "title": "minimum predictive information regularisation that encourages the corruption of predictor time series. Similarly to the approaches of Tank et al", "venue": null, "year": 2018 }, { "authors": [ "2018 Tank et al", "2019 Nauta et al", "Khanna", "2020 Tan", "Wu" ], "title": "2020), which in this setting, have to be retrained separately for each replicate, the NRI is trained on the pooled dataset, leveraging shared dynamics. 
]
[ { "heading": "1 INTRODUCTION", "text": "Granger causality (GC) (Granger, 1969) is a popular practical approach for the analysis of multivariate time series and has become instrumental in exploratory analysis (McCracken, 2016) in various disciplines, such as neuroscience (Roebroeck et al., 2005), economics (Appiah, 2018), and climatology (Charakopoulos et al., 2018). Recently, the focus of the methodological research has been on inferring GC under nonlinear dynamics (Tank et al., 2018; Nauta et al., 2019; Wu et al., 2020; Khanna & Tan, 2020; Löwe et al., 2020), causal structures varying across replicates (Löwe et al., 2020), and unobserved confounding (Nauta et al., 2019; Löwe et al., 2020).\nTo the best of our knowledge, the latest powerful techniques for inferring GC do not target the effect sign detection (see Section 2.1 for a formal definition) or exploration of effect variability with time and, thus, have limited interpretability. This drawback defeats the purpose of GC analysis as an exploratory statistical tool. In some nonlinear interactions, one variable may have an exclusively positive or negative effect on another if it consistently drives the other variable up or down, respectively. Negative and positive causal relationships are common in many real-world systems, for example, gene regulatory networks feature inhibitory effects (Inoue et al., 2011) or in metabolomics, certain compounds may inhibit or promote synthesis of other metabolites (Rinschen et al., 2019). Differentiating between the two types of interactions would allow inferring and understanding such inhibition and promotion relationships in real-world dynamical systems and would facilitate a more comprehensive and insightful exploratory analysis. Therefore, we see a need for a framework capable of inferring nonlinear GC which is more amenable to interpretation than previously proposed methods (Tank et al., 2018; Nauta et al., 2019; Khanna & Tan, 2020). To this end, we introduce a novel method for detecting nonlinear multivariate Granger causality that is interpretable, in the sense that it allows detecting effect signs and exploring influences among variables throughout time. The main contributions of the paper are as follows:\n1. We extend self-explaining neural network models (Alvarez-Melis & Jaakkola, 2018) to time series analysis. The resulting autoregressive model, named generalised vector autore-\ngression (GVAR), is interpretable and allows exploring GC relations between variables, signs of Granger-causal effects, and their variability through time.\n2. We propose a framework for inferring nonlinear multivariate GC that relies on a GVAR model with sparsity-inducing and time-smoothing penalties. Spurious associations are mitigated by finding relationships that are stable across original and time-reversed (Winkler et al., 2016) time series data.\n3. We comprehensively compare the proposed framework and the powerful baseline methods of Tank et al. (2018), Nauta et al. (2019), and Khanna & Tan (2020) on a range of synthetic time series datasets with known Granger-causal relationships. We evaluate the ability of the methods to infer the ground truth GC structure and effect signs." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "" }, { "heading": "2.1 GRANGER CAUSALITY", "text": "Granger-causal relationships are given by a set of directed dependencies within multivariate time series. The classical definition of Granger causality is given, for example, by Lütkepohl (2007). 
Below we define nonlinear multivariate GC, based on the adaptation by Tank et al. (2018). Consider a time series with p variables: $\{\mathbf{x}_t\}_{t \in \mathbb{Z}^+} = \left\{\left(x^1_t \; x^2_t \; \dots \; x^p_t\right)^\top\right\}_{t \in \mathbb{Z}^+}$. Assume that causal relationships between variables are given by the following structural equation model:

$$x^i_t := g_i\left(x^1_{1:(t-1)}, \dots, x^j_{1:(t-1)}, \dots, x^p_{1:(t-1)}\right) + \varepsilon^i_t, \quad \text{for } 1 \leq i \leq p, \qquad (1)$$

where $x^j_{1:(t-1)}$ is a shorthand notation for $x^j_1, x^j_2, \dots, x^j_{t-1}$; $\varepsilon^i_t$ are additive innovation terms; and $g_i(\cdot)$ are potentially nonlinear functions, specifying how the future values of variable $x^i$ depend on the past values of $\mathbf{x}$. We then say that variable $x^j$ does not Granger-cause variable $x^i$, denoted as $x^j \not\rightarrow x^i$, if and only if $g_i(\cdot)$ is constant in $x^j_{1:(t-1)}$.

Depending on the form of the functional relationship $g_i(\cdot)$, we can also differentiate between positive and negative Granger-causal effects. In this paper, we define the effect sign as follows: if $g_i(\cdot)$ is increasing in all $x^j_{1:(t-1)}$, then we say that variable $x^j$ has a positive effect on $x^i$; if $g_i(\cdot)$ is decreasing in $x^j_{1:(t-1)}$, then $x^j$ has a negative effect on $x^i$. Note that an effect may be neither positive nor negative. For example, $x^j$ can ‘contribute’ both positively and negatively to the future of $x^i$ at different delays, or, for instance, the effect of $x^j$ on $x^i$ could depend on another variable.

Granger-causal relationships can be summarised by a directed graph $\mathcal{G} = (V, E)$, referred to as a summary graph (Peters et al., 2017), where $V = \{1, \dots, p\}$ is a set of vertices corresponding to variables, and $E = \{(i, j) : x^i \rightarrow x^j\}$ is a set of edges corresponding to Granger-causal relationships. Let $\mathbf{A} \in \{0, 1\}^{p \times p}$ denote the adjacency matrix of $\mathcal{G}$. The inference problem is then to estimate $\mathbf{A}$ from observations $\{\mathbf{x}_t\}_{t=1}^{T}$, where $T$ is the length of the time series observed. In practice, we usually fit a time series model that explicitly or implicitly infers dependencies between variables. Consequently, a statistical test for GC is performed. A conventional approach (Lütkepohl, 2007) used to test for linear Granger causality is the linear vector autoregression (VAR) (see Appendix A)." }, { "heading": "2.2 RELATED WORK", "text": "" }, { "heading": "2.2.1 TECHNIQUES FOR INFERRING NONLINEAR GRANGER CAUSALITY", "text": "Relational inference in time series has been studied extensively in statistics and machine learning. Early techniques for inferring undirected relationships include time-varying dynamic Bayesian networks (Song et al., 2009) and time-smoothed, regularised logistic regression with time-varying coefficients (Kolar et al., 2010). Recent approaches to inferring Granger-causal relationships leverage the expressive power of neural networks (Montalto et al., 2015; Wang et al., 2018; Tank et al., 2018; Nauta et al., 2019; Khanna & Tan, 2020; Wu et al., 2020; Löwe et al., 2020) and are often based on regularised autoregressive models, reminiscent of the Lasso Granger method (Arnold et al., 2007).

Tank et al. (2018) propose using sparse-input multilayer perceptron (cMLP) and long short-term memory (cLSTM) to model nonlinear autoregressive relationships within time series. Building on this, Khanna & Tan (2020) introduce a more sample-efficient economy statistical recurrent unit (eSRU) architecture with sparse input layer weights. Nauta et al. (2019) propose a temporal causal discovery framework (TCDF) that leverages attention-based convolutional neural networks to test for GC. 
Appendix B contains further details about these and other relevant methods.

Approaches discussed above (Tank et al., 2018; Nauta et al., 2019; Khanna & Tan, 2020) and in Appendix B (Marinazzo et al., 2008; Ren et al., 2020; Montalto et al., 2015; Wang et al., 2018; Wu et al., 2020; Löwe et al., 2020) focus almost exclusively on relational inference and do not allow easily interpreting signs of GC effects and their variability through time. In this paper, we propose a more interpretable inference framework, building on self-explaining neural networks (Alvarez-Melis & Jaakkola, 2018), that, as shown by experiments, performs on par with the techniques described herein." }, { "heading": "2.2.2 STABILITY-BASED SELECTION PROCEDURES", "text": "The literature on stability-based model selection is abundant (Ben-Hur et al., 2002; Lange et al., 2003; Meinshausen & Bühlmann, 2010; Sun et al., 2013). For example, Ben-Hur et al. (2002) propose measuring stability of clustering solutions under perturbations to assess structure in the data and select an appropriate number of clusters. Lange et al. (2003) propose a somewhat similar approach. Meinshausen & Bühlmann (2010) introduce the stability selection procedure applicable to a wide range of high-dimensional problems: their method guides the choice of the amount of regularisation based on the error rate control. Sun et al. (2013) investigate a similar procedure in the context of tuning penalised regression models." }, { "heading": "2.2.3 SELF-EXPLAINING NEURAL NETWORKS", "text": "Alvarez-Melis & Jaakkola (2018) introduce self-explaining neural networks (SENN) – a class of intrinsically interpretable models motivated by explicitness, faithfulness, and stability properties. A SENN with a link function $g(\cdot)$ and interpretable basis concepts $h(\mathbf{x}): \mathbb{R}^p \rightarrow \mathbb{R}^k$ follows the form

$$f(\mathbf{x}) = g\left(\theta(\mathbf{x})_1 h(\mathbf{x})_1, \dots, \theta(\mathbf{x})_k h(\mathbf{x})_k\right), \qquad (2)$$

where $\mathbf{x} \in \mathbb{R}^p$ are predictors; and $\theta(\cdot)$ is a neural network with $k$ outputs. We refer to $\theta(\mathbf{x})$ as generalised coefficients for data point $\mathbf{x}$ and use them to ‘explain’ contributions of individual basis concepts to predictions. In the case of $g(\cdot)$ being sum and concepts being raw inputs, Equation 2 simplifies to

$$f(\mathbf{x}) = \sum_{j=1}^{p} \theta(\mathbf{x})_j x_j. \qquad (3)$$

Appendix C lists additional properties SENNs need to satisfy, as defined by Alvarez-Melis & Jaakkola (2018).

A SENN is trained by minimising the following gradient-regularised loss function, which balances performance with interpretability:

$$\mathcal{L}_y(f(\mathbf{x}), y) + \lambda \mathcal{L}_\theta(f(\mathbf{x})), \qquad (4)$$

where $\mathcal{L}_y(f(\mathbf{x}), y)$ is a loss term for the ground classification or regression task; $\lambda > 0$ is a regularisation parameter; and $\mathcal{L}_\theta(f(\mathbf{x})) = \left\|\nabla_{\mathbf{x}} f(\mathbf{x}) - \theta(\mathbf{x})^\top J^h_{\mathbf{x}}(\mathbf{x})\right\|_2$ is the gradient penalty, where $J^h_{\mathbf{x}}$ is the Jacobian of $h(\cdot)$ w.r.t. $\mathbf{x}$. This penalty encourages $f(\cdot)$ to be locally linear." }, { "heading": "3 METHOD", "text": "We propose an extension of SENNs (Alvarez-Melis & Jaakkola, 2018) to autoregressive time series modelling, which is essentially a vector autoregression (see Equation 11 in Appendix A) with generalised coefficient matrices. We refer to this model as generalised vector autoregression (GVAR). The GVAR model of order $K$ is given by

$$\mathbf{x}_t = \sum_{k=1}^{K} \Psi_{\theta_k}(\mathbf{x}_{t-k})\,\mathbf{x}_{t-k} + \varepsilon_t, \qquad (5)$$

where $\Psi_{\theta_k}: \mathbb{R}^p \rightarrow \mathbb{R}^{p \times p}$ is a neural network parameterised by $\theta_k$. For brevity, we omit the intercept term here and in following equations. No specific distributional assumptions are made on the additive innovation terms $\varepsilon_t$. $\Psi_{\theta_k}(\mathbf{x}_{t-k})$ is a matrix whose components correspond to the generalised coefficients for lag $k$ at time step $t$. 
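As an illustration, below is a minimal PyTorch sketch of one lag-specific network $\Psi_{\theta_k}$ and of the forecast in Equation 5. This is a hedged re-implementation with our own names and layer sizes; the authors' released code may differ:

```python
import torch
import torch.nn as nn

class LagNetwork(nn.Module):
    """Maps x_{t-k} of shape (batch, p) to generalised coefficient matrices (batch, p, p)."""
    def __init__(self, p: int, hidden: int = 50):
        super().__init__()
        self.p = p
        self.mlp = nn.Sequential(
            nn.Linear(p, hidden), nn.ReLU(), nn.Linear(hidden, p * p)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # p inputs, p^2 outputs, reshaped into a (p x p) matrix per sample.
        return self.mlp(x).view(-1, self.p, self.p)

def gvar_forecast(nets, lagged):
    """Equation 5 (intercept omitted): x_t is approximated by sum_k Psi_k(x_{t-k}) x_{t-k}.

    nets: list of K LagNetwork modules; lagged: list of K tensors of shape (batch, p),
    where lagged[k - 1] holds x_{t-k}.
    """
    coeffs = [net(x) for net, x in zip(nets, lagged)]
    pred = sum(torch.bmm(psi, x.unsqueeze(-1)).squeeze(-1)
               for psi, x in zip(coeffs, lagged))
    return pred, coeffs

# Usage: K networks, e.g. nets = nn.ModuleList([LagNetwork(p) for _ in range(K)]).
```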
In particular, the component $(i, j)$ of $\Psi_{\theta_k}(\mathbf{x}_{t-k})$ corresponds to the influence of $x^j_{t-k}$ on $x^i_t$. In our implementation, we use $K$ MLPs for $\Psi_{\theta_k}(\cdot)$ with $p$ input units and $p^2$ outputs each, which are then reshaped into an $\mathbb{R}^{p \times p}$ matrix. Observe that the model defined in Equation 5 takes on a form of SENN (see Equation 3) with future time series values as the response, past values as basis concepts, and sum as a link function.

Relationships between variables $x^1, \dots, x^p$ and their variability throughout time can be explored by inspecting generalised coefficient matrices. To mitigate spurious inference in multivariate time series, we train GVAR by minimising the following penalised loss function with mini-batch gradient descent:

$$\frac{1}{T-K}\sum_{t=K+1}^{T} \left\|\mathbf{x}_t - \hat{\mathbf{x}}_t\right\|_2^2 + \frac{\lambda}{T-K}\sum_{t=K+1}^{T} R\left(\Psi_t\right) + \frac{\gamma}{T-K-1}\sum_{t=K+1}^{T-1} \left\|\Psi_{t+1} - \Psi_t\right\|_2^2, \qquad (6)$$

where $\{\mathbf{x}_t\}_{t=1}^{T}$ is a single observed replicate of a $p$-variate time series of length $T$; $\hat{\mathbf{x}}_t = \sum_{k=1}^{K} \Psi_{\hat{\theta}_k}(\mathbf{x}_{t-k})\,\mathbf{x}_{t-k}$ is the one-step forecast for the $t$-th time point by the GVAR model; $\Psi_t$ is a shorthand notation for the concatenation of generalised coefficient matrices at the $t$-th time point: $\left[\Psi_{\hat{\theta}_K}(\mathbf{x}_{t-K}) \;\; \Psi_{\hat{\theta}_{K-1}}(\mathbf{x}_{t-K+1}) \;\; \dots \;\; \Psi_{\hat{\theta}_1}(\mathbf{x}_{t-1})\right] \in \mathbb{R}^{p \times Kp}$; $R(\cdot)$ is a sparsity-inducing penalty term; and $\lambda, \gamma \geq 0$ are regularisation parameters. The loss function (see Equation 6) consists of three terms: (i) the mean squared error (MSE) loss, (ii) a sparsity-inducing regulariser, and (iii) the smoothing penalty term. Note that in the presence of categorically valued variables the MSE term can be replaced with e.g. the cross-entropy loss.

The sparsity-inducing term $R(\cdot)$ is an appropriate penalty on the norm of the generalised coefficient matrices. Examples of possible penalties for the linear VAR are provided in Table 4 in Appendix A. These penalties can be easily adapted to the GVAR model. In the current implementation, we employ the elastic-net-style penalty term (Zou & Hastie, 2005; Nicholson et al., 2017) $R(\Psi_t) = \alpha\left\|\Psi_t\right\|_1 + (1-\alpha)\left\|\Psi_t\right\|_2^2$, with $\alpha = 0.5$.

The smoothing penalty term, given by $\frac{1}{T-K-1}\sum_{t=K+1}^{T-1}\left\|\Psi_{t+1}-\Psi_t\right\|_2^2$, is the average norm of the difference between generalised coefficient matrices for two consecutive time points. This penalty term encourages smoothness in the evolution of coefficients w.r.t. time and replaces the gradient penalty $\mathcal{L}_\theta(f(\mathbf{x}))$ from the original formulation of SENN (see Equation 4). Observe that if the term is constrained to be 0, then the GVAR model behaves as a penalised linear VAR on the training data: coefficient matrices are invariant across time steps.

We provide an ablation study for the loss function in Appendix D." }, { "heading": "3.1 INFERENCE FRAMEWORK", "text": "Once neural networks $\Psi_{\hat{\theta}_k}$, $k = 1, \dots, K$, have been trained, we quantify strengths of Granger-causal relationships between variables by aggregating matrices $\Psi_{\hat{\theta}_k}(\mathbf{x}_t)$ across all time steps into summary statistics. We aggregate the obtained generalised coefficients into matrix $\mathbf{S} \in \mathbb{R}^{p \times p}$ as follows:

$$S_{i,j} = \max_{1 \leq k \leq K}\left\{ \underset{K+1 \leq t \leq T}{\mathrm{median}}\left( \left|\left(\Psi_{\hat{\theta}_k}(\mathbf{x}_t)\right)_{i,j}\right| \right)\right\}, \quad \text{for } 1 \leq i, j \leq p. \qquad (7)$$

Intuitively, $S_{i,j}$ are statistics that quantify the strength of the Granger-causal effect of $x^i$ on $x^j$ using magnitudes of generalised coefficients. We expect $S_{i,j}$ to be close to 0 for non-causal relationships and $S_{i,j} \gg 0$ if $x^i \rightarrow x^j$. Note that in practice $\mathbf{S}$ is not binary-valued, as opposed to the ground truth adjacency matrix $\mathbf{A}$, which we want to infer, because the outputs of $\Psi_{\hat{\theta}_k}(\cdot)$ are not shrunk to exact zeros. 
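Continuing the sketch above, the penalised loss in Equation 6 and the aggregation statistic in Equation 7 might be implemented as follows; the tensor shapes and batching conventions here are our assumptions:

```python
import torch

def gvar_loss(pred, target, psi_seq, lam=0.2, gamma=0.5, alpha=0.5):
    """Equation 6: MSE + elastic-net sparsity penalty R + smoothing penalty.

    psi_seq: tensor of shape (T - K, p, K * p), the generalised coefficient matrices
    concatenated over lags, one slice per time step t = K + 1, ..., T.
    """
    mse = ((pred - target) ** 2).sum(dim=-1).mean()
    sparsity = (alpha * psi_seq.abs().sum(dim=(1, 2))
                + (1.0 - alpha) * (psi_seq ** 2).sum(dim=(1, 2))).mean()
    smooth = ((psi_seq[1:] - psi_seq[:-1]) ** 2).sum(dim=(1, 2)).mean()
    return mse + lam * sparsity + gamma * smooth

def gc_strength(coeff_history):
    """Equation 7: S[i, j] = max over lags k of the median over t of |Psi_k(x_t)[i, j]|.

    coeff_history: tensor of shape (T - K, K, p, p) holding the coefficients
    for every time step and lag.
    """
    med = coeff_history.abs().median(dim=0).values  # (K, p, p)
    return med.max(dim=0).values                    # (p, p)
```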
Therefore, we need a procedure deciding for which variable pairs $S_{i,j}$ are significantly different from 0.

To infer a binary matrix of GC relationships, we propose a heuristic stability-based procedure that relies on time-reversed Granger causality (TRGC) (Haufe et al., 2012; Winkler et al., 2016). The intuition behind time reversal is to compare causality scores obtained from original and time-reversed data: we expect relationships to be flipped on time-reversed data (Haufe et al., 2012; Winkler et al., 2016). Winkler et al. (2016) prove the validity of time reversal for linear finite-order autoregressive processes. In our work, time reversal is leveraged for inferring stable dependency structures in nonlinear time series.

Algorithm 1 summarises the proposed stability-based thresholding procedure. During inference, two separate GVAR models are trained: one on the original time series data, and another on time-reversed data (lines 3-4 in Algorithm 1). Consequently, we estimate strengths of GC relationships with these two models, as in Equation 7, and choose a threshold for matrix $\mathbf{S}$ which yields the highest agreement between thresholded GC strengths estimated on original and time-reversed data (lines 5-9 in Algorithm 1). A sequence of $Q$ thresholds, given by $\xi = (\xi_1, \dots, \xi_Q)$, is considered, where the $i$-th threshold is the $\xi_i$-quantile of values in $\mathbf{S}$. The agreement between inferred thresholded structures is measured (line 7 in Algorithm 1) using the balanced accuracy score (Brodersen et al., 2010), denoted by $\mathrm{BA}(\cdot, \cdot)$ and equal to the average of sensitivity and specificity, to reflect both sensitivity and specificity of the inference results. Other measures can be used for quantifying the agreement, for example, graph similarity scores (Zager & Verghese, 2008). In this paper, we utilise BA, because considered time series have sparse GC summary graphs and BA weighs positives and negatives equally. In practice, trivial solutions, such as inferring no causal relationships, only self-causal links or all possible causal links, are very stable. The agreement for such solutions is set to 0. Thus, the procedure assumes that the true causal structure is different from these trivial cases. Figure 6 in Appendix E contains an example of stability-based thresholding applied to simulated data.

Algorithm 1: Stability-based thresholding for inferring Granger causality with GVAR.

Input: One replicate of multivariate time series $\{\mathbf{x}_t\}_{t=1}^{T}$; regularisation parameters $\lambda$ and $\gamma \geq 0$; model order $K \geq 1$; sequence $\xi = (\xi_1, \dots, \xi_Q)$, $0 \leq \xi_1 < \xi_2 < \dots < \xi_Q \leq 1$.
Output: Estimate $\hat{\mathbf{A}}$ of the adjacency matrix of the GC summary graph.
1 Let $\{\tilde{\mathbf{x}}_t\}_{t=1}^{T}$ be the time-reversed version of $\{\mathbf{x}_t\}_{t=1}^{T}$, i.e. $\{\tilde{\mathbf{x}}_1, \dots, \tilde{\mathbf{x}}_T\} \equiv \{\mathbf{x}_T, \dots, \mathbf{x}_1\}$.
2 Let $\tau(\mathbf{X}, \chi)$ be the elementwise thresholding operator. For each component of $\mathbf{X}$, $\tau(X_{i,j}, \chi) = 1$ if $|X_{i,j}| \geq \chi$, and $\tau(X_{i,j}, \chi) = 0$ otherwise.
3 Train an order $K$ GVAR with parameters $\lambda$ and $\gamma$ by minimising the loss in Equation 6 on $\{\mathbf{x}_t\}_{t=1}^{T}$ and compute $\mathbf{S}$ as in Equation 7.
4 Train another GVAR on $\{\tilde{\mathbf{x}}_t\}_{t=1}^{T}$ and compute $\tilde{\mathbf{S}}$ as in Equation 7.
5 for $i = 1$ to $Q$ do
6   Let $\kappa_i = q_{\xi_i}(\mathbf{S})$ and $\tilde{\kappa}_i = q_{\xi_i}(\tilde{\mathbf{S}})$, where $q_\xi(\mathbf{X})$ denotes the $\xi$-quantile of $\mathbf{X}$.
7   Evaluate agreement $\varsigma_i = \frac{1}{2}\left[\mathrm{BA}\left(\tau(\mathbf{S}, \kappa_i), \tau(\tilde{\mathbf{S}}^\top, \tilde{\kappa}_i)\right) + \mathrm{BA}\left(\tau(\tilde{\mathbf{S}}^\top, \tilde{\kappa}_i), \tau(\mathbf{S}, \kappa_i)\right)\right]$.
8 end
9 Let $i^* = \mathrm{argmax}_{1 \leq i \leq Q}\, \varsigma_i$ and $\xi^* = \xi_{i^*}$.
10 Let $\hat{\mathbf{A}} = \tau\left(\mathbf{S}, q_{\xi^*}(\mathbf{S})\right)$.
11 return $\hat{\mathbf{A}}$.

To summarise, this procedure attempts to find a dependency structure that is stable across original and time-reversed data in order to identify significant Granger-causal relationships. 
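A small NumPy/scikit-learn sketch of the threshold selection in Algorithm 1 is given below. Here `S` and `S_rev` are the strength matrices from the two GVAR models (the second trained on the reversed series, e.g. `x[::-1]`), and the trivial-solution check is simplified relative to the paper's description:

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

def stability_threshold(S, S_rev, qs=np.linspace(0.0, 1.0, 20)):
    """Algorithm 1 (sketch): pick the quantile threshold maximising agreement
    between structures inferred on original and time-reversed data."""
    best_q, best_score = qs[0], -np.inf
    for q in qs:
        A = (S >= np.quantile(S, q)).astype(int)
        A_rev = (S_rev >= np.quantile(S_rev, q)).astype(int).T  # note the transpose
        if A.sum() in (0, A.size):  # skip trivial solutions (simplified check)
            continue
        score = 0.5 * (balanced_accuracy_score(A.ravel(), A_rev.ravel())
                       + balanced_accuracy_score(A_rev.ravel(), A.ravel()))
        if score > best_score:
            best_q, best_score = q, score
    return (S >= np.quantile(S, best_q)).astype(int)
```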
In Section 4, we demonstrate the efficacy of this inference framework. In particular, we show that it performs on par with previously proposed approaches mentioned in Section 2.2." }, { "heading": "3.1.1 COMPUTATIONAL COMPLEXITY", "text": "Our inference framework differs from the previously proposed cMLP, cLSTM (Tank et al., 2018), TCDF (Nauta et al., 2019), and eSRU (Khanna & Tan, 2020) w.r.t. computational complexity. Mentioned methods require training p neural networks, one for each variable separately, whereas our inference framework trains 2K neural networks. A clear disadvantage of GVAR is its memory complexity: GVAR has many more parameters, since every MLP it trains has p² outputs. Appendix F provides a comparison between training times on simulated datasets with p ∈ {4, 15, 20}. In practice, for a moderate order K and a larger p, we observe that training a GVAR model is faster than a cLSTM and eSRU." }, { "heading": "4 EXPERIMENTS", "text": "The purpose of our experiments is twofold: (i) to compare methods in terms of their ability to infer the underlying GC structure; and (ii) to compare methods in terms of their ability to detect signs of GC effects. We compare GVAR to 5 baseline techniques: VAR with F-tests for Granger causality1 and the Benjamini-Hochberg procedure (Benjamini & Hochberg, 1995) for controlling the false discovery rate (FDR) (at q = 0.05); cMLP and cLSTM (Tank et al., 2018)2; TCDF (Nauta et al., 2019)3; and eSRU (Khanna & Tan, 2020)4. We particularly focus on the baselines that, similarly to GVAR, leverage sparsity-inducing penalties, namely cMLP, cLSTM, and eSRU. In addition, we provide a comparison with dynamic Bayesian networks (Murphy & Russell, 2002) in Appendix I. The code is available in the GitHub repository: https://github.com/i6092467/GVAR.

4.1 INFERRING GRANGER CAUSALITY

We first compare methods w.r.t. their ability to infer GC relationships correctly on two synthetic datasets. We evaluate inferred dependencies on each independent replicate/simulation separately against the adjacency matrix of the ground truth GC graph; an example is shown in Figure 2. Each method is trained only on one sequence. Unless otherwise mentioned, we use accuracy (ACC) and balanced accuracy (BA) scores to evaluate thresholded inference results. For cMLP, cLSTM, and eSRU, the relevant weight norms are compared to 0. For TCDF, thresholding is performed within the framework based on the permutation test described by Nauta et al. (2019). For GVAR, thresholded matrices are obtained by applying Algorithm 1. In addition, we look at the continuously-valued inference results: norms of relevant weights, scores, and strengths of GC relationships (see Equation 7). We compare these scores against the true structure using areas under receiver operating characteristic (AUROC) and precision-recall (AUPRC) curves. For all evaluation metrics, we only consider off-diagonal elements of adjacency matrices, ignoring self-causal relationships, which are usually the easiest to infer. Note that our evaluation approach is different from those of Tank et al. (2018) and Khanna & Tan (2020); this partially explains some deviations from their results. Relevant hyperparameters of all models are tuned to maximise the BA score or AUPRC (if a model fails to shrink any weights to zeros) by performing a grid search (see Appendix H for details about hyperparameter tuning). 
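For reference, a small sketch of how such an evaluation could be computed with scikit-learn; average precision is used here as a stand-in for AUPRC, and all names are ours:

```python
import numpy as np
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             balanced_accuracy_score)

def evaluate_gc(A_true, S, A_hat):
    """Score continuous strengths S and a thresholded estimate A_hat against the
    ground-truth adjacency matrix, ignoring diagonal (self-causal) entries."""
    off = ~np.eye(A_true.shape[0], dtype=bool)
    return {
        "AUROC": roc_auc_score(A_true[off], S[off]),
        "AUPRC": average_precision_score(A_true[off], S[off]),
        "BA": balanced_accuracy_score(A_true[off], A_hat[off]),
        "ACC": float((A_true[off] == A_hat[off]).mean()),
    }
```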
In Appendix M, we compare the prediction error of all models on held-out data.\n1As implemented in the statsmodels library (Seabold & Perktold, 2010). 2https://github.com/iancovert/Neural-GC. 3https://github.com/M-Nauta/TCDF. 4https://github.com/sakhanna/SRU_for_GCI." }, { "heading": "4.1.1 LORENZ 96 MODEL", "text": "A standard benchmark for the evaluation of GC inference techniques is the Lorenz 96 model (Lorenz, 1995). This continuous time dynamical system in p variables is given by the following nonlinear differential equations:\ndxi dt = ( xi+1 − xi−2 ) xi−1 − xi + F, for 1 ≤ i ≤ p, (8)\nwhere x0 := xp, x−1 := xp−1, and xp+1 := x1; and F is a forcing constant that, in combination with p, controls the nonlinearity of the system (Tank et al., 2018; Karimi & Paul, 2010). As can be seen from Equation 8, the true causal structure is quite sparse. Figure 2 shows the adjacency matrix of the summary graph for this dataset (for other datasets, adjacency matrices are visualised in Appendix G). We numerically simulate R = 5 replicates with p = 20 variables and T = 500 observations under F = 10 and F = 40. The setting is similar to the experiments of Tank et al. (2018) and Khanna & Tan (2020), but includes more variables.\nTable 1 summarises the performance of the inference techniques on the Lorenz 96 time series under F = 10 and F = 40. For F = 10, all of the methods apart from TCDF are very successful at inferring GC relationships, even linear VAR. On average, GVAR outperforms all baselines, although performance differences are not considerable. For F = 40, the inference problem appears to be more difficult (Appendix J investigates performance of VAR and GVAR across a range of forcing constant values). In this case, TCDF and cLSTM perform surprisingly poorly, whereas cMLP, eSRU, and GVAR achieve somewhat comparable performance levels. GVAR attains the best combination of accuracy and BA scores, whereas cMLP has the highest AUROC and AUPRC. Thus, on Lorenz 96 data, the performance of GVAR is competitive with the other methods." }, { "heading": "4.1.2 SIMULATED FMRI TIME SERIES", "text": "Another dataset we consider consists of rich and realistic simulations of blood-oxygen-leveldependent (BOLD) time series (Smith et al., 2011) that were generated using the dynamic causal modelling functional magnetic resonance imaging (fMRI) forward model. In these time series, variables represent ‘activity’ in different spatial regions of interest within the brain. Herein, we consider R = 5 replicates from the simulation no. 3 of the original dataset. These time series contain p = 15 variables and only T = 200 observations. The ground truth causal structure is very sparse (see Appendix G). Details about hyperparameter tuning performed for this dataset can be found in Appendix H.2. This experiment is similar to one presented by Khanna & Tan (2020).\nTable 2 provides a comparison of the inference techniques. Surprisingly, TCDF outperforms other methods by a considerable margin (cf. Table 1). It is followed by our method that, on average, outperforms cMLP, cLSTM, and eSRU in terms of both AUROC and AUPRC. GVAR attains a BA score comparable to cLSTM. Importantly, eSRU fails to shrink any weights to exact zeros, thus, hindering the evaluation of accuracy and balanced accuracy scores (marked as ‘NA’ in Table 2). 
This\nexperiment demonstrates that the proximal gradient descent (Parikh & Boyd, 2014), as implemented by eSRU (Khanna & Tan, 2020), may fail to shrink any weights to 0 or shrinks all of them, even in relatively simple datasets. cMLP seems to provide little improvement over simple VAR w.r.t. AUROC or AUPRC. In general, this experiment promisingly shows that GVAR performs on par with the techniques proposed by Tank et al. (2018) and Khanna & Tan (2020) in a more realistic and data-scarce scenario than the Lorenz 96 experiment." }, { "heading": "4.2 INFERRING EFFECT SIGN", "text": "So far, we have only considered inferring GC relationships, but not the signs of Granger-causal effects. Such information can yield a better understanding of relations among variables. To this end, we consider the Lotka–Volterra model with multiple species ( Bacaër (2011) provides a definition of\nthe original two-species system ) , given by the following differential equations:\ndxi\ndt = αxi − βxi ∑ j∈Pa(xi) yj − η ( xi )2 , for 1 ≤ i ≤ p, (9)\ndyj\ndt = δyj ∑ k∈Pa(yj) xk − ρyj , for 1 ≤ j ≤ p, (10)\nwhere xi correspond to population sizes of prey species; yj denote population sizes of predator species; α, β, η, δ, ρ > 0 are fixed parameters controlling strengths of interactions; and Pa(xi), Pa(yj) are sets of Granger-causes of xi and yj , respectively. According to Equations 9 and 10,\nFigure 3: Simulated two-species Lotka–Volterra time series (top) and generalised coefficients (bottom). Prey have a positive effect on predators, and vice versa.\nthe population size of each prey species xi is driven down by ∣∣Pa(xi)∣∣ predator species (negative effects), whereas each\npredator species yj is driven up by ∣∣Pa(yj)∣∣ prey populations (positive effects).\nWe simulate the multi-species Lotka–Volterra system numerically. Appendix K contains details about simulations and the summary graph of the time series. To infer effect directions, we inspect signs of median generalised coefficients for trained GVAR models. For cMLP, cLSTM, TCDF, and eSRU, we inspect signs of averaged weights in relevant layers. For VAR, we examine coefficient signs. For the sake of fair comparison, we restrict all models to a maximum lag of K = 1 (where applicable). In this experiment, we focus on BA scores for positive ( BApos ) and negative ( BAneg ) relationships. Appendix L provides another example of detecting effect signs with GVAR, on a trivial benchmark with linear dynamics.\nTable 3 shows the results for this experiment. Linear VAR does not perform well at inferring the GC structure, however, its coefficient signs are strongly associated with true signs of relationships. cMLP provides a considerable improvement in GC inference, and surprisingly its input weights are informative about the signs of GC effects. cLSTM fails to shrink any of the relevant weights to zero; furthermore, the signs of its weights are not associated with the true signs. Although eSRU performs better than VAR at inferring the summary graph, its weights are not associated with effect signs at\nall. TCDF performs poorly in this experiment, failing to infer any relationships apart from selfcausation. 
Our model considerably outperforms all baselines in detecting effect signs, achieving nearly perfect scores: it infers more meaningful and interpretable parameter values than all other models.\nThese results are not surprising, because the baseline methods, apart from linear VAR, rely on interpreting weights of relevant layers that, in general, do not need to be associated with effect signs and are only informative about the presence or absence of GC interactions. Since the GVAR model follows a form of SENNs (see Equation 2), its generalised coefficients shed more light into how the future of the target variable depends on the past of its predictors. This restricted structure is more intelligible and yet is sufficiently flexible to perform on par with sparse-input neural networks.\nIn addition to inferring the summary graph, GVAR allows inspecting variability of generalised coefficients. Figure 3 provides an example of generalised coefficients inferred for a two-species Lotka– Volterra system. Although coefficients vary with time, GVAR consistently infers that the predator population is driven up by prey and the prey population is driven down by predators. For the multispecies system used to produce the quantitative results, inferred coefficients behave similarly (see Figure 12 in Appendix K)." }, { "heading": "5 CONCLUSION", "text": "In this paper, we focused on two problems: (i) inferring Granger-causal relationships in multivariate time series under nonlinear dynamics and (ii) inferring signs of Granger-causal relationships. We proposed a novel framework for GC inference based on autoregressive modelling with selfexplaining neural networks and demonstrated that, on simulated data, its performance is promisingly competitive with the related methods of Tank et al. (2018) and Khanna & Tan (2020). Proximal gradient descent employed by cMLP, cLSTM, and eSRU often does not shrink weights to exact zeros and, thus, prevents treating the inference technique as a statistical hypothesis test. Our framework mitigates this problem by performing a stability-based selection of significant relationships, finding a GC structure that is stable on original and time-reversed data. Additionally, proposed GVAR model is more amenable to interpretation, since relationships between variables can be explored by inspecting generalised coefficients, which, as we showed empirically, are more informative than input layer weights. To conclude, the proposed model and inference framework are a viable alternative to previous techniques and are better suited for exploratory analysis of multivariate time series data.\nIn future research, we plan a thorough investigation of the stability-based thresholding procedure (see Algorithm 1) and of time-reversal for inferring GC. Furthermore, we would like to facilitate a more comprehensive comparison with the baselines on real-world data sets. It would also be interesting to consider better-informed link functions and basis concepts (see Equation 2). Last but not least, we plan to tackle the problem of inferring time-varying GC structures with the introduced framework." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Djordje Miladinovic and Mark McMahon for valuable discussions and inputs. We also acknowledge Jonas Rothfuss and Kieran Chin-Cheong for their helpful feedback on the manuscript." 
}, { "heading": "A LINEAR VECTOR AUTOREGRESSION", "text": "Linear vector autoregression (VAR) (Lütkepohl, 2007) is a time series model conventionally used to test for Granger causality (see Section 2.1). VAR assumes that functions gi(·) in Equation 1 are linear:\nxt = ν + K∑ k=1 Ψkxt−k + εt, (11)\nwhere ν ∈ Rp is the intercept vector; Ψk ∈ Rp×p are coefficient matrices; and εt ∼ Np (0,Σε) are Gaussian innovation terms. Parameter K is the order of the VAR model and determines the maximum lag at which Granger-causal interactions occur. In VAR, Granger causality is defined by zero constraints on the coefficients, in particular, xi does not Granger-cause xj if and only if, for all lags k ∈ {1, 2, ...,K}, (Ψk)j,i = 0. These constraints can be tested by performing, for example, F -test or Wald test.\nUsually a VAR model is fitted using multivariate least squares. In high-dimensional time series, regularisation can be introduced to avoid inferring spurious associations. Table 4 shows various sparsity-inducing penalties for a linear VAR model of order K (see Equation 11), described by Nicholson et al. (2017). Different penalties induce different sparsity patterns in coefficient matrices Ψ1,Ψ2, ...,ΨK . These penalties can be adapted to the GVAR model for the sparsity-inducing term R(·) in Equation 6.\nB INFERRING GRANGER CAUSALITY UNDER NONLINEAR DYNAMICS\nBelow we provide a more detailed overview of the related work on inferring nonlinear multivariate Granger causality, focusing on the recent machine learning techniques that tackle this problem.\nKernel-based Methods. Kernel-based GC inference techniques provide a natural extension of the VAR model, described in Appendix A, to nonlinear dynamics. Marinazzo et al. (2008) leverage reproducing kernel Hilbert spaces to infer linear Granger causality in an appropriate transformed feature space. Ren et al. (2020) introduce a kernel-based GC inference technique that relies on regularisation – Hilbert–Schmidt independence criterion (HSIC) Lasso GC.\nNeural Networks with Non-uniform Embedding. Montalto et al. (2015) propose neural networks with non-uniform embedding (NUE). Significant Granger causes are identified using the NUE, a feature selection procedure. An MLP is ‘grown’ iteratively by greedily adding lagged predictor components as inputs. Once stopping conditions are satisfied, a predictor time series is claimed a significant cause of the target if at least one of its lagged components was added as an input. This technique is prohibitively costly, especially, in a high-dimensional setting, since it requires training and comparing many candidate models. Wang et al. (2018) extend the NUE by replacing MLPs with LSTMs.\nNeural Granger Causality. Tank et al. (2018) propose inferring nonlinear Granger causality using structured multilayer perceptron and long short-term memory with sparse input layer weights, cMLP and cLSTM. To infer GC, p models need to be trained with each variable as a response. cMLP and\ncLSTM leverage the group Lasso penalty and proximal gradient descent (Parikh & Boyd, 2014) to infer GC relationships from trained input layer weights.\nAttention-based Convolutional Neural Networks. Nauta et al. (2019) introduce the temporal causal discovery framework (TCDF) that utilises attention-based convolutional neural networks (CNN). Similarly to cMLP and cLSTM (Tank et al., 2018), the TCDF requires training p neural network models to forecast each variable. 
Key distinctions of the TCDF are (i) the choice of the temporal convolutional network architecture over MLPs or LSTMs for time series forecasting and (ii) the use of the attention mechanism to perform attribution. In addition to the GC inference, the TCDF can detect time delays at which Granger-causal interactions occur. Furthermore, Nauta et al. (2019) provide a permutation-based procedure for evaluating variable importance and identifying significant causal links.\nEconomy Statistical Recurrent Units. Khanna & Tan (2020) propose an approach for inferring nonlinear Granger causality similar to cMLP and cLSTM (Tank et al., 2018). Likewise, they penalise norms of weights in some layers to induce sparsity. The key difference from the work of Tank et al. (2018) is the use of statistical recurrent units (SRUs) as a predictive model. Khanna & Tan (2020) propose a new sample-efficient architecture – economy-SRU (eSRU).\nMinimum Predictive Information Regularisation. Wu et al. (2020) adopt an informationtheoretic approach to Granger-causal discovery. They introduce learnable corruption, e.g. additive Gaussian noise with learnable variances, for predictor variables and minimise a loss function with minimum predictive information regularisation that encourages the corruption of predictor time series. Similarly to the approaches of Tank et al. (2018); Nauta et al. (2019); Khanna & Tan (2020), this framework requires training p models separately.\nAmortised Causal Discovery & Neural Relational Inference. Kipf et al. (2018) introduce the neural relational inference (NRI) model based on graph neural networks and variational autoencoders. The NRI model disentangles the dynamics and the undirected relational structure represented explicitly as a discrete latent graph variable. This allows pooling time series data with shared dynamics, but varying relational structures. Löwe et al. (2020) provide a natural extension of the NRI model to the Granger-causal discovery. They introduce a more general framework of the amortised causal discovery wherein time series replicates have a varying causal structure, but share dynamics. In contrast to the previous methods (Tank et al., 2018; Nauta et al., 2019; Khanna & Tan, 2020; Wu et al., 2020), which in this setting, have to be retrained separately for each replicate, the NRI is trained on the pooled dataset, leveraging shared dynamics." }, { "heading": "C PROPERTIES OF SELF-EXPLAINING NEURAL NETWORKS", "text": "As defined by Alvarez-Melis & Jaakkola (2018), g(·), θ(·), and h(·) in Equation 2 need to satisfy:\n1. g(·) is monotonic and additively separable in its arguments; 2. ∂g∂zi > 0 with zi = θ(x)ih(x)i, for all i; 3. θ(·) is locally difference-bounded by h(·), i.e. for every x0, there exist δ > 0 and L ∈ R s.t. if ‖x− x0‖ < δ, then ‖θ(x)− θ(x0)‖ ≤ L ‖h(x)− h(x0)‖;\n4. {h(x)i}ki=1 are interpretable representations of x; 5. k is small." }, { "heading": "D ABLATION STUDY OF THE LOSS FUNCTION", "text": "We inspect hyperparameter tuning results for the GVAR model on Lorenz 96 (see Section 4.1.1) and synthetic fMRI time series (Smith et al., 2011) (see Section 4.1.2) as an ablation study for the loss function proposed (see Equation 6). Figures 4 and 5 show heat maps of BA scores (left) and AUPRCs (right) for different values of parameters λ and γ for Lorenz 96 and fMRI datasets, respectively. 
For the Lorenz 96 system, sparsity-inducing regularisation appears to be particularly important, nevertheless, there is also an increase in BA and AUPRC from a moderate smoothing penalty. For fMRI, we observe considerable performance gains from introducing both the sparsityinducing and smoothing penalty terms. Given the sparsity of the ground truth GC structure and\nthe scarce number of observations (T = 200), these gains are not unexpected. During preliminary experiments, we ran grid search across wider ranges of λ and γ values, however, did not observe further improvements from stronger regularisation. In summary, these results empirically motivate the need for two different forms of regularisation leveraged by the GVAR loss function: the sparsityinducing and smoothing penalty terms." }, { "heading": "E STABILITY-BASED THRESHOLDING: EXAMPLE", "text": "Figure 6 shows an example of agreement between dependency structures inferred on original and time-reversed synthetic sequences across a range of thresholds (see Algorithm 1). In addition, we plot the BA score for resulting thresholded matrices evaluated against the true adjacency matrix. As can be seen, the peak of stability agrees with the highest BA achieved. In both cases, the procedure described by Algorithm 1 chooses the optimal threshold, which results in the highest agreement with the true dependency structure (unknown at the time of inference)." }, { "heading": "F COMPARISON OF TRAINING & INFERENCE TIME", "text": "To compare the considered methods in terms of their computational complexity, we measure training and inference time across three simulated datasets with p ∈ {4, 15, 20} variables and varying time series lengths. This experiment was performed on an Intel Core i7-7500U CPU (2.70 GHz × 4) with a GeForce GTX 950M GPU. All models were trained for 1000 epochs with a mini-batch size of 64. In each dataset, the same numbers of hidden layers and hidden units were used across all models. When applicable, models were restricted to the same order (K). Table 5 contains average training and inference time in seconds with standard deviations. Observe that for the fMRI and Lorenz 96 datasets, GVAR is substantially faster than cLSTM and eSRU." }, { "heading": "G GC SUMMARY GRAPHS OF SIMULATED TIME SERIES", "text": "" }, { "heading": "H HYPERPARAMETER TUNING", "text": "In our experiments (see Section 4), for all of the inference techniques compared, we searched across a grid of hyperparameters that control the sparsity of inferred GC structures. Other hyperparameters were fine-tuned manually. Final results reported in the paper correspond to the best hyperparameter configurations. With this testing setup, our goal was to fairly compare best achievable inferential performance of the techniques.\nTables 6, 7, and 8 provide ranges for hyperparameter values considered in each experiment. For cMLP and cLSTM (Tank et al., 2018), parameter λ is the weight of the group Lasso penalty; for TCDF (Nauta et al., 2019), significance parameter α is used to decide which potential GC relationships are significant; eSRU (Khanna & Tan, 2020) has three different penalties weighted by λ1:3. For the stability-based thresholding (see Algorithm 1) in GVAR, we used Q = 20 equally spaced values in [0, 1] as sequence ξ5. For Lorenz 96 and fMRI experiments, grid search results are plotted in Figures 4, 8, and 5. Figure 9 contains GVAR grid search results for the Lotka–Volterra experiment.\n5We did not observe high sensitivity of performance w.r.t. 
ξ, as long as sufficiently many evenly spaced sparsity levels are considered.\nH.1 LORENZ 96\nH.2 FMRI" }, { "heading": "I COMPARISON WITH DYNAMIC BAYESIAN NETWORKS", "text": "We provide a comparison between GVAR and linear Gaussian dynamic Bayesian networks (DBN). DBNs are a classical approach to temporal structure learning (Murphy & Russell, 2002). We use R (R Core Team, 2020) package dbnR (Quesada, 2020) to fit DBNs on all datasets considered in Section 4. We use two structure learning algorithms: the max-min hill-climbing (MMHC) (Tsamardinos et al., 2006) and the particle swarm optimisation (Xing-Chen et al., 2007). Table 9 contains average balanced accuracies achieved by DBNs and GVAR for inferring the GC structure. Not surprisingly, DBNs outperform GVAR on the time series with linear dynamics, but fail to infer the true structure on Lorenz 96, fMRI, and Lotka–Volterra datasets." }, { "heading": "J THE LORENZ 96 SYSTEM: FURTHER EXPERIMENTS", "text": "Figure 10: Inferential performance of GVAR across a range of forcing constant values.\nIn addition to the experiments in Section 4.1.1, we examine the performance of VAR and GVAR models across a range of forcing constant values F = 0, 5, 10, 25, 50 for the Lorenz 96 system with p = 20 variables. Figure 10 shows average AUPRCs with bands corresponding to the 95% CI for the mean. It appears that for both models, inference is more challenging for lower (< 10) and higher values of F (> 20). This observation is in agreement with the results in Section 4.1.1, where all inference techniques performed worse under F = 40 than under F = 10. Note that herein same GVAR hyperparameters were used across all values of F . It is possible that better inferential performance could be achieved with GVAR after comprehensive hyperparameter tuning." }, { "heading": "K THE LOTKA–VOLTERRA SYSTEM", "text": "The original Lotka–Volterra system (Bacaër, 2011) includes only one predator and one prey species, population sizes of which are denoted by x and y, respectively. Population dynamics are given by the following coupled differential equations:\ndx dt = αx− βxy, (12) dy dt = δyx− ρy, (13)\nwhere α, β, δ, ρ > 0 are fixed parameters determining strengths of interactions.\nIn this paper, we consider a multiple species version of the system, given by Equations 9 and 10 in Section 4.2. We simulate the system under α = ρ = 1.1, β = δ = 0.2, η = 2.75 × 10−5,∣∣Pa(xi)∣∣ = ∣∣Pa(yj)∣∣ = 2, p = 10, i.e. 2p = 20 variables in total, with T = 2000 observations. Figure 11 depicts signs of GC effects between variables in a multi-species Lotka–Volterra with 2p = 20 species and 2 parents per variable. We simulate this system numerically by using the Runge-Kutta method6. We make a few adjustments to the state transition equations, in particular: we introduce normally-distributed innovation terms to make simulated data noisy; during state transitions, we clip all population sizes below 0. Figure 12 shows traces of generalised coefficients inferred by GVAR: magnitudes and signs of coefficients reflect the true dependency structure.\n6Simulations are based on the implementation available at https://github.com/smkalami/ lotka-volterra-in-python." }, { "heading": "L EFFECT SIGN DETECTION IN A LINEAR VAR", "text": "Herein we provide results for the evaluation of GVAR and our inference framework on a very simple synthetic time series dataset. 
We simulate time series with p = 4 variables and linear interaction dynamics given by the following equations:

$$x_t = a_1 x_{t-1} + \varepsilon^x_t, \quad w_t = a_2 w_{t-1} + a_3 x_{t-1} + \varepsilon^w_t, \quad y_t = a_4 y_{t-1} + a_5 w_{t-1} + \varepsilon^y_t, \quad z_t = a_6 z_{t-1} + a_7 w_{t-1} + a_8 y_{t-1} + \varepsilon^z_t, \qquad (14)$$

where coefficients $a_i \sim U\left([-0.8, -0.2] \cup [0.2, 0.8]\right)$ are sampled independently in each simulation; and $\varepsilon^{\cdot}_t \sim \mathcal{N}(0, 0.16)$ are additive innovation terms. This is an adapted version of one of the artificial datasets described by Peters et al. (2013), but without instantaneous effects.

The GC summary graph of the system is visualised in Figure 13. It is considerably denser than for the Lorenz 96, fMRI, and Lotka–Volterra time series investigated in Section 4.

Similarly to the experiment described in Section 4.2, we infer GC relationships with the proposed framework and evaluate inference results against the true dependency structure and effect signs. Table 10 contains average performance across 10 simulations achieved by GVAR with hyperparameter values K = 1, λ = 0.2, and γ = 0.5. In addition, we provide results for some of the baselines (no systematic hyperparameter tuning was performed for this experiment).

GVAR attains perfect AUROC and AUPRC in all 10 simulations. In some cases, stability-based thresholding fails to recover a completely correct GC structure; nevertheless, average accuracy and balanced accuracy scores are satisfactory. Signs of inferred generalised coefficients mostly agree with the ground truth effect signs, as given by coefficients $a_{1:8}$ in Equation 14.

Not surprisingly, linear VAR performs the best on this dataset w.r.t. all evaluation metrics. Both cMLP and eSRU successfully infer GC relationships, achieving results comparable to GVAR. However, neither infers effect signs as well as GVAR. Thus, similarly to the experiment in Section 4.2, we conclude that generalised coefficients are more interpretable than neural network weights leveraged by cMLP, TCDF, and eSRU.

To summarise, this simple experiment serves as a sanity check and shows that our GC inference framework performs reasonably in low-dimensional time series with linear dynamics and a relatively dense GC summary graph (cf. Figure 7). Generally, the method successfully infers both the dependency structure and interaction signs." }, { "heading": "M PREDICTION ERROR", "text": "Herein we evaluate the prediction error of models on held-out data. The last 20% of time series points were held out to perform prediction on Lorenz 96, fMRI, and Lotka–Volterra datasets. Root-mean-square error (RMSE) was computed for predictions across R = 5 independent replicates:

$$\mathrm{RMSE} = \frac{1}{p}\sum_{j=1}^{p}\sqrt{\frac{\sum_{t=1}^{T}\left(\hat{x}^j_t - x^j_t\right)^2}{T}}, \qquad (15)$$

where $\hat{x}^j_t$ is the one-step forecast made by a model for the $t$-th point of the $j$-th variable, and $T$ is the length of the held-out time series segment. Table 11 contains average RMSEs for all models across the considered datasets. In general, RMSEs are not associated with the inferential performance of the models (cf. Tables 1, 2, and 3). For example, while TCDF achieves the best inferential performance on fMRI (see Table 2), its prediction error is higher than for cMLP. This ‘misalignment’ between the prediction error and the consistency of variable selection is not surprising and has been discussed before, e.g. by Meinshausen & Bühlmann (2010)." } ]
2021
Interpretable Models for Granger Causality Using Self-Explaining Neural Networks
SP:0ae8f7b5bbb7f3cb1f97a95af2d936f44a494a9c
[ "This paper theoretically shows that the gradient variance of the standard MLM (masked language modeling) task in BERT-style training depends on the covariance of the gradient covariance between different masks within the mini-batch. This paper then empirically shows that the covariance can be reduced by making the masks less overlapped. A modified version of MLM is proposed, which has been shown with a smaller gradient variance than the standard MLM. The experimental results show that the new masking strategy does lead to some gains on several benchmarks." ]
The Masked Language Model (MLM) framework has been widely adopted for self-supervised language pre-training. In this paper, we argue that randomly sampled masks in MLM would lead to undesirably large gradient variance. Thus, we theoretically quantify the gradient variance by correlating the gradient covariance with the Hamming distance between two different masks (given a certain text sequence). To reduce the variance due to the sampling of masks, we propose a fully-explored masking strategy, where a text sequence is divided into a certain number of non-overlapping segments. Thereafter, the tokens within one segment are masked for training. We prove, from a theoretical perspective, that the gradients derived from this new masking scheme have a smaller variance and can lead to more efficient self-supervised training. We conduct extensive experiments on both continual pre-training and general pre-training from scratch. Empirical results confirm that this new masking strategy can consistently outperform standard random masking. Detailed efficiency analysis and ablation studies further validate the advantages of our fully-explored masking strategy under the MLM framework.
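To make the described strategy concrete, here is a minimal sketch of how non-overlapping mask segments could be generated. The abstract does not specify the segmentation rule, so the random partition below (and every name in it) is our assumption:

```python
import random

def fully_explored_masks(seq_len, n_segments, seed=0):
    """Partition token positions into non-overlapping segments; masking each
    segment in turn ensures positions are masked without overlap across passes.
    Remainder positions (when seq_len % n_segments != 0) are dropped for simplicity."""
    rng = random.Random(seed)
    positions = list(range(seq_len))
    rng.shuffle(positions)
    size = seq_len // n_segments
    return [sorted(positions[i * size:(i + 1) * size]) for i in range(n_segments)]

# Example: a 12-token sequence split into 3 disjoint mask sets.
print(fully_explored_masks(12, 3))
```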
[]
[ { "authors": [ "Karim Ahmed", "N. Keskar", "R. Socher" ], "title": "Weighted transformer network for machine", "venue": "translation. ArXiv,", "year": 2017 }, { "authors": [ "Guillaume Alain", "Alex Lamb", "Chinnadhurai Sankar", "Aaron Courville", "Yoshua Bengio" ], "title": "Variance reduction in sgd by distributed importance sampling", "venue": "arXiv preprint arXiv:1511.06481,", "year": 2015 }, { "authors": [ "Dogu Araci" ], "title": "Finbert: Financial sentiment analysis with pre-trained language models", "venue": "arXiv preprint arXiv:1908.10063,", "year": 2019 }, { "authors": [ "Hangbo Bao", "Li Dong", "Furu Wei", "Wenhui Wang", "Nan Yang", "Xiaodong Liu", "Yu Wang", "Songhao Piao", "Jianfeng Gao", "Ming Zhou" ], "title": "Unilmv2: Pseudo-masked language models for unified language model pre-training", "venue": "arXiv preprint arXiv:2002.12804,", "year": 2020 }, { "authors": [ "Iz Beltagy", "Kyle Lo", "Arman Cohan" ], "title": "Scibert: A pretrained language model for scientific text", "venue": "arXiv preprint arXiv:1903.10676,", "year": 2019 }, { "authors": [ "Tom B Brown", "Benjamin Mann", "Nick Ryder", "Melanie Subbiah", "Jared Kaplan", "Prafulla Dhariwal", "Arvind Neelakantan", "Pranav Shyam", "Girish Sastry", "Amanda Askell" ], "title": "Language models are few-shot learners", "venue": null, "year": 2005 }, { "authors": [ "Liang Chen", "Tianyuan Zhang", "Di He", "Guolin Ke", "Liwei Wang", "Tie-Yan Liu" ], "title": "Variance-reduced language pretraining via a mask proposal network", "venue": "arXiv preprint arXiv:2008.05333,", "year": 2020 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Quoc V Le", "Christopher D Manning" ], "title": "Electra: Pre-training text encoders as discriminators rather than generators", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Li Dong", "Nan Yang", "Wenhui Wang", "Furu Wei", "Xiaodong Liu", "Yu Wang", "Jianfeng Gao", "Ming Zhou", "Hsiao-Wuen Hon" ], "title": "Unified language model pre-training for natural language understanding and generation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yuxian Gu", "Zhengyan Zhang", "Xiaozhi Wang", "Zhiyuan Liu", "Maosong Sun" ], "title": "Train no evil: Selective masking for task-guided pre-training", "venue": "arXiv preprint arXiv:2004.09733,", "year": 2020 }, { "authors": [ "Suchin Gururangan", "Ana Marasović", "Swabha Swayamdipta", "Kyle Lo", "Iz Beltagy", "Doug Downey", "Noah A Smith" ], "title": "Don’t stop pretraining: Adapt language models to domains and tasks", "venue": null, "year": 2004 }, { "authors": [ "Pengcheng He", "Xiaodong Liu", "Jianfeng Gao", "Weizhu Chen" ], "title": "Deberta: Decoding-enhanced bert with disentangled attention", "venue": "arXiv preprint arXiv:2006.03654,", "year": 2020 }, { "authors": [ "Rie Johnson", "Tong Zhang" ], "title": "Accelerating stochastic gradient descent using predictive variance reduction", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Mandar Joshi", "Danqi Chen", "Yinhan Liu", "Daniel S Weld", "Luke Zettlemoyer", "Omer Levy" ], "title": "Spanbert: Improving pre-training by representing and predicting spans", "venue": null, "year": 1907 }, { "authors": [ "David Jurgens", "Srijan Kumar", "Raine Hoover", "Daniel A. McFarland", "Dan Jurafsky" ], "title": "Measuring the evolution of a scientific field through citation", "venue": "frames. Transactions of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Johannes Kiesel", "Maria Mestre", "Rishabh Shukla", "Emmanuel Vincent", "Payam Adineh", "David Corney", "Benno Stein", "Martin Potthast" ], "title": "Semeval-2019 task 4: Hyperpartisan news detection", "venue": "In Proceedings of the 13th International Workshop on Semantic Evaluation,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "Albert: A lite bert for self-supervised learning of language representations", "venue": null, "year": 1909 }, { "authors": [ "Jinhyuk Lee", "Wonjin Yoon", "Sungdong Kim", "Donghyeon Kim", "Sunkyu Kim", "Chan Ho So", "Jaewoo Kang" ], "title": "Biobert: a pre-trained biomedical language representation model for biomedical text", "venue": "mining. 
Bioinformatics,", "year": 2020 }, { "authors": [ "Mike Lewis", "Yinhan Liu", "Naman Goyal", "Marjan Ghazvininejad", "Abdelrahman Mohamed", "Omer Levy", "Ves Stoyanov", "Luke Zettlemoyer" ], "title": "Bart: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "venue": null, "year": 1910 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Kyle Lo", "Lucy Lu Wang", "Mark Neumann", "Rodney Kinney", "Daniel S Weld" ], "title": "S2orc: The semantic scholar open research corpus", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Yi Luan", "Luheng He", "M. Ostendorf", "Hannaneh Hajishirzi" ], "title": "Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction", "venue": null, "year": 2018 }, { "authors": [ "Matthew E Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "arXiv preprint arXiv:1802.05365,", "year": 2018 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training, 2018", "venue": null, "year": 2018 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "arXiv preprint arXiv:1910.10683,", "year": 2019 }, { "authors": [ "K. Song", "X. Tan", "T. Qin", "Jianfeng Lu", "T. Liu" ], "title": "Mass: Masked sequence to sequence pre-training for language generation", "venue": null, "year": 2019 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. 
The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Yu Sun", "Shuohuan Wang", "Yukun Li", "Shikun Feng", "Xuyi Chen", "Han Zhang", "Xin Tian", "Danxiang Zhu", "Hao Tian", "Hua Wu" ], "title": "Ernie: Enhanced representation through knowledge integration", "venue": null, "year": 1904 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "arXiv preprint arXiv:1804.07461,", "year": 2018 }, { "authors": [ "Chong Wang", "Xi Chen", "Alexander J Smola", "Eric P Xing" ], "title": "Variance reduction for stochastic gradient optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Wei Wang", "Bin Bi", "Ming Yan", "Chen Wu", "Zuyi Bao", "Liwei Peng", "Luo Si" ], "title": "Structbert: Incorporating language structures into pre-training for deep language understanding", "venue": null, "year": 1908 }, { "authors": [ "Lin Xiao", "Tong Zhang" ], "title": "A proximal stochastic gradient method with progressive variance reduction", "venue": null, "year": 2014 }, { "authors": [ "Yi Yang", "Mark Christopher Siy UY", "Allen Huang" ], "title": "Finbert: A pretrained language model for financial communications", "venue": "arXiv preprint arXiv:2006.08097,", "year": 2020 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Russ R Salakhutdinov", "Quoc V Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Rowan Zellers", "Ari Holtzman", "Hannah Rashkin", "Yonatan Bisk", "Ali Farhadi", "Franziska Roesner", "Yejin Choi" ], "title": "Defending against neural fake news", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Junyu Zhang", "Lin Xiao" ], "title": "A stochastic composite gradient method with incremental variance reduction, 2019", "venue": null, "year": 2019 }, { "authors": [ "Xiang Zhang", "Junbo Zhao", "Yann LeCun" ], "title": "Character-level convolutional networks for text classification", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Zhengyan Zhang", "Xu Han", "Zhiyuan Liu", "Xin Jiang", "Maosong Sun", "Qun Liu. Ernie" ], "title": "Enhanced language representation with informative entities", "venue": null, "year": 2019 }, { "authors": [ "Yukun Zhu", "Ryan Kiros", "Rich Zemel", "Ruslan Salakhutdinov", "Raquel Urtasun", "Antonio Torralba", "Sanja Fidler" ], "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Large-scale pre-trained language models have attracted tremendous attention recently due to their impressive empirical performance on a wide variety of NLP tasks. These models typically abstract semantic information from massive unlabeled corpora in a self-supervised manner. Masked language model (MLM) has been widely utilized as the objective for pre-training language models. In the MLM setup, a certain percentage of words within the input sentence are masked out, and the model learns useful semantic information by predicting those missing tokens.\nPrevious work found that the specific masking strategy employed during pre-training plays a vital role in the effectiveness of the MLM framework (Liu et al., 2019; Joshi et al., 2019; Sun et al., 2019). Specifically, Sun et al. (2019) introduce entity-level and phrase-level masking strategies, which incorporate the prior knowledge within a sentence into its masking choice. Moreover, Joshi et al. (2019) propose to mask out random contiguous spans, instead of tokens, since they can serve as more challenging targets for the MLM objective.\nAlthough effective, we identify an issue associated with the random sampling procedure of these masking strategies. Concretely, the difficulty of predicting each masked token varies and is highly dependent on the choice of the masking tokens. For example, predicting stop words such as “the” or “a” tends to be easier relative to nouns or rare words. As a result, with the same input sentence, randomly sampling certain input tokens/spans, as a typical masking recipe, will result in undesirable large variance while estimating the gradients. It has been widely demonstrated that large gradient variance typically hurts the training efficiency with stochastic gradient optimization algorithms (Zhang & Xiao, 2019; Xiao & Zhang, 2014; Johnson & Zhang, 2013). Therefore, we advocate that obtaining gradients with a smaller variance has the potential to enable more sample-efficient learning and thus accelerate the self-supervised learning stage.\nIn this paper, we start by introducing a theoretical framework to quantify the variance while estimating the training gradients. The basic idea is to decompose the total gradient variance into two terms, where the first term is induced by the data sampling process and the second one relates to the sampling procedure of masked tokens. Theoretical analysis on the second variance term demonstrates that it can be minimized by reducing the gradient covariance between two masked sequences.\nFurthermore, we conduct empirical investigation on the correlation between the gradient’s covariance while utilizing two masked sequences for training and the Hamming distance between these sequences. We observed that that the gradients’ covariance tends to decrease monotonically w.r.t the sequences’ Hamming distance.\nInspired by the observations above, we propose a fully-explored masking strategy, which maximizes the Hamming distance between any of two sampled masks on a fixed text sequence. First, a text sequence is randomly divided into multiple non-overlapping segments, where each token (e.g. subword, word or span) belongs to one of them. While the model processes this input, several different training samples are constructed by masking out one of these segments (and leaving the others as the contexts). In this manner, the gradient w.r.t. 
this input sequence can be calculated by averaging the gradients across multiple training samples (produced by the same input sequence). We further verify, under our theoretical framework, that the gradients obtained with such a scheme tend to have smaller variance, and thus can improve the efficiency of the pre-training process.\nWe evaluate the proposed masking strategies on both continual pre-training (Gururangan et al., 2020) and from-scratch pre-training scenarios. Specifically, Computer Science (CS) and News domain corpora (Gururangan et al., 2020) are leveraged to continually pre-train RoBERTa models, which are then evaluated by fine-tuning on downstream tasks of the corresponding domain. It is demonstrated that the proposed fully-explored masking strategies lead to pre-trained models with stronger generalization ability. Even with only a subset of the pre-training corpus utilized in (Gururangan et al., 2020), our model consistently outperforms reported baselines across the four natural language understanding tasks considered. Besides, we also show the effectiveness of our method on the pre-training of language models from scratch. Moreover, the comparison between fully-explored and standard masking strategies in terms of their impacts on the model learning efficiency further validates the advantages of the proposed method. Extensive ablation studies are further conducted to explore the robustness of the proposed masking scheme." }, { "heading": "2 RELATED WORK", "text": "Self-supervised Language Pre-training Self-supervised learning has been demonstrated as a powerful paradigm for natural language pre-training in recent years. Significant research efforts have been devoted to improving different aspects of the pre-training recipe, including the training objective (Lewis et al., 2019; Clark et al., 2019; Bao et al., 2020; Liu et al., 2019), architecture design (Yang et al., 2019; He et al., 2020), the incorporation of external knowledge (Sun et al., 2019; Zhang et al., 2019), etc. The idea of self-supervised learning has also been extended to generation tasks and achieves great results (Song et al., 2019; Dong et al., 2019). Although impressive empirical performance has been shown, relatively little attention has been paid to the efficiency of the pre-training stage. ELECTRA (Clark et al., 2019) introduced a discriminative objective that is defined over all input tokens. Besides, it has been shown that incorporating language structures (Wang et al., 2019) or external knowledge (Sun et al., 2019; Zhang et al., 2019) into pre-training could also help the language models to better abstract useful information from unlabeled samples.\nIn this work, we approach the training efficiency issue from a different perspective, and argue that the masking strategy, as an essential component within the MLM framework, plays a vital role especially in efficient pre-training. Notably, our fully-explored masking strategies can be easily combined with different model architectures for MLM training. Moreover, the proposed approach can be flexibly integrated with various tokenization choices, such as subword, word or span (Joshi et al., 2019). A concurrent work Chen et al.
(2020) also shares a similar motivation to this work, although they adopt a different solution: their method requires additional computation to generate the masks, and yet is outperformed by the proposed fully-explored masking (see Table 2).\nDomain-specific Continual Pre-training The models mentioned above typically abstract semantic information from massive, heterogeneous corpora. Consequently, these models are not tailored to any specific domain, which tends to be suboptimal if there is a domain of interest beforehand. Gururangan et al. (2020) showed that continual pre-training (on top of general-purpose LMs) with in-domain unlabeled data could bring further gains to downstream tasks (of that particular domain). One challenge inherent in continual pre-training is that in-domain data are usually much more limited, compared to domain-invariant corpora. As a result, how to efficiently digest information from an unlabeled corpus is especially critical while adapting large pre-trained language models to specific domains. To this end, we specifically consider the continual pre-training scenario to evaluate the effectiveness of our approach." }, { "heading": "3 PROPOSED APPROACH", "text": "In this section, we first review the MLM framework that is widely employed for natural language pre-training. Motivated by the gradient variance analysis of MLM in Section 3.2, we present the fully-explored masking strategy, which serves as a simple yet effective solution to reduce the gradient variance during training. Connections between our method and variance reduction theory are further drawn, which provides a theoretical foundation for the effectiveness of the proposed strategy. Finally, some specific implementation details are discussed." }, { "heading": "3.1 BACKGROUND: THE MLM FRAMEWORK", "text": "Let $V$ denote the token vocabulary and $x = (x_1, \dots, x_n)$ denote a sentence of $n$ tokens, where $x_i \in V$ for $i = 1, \dots, n$. Let $m = (m_1, \dots, m_n)$ denote a binary vector of length $n$, where $m_i \in \{0, 1\}$, representing the mask over a sentence. Specifically, $m_i = 1$ means the token $x_i$ is masked and $m_i = 0$ means $x_i$ is not masked. We use $m \circ x$ to denote a masked sentence, that is, $(m \circ x)_i = \text{[MASK]}$ if $m_i = 1$, and $(m \circ x)_i = x_i$ if $m_i = 0$.\nIn addition, let $\bar{m}$ be the complement of $m$; in other words, $\bar{m}_i = 0$ if $m_i = 1$ and $\bar{m}_i = 1$ if $m_i = 0$. Naturally, $\bar{m} \circ x$ denotes a sentence with the complement mask $\bar{m}$. For a typical language model with parameters $\theta$, its loss function over a sentence $x \in V^n$ and a mask $m \in \{0, 1\}^n$ is defined as\n$\ell(\theta; x, m) = -\log P(\bar{m} \circ x \mid \theta, m \circ x) = -\sum_{i: m_i = 1} \log P(x_i \mid \theta, m \circ x)$, (1)\nwhere $P(x_i \mid \theta, m \circ x)$ is the probability of the model correctly predicting $x_i$ given the masked sentence $m \circ x$. If $m_i = 0$, it always holds that $P(x_i \mid \theta, m \circ x) = 1$, as the ground-truth $x_i$ is not masked. We will focus on masks of a fixed length. Let $\tau$ be an integer satisfying $0 \leq \tau \leq n$. The set of possible masks of length $\tau$ is defined as\n$\mathcal{M}(\tau) = \{\, m \in \{0, 1\}^n \mid \sum_{i=1}^{n} m_i = \tau \,\}$,\nwhich has cardinality $|\mathcal{M}(\tau)| = \binom{n}{\tau} = \frac{n!}{\tau!(n-\tau)!}$. Therefore, the average loss function over a sentence $x$ with masks of length $\tau$ is\n$L(\theta; x) = \mathbb{E}_{m \sim \mathrm{Unif}(\mathcal{M}(\tau))} \ell(\theta; x, m) = \frac{1}{\binom{n}{\tau}} \sum_{m \in \mathcal{M}(\tau)} \ell(\theta; x, m)$. (2)\nLet $P_D$ be the probability distribution of sentences in a corpus $D \subset V^n$. The overall loss function for training the masked language model over corpus $D$ is\n$L(\theta) \triangleq \mathbb{E}_{x \sim P_D} L(\theta; x) = \mathbb{E}_{x \sim P_D} \mathbb{E}_{m \sim \mathrm{Unif}(\mathcal{M}(\tau))} \ell(\theta; x, m)$. (3)\nDuring each step of the training process, we randomly sample a mini-batch of sentences $S_t \subset D$. 
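To make Equation 1 concrete, here is a minimal PyTorch sketch of the per-sentence masked-LM loss. The `model` callable, its `(batch, length, vocab)` logit shape, and `mask_token_id` are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def mlm_loss(model, token_ids, mask, mask_token_id):
    """Per-sentence MLM loss, Equation 1: -sum_{i: m_i=1} log P(x_i | m∘x).

    token_ids: LongTensor (n,), the original sentence x.
    mask:      BoolTensor (n,), True where a token is masked (m_i = 1).
    model:     assumed to map (1, n) token ids to (1, n, |V|) logits.
    """
    masked = token_ids.clone()
    masked[mask] = mask_token_id                    # build m ∘ x
    logits = model(masked.unsqueeze(0)).squeeze(0)  # (n, |V|)
    log_probs = F.log_softmax(logits, dim=-1)
    # gather log P(x_i | m ∘ x) at the masked positions only
    return -log_probs[mask, token_ids[mask]].sum()
```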
For each $x \in S_t$, we randomly pick a subset of masks $K_t(x) \subset \mathcal{M}(\tau)$, independently across different $x$. Thus, the mini-batch stochastic gradient is\n$g_t(\theta) = \frac{1}{S} \sum_{x \in S_t} \frac{1}{K} \sum_{m \in K_t(x)} \nabla_\theta \ell(\theta; x, m)$, (4)\nwhere $|S_t| = S$ and $|K_t(x)| = K$ for all $t$. Clearly, we have $\mathbb{E}[g_t(\theta)] = \nabla L(\theta)$. The following sections first derive the variance of $g_t(\theta)$, which is an important factor influencing model training efficiency (Xiao & Zhang, 2014; Zhang & Xiao, 2019), and then present the proposed fully-explored masking strategy that helps reduce the gradient variance of the masked language model." }, { "heading": "3.2 ANALYSIS: GRADIENT VARIANCE OF MLM", "text": "According to the law of total variance (Weiss, 2005), the variance of the mini-batch stochastic gradient $\mathrm{Var}_{S_t, K_t}(g_t)$ can be decomposed as follows:\n$\mathrm{Var}_{S_t, K_t}(g_t) = \mathbb{E}_{S_t}[\mathrm{Var}_{K_t}(g_t \mid S_t)] + \mathrm{Var}_{S_t}(\mathbb{E}_{K_t}[g_t \mid S_t])$, (5)\nwhere for simplicity $g_t$ denotes $g_t(\theta)$ as in Eqn. 4; the first term captures the variance due to the sampling of masks, and the second term is the variance due to the sampling of mini-batch sentences.\nIn this work, we focus on the analysis of the first term in Eqn. 5: the variance due to the sampling of masks. Denote $g(m) = \nabla_\theta \ell(\theta; x, m)$ for any fixed sentence $x$. Consider a subset of random masks $(m_1, \dots, m_K)$; the K-masks gradient is defined as their average:\n$g(m_1, \dots, m_K) = \frac{1}{K} \sum_{k=1}^{K} g(m_k)$. (6)\nTheorem 1. The variance of the K-masks gradient, $\mathrm{Var}(g(m_1, \dots, m_K))$, is\n$\frac{1}{K} \mathrm{Var}(g(m_1)) + \left(1 - \frac{1}{K}\right) \mathrm{Cov}(g(m_1), g(m_2))$, (7)\nwhere\n$\mathrm{Cov}(g(m_1), g(m_2)) = \mathbb{E}[(g(m_1) - \bar{g})^\top (g(m_2) - \bar{g})]$, (8)\nand\n$\bar{g} = \mathbb{E}_{m \sim \mathrm{Unif}(\mathcal{M}(\tau))} g(m) = \frac{1}{\binom{n}{\tau}} \sum_{m \in \mathcal{M}(\tau)} g(m)$. (9)\nThe detailed proof of Theorem 1 is given in Appendix A.1. Theorem 1 indicates that the variance of the K-masks gradient can be reduced by decreasing the gradient covariance between different masks." }, { "heading": "3.3 VARIANCE REDUCTION: FULLY-EXPLORED MASKING", "text": "Intuitively, if the two random masks $m_1$ and $m_2$ overlap completely, the gradient covariance between them should be maximal. This motivates us to consider the correlation between the gradient covariance and the Hamming distance between the two masks. Thus, we make the following assumption: Assumption 1. The covariance $\mathrm{Cov}(g(m_1), g(m_2))$ is monotone decreasing in the Hamming distance between $m_1$ and $m_2$.\nTo verify Assumption 1, we sample a small set of CS domain sentences from the S2ORC dataset (Gururangan et al., 2020) as a fixed mini-batch for our analysis, and then calculate the gradient covariance $\mathrm{Cov}(g(m_1), g(m_2))$ of mask pairs $(m_1, m_2)$ with different Hamming distances $H(m_1, m_2)$ using this mini-batch. In Figure 2, the center of the gradient covariance distribution shifts to the left (lower values) as the Hamming distance increases. In Figure 3, we also observe that the average gradient covariance decreases with the Hamming distance. As shown in Figures 2 and 3, Assumption 1 holds both for the RoBERTa-base model (Liu et al., 2019) and for the RoBERTa-base model after continual pre-training on the CS domain corpus.\nFigure 2: The distributions of gradient covariance $\mathrm{Cov}(g(m_1), g(m_2))$ for different Hamming distances $H(m_1, m_2)$ based on a small CS domain corpus. 
Left: gradient covariance distribution of selected parameters in the RoBERTa-base model; Right: gradient covariance distribution of selected parameters in the RoBERTa-base model after continual pre-training on the CS domain corpus.\nWe propose the fully-explored masking strategy, which restricts the masks sampled from $\mathcal{M}(\tau)$ to be non-overlapping; we denote this set $\mathcal{M}_{FE}(\tau)$ for simplicity:\n$m_1, \dots, m_K \sim \mathcal{M}_{FE}(\tau), \quad \forall i \neq j: H(m_i, m_j) = 2\tau$. (10)\nWith the fully-explored masking strategy, it can easily be proved that the expectation of the gradient over $\mathcal{M}_{FE}(\tau)$ is an unbiased estimate of the expectation of the gradient over $\mathcal{M}(\tau)$, as stated in Lemma 2. Lemma 3 states that Theorem 1 still holds for the fully-explored masking strategy, which indicates that the variance of the K-masks gradient can be reduced by restricting the masks to be sampled from $\mathcal{M}_{FE}(\tau)$. Lemma 2. The expectation of the gradient over $\mathcal{M}_{FE}(\tau)$ equals the expectation of the gradient over $\mathcal{M}(\tau)$.\nProof. The joint distribution of $(m_1, \dots, m_K)$ sampled from $\mathcal{M}_{FE}(\tau)$ differs from the i.i.d. case due to the non-overlapping restriction. However, the marginal distribution of each $m_k$ is still the same uniform distribution over $\mathcal{M}(\tau)$. Therefore, we still have $\mathbb{E}[g(m_k)] = \bar{g}$ for all $k = 1, \dots, K$, and as a consequence $\mathbb{E}[g(m_1, \dots, m_K)] = \bar{g}$.\nLemma 3. The derivation of the K-masks gradient variance in Eqn. 7 holds for both $\mathcal{M}_{FE}(\tau)$ and $\mathcal{M}(\tau)$.\nThe detailed proof of Lemma 3 is given in Appendix A.2." }, { "heading": "3.4 IMPLEMENTATION DETAILS", "text": "The details of the fully-explored masking algorithm are illustrated in Algorithm 1. In practice, a text sequence $S_i$ is tokenized into subword pieces (Devlin et al., 2018), with the maximum sequence length $n$ set to 512 in the experiments. To understand the performance of the fully-explored masking strategy at different granularities, the text sequence $S_i$ is masked at both the subword level (Devlin et al., 2018; Liu et al., 2019) and the span level (Joshi et al., 2019; Wang et al., 2019). The other hyperparameters, i.e., the masking ratio and the number of splits $K$, are discussed in the experiment section.\nAlgorithm 1: Fully-explored Masked Language Model\nInput: language corpus $D = \{S_1, \dots, S_T\}$ with $|S_i| = n$; masking ratio $\tau/n$; number of sampled masks $K$, where $K \cdot \tau/n \leq 1$; initial model parameters $\theta_0$.\nOutput: model parameters $\theta^*$.\nforeach $S_i \in D$ do\n  Sample $K$ non-overlapping masking vectors $(m_1, \dots, m_K)$ from $\mathcal{M}_{FE}(\tau)$ as in Eqn. 10.\n  Calculate the gradient $g(m_1, \dots, m_K)$ as in Eqn. 6.\n  Update the model parameters: $\theta_{i+1} = \mathrm{Optimizer}(\theta_i, g(m_1, \dots, m_K))$.\nend\nreturn $\theta^* = \theta_T$" }, { "heading": "4 EXPERIMENTS", "text": "In this section, we evaluate the proposed fully-explored masking strategy for natural language pre-training in two distinct settings: i) continual pre-training, where a given pre-trained model is further adapted leveraging a domain-specific unlabeled corpus; ii) pre-training from scratch, where large-scale corpora such as Wikipedia and BookCorpus are employed to pre-train a model from the beginning. We also compare the training efficiency of the FE-MLM and MLM frameworks to validate our theoretical findings. Ablation studies and analysis are further conducted regarding the proposed approach." }, { "heading": "4.1 EXPERIMENTAL SETTINGS", "text": "For the continual pre-training scenario, we consider unlabeled corpora from two different domains, i.e., computer science (CS) papers and news text from RealNews, introduced by Gururangan et al. (2020). As to the downstream tasks, ACL-ARC citation intent Jurgens et al. (2018) and SciERC relation classification Luan et al.
(2018) are utilized for the CS domain. For the News domain, HyperPartisan news detection Kiesel et al. (2019) and AGNews Zhang et al. (2015) are employed to facilitate the comparison with Gururangan et al. (2020).\nFollowing (Gururangan et al., 2020) for a fair comparison, RoBERTa Liu et al. (2019) is leveraged as the initial model for continual pre-training, where the same training objective is optimized on the domain-specific corpus. We choose a batch size of 48, and the model is trained using Adam Kingma & Ba (2014), with a learning rate of 1 × 10−4. It is worth noting that we observe, in our initial experiments, that downsampling only 72k documents from the total of 2.22M used by Gururangan et al. (2020) can result in similar performance on downstream tasks. This happens in the News domain as well, where we randomly sample 623k documents out of 11.90M. The model is continually pre-trained for around 40k and 20k steps on the CS and News domains, respectively. One important hyperparameter under the FE-MLM framework is the number of splits the input sequence is divided into, where we use 4 as the default setting. The sensitivity of the proposed algorithm w.r.t. this hyperparameter is further investigated (see Figure 4).\nFor the general pre-training experiments, we employ BERT as the baseline model. Wikipedia and BookCorpus (Zhu et al., 2015) are used as the pre-training corpus, with a total size of 16G. We adopt the same tokenization (i.e., WordPiece embeddings) as BERT, which consists of 30,522 tokens in the vocabulary. The model is optimized using Adam with the learning rate set as 1 × 10−4. A batch size of 256 is employed, and we train the model for 1M steps. The resulting model is evaluated on the GLUE benchmark (Wang et al., 2018), which comprises 9 natural language understanding (NLU) tasks such as textual entailment (MNLI, RTE), question-answer entailment (QNLI), question paraphrase (QQP), paraphrase (MRPC), sentiment analysis (SST-2), linguistic acceptability (CoLA) and textual similarity (STS-B). The HuggingFace codebase1 is used in our implementation for both settings.\n1https://github.com/huggingface/transformers" }, { "heading": "4.2 EXPERIMENTAL RESULTS", "text": "Continual Pre-training Evaluation We applied our fully-explored MLM framework to both subword and span masking scenarios. The results for the RoBERTa model continually pre-trained on the CS and News domains are presented in Table 1. It can be observed that the continual pre-training stage can benefit the downstream tasks on both domains (compared with fine-tuning the RoBERTa model directly). Besides, the baseline numbers based on our implementation are on par with or even better than those reported in Gururangan et al. (2020), even though we downsample the original unlabeled corpus (as described in the previous section).\nMore importantly, in the subword masking case, our FE-MLM framework consistently exhibits better empirical results on the downstream tasks. Note that to ensure fair comparison, the same computation is taken for both MLM and FE-MLM training. This indicates that the models pre-trained using the FE-MLM approach have been endowed with stronger generalization ability, relative to standard MLM training. A similar trend is also observed in the span masking experiments, demonstrating that the proposed method can be naturally and flexibly integrated with different masking schemes. 
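Referring back to Algorithm 1 and Eqn. 10 from Section 3.4: the paper leaves the construction of the non-overlapping masks abstract, so below is a minimal Python sketch of one way to sample K pairwise-disjoint subword-level masks. This is a hypothetical helper for illustration, not the authors' released code; span-level masking would draw contiguous segments instead.

```python
import random

def fully_explored_masks(n, K, ratio=0.125):
    """Sample K pairwise-disjoint boolean masks over a length-n sequence.

    tau = int(ratio * n) positions per mask; K * tau <= n guarantees the
    masks are non-overlapping, so H(m_i, m_j) = 2 * tau (Eqn. 10).
    """
    tau = int(ratio * n)
    assert K * tau <= n, "disjoint masks require K * tau <= n"
    positions = list(range(n))
    random.shuffle(positions)
    masks = []
    for k in range(K):
        chosen = set(positions[k * tau:(k + 1) * tau])
        masks.append([i in chosen for i in range(n)])
    return masks

masks = fully_explored_masks(n=512, K=4)
# sanity check: no position is masked by more than one of the K masks
assert all(sum(per_pos) < 2 for per_pos in zip(*masks))
```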
Besides, we found that subword masking tends to work better than span masking in the CS domain, whereas the opposite is true for the News domain. This may be attributed to the different nature of the unlabeled corpora of the two domains.\nGeneral Pre-training Evaluation We also evaluate the FE-MLM framework on the pre-training experiments with a general-purpose unlabeled corpus. Specifically, we follow the same setting as BERT, except that the proposed fully-explored masking strategy is applied (the same amount of computation is used for the baseline and our method). The corresponding results are shown in Table 2. It can be found that the FE-MLM approach, when fine-tuned on the GLUE benchmark, exhibits better results on 7 out of 9 NLU datasets than the MLM baseline. This demonstrates the wide applicability of the proposed FE-MLM framework across different pre-training settings.\nWe further compare the averaged score over the 9 GLUE datasets with other methods, and the numbers are summarized in Table 3. It is worth noting that the BERT-base (ReEval) baseline is obtained by fine-tuning the BERT model released by Devlin et al. (2018) on each GLUE dataset, with the results on the dev sets averaged. Another BERT-base number is reported by Clark et al. (2019), which is quite similar to our re-evaluated one. The Mask Proposal Network (MAP-Net) is proposed by Chen et al.
This may relate to our non-overlapping sampling strategy, which helps the model to explore various position in the sentence as efficiently as possible, so that the model exhibits strong performance even with only two splits." }, { "heading": "5 CONCLUSION", "text": "In this paper, we identified that under the MLM framework, the procedure of randomly sampling masked tokens will give rise to undesirably large variance while estimating the training gradients. Therefore, we introduced a theoretical framework to quantify the gradient variance, where the connection between gradient covariance and the Hamming distance between two different masked sequences are drawn. Motivated by these observations, we proposed a fully-explored masking strategy, where a text sequence is divided into multiple non-overlapping segments. During training, all tokens in one segment are masked out, and the model is asked to predict them with the other segments as the context. It was demonstrated theoretically that the gradients obtained with such a novel masking strategy have a smaller variance, thus enabling more efficient pre-training. Extensive experiments on both continual pre-training and general pre-training from scratch showed that the proposed masking strategy consistently outperforms standard random masking." }, { "heading": "A APPENDIX", "text": "A.1 PROOF OF THEOREM 1\nProof. Var ( g(m1, . . . ,mK) ) = E [ ‖g(m1, . . . ,mK)− ḡ‖2 ] = E\n[∥∥∥ 1K ∑Kk=1 g(mk)− ḡ∥∥∥2] = 1 K2 E [∥∥∥∑Kk=1(g(mk)− ḡ)∥∥∥2]\n= 1\nK2 E K∑ k=1 ‖g(mk)− ḡ‖2 + ∑ k 6=l ( g(mk)− ḡ )T ( g(ml)− ḡ ) = 1\nK2 K∑ k=1 Var ( g(mk) ) + K∑ k 6=l Cov ( g(mk), g(ml) ) (11) where for each pair k 6= l,\nCov ( g(mk), g(ml) ) = E [( g(mk)− ḡ )T ( g(ml)− ḡ )] ,\nSince m1, . . . ,mK are i.i.d. samples from the uniform distribution overM(τ), we have Var ( g(m1) ) = · · · = Var ( g(mK) ) (12)\nCov ( g(mk), g(ml) ) = Cov ( g(m1), g(m2) ) ,∀ k 6= l. (13)\nTherefore we have the following variance decomposition: Var ( g(m1, . . . ,mK) ) = 1 K Var ( g(m1) ) + ( 1− 1 K ) Cov ( g(m1), g(m2) ) . (14)\nA.2 PROOF OF LEMMA 3\nProof. The joint distribution of the pairs (mk,ml) sampling fromMFE(τ) are different from the i.i.d. case, it can be shown (by symmetry) that the identity equation 13 also holds. Considering the fact that the derivation in equation 11 holds for any sampling strategy, we conclude that the variance decomposition in equation 14 still holds." } ]
2020
FULLY-EXPLORED MASKED LANGUAGE MODEL
SP:bafc54f2425a7c809ceb795b0c972efba778d06d
[ "The authors introduce a DSL, the Restricted Access Sequence Processing (RASP) language, that they claim can serve as a computational model for the transformer-encoder. They develop the reader's intuition for RASP by providing RASP implementations of many basic operations such as computing histograms, sorting, and reversing. They also show how, for a given RASP program, to determine the minimum number of layers required and to upper-bound the number of heads required to implement it as a transformer. Lastly, they analyze two transformer variants, restricted-attention transformers and sandwich transformers. For the former, they use the RASP perspective to claim a theoretical limitation, and for the latter, they comment that a known empirical finding is intuitive in light of the RASP perspective." ]
What is the computational model behind a transformer? Where recurrent neural networks have direct parallels in finite state machines, allowing clear discussion and thought around architecture variants or trained models, transformers have no such familiar parallel. In this paper we aim to change that, proposing a computational model for the transformer-encoder in the form of a programming language. We map the basic components of a transformer-encoder – attention and feed-forward computation – into the simple primitives of select, aggregate and zipmap, around which we form a programming language: the Restricted Access Sequence Processing Language (RASP). We show how RASP can be used to program solutions to tasks that could conceivably be learned by a transformer, augmenting it with tools we discover in our work. In particular, we provide RASP programs for histograms, sorting, and even logical inference similar to that of Clark et al. (2020). We further use our model to relate their difficulty in terms of the number of required layers and attention heads. Finally, we see how insights gained from our abstraction might be used to explain phenomena seen in recent works.
[]
[ { "authors": [ "Joshua Ainslie", "Santiago Ontañón", "Chris Alberti", "Philip Pham", "Anirudh Ravula", "Sumit Sanghai" ], "title": "ETC: encoding long and structured data in transformers", "venue": "CoRR, abs/2004.08483,", "year": 2020 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Iz Beltagy", "Matthew E. Peters", "Arman Cohan" ], "title": "Longformer: The long-document transformer", "venue": "CoRR, abs/2004.05150,", "year": 2020 }, { "authors": [ "Satwik Bhattamishra", "Kabir Ahuja", "Navin Goyal" ], "title": "On the ability and limitations of transformers to recognize formal languages, 2020", "venue": null, "year": 2020 }, { "authors": [ "Rewon Child", "Scott Gray", "Alec Radford", "Ilya Sutskever" ], "title": "Generating long sequences with sparse transformers", "venue": "CoRR, abs/1904.10509,", "year": 2019 }, { "authors": [ "Peter Clark", "Oyvind Tafjord", "Kyle Richardson" ], "title": "Transformers as soft reasoners over language, 2020", "venue": null, "year": 2020 }, { "authors": [ "Michael Hahn" ], "title": "Theoretical limitations of self-attention in neural sequence models", "venue": "CoRR, abs/1906.06755,", "year": 2019 }, { "authors": [ "Kurt Hornik", "Maxwell B. Stinchcombe", "Halbert White" ], "title": "Multilayer feedforward networks are universal approximators", "venue": "Neural Networks,", "year": 1989 }, { "authors": [ "Armand Joulin", "Tomas Mikolov" ], "title": "Inferring algorithmic patterns with stack-augmented recurrent nets", "venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Minh-Thang Luong", "Hieu Pham", "Christopher D. Manning" ], "title": "Effective approaches to attentionbased neural machine translation", "venue": "CoRR, abs/1508.04025,", "year": 2015 }, { "authors": [ "William Merrill", "Gail Weiss", "Yoav Goldberg", "Roy Schwartz", "Noah A. Smith", "Eran Yahav" ], "title": "A formal hierarchy of RNN architectures", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 443–459,", "year": 2020 }, { "authors": [ "Christian W. Omlin", "C. Lee Giles" ], "title": "Extraction of rules from discrete-time recurrent neural networks", "venue": "Neural Networks,", "year": 1996 }, { "authors": [ "Ofir Press", "Noah A. Smith", "Omer Levy" ], "title": "Improving transformer models by reordering their sublayers", "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Guillaume Rabusseau", "Tianyu Li", "Doina Precup" ], "title": "Connecting weighted automata and recurrent neural networks through spectral learning", "venue": "CoRR, abs/1807.01406,", "year": 2018 }, { "authors": [ "Aurko Roy", "Mohammad Saffar", "Ashish Vaswani", "David Grangier" ], "title": "Efficient content-based sparse attention with routing transformers, 2020", "venue": null, "year": 2020 }, { "authors": [ "Yi Tay", "Mostafa Dehghani", "Dara Bahri", "Donald Metzler" ], "title": "Efficient transformers: A survey, 2020", "venue": null, "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. 
Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "CoRR, abs/1706.03762,", "year": 2017 }, { "authors": [ "Jesse Vig", "Yonatan Belinkov" ], "title": "Analyzing the structure of attention in a transformer language model", "venue": "In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP,", "year": 2019 }, { "authors": [ "Gail Weiss", "Yoav Goldberg", "Eran Yahav" ], "title": "Extracting automata from recurrent neural networks using queries and counterexamples", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Chulhee Yun", "Srinadh Bhojanapalli", "Ankit Singh Rawat", "Sashank J. Reddi", "Sanjiv Kumar" ], "title": "Are transformers universal approximators of sequence-to-sequence", "venue": null, "year": 2019 }, { "authors": [ "Manzil Zaheer", "Guru Guruganesh", "Avinava Dubey", "Joshua Ainslie", "Chris Alberti", "Santiago Ontañón", "Philip Pham", "Anirudh Ravula", "Qifan Wang", "Li Yang", "Amr Ahmed" ], "title": "Big bird: Transformers for longer sequences", "venue": "URL https://arxiv.org/abs/2007.14062", "year": 2007 }, { "authors": [ "Quanshi Zhang", "Ying Nian Wu", "Song-Chun Zhu" ], "title": "Interpretable convolutional neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Transformer-Encoder Vaswani" ], "title": "A transformer-encoder with L layers, H heads, and input and internal dimensions d,m is a length-preserving function T : (Rd)∗ → (Rd)∗ parameterised by the weights of L transformer-encoder layers", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "While Yun et al. (2019) show that sufficiently large transformers can approximate any constantlength sequence-to-sequence function, and Hahn (2019) provides theoretical limitations on their ability to compute functions on unbounded input length, neither of these provide insight on how a transformer may achieve a specific task. Orthogonally, Bhattamishra et al. (2020) provide transformer constructions for several counting languages, but this also does not direct us towards a general model.\nThis is in stark contrast to other neural network architectures, which do have clear computational models. For example, convolution networks are seen as as a sequence of filters (Zhang et al., 2018), and finite-state automata and their variants have been extensively used both for extraction from and theoretical analysis of recurrent neural networks (RNNs) (Omlin & Giles, 1996; Weiss et al., 2018; Rabusseau et al., 2018; Merrill et al., 2020), even inspiring new RNN variants (Joulin & Mikolov, 2015).\nIn this work we propose a computational model for the transformer-encoder, in the form of a simple sequence-processing language which we dub RASP(Restricted Access Sequence Processing Language). Much like how automata describe the token-by-token processing behavior of an RNN, our language captures the unique information flow constraints under which a transformer (Vaswani et al., 2017) operates as it processes input sequences.\nConsidering computation problems and their implementation in the RASP language allows us to “think like a transformer” while abstracting away the technical details of a neural network in favor of symbolic programs. A RASP program operates on sequences of values from uniform atomic types, and transforms them by composing a restricted set of sequence processors. One pair of processors is used to select inputs for aggregation, and then aggregate the selected items. Another processor performs arbitrary but local computation over its (localized) input. However, access to the complete sequence is available only through aggregate operations that reduce a stream of numbers to a scalar. The key to performing complex global computations under this model is to compose the aggregations such that they gather the correct information, that can then be locally processed for a final output.\nGiven a RASP program, we can analyze it to infer the minimal number of layers and maximum number of heads that is required to implement it as a transformer. We show several examples of expressive programs written in the RASP language, showing how complex operations can be\nimplemented by a transformer. Thinking in terms of the RASP model also allows us to shed light on recent empirical observation of transformer variants (Press et al., 2020) and find concrete limitations of “efficient transformers” with restricted-attention (Tay et al., 2020)." }, { "heading": "2 THE RESTRICTED ACCESS SEQUENCE PROCESSING LANGUAGE", "text": "In this section, we present the the Restricted Access Sequence Processing Language (RASP). RASP assumes a machine composed of several Turing-complete processors, each of which can only run functions taking and returning a fixed number of primitive arguments, and a simple memory accessor that is controlled by these processors. 
The select, aggregate, and zipmap operations which we present will define and constrain how the processors work together to process an input sequence.\nWe will focus here only on the language itself, leaving the discussion of its exact relation to transformers to Section 3.\nOverview A RASP program works by manipulating sequences, occasionally with the help of selectors. Sequences contain values of uniform atomic type, such as booleans, integers, floats, or strings. Selectors are functions used for selecting elements from sequences, and are used (together with the appropriate operations) only in the process of creating new sequences. All sequences in RASP are lazily evaluated, meaning that their length and contents are not populated until passed an input.\nThe Base Sequences Every program in RASP begins from the same set of base sequences, and then creates new ones using a small number of core operations. These base sequences are indices, length, and tokens, evaluated on input x1, x2, ..., xn as their names suggest: (0, 1, ..., n − 1), (n, n, ..., n) (of length n), and (x1, x2, ..., xn), respectively.\nCombining Sequences Sequences can be combined in an ‘elementwise’ manner, such that the value of the resulting sequence at each position i is a function of the values in the combined sequences at position i (similar to a map operation), or have positions ‘mixed’ in more complicated ways using selectors, which are functions f : N × N → {True, False} whose sole purpose is to guide the combination of existing sequences into new ones.\nWe present the basic ingredients of RASP using an example. Figure 1 shows a simple RASP function for sorting a sequence of values according to a sequence of keys. It accepts an input sequence vals and uses the base sequence indices, which is available to any RASP program, to compute its output in three operations as follows:\n1. count_conditioned of line 2 creates a new sequence that counts for each element of keys the number of “previous items” it has in keys, where the “previous items” are defined to be all items that have a lesser value, or equal value and lower index. Thus, num_prevs is a sequence of numbers representing the target sorted position of each item.\n2. select of line 7 creates a new selector which will focus each position i on the corresponding position j for which indices[i] is equal to num_prevs[j]. Effectively, it will direct the elements in each position j towards their target location i.\n3. Finally, aggregate of line 8 applies select_sorted_val to vals, moving each i-th element of vals to its calculated sorted position num_prevs[i].\nWe now describe the base operations of RASP in depth, occasionally presenting an example on the hypothetical input sequence x of length n.\n• zipmap The zipmap operation takes a tuple of sequences and an element-processing function f, and applies f per-index to the values in those sequences to create a new sequence. For a simple example, y1=zipmap((indices,indices), lambda i,j:i+j) creates a sequence that always evaluates to (0, 2, ..., 2n − 2).\n• aggregate The aggregate operation takes a selector s, a sequence x, and an optional parameter default, and averages subsets of the values of x into a new sequence y as follows: for every index i, y[i] is the average of x[j1], x[j2], ..., x[jk], where j1, j2, ..., jk are the indices j ∈ [n] for which s(i,j) is True. We say k is the focus width of s at i. If k = 0, then y[i] is assigned the value in default. 
For example: if s(i, j) returns True iff i is odd and j=0, and the value in default is d, then y will evaluate to (d,x[0],d,x[0],...,y[n− 1]) where y[n−1] is either d or x[0] depending on the parity of n.\n• select The select operation takes two sequences-tuples of lengths k and l, me=(m1,m2,...,mk) and other=(ot1,ot2,...,otl), and a function f expecting k + 1 atomic values and giving boolean output. It composes these to create a selector s as follows: for every two indices i, j, s(i, j) is the output of f on the i-th and j-th slice of me and other respectively, i.e., s(i, j)=f(m1[i],...,mk[i],ot1[j]...otl[j]). For a simple example, in s=select((indices,),(indices,),lambda mi,oti:mi%2==1 and oti==0), then m1=indices, ot1=indices, and s is the same selector we used for our example in aggregate above.\n• count_conditioned This operation takes the same parameters me,other and f as select, but this time returns a sequence y describing the number of selected influencing positions j for each output position i that s=select(me,other,f) would have created. In other words, for each i, y[i]= k where j1, ..., jk is the set of positions j for which s(i, j)=True. For example, h=count_conditioned((tokens,),(tokens,),lambda a,b:a==b) returns an in-place histogram for the tokens in the input sequence: h(“abaa”)=(3, 1, 3, 3).\nThis concludes the base operations of RASP – all other operations are shortcuts for combinations of the above 4, occasionally with the base sequences.\nSugar We implement RASP with a variety of syntactic sugar, presented fully in appendix E. Briefly:\n1. When applying zipmap to a single sequence, it may be passed directly without using a tuple, e.g.: zipmap(indices,f) is equivalent to zipmap((indices,),f).\n2. zipmap has sugar for most of the binary operators, e.g.: for two sequences x,y, then x+y is sugar for zipmap((x,y),lambda a,b:a+b).\n3. Whenever the focus width of s at some index is ≤ 1 (“up-to-one selection”), aggregate(s,x,default=d) does not explicitly compute the division. In this case the values of x do not have to be numbers.\n4. aggregate accepts one additional optional parameter elementwise_function. The full order of parameters is s,x,elementwise_function,default, and the use of elementwise_function is as follows: aggregate(s,x,f,d) is equivalent to aggregate(s,zipmap(x,f),default=d)." }, { "heading": "2.1 EXAMPLES", "text": "We now present some more example RASP programs, by increasing order of complexity.\nSimple Examples The first and simplest example is to compute an in-place histogram for some sequence vals. This is achieved with a single application of count_conditioned: histogram=count_conditioned(vals,vals,lambda a,b:a==b).\nFrom Length to Parity While length is provided as a primitive in the language, it can actually be achieved as a composition of the other base operations and sequences. This is done by computing full_s=select((),(),lambda :True) followed by 1/aggregate(full_s,indices,lambda i:int(i==0)) (the fraction of elements equal to 0 in indices, inverted). From length and that same full_s we can then define count(vals,v), a function taking any sequence vals and value v and returning a new sequence counting the number of appearances of v in vals. The implementation of count is simply length*aggregate(full_s,vals,lambda e:e==v). 
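To ground these operations, here is a toy Python interpreter for select, aggregate, and zipmap as just defined, together with the in-place histogram example. This is an illustrative sketch of the semantics described above, not the authors' implementation.

```python
def select(me, other, f):
    """Build a selector s(i, j) from the i-th and j-th slices (RASP select)."""
    def s(i, j):
        return f(*(m[i] for m in me), *(o[j] for o in other))
    return s

def aggregate(s, x, default=None):
    """Average, per output position i, the values x[j] with s(i, j) True."""
    n = len(x)
    out = []
    for i in range(n):
        chosen = [x[j] for j in range(n) if s(i, j)]
        if not chosen:
            out.append(default)
        elif len(chosen) == 1:
            out.append(chosen[0])   # up-to-one selection: no division needed
        else:
            out.append(sum(chosen) / len(chosen))
    return out

def zipmap(seqs, f):
    """Apply f elementwise across a tuple of equal-length sequences."""
    return [f(*vals) for vals in zip(*seqs)]

def count_conditioned(me, other, f):
    """Width of select(me, other, f) at each output position i."""
    n = len(me[0])
    s = select(me, other, f)
    return [sum(bool(s(i, j)) for j in range(n)) for i in range(n)]

tokens = list("abaa")
print(count_conditioned((tokens,), (tokens,), lambda a, b: a == b))
# -> [3, 1, 3, 3], the in-place histogram from the text
```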
count in turn enables us to write programs like parity1 simply as count(tokens,1)%2==0.\nReverse We can reverse a sequence seq with the help of an up-to-one selector mapping each position to its opposite: flip_s = select(indices, length-1-indices, lambda m,oth:m==oth). We use flip_s to re-order the tokens of seq: reverse=aggregate(flip_s,seq).\nBalanced Parentheses For balanced parentheses we use count_conditioned twice, storing in prev_opens and prev_closes the number of previous “(” or “)” (respectively) tokens each position has, including itself. The sequence is balanced if prev_opens-prev_closes has no negative values, and is 0 at position length-1. These two qualities can be easily computed using two final select-aggregate pairs, and then combined with a zipmap.\nMost Frequent Tokens In fig. 2 we show how RASP can be used to arrange, for any input sequence s, the most frequent tokens in s, without repetition. The solution has two parts: first, we compute the histogram for all the tokens, and mask it such that all but the first of each token is given a negative value. Then, we sort the tokens according to the masked histogram. The solution uses the sort function from Figure 1.\n1This does not contradict the findings of Hahn (2019), who showed that parity is not computable in transformers when each selector is restricted to width 1 (“hard attention”).\nCount Conditioned The operation count_conditioned is a powerful part of RASP, appearing in many other programs. Surprisingly, it is realisable as a composition of the other operations (and the base sequence indices). Understanding its implementation is interesting for learning how to “truly” think like a transformer, and we present the code in Figure 3. The intuition is as follows: we compute the select whose width we want to calculate twice, once such that it also selects the position 0, and once such that it only selects this position. We then aggregate both these values, broadcasting 1 from position 0 and 0 from everywhere else, and using default value 0. The first aggregate computes for each position the inverse of the number of selected positions (excluding 0) plus one, and the second computes whether that position would also focus on 0. A straightforward zipmap then gives us the result. To further help the intuition, we present in fig. 4 the computation flow for a histogram calculation, when count_conditioned is implemented as described here.\nNote Many of the functions provided with RASP can be expressed in terms of count_conditioned – such as count (which counts how many elements in a sequence are equal to a given value) and contains (which checks if count is greater than 0) – but this is not necessarily an optimal implementation with respect to the number of heads it uses. The RASP library provides optimal implementations." }, { "heading": "2.2 IMPLEMENTING SYMBOLIC REASONING A LA ROVER", "text": "How might a transformer implement reasoning, as in Clark et al. (2020)? RASP empowers us to clearly think about such problems, and we sketch a solution direction here. We begin by reformulating the task of Clark et al. to a form that focuses on the core problem, moving away from natural language into a more ‘concrete’ domain, and by limiting the type of relations we will consider.\nIn the work of Clark et al., a transformer is presented a sequence of statements and a query Q, and must recognise whether Q is implied by the previous relations or not. For example: a1∈A1, b∈A1 =⇒ b∈A2, a1∈A2? evaluates to True, whereas a1∉A1, a1∉A2?
evaluates to False. Different inputs for this task can have different depth: the number of ‘logical hops’ needed to correctly identify whether the query statement is true.\nNote. The original work accepts this input in natural language, e.g., “Alan is young. [...] If someone is young then...”, and allows statements with more complicated logical form, such as a1∈A1∧a1∈A2 =⇒ a1∈B. In this section we consider only a simplified and symbolic version, in which the statements are limited to the form of the previous paragraph. We assume the statements are separated by a special token |.\nWe sketch this task in RASP as follows: first, mark each statement ‘block’ in the sequence with its index as a block, by counting for each token the number of separators before it in the input. Set aside the final block (the query); it is the set of tokens with no separators after them. For each remaining block, mark whether it is a relation (∈ or ∉) or inference (↦) statement. Then, for each relation block, note at the position of the set token the element and whether it is inside or outside, and similarly over the element token note the set. Next, for as many repetitions as the logical depth that the program should cover: share set information between all elements and element information between all sets (including from inference blocks, which are initially empty), and then apply one logical step ‘locally’ at each inference block. Finally, for the query, seek any occurrence of the set token in the sequence, and return whether the element token is listed there in its contents.\nGeneralisation on Inference Depth A solution similar to the one we have proposed would be to make all of these logical inferences backwards from the query, i.e., by propagating backwards the requirements that would be sufficient to answer the query. If a trained transformer implements both of these solutions in parallel (for instance, to increase its robustness), this may explain the generalisation to greater query depth observed by Clark et al." }, { "heading": "3 RELATION TO TRANSFORMERS, AND ANALYSIS", "text": "We discuss how RASP relates to the real transformer architecture, and how it may also be used to compare transformer variants, or analyse the ‘difficulty’ of a task for transformers.\nConnection of RASP to Transformers The select and aggregate operations of RASP correspond to the attention-scoring and then pooling of transformer self-attention heads, the zipmap operations correspond to the feed-forward sublayers, and the computed sequences correspond to head or feed-forward inputs and outputs. indices and tokens represent the initial input, while length and count_conditioned are in fact combinations of the other primitives. The persistence of sequences – such that they may be accessed multiple times over the course of a RASP program – is encouraged by the existence of skip connections in the transformer architecture. In appendix A we consolidate these connections, giving a full description of transformers, and showing how any given transformer can be represented in the RASP language (provided a slight generalization of select; see footnote 2)." }, { "heading": "3.1 PREDICTING TRANSFORMER COMPLEXITY WITH RASP", "text": "The purpose of RASP is to help us reason about the computation process enabled by a transformer. We find that RASP programs lend themselves easily to ‘width’ and ‘depth’ analysis, enabling us to predict the number of heads and layers that a transformer will need to implement the same solution.
We discuss this analysis now, and evaluate the predictions it provides in Appendix B.\nFor any given RASP program, we can compute the minimal number of layers required to implement it in a transformer, and upper-bound the number of heads this implementation requires (see footnote 3), provided its internal dimensions are wide enough to replicate the given processing functions. This analysis can give us intuition regarding the relative difficulty of different tasks for the transformer architecture, where each algorithm we find for a task gives us an upper bound on the number of layers and heads a transformer needs to solve it.\nWe implement such an analysis and provide a draw_comp_flow function, which automatically positions each attention head using a greedy scheduler, and displays the computation flow accordingly (see fig. 4, and others in the supplementary material).\nA similar analysis can be done for different “primitive” computations in isolation, giving us intuition on the ‘cost’ of various common computations: how many additional heads and layers each computation adds when applied to a previously computed sequence. For example, our implementation of sort takes 2 layers, and so whenever we apply it to a computed sequence we know it will increase our program’s depth by 2 from that sequence.\n2This version is omitted from RASP purely in the interest of clarity for the programmer.\n3The reason the number of heads can only be upper-bounded is that some selects in the program may be equivalent, or may be combinable into a single select without interfering with each other, but it is impossible to identify this statically.\nAlgorithm for computing program depth and width As noted, the first 3 base operations of RASP – select, aggregate, and zipmap – have direct parallels in the transformer architecture, and so we may easily analyse the result of any RASP program to see how many layers and heads it would take to realise in an actual transformer. (For length and count_conditioned, we analyse them in terms of their deconstruction into the other operations and sequences.)\nThe first part of the analysis is simple: every sequence and selector is initiated with a “minimum depth” d, reflecting the earliest layer at which it can be computed, and so the minimum number of layers needed to create any given sequence is d. d is computed for each new sequence or selector as follows:\n1. The base sequences indices and tokens have d = 0, as they are input to the transformer rather than part of its computation.\n2. Any sequence created from a zipmap is given d equal to the maximum d of the inputs to that zipmap, as it can be created immediately after the last of them, in the same feed-forward computation that concludes it (see footnote 4).\n3. Every selector gets d equal to the maximum d of its creating sequences plus 1, to reflect the fact that all of them must be calculated before it can even begin (as multi-headed attention happens only once, at the beginning of each layer).\n4. Any sequence created from an aggregate has d equal to at least that of its creating selector, and at least one plus those of the input sequences to the aggregate operation (as they must be passed through the attention to create the new sequences).\nRASP makes it easy to access all of the sequences and selectors on which a not-yet-finalised value depends, allowing us to analyse not only the depth but also the width of a given program.
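The four rules above translate directly into a small recursive computation. The following Python sketch is one way to read them, under an assumed toy representation in which every value records its kind ("base", "zipmap", "selector" or "aggregate"), the values it was created from (parents), and, for aggregates, the selector used:

def min_depth(v):
    if v.kind == "base":                            # rule 1: indices and tokens
        return 0
    d_in = max((min_depth(p) for p in v.parents), default=0)
    if v.kind == "zipmap":                          # rule 2, folding in the footnoted
        return d_in if d_in > 0 else 1              #   special case for depth-0 inputs
    if v.kind == "selector":                        # rule 3
        return d_in + 1
    if v.kind == "aggregate":                       # rule 4
        return max(min_depth(v.selector), d_in + 1)

Width then follows by counting, per layer, the distinct selectors whose aggregates are scheduled at that layer, as discussed next.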
The width of the computation is a reflection of how many (unique) selectors are being used at every layer of the transformer: the number of attention heads needed to mimic that layer in a transformer. We say that a selector s is being used at some layer l if a sequence that is aggregated from s is calculated at l. This distinction is needed because a selector s may have minimum depth d but be used only (or also) at later layers: for instance, if the sequences needed for the aggregation operation using s are not yet ready.\nSimilarly, a sequence does not need to be computed at its minimum possible depth, as it is possible it will only be needed much later. Hence there is no one analysis for a given program, and there is room for creating a scheduling algorithm that minimises the maximum width of the transformer, i.e., the maximum of the widths of all layers (useful, as transformers tend to be created with uniform width)." }, { "heading": "4 IMPLICATIONS FOR TRANSFORMER VARIANTS", "text": "" }, { "heading": "4.1 RESTRICTED-ATTENTION TRANSFORMERS", "text": "Multiple works propose restricting the attention mechanism of transformers in order to create more efficient transformers, reducing the time complexity of each layer from O(n²) to O(n log n) or even O(n) with respect to the input sequence length n (see Tay et al. (2020) for a survey of such approaches and their complexity). Several of these do so using sparse attention, in which the attention is masked using different patterns to reduce the number of locations that can interact (see for instance (Child et al., 2019; Beltagy et al., 2020; Ainslie et al., 2020; Zaheer et al., 2020; Roy et al., 2020)).\nConsidering these variants of transformers in terms of the RASP language allows us to reason about the computations they can and cannot perform. In terms of RASP, these variants of transformers all impose restrictions on the selectors, forcing some of the n² index pairs (i, j) to False.\nFigure 1 showed how to implement sorting of an input sequence with arbitrary alphabet size, comparison function, and length (see footnote 5). We now prove that RASP variants where the selector is restricted to O(n) pairs (i.e., transformer variants with sufficiently restricted attention) cannot sort.\n4Except in the special case where this maximum d is 0, and the zipmap is not being called from within an aggregate, in which case the sequence is assigned d = 1. This reflects that at least one layer must be used to reach the first feed-forward computation, but also that attention may do a little processing itself before it “aggregates”, using the linear transformation V.\n5Providing sufficiently stable word and positional embeddings, a practical limitation that applies to all transformer variants.\nThe computation model of RASP, and indeed of all transformer variants, allows comparison of values between more than one sequence position only during the select operation, i.e., only while computing the attention distribution. Hence, all comparisons necessary for sorting must be applied in select. It follows that whenever select is restricted such that it compares at most O(n) index pairs per head, no constant number of heads and layers will be sufficient for the model to perform sorting on arbitrary input lengths – as sorting is known to require Ω(n log n) comparisons.\nThus, variants of transformers in which the attention is masked to impose O(n) complexity require Ω(log n) layers to sort.
It also follows that they require Ω(log n) layers to implement count_conditioned, as we see in fig. 1 that count_conditioned can be applied to create a sequence (num_prevs) which is sufficient to complete a sorting operation with only O(n) further operations." }, { "heading": "4.2 SANDWICH TRANSFORMERS", "text": "Recently, Press et al. (2020) showed that reordering the attention and feed-forward sublayers of a transformer affects its ability to train on language modeling tasks. In particular, they showed that (1) pushing feed-forward sublayers towards the bottom of a transformer weakened it, and (2) pushing attention sublayers to the bottom and feed-forward sublayers to the top strengthened it, provided there was still some interleaving in the middle (making a sandwich transformer).\nConsidering the base operations of RASP helps us understand the observations of Press et al. In RASP, the feed-forward and attention sublayers are the zipmap and select-aggregate (or gather, for short) operations. Any arrangement of the sublayers into a set architecture, from the ‘vanilla’ transformer to the variations considered in (Press et al., 2020), imposes a restriction on the number and order of RASP operations that can be chained in a RASP program. For example, an architecture in which all feed-forward sublayers appear before the attention sublayers imposes that no zipmap operation may be applied to the results of any gather operation.\nIn RASP, there is no value to repeated applications of zipmap before the first gather, as no further information can be generated beyond that already described by indices and tokens. This immediately explains the first observation of Press et al. (2020). Conversely, an architecture beginning with several attention sublayers – i.e., multiple gather operations – will be able to gather a large amount of information into each position early in the computation, if only by simple rules. More complicated gathering rules can be realised by applying zipmaps to the gathered information before generating new selectors (see footnote 6), explaining the interleaved attention/feed-forward middle section present in the discovered architecture." }, { "heading": "5 EXPERIMENTS", "text": "To evaluate the relevance of the RASP language to transformers in practice, we train transformers on a small set of synthetic tasks and compare their results to the head- and layer-bounds and attention patterns predicted by RASP programs for the same tasks.\nWhile no RASP program promises to be a unique solution to the task it solves, several of the trained networks find solutions similar to those predicted by RASP. Among the most striking of these is the transformer trained to compute an in-place histogram, e.g., §abbd ↦ (1, 1, 2, 2, 1). We considered this task when the input sequences are presented with a beginning-of-sequence (BOS) token §, writing a single-head RASP program for it and training a single-head transformer on it. Visualizing the selection/attention patterns of these two heads (one RASP and one transformer) showed an identical pattern – see Figure 5.\nWe present the single-head RASP program for this task in Figure 6. Its operation is as follows: first, the selector same_and_0 focuses each position i on all positions j containing the same token as i, and also on position 0. Hence the width of this selector at each position i ≠ 0 is exactly one plus the value vi that should be output at i.
Aggregating the sequence (1, 0, 0, ..., 0) with this selector (which always includes focus on 0) gives us the value ai = 1/(vi+1) at each location i, from which vi can then be recovered with a simple zipmap.\n6Actually, the unbounded power of the processing functions f that RASP allows passing into select and aggregate technically renders zipmap unnecessary.\nThe direct parallel between our program’s only selector and our trained transformer’s attention pattern (Figure 5) suggests that this RASP program describes the exact mechanism that our transformer has discovered.\nWe present further experiments on additional tasks in Appendix B." }, { "heading": "6 CONCLUSIONS", "text": "We abstract the computation model of the Transformer-encoder into a simple sequence processing language that captures the constraints on information flow in a Transformer. Considering computation problems and their implementation in the RASP language allows us to “think like a transformer” while abstracting away the technical details of a neural network in favor of symbolic programs. Moreover, provided it uses reasonable element-processing functions, we can analyze any RASP program to infer the minimum number of layers and maximum number of heads required to implement it in a transformer. We show several examples of expressive programs written in the RASP language, showing how complex operations can be implemented by a transformer. We train several transformers on these tasks, and find that RASP helps us predict both the correct number of heads and layers to use for these tasks and also the attention patterns that the transformers realise to solve them. Additionally, we use RASP to shed light on an empirical observation over transformer variants made by Press et al. (2020), and find concrete limitations of some “efficient transformer” architectures." }, { "heading": "A TRANSFORMERS IN RASP", "text": "RASP is almost – but not quite – a strict over-approximation of transformer-encoders. In this section, we show how the addition of score, a generalisation of select that may assign non-boolean values to index pairs, makes RASP a strict over-approximation. In particular, we give an explicit translation from any given transformer-encoder to a RASP program, provided the augmentation with score. If the reader prefers to start there, a full description of transformers is given in section D (with notations in the preceding section).\nWe now introduce the score operation, and expand aggregate to receive a scorer:\n• score Similarly to select, the score operation takes two tuples of sequences me=(m1,...,mk) and other=(ot1,...,otl), and a function f expecting k+l atomic values. This time, however, f may return any non-negative float value. It creates from these a scorer, similarly to how select creates a selector from its inputs.\n• aggregate The aggregate operation is expanded such that it may receive either a scorer or a selector where it previously accepted only a selector. When it receives a scorer s, each y[i] is assigned the weighted average of all the values of x, according to the values in the scorer:\ny[i] = (∑j∈[n] s(i,j)·x[j]) / (∑j∈[n] s(i,j))\nIntuitively, the select operation, and the application of aggregate to a selector, can be seen as the special case of a score-aggregate pair in which the scorer has only given scores of 0 and 1. It does have one difference, in that it also allows for a default value which may be used when the total score for some index i is 0.
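As a small sketch of this weighted average (names assumed), with the scorer realised as an n-by-n grid of non-negative floats:

def aggregate_scored(s, x):
    n = len(x)
    out = []
    for i in range(n):
        total = sum(s[i][j] for j in range(n))
        # undefined when an entire row of scores is 0: exactly the case the
        # selector version's default value handles
        out.append(sum(s[i][j] * x[j] for j in range(n)) / total)
    return out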
However, this can be seen as sugar: it is not difficult to create a mechanism similar to that of count_conditioned in order to recognize when a selector has width 0, and so avoid the direct use of default in aggregate (see footnote 7). Moreover, if the data is always given with a special beginning-of-sequence (BOS) token, then the default value can simply be loaded from that location whenever no other focus is found.\n7The head and layer analysis of RASP can be trivially updated to take this into account.\nTheorem A.1. Let T : (R^d)^* → (R^d)^* be a transformer and y_0 : Σ → (R^d)^* be an input embedding computed as the sum of a token and positional embedding, y_0(x)_i = w(x_i) + p(i). Then the computation of T_y0 ≜ T ◦ y_0 can be mimicked in a RASP program that writes the output to d float-sequences and uses exactly LH score and aggregate calls and (H+1)L + 1 zipmap calls, where L and H are the number of layers and attention heads in T, respectively.\nProof Sketch In figure 7 we present code that, given the token and positional embeddings w : Σ → R^d and p : N → R^d and all the weights of a transformer, recreates that same transformer in RASP. Our code relies on the helper functions tup2vec, vec2tup, first_half and second_half, which help convert between the tuples of values given to the processing functions by zipmap and aggregate and the vectors they represent. For simplicity in the presentation, we assume here that the transformer is given as a collection of linear transformations, layer-norms, and feed-forward functions which can be applied to its internal vectors directly.\nThe main routine is T_y0, which applies the initial embedding and stores it in a tuple of d sequences, x, and then applies each layer to it in turn. This takes L calls to the layer function, each of which computes a score-aggregate pair (with an additional zipmap before the aggregate) H times to mimic each of the heads, and then calls a zipmap on the concatenation of their results to complete the remaining (elementwise) computations of the layer.\nNote. Recall that, as noted in Appendix E, when a processing function passed to zipmap returns multiple values, zipmap simply generates that same number of sequences. In particular, at all iterations of the loop in T_y0, x is a tuple of d sequences, where d is the embedding dimension of the given transformer.\nWe see that, when augmented with score, the RASP language naturally composes the components of a given transformer to reconstruct it exactly.\nWhy not have score? The motivation for the omission of score from RASP is cleanliness: it is far easier to think in terms of select than of score, and we have not yet encountered a problem where we used scorers whose values were outside of 0 and 1. In time, as we use the language more and encounter the limitations this choice poses, we may return to score and see what other kinds of special cases of it we would benefit from including in RASP.\nPower of RASP As seen in this section, RASP can (provided this slight generalization of select) represent any transformer. Additionally, we see that it is not arbitrarily overpowered. For example, it does not allow iterating over a sequence of arbitrary length one-by-one to perform some gradual computation (as might be done in an RNN or DFA), and in general does not allow arbitrary repetition of operations as other languages might. This is because the number of operations in a RASP program is predetermined: RASP programs do not admit loops.
This distinction between transformers and RNNs is known, and there is interest in bridging it. For example, the Universal Transformer attempts to introduce loops into transformers, by allowing them a control mechanism that decides whether to repeat a layer during computation (Dehghani et al., 2018)." }, { "heading": "B EXPERIMENTS", "text": "For RASP to be useful, it is important to see that RASP programs relate well to actual transformer behavior in practice. In this section, we train transformers on a small set of synthetic tasks for which we can write RASP programs, and see how these programs relate to our empirical results.\nWe consider the following tasks:\n1. Count-a: Return the number of ‘a’ tokens in a sequence, e.g., aba ↦ 2.\n2. Histogram: For each token, its number of repetitions in the sequence, presented in-place. For example, aababc ↦ (3, 3, 2, 3, 2, 1).\n3. Reverse: Reversing a sequence, e.g., abc ↦ cba.\nFor Histogram, we also consider a variant with a special beginning-of-sequence (BOS) token §, appearing exactly once at the beginning of each sequence (and nowhere else). For example, §aabc ↦ (1, 2, 2, 1, 1).\nWe train transformers for each of these tasks, and test whether our RASP programs accurately predict the minimum number of heads and layers needed to perform them. We also visualise their attention distributions (see footnote 8), and check whether they match the selectors used by our programs.\nWe find that several of the RASP programs presented in this paper show similar attention (selector) patterns to those of the trained transformers in practice, suggesting that programming in RASP helps us provide reasonable predictions of transformer behavior. Moreover, we often find that reducing the number of heads and layers in a transformer below the number needed in our RASP program for the same task significantly degrades its accuracy. This suggests that the specific programs we have presented for these tasks are also optimal solutions.\nData and Evaluation Unless stated otherwise: for all of the languages, we use the alphabet {a,b,c,d,e} with sequences of sizes 1 through 100. These are generated by first choosing the length uniformly from 1 to 100, and then choosing each token uniformly from the alphabet. We use 50,000 train samples, 1,000 test samples, and 1,000 validation samples.\nFor tasks giving a ‘single’ output value – such as Count-a (which gives 1 number), as opposed to Reverse, which gives a new sequence – we train the network to return that value in all positions, e.g., aba ↦ (2, 2, 2) for Count-a. This makes the visualisation of the attention distributions clearer (as all locations are trying to do something meaningful, as opposed to just one), and is also more clearly aligned with the tasks we have described in this work.\nWe measure the accuracy of a transformer on a batch of sequences as the fraction of the total predictions it made for those sequences that were correct, e.g., x/5 when x predictions were correct for a batch with total sequence length 5. For the train, test, and validation sets, we report accuracy as the average batch accuracy.\n8i.e., after softmaxing the attention scores.\nArchitecture We use the transformer architecture provided with PyTorch 1.7.0, with an additional single linear transformation and softmax at the end to convert to the output-class predictions. Unless stated otherwise, we use small embedding and feed-forward dimensions: 20 and 40, respectively.
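A minimal sketch of this setup, with hypothetical module and variable names (the sin-cosine positional embedding used in training is omitted here for brevity):

import torch.nn as nn

class TaskModel(nn.Module):  # hypothetical name
    def __init__(self, vocab_size, n_classes, d_model=20, n_heads=2,
                 n_layers=1, d_ff=40):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, d_ff, dropout=0.1)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.to_classes = nn.Linear(d_model, n_classes)

    def forward(self, x):                # x: (seq_len, batch) of token ids
        h = self.encoder(self.embed(x))  # positional embedding omitted in sketch
        return self.to_classes(h)        # per-position logits; softmax in the loss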
We vary the number of heads and layers per task.\nTraining Method We train with the ADAM optimizer, sin-cosine positional embedding, and dropout 0.1 in the transformer weights. We use learning rate 0.0003, batch size 50, and PyTorch’s ExponentialLR learning rate scheduler with gamma 0.98 (updated after every epoch). Excluding confirmation of a negative result, we train each network for 20 epochs. If a network hits 100% accuracy before then, we stop.\nHelpful RASP functions In this section we will make frequent use of the full-selector full_s=select((),(),lambda :True) and the function frac_condition(sequences,f), which computes aggregate(full_s,sequences,lambda a:int(f(*a))) (see footnote 9) – the fraction of input positions for which f is satisfied on the values of sequences. Note that frac_condition requires only 1 layer (after sequences have been computed) with 1 head, and that that head is full_s. Recall also that length is computed as length=1/frac_condition(indices,lambda i:i==0).\nB.1 COUNT-a\nTo avoid gradient problems from trying to obtain large numerical values from the transformer (e.g., 8), we encode Count-a as a categorical task. In particular, we create 21 output tokens {0,1,2,...,20}, and if there are more than 20 a tokens in the sequence we just report 20.\nRASP permits a 1-layer, 1-head program for this task: count_a = length*frac_condition(tokens, lambda t:t=="a"). (length and frac_condition share the selector full_s.) Accordingly, training a transformer with 1 layer and 1 head on Count-a succeeded, reaching test accuracy 98.9% on the 20th epoch.\nThe attention pattern of this transformer on the input sequence abbaabcde is shown in Figure 8. While the distribution is not perfectly uniform, it does seem to focus on the entire sequence, as the use of full_s in our RASP program suggests (contrast, for example, with the attention distribution for Reverse in Figure 12; see footnote 10).\n9The exact implementation is slightly different, to account for the case when there is only one sequence, but the idea is the same.\n10(We leave here a cautionary tale: if you do not properly scale the colorbar of your attention figure, it will look like the difference between focus on different locations is much greater than it is!)\nWe stress that this is a different distribution to that which we might intuitively expect for this task – namely, an attention pattern focused solely on instances of a in the sequence – and that RASP has successfully pushed us to predict it!\nB.2 HISTOGRAM\nAs with Count-a, we encode histograms as a categorical task. This time we limit the maximum count to 10, i.e., we only use the output tokens 1,2,...,10 (0 is irrelevant, as it will not appear in an in-place histogram). Unlike most other tasks, for Histogram we use an input alphabet of size 10: {a,b,...,j}.\nWe trained one transformer with 1 layer and 2 heads, and another with only 1 layer and 1 head. After 20 epochs, the transformer with 2 heads reached test accuracy 89.3%. In contrast, after 50 epochs, the transformer with only 1 head was still at accuracy 55%!
Increasing its embedding and feed-forward dimensions to 50 and 100, respectively, also did not work: a 1-layer 1-head transformer with these dimensions still only reached 79.3% test accuracy after 50 epochs, and this after having been at 77+% validation accuracy since the 27th epoch.\nDrawing the attention map for the single-head transformer (Figure 9) did not seem to relate to the selection pattern of count_conditioned at all, unsurprisingly considering that it did not have enough heads. See for example the apparent focus on b by several query positions not containing b, as opposed to our expectations of the count_conditioned focus pattern as shown in Figure 4.\nFor the 2-head transformer, we draw its attention maps in Figure 10. The solution has some clear parallels with the histogram selection patterns our RASP count_conditioned predicts, such as most tokens focusing on themselves in one head and sharing distribution patterns in the other. But we can also easily find differences: in particular, the d and e tokens seem to avoid rather than focus on themselves, and the shared focus of most tokens in the similar-attentions head is not on 0 but on d and e. There is a possibility that in this transformer, d and e are playing a role similar to that which we gave to 0 in our implementation of count_conditioned. We leave a deeper exploration of this to future work.\nB.3 HISTOGRAM WITH BOS\nRecall the implementation of count_conditioned (Figure 3): it calculates the “width” (number of selected locations) of a hypothetical selector s by simulating it along two actual selectors, s_with_0 and s_just_0. s_with_0 is used to calculate for every index the fraction 1/(c′i+1), where c′i is the width of s on everything except index 0, and s_just_0 is used to make a final adjustment from c′i to the actual width, depending on the focus on 0.\nIf, then, the contents at position 0 are constant across all inputs, then the second selector s_just_0 becomes unnecessary: any information it conveys can be hard-coded into the RASP program (practically, the transformer). It follows that for setups where all input sequences are prepended with a special beginning-of-sequence (BOS) token, count_conditioned can be implemented with only one head, using just the s_with_0 selection pattern.\nWe prepend all of the original Histogram inputs with a special BOS token § (and their outputs with 1), and train a new 1-layer, 1-head transformer on the resulting data set. For gamma=0.99, the results satisfy our predictions perfectly: the transformer reaches 99.7% test accuracy in 20 epochs, and drawing its attention distribution (Figure 11) shows that it follows exactly the pattern of the selector s_with_0! (For gamma=0.98 the attention distribution was also very similar to that of s_with_0, but the model reached only 86.4% test accuracy after 20 epochs.)\nDiscussion of BOS The significance of such ‘non-input’ tokens in transformers has been previously discussed, with different interpretations. For example, Vig & Belinkov (2019) refer to the attention focused on the initial token of a sequence – seemingly when there is nothing else to focus on – as null attention. They report that the null token gathered as much as 97% of the attention of some of their heads, and suggest that this is consistent with these heads being unimportant to the transformer’s overall performance.
Conversely, this new result suggests that the null token at the beginning of a sequence is playing an important role in the transformer calculations, and in particular is directly assisting in counting!\nB.4 REVERSE\nIn RASP, Reverse is implemented using flip_s=select(indices,length-1-indices,lambda i,j:i==j) followed by reverse=aggregate(flip_s,tokens). This takes two layers of one head each (recall that length itself requires one layer to compute).\nWe train two transformers on Reverse: one with 2 layers and 1 head, and the other with 1 layer and 2 heads, to verify that the separation into 2 layers is indeed necessary. To give room for the index-based selection pattern (i.e., a scoring method that involves comparison of indices and not just tokens), we give the transformers per-head width at least as large as our maximum length. In particular, we use embedding dimension 100 for the 2-layer transformer and 200 for the 1-layer transformer (see footnote 11). We also give them each feed-forward dimension twice their embedding dimension.\nThe 2-layer transformer reaches test accuracy 99.6% after 20 epochs. In contrast, and as expected, the single-layer transformer remains trapped at 39.6% test accuracy even after 50 epochs. Plotting the attention for the 2-layer transformer (Figure 12) matches some of the predictions of our RASP program: the reverse-matching attention is only computed in the 2nd layer, and is computed perfectly at that point. We are inclined to believe the length is being computed at the first layer (as we predict).\n11Intuitively, this allows the query and key vectors to encode their positions/target sources as one-hot vectors, matching each other perfectly when computing the attention scores. We leave a full exploration of the relation between embedding dimension and selector complexity to later work.\nBut there are also deviations from our prediction: the attention pattern suggests that the transformer is computing the length using a different mechanism from the one that we have suggested.\nWe now strengthen the claim that the initial layer of the Reverse transformer is computing the sequence length, by showing that when the sequence length is fixed, a single-layer (and single-head) transformer does succeed on Reverse. We fix the sequence length to 50 and train a 1-layer 1-head transformer on Reverse. This simpler task can be presented in one layer and one head in RASP, using the single selector flip50_s=select(indices,49-indices,lambda i,j:i==j), from which the result is computed as reverse50=aggregate(flip50_s,tokens).\nAs expected, the transformer succeeds in its task, reaching 100% test accuracy in only 3 epochs. In Figure 13 we illustrate that it has indeed learned a constant (i, 49−i) location pairing, by visualising its attention on sequences slightly longer or shorter than those it has been trained on.\nFor completeness, in Figure 14 we also show the attention patterns of the single-layer transformer trained on variable-length Reverse. In keeping with our predictions, it has not managed to learn the reverse-matching attention pattern at all. This is because it needs an additional layer to compute the length before it can create the correct attention pattern." }, { "heading": "C NOTATIONS", "text": "Basic Notations For every n ∈ N, we denote [n] = {1, ..., n}.\nMatrices For a matrix X ∈ R^{n×d}, we refer to its i-th row as X_i ∈ R^d, and for a vector v ∈ R^d we refer to its i-th value as v_i ∈ R. We additionally refer to n as the length of X, and abusively denote |X| = n.
For X ∈ R^{n×d} and b ∈ R^d, we use the shorthand X + b to describe the addition of b to each of the rows of X, i.e., (X + b)_i = X_i + b for every i ∈ [n]. For a scalar α ∈ R, any operation X ◦ α, for ◦ ∈ {+, −, ×, ÷}, is applied elementwise to all the values of X. Matrix multiplication between two matrices A, B is denoted simply AB, and the transpose of a matrix A is denoted A^T.\nWe occasionally treat input or output sequences x_1, ..., x_n ∈ R^d as matrices X ∈ R^{n×d} whose rows are the individual input vectors: X_i = x_i. When n may be arbitrary, we will say that X ∈ (R^d)^*.\nDefinition C.1. A linear transformation with input dimension d and output dimension m is a function l_{A,b} : (R^d)^* → (R^m)^* parameterised by a matrix A ∈ R^{d×m} and vector b ∈ R^m as follows: l_{A,b}(X) = XA + b for every X ∈ (R^d)^*. A and b are the weights of the transformation.\nDefinition C.2. The softmax function s : R^* → R^* is defined as follows: for every d ∈ N and x ∈ R^d, s(x) ∈ R^d such that s(x)_i = e^{x_i} / ∑_{j∈[d]} e^{x_j} for every i ∈ [d]. We also denote by S the row-wise softmax function: for every n, d ∈ N and X ∈ R^{n×d}, S(X) ∈ R^{n×d}, and S(X)_i = s(X_i) for every i ∈ [n].\nWe denote by R(X) the elementwise application of the ReLU function, r : x ↦ max(0, x), to X.\nFunction Qualities A function f : A^* → B^* is length-preserving if it satisfies |f(x)| = |x| for any x ∈ A^* (i.e., for any input sequence x_1, ..., x_n ∈ A, f returns a sequence y_1, ..., y_n ∈ B). If there also exists a function g : A → B such that f(x)_i = g(x_i) for any i ≤ |x|, then f is elementwise, and we say that f is an elementwise application of g. Note that linear transformations are elementwise." }, { "heading": "D TRANSFORMER-ENCODERS", "text": "At the highest level, a transformer-encoder T (henceforth, a transformer) is a parameterised length-preserving function T : (R^d)^* → (R^d)^* composed of multiple layers of length-preserving functions ℓ : (R^d)^* → (R^d)^*, i.e., T = ℓ_L ◦ ... ◦ ℓ_2 ◦ ℓ_1, which we will describe in this section. Generally speaking, a transformer’s layers are not elementwise, and indeed the transformer would not be interesting if they were. However, when we come to look at their components, we see that this quality rests entirely on their use of attention (see footnote 12).\nAttention Attention is a function devised to enable ‘recollection’ of previously processed data from a history of arbitrary length (Bahdanau et al., 2015; Luong et al., 2015). Transformers use a variant called scaled dot-product attention to collect data from multiple locations in an input sequence.\nDefinition D.1. Scaled Dot-Product Attention is a function a : (R^d)^* → (R^m)^* parameterised by 3 linear transformations l_Q, l_K, l_V : (R^d)^* → (R^m)^* and defined for every X ∈ (R^d)^* as follows:\na(X) = S( l_Q(X) l_K(X)^T / √m ) l_V(X)\nNote. The original definition of scaled dot-product attention allows l_Q, l_K, and l_V to have different output dimensions; in this case, the denominator is √(m_k), the output dimension of l_Q and l_K.\nFor convenience, from here on we refer to scaled dot-product attention simply as attention.\nThe attention computation can be broken into 3 stages. First, a pairwise score is calculated for each pair of locations, showing how much the input in location j should influence the output in location i: this is the value S_{i,j} in the matrix S = l_Q(X) l_K(X)^T / √m. Then, each input is processed in-place (l_V(X)) to create candidate outputs, and finally the candidate outputs are averaged for each output location i, according to the softmaxed scores S(S)_i for that location.
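As a sketch of Definition D.1 in NumPy (weight names assumed, biases dropped for brevity), the three stages are:

import numpy as np

def softmax_rows(M):  # the row-wise softmax S, stabilised
    E = np.exp(M - M.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def attention(X, WQ, WK, WV):
    Q, K, V = X @ WQ, X @ WK, X @ WV    # the linear maps l_Q, l_K, l_V
    S = Q @ K.T / np.sqrt(Q.shape[-1])  # stage 1: pairwise scores S_{i,j}
    return softmax_rows(S) @ V          # stages 2-3: candidates, then averaging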
In this sense, attention can be seen as a request to gather into each output information from various locations, where l_Q and l_K work together to select information sources, and l_V encodes the transferred information.\n12Unsurprisingly, given the title of the paper.\nTransformer layers often gather information with multiple attention functions, referred to as attention heads, whose results are concatenated back into a single output at the end:\nDefinition D.2. Let d, H, m ∈ N be such that d = Hm. A multi-headed attention function with input dimension d and H heads is a function A : (R^d)^* → (R^d)^* parameterised by the weights of H scaled dot-product attention functions a_1, ..., a_H as follows (see footnote 13): for every X ∈ (R^d)^*,\nA(X) = a_1(X) · a_2(X) · ... · a_H(X)\nwhere · denotes row-wise concatenation, i.e., for every i ∈ [|X|], A(X)_i is the concatenation of a_1(X)_i through a_H(X)_i. The functions a_1, ..., a_H are referred to as the heads of A.\nIn addition to attention, transformers use layer-norm and feed-forward components, as follows:\nDefinition D.3. A single-row layer-norm over dimension d is a function g : R^d → R^d parameterised by vectors a, b ∈ R^d and constant ε ∈ R, and defined for every x ∈ R^d and i ≤ d as follows:\ng(x)_i = a_i · (x_i − x̄) / (std(x) + ε) + b_i\nwhere x̄ = (1/d) ∑_{j∈[d]} x_j is the mean of x and std(x) = √((1/(d−1)) ∑_{i≤d} (x_i − x̄)²) is its standard deviation.\nA layer-norm over dimension d, n : (R^d)^* → (R^d)^*, is an elementwise application of a single-row layer-norm of dimension d.\nThe layer-norm functions more as a regulariser, and indeed it will not play a part in our abstraction.\nDefinition D.4. A feed-forward function with input dimension d and internal dimension m is an elementwise function F : (R^d)^* → (R^d)^* obtained by composing two linear transformations L_1 : (R^d)^* → (R^m)^*, L_2 : (R^m)^* → (R^d)^* and ReLU, as follows (see footnote 14): F(X) ≜ L_2(R(L_1(X))).\nThe feed-forward component is elementwise, and the combination of two linear transformations with a nonlinear activation provides strong expressive capacity (Hornik et al., 1989).\nDefinition D.5 (Transformer-Encoder Layer). A transformer-encoder layer with input dimension d, internal dimension m, and H heads (such that d/H ∈ N) is a length-preserving function ℓ : (R^d)^* → (R^d)^* composed of one multi-headed attention A with input dimension d and H heads, one feed-forward function F with input dimension d and internal dimension m, two layer-norm functions n_1, n_2 over d, and one linear transformation l_A : (R^d)^* → (R^d)^*, as follows: for every X ∈ (R^d)^*,\nX_1 = X + l_A(A(n_1(X)))   (1)\nℓ(X) = X_1 + F(n_2(X_1))   (2)\nWe refer to the additions in both equations as ‘skip connections’. Note that the layer-norm, feed-forward, and skip-connection components of the layer are elementwise: were it not for the attention, the entire layer would be elementwise.\nFinally, we present the full encoder architecture:\nDefinition D.6 (Transformer-Encoder, Vaswani et al. (2017)). A transformer-encoder with L layers, H heads, and input and internal dimensions d, m is a length-preserving function T : (R^d)^* → (R^d)^* parameterised by the weights of L transformer-encoder layers ℓ_1, ..., ℓ_L, each with H heads and input and internal dimensions d, m, and defined for every X ∈ (R^d)^* as follows:\nT(X) = ℓ_L(...ℓ_2(ℓ_1(X)))\nPermutation Invariance of Transformers An interesting trait of the transformer architecture is that it has no inherent positional awareness.
Specifically: for any transformer T : (R^d)^* → (R^d)^*, input sequence x = x_1, x_2, ..., x_n ∈ R^d, and permutation π, we have T(π(x)) = π(T(x)) (see footnote 15).\n13Some definitions of multi-headed attention may present it with an additional ‘+X’ in the computation (representing a ‘skip connection’ present in the transformer), or a final linear transformation applied to the result. For reasons that will be clarified later, we prefer to set the boundaries of the definition only to the direct ‘mixing’ operation shown, and instead write the skip connection and further linear transformation explicitly when presenting the transformer.\n14During training, a dropout layer is also applied after the ReLU operation, but this is not present in inference.\n15As all components of transformers other than attention are elementwise, we need only consider attention in order to be convinced of this. We see quickly that at each output location i, attention is a function only of l_K(X), l_V(X), and l_Q(X)_i, where the order of the rows of l_K(X) and l_V(X) does not matter as long as they remain aligned.\nDiscrete Input Transformers T : (R^d)^* → (R^d)^* are used to process non-empty sequences over a finite alphabet Σ by composing them with a simple length-preserving function y_0 : Σ^+ → (R^d)^*: T_y0(x) ≜ T(y_0(x)). This y_0 is in turn composed from a token embedding w : Σ → R^d and position embedding p : N → R^d, which are normally combined using addition: for every x = x_1...x_n ∈ Σ^+, y_0(x_1, x_2, ..., x_n)_i = w(x_i) + p(i) (see footnote 16).\nFrom here, whenever we refer to a ‘transformer over (some finite alphabet) Σ’, we mean a transformer paired with an initial embedding y_0 as described above." }, { "heading": "E ADDITIONAL DETAILS ABOUT RASP", "text": "A note about types in RASP Technically, there is no one ‘tokens’, but rather 5 options: tokens_str, tokens_int, tokens_float and tokens_bool cast the input sequence to the corresponding atomic types, and tokens_asis takes the input sequence as-is. For brevity, we refer to all of these as tokens here.\nRASP Sequences RASP operates exclusively on sequence- and selector-generating functions, which we refer to as sequences and selectors respectively, and as RASP-functions together. All RASP-functions take as input exactly one non-empty sequence, and when describing them and how RASP manipulates them to create new RASP-functions, we do so in terms of the sequences and selectors that they and their manipulations generate from each input sequence. When it is clear from context, we will simply refer to them as sequences and selectors, and describe them directly in terms of their outputs. For example, if we say that a RASP operation applies +1 elementwise to each value in a sequence u, we actually mean that it returns a new sequence v such that for every input x and position 0 ≤ i < |x|, v(x)[i]=u(x)[i]+1.\nNote In this section we will refer to the i-th value in a sequence s as s[i], such that s[i] is in one of the atomic types. We stress, however, that this is only for the discussion, and not a part of the language.\nAdditional operations For brevity in code, RASP also comes with the following syntactic sugar:\n• Anywhere that a tuple of sequences is passed into an operation, a single sequence may be passed in as-is as well. For example, y=zipmap((indices,),lambda a:a+a) can also be written y=zipmap(indices,lambda a:a+a).\n• The zipmap operation is accessible through a large range of operators, covering its application to all of the base binary and unary operations on the atomic types. For example, the above y can equivalently be defined as y=indices+indices.
These operators can also be mixed with constants from the atomic primitives, such that an equal y may be obtained using y=2*indices.\n• aggregate may receive a tuple of sequences xx instead of a single sequence x. In this case, it returns a new tuple (of the same length) of sequences, the result of applying aggregate to each of the sequences in xx. For example, the line a,b = aggregate(s,(x,y)) is equivalent to a,b = aggregate(s,x), aggregate(s,y).\n• The processing function passed to zipmap may return more than one value (provided the number of values it returns is constant). In this case, the operation will arrange the output values into the same number of output sequences. For example, y1,y2=zipmap(x,lambda v:(v+1,v+2)) is equivalent to writing y1=x+1 and then y2=x+2.\n• Anywhere that a processing function is expected, if one is not provided, the identity function is used (see footnote 17).\n16Note that without the position embedding, y_0 would be elementwise, and so its combination with T would be permutation invariant – an undesirable trait for sequence processing.\n17This is consistent with aggregate allowing you to choose whether to pass a single sequence, or a tuple of sequences and a processing function.\n• The function select1, which takes only a single tuple of sequences xx and an index-computing function fi, is syntactic sugar for select(xx,(indices,),lambda *a:fi(*a[:-1])==a[-1]). select1 is guaranteed to return a true select satisfying the “up-to-one” property, i.e., one that can be successfully paired in aggregates with an x that does not contain numbers." } ]
2020
THINKING LIKE TRANSFORMERS
SP:43947cdb5064af3146a898c27347d7d987f92e30
[ "This paper considers the drift detection for episodic data, where data episodes are assumed to be i.i.d. but data within each episodic can be correlated. It is assumed that the pre-change (nominal) mean and covariance of each episodic is perfectly known or can be accurately estimated from reference data. The Uniform Degradation Test (UDT) and Partial Degradation Test (PDT) are proposed to detect the mean shift. Moreover, this paper uses bootstrap to control the false alarm rate by setting the threshold as empirical quantiles of the detection statistic computed from reference data. " ]
Detection of deterioration of agent performance in dynamic environments is challenging due to the non-i.i.d. nature of the observed performance. We consider an episodic framework, where the objective is to detect when an agent begins to falter. We devise a hypothesis testing procedure for non-i.i.d. rewards, which is optimal under certain conditions. To apply the procedure sequentially in an online manner, we also suggest a novel Bootstrap mechanism for False Alarm Rate control (BFAR). We demonstrate our procedure in problems where the rewards are neither independent, nor identically-distributed, nor normally-distributed. The statistical power of the new testing procedure is shown to outperform alternative tests – often by orders of magnitude – for a variety of environment modifications (which cause deterioration in agent performance). Our detection method is entirely external to the agent, and in particular does not require model-based learning. Furthermore, it can be applied to detect changes or drifts in any episodic signal.
[ { "affiliations": [], "name": "STARTS FALTERING" } ]
[ { "authors": [ "Vineet Abhishek", "Shie Mannor" ], "title": "A nonparametric sequential test for online randomized experiments", "venue": "Proceedings of the 26th International Conference on World Wide Web Companion,", "year": 2017 }, { "authors": [ "Pragnya Alatur", "Kfir Y. Levy", "Andreas Krause" ], "title": "Multi-player bandits: The adversarial case", "venue": null, "year": 2020 }, { "authors": [ "Mohammed Alshiekh" ], "title": "Safe reinforcement learning via shielding", "venue": "Logic in Computer Science,", "year": 2017 }, { "authors": [ "Bastian Alt", "Adrian Sosic", "Heinz Koeppl" ], "title": "Correlation priors for reinforcement learning", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Samaneh Aminikhanghahi", "D. Cook" ], "title": "A survey of methods for time series change point detection", "venue": "Knowledge and Information Systems,", "year": 2016 }, { "authors": [ "Taposh Banerjee", "Miao Liu", "Jonathan How" ], "title": "Quickest change detection approach to optimal control in markov decision processes with model changes", "venue": null, "year": 2016 }, { "authors": [ "Richard Bellman" ], "title": "A markovian decision process", "venue": "Indiana Univ. Math. J.,", "year": 1957 }, { "authors": [ "Omar Besbes", "Yonatan Gur", "Assaf Zeevi" ], "title": "Stochastic multi-armed-bandit problem with nonstationary rewards", "venue": "Advances in Neural Information Processing Systems (NIPS),", "year": 2014 }, { "authors": [ "Giacomo Boracchi", "Diego Carrera", "Cristiano Cervellera", "Danilo Maccio" ], "title": "Quanttree: Histograms for change detection in multivariate data streams", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "D. Brook" ], "title": "An approach to the probability distribution of cusum run", "venue": "length. Biometrika,", "year": 1972 }, { "authors": [ "Tom Bylander" ], "title": "Lecture notes: Reinforcement learning. http://www.cs.utsa.edu/ ̃bylander/cs6243/reinforcement-learning.pdf", "venue": null, "year": 2020 }, { "authors": [ "James Chen" ], "title": "Conditional value at risk (cvar)", "venue": "https://www.investopedia.com/terms/ c/conditional_value_at_risk.asp,", "year": 2020 }, { "authors": [ "Richard Cheng" ], "title": "End-to-end safe reinforcement learning through barrier functions for safetycritical continuous control tasks", "venue": "AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Yinlam Chow" ], "title": "A lyapunov-based approach to safe reinforcement learning", "venue": null, "year": 2018 }, { "authors": [ "Cedric Colas", "Olivier Sigaud", "Pierre-Yves Oudeyer" ], "title": "A hitchhiker’s guide to statistical comparisons of reinforcement learning algorithms, 2019", "venue": null, "year": 2019 }, { "authors": [ "Bin Dai", "Shilin Ding", "Grace Wahba" ], "title": "Multivariate bernoulli distribution", "venue": "doi: 10.3150/12-BEJSP10. URL https://doi.org/10.3150/ 12-BEJSP10", "year": 2013 }, { "authors": [ "David A. Dickey", "Wayne A. 
Fuller" ], "title": "Distribution of the estimators for autoregressive time series with a unit root", "venue": "Journal of the American Statistical Association,", "year": 1979 }, { "authors": [ "Gregory Ditzler", "Robi Polikar", "Cesare Alippi" ], "title": "Learning in nonstationary environments: A survey", "venue": "IEEE Computational Intelligence Magazine,", "year": 2015 }, { "authors": [ "Gabriel Dulac-Arnold", "Daniel Mankowitz", "Todd Hester" ], "title": "Challenges of real-world reinforcement learning, 2019", "venue": null, "year": 2019 }, { "authors": [ "Bradley Efron" ], "title": "Second thoughts on the bootstrap", "venue": "Statist. Sci., 18(2):135–140,", "year": 2003 }, { "authors": [ "Javier Garcia", "Fernando Fernandez" ], "title": "A comprehensive survey on safe reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Aurélien Garivier", "Eric Moulines" ], "title": "On upper-confidence bound policies for switching bandit problems", "venue": "International Conference on Algorithmic Learning Theory, pp. 174–188,", "year": 2011 }, { "authors": [ "Megan Goldman" ], "title": "Lecture notes in stat c141: The bonferroni correction", "venue": "https://www.stat.berkeley.edu/ mgoldman/Section0402.pdf,", "year": 2008 }, { "authors": [ "Anupam Gupta", "Tomer Koren", "Kunal Talwar" ], "title": "Better algorithms for stochastic bandits with adversarial corruptions", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Maayan Harel", "Koby Crammer", "Ran El-Yaniv", "Shie Mannor" ], "title": "Concept drift detection through resampling", "venue": "International Conference on Machine Learning, pp. II–1009–II–1017,", "year": 2014 }, { "authors": [ "Peter Henderson" ], "title": "Deep reinforcement learning that matters", "venue": null, "year": 2017 }, { "authors": [ "Pablo Hernandez-Leal", "Michael Kaisers", "Tim Baarslag", "Enrique Munoz de Cote" ], "title": "A survey of learning in multiagent environments: Dealing with non-stationarity, 2019", "venue": null, "year": 2019 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Dan Horgan", "Bilal Piot", "Mohammad Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": null, "year": 2018 }, { "authors": [ "Mark E. Irwin" ], "title": "Lecture notes: Convergence in distribution and central limit theorem", "venue": "http: //www2.stat.duke.edu/ ̃sayan/230/2017/Section53.pdf,", "year": 2006 }, { "authors": [ "Sebastian Junges" ], "title": "Safety-constrained reinforcement learning for mdps. International Conference on Tools and Algorithms for the Construction and Analysis of Systems, 2016", "venue": null, "year": 2016 }, { "authors": [ "J.T. Kent K.V. Mardia", "J.M. Bibby" ], "title": "Multivariate analysis", "venue": null, "year": 1979 }, { "authors": [ "Eugene Kharitonov", "Aleksandr Vorobev", "Craig Macdonald", "Pavel Serdyukov", "Iadh Ounis" ], "title": "Sequential testing for early stopping of online experiments", "venue": "Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp", "year": 2015 }, { "authors": [ "Dmytro Korenkevych", "A. 
Rupam Mahmood", "Gautham Vasan", "James Bergstra" ], "title": "Autoregressive policies for continuous control deep reinforcement learning, 2019", "venue": null, "year": 2019 }, { "authors": [ "Ilya Kostrikov" ], "title": "Pytorch implementations of reinforcement learning algorithms", "venue": "https:// github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail,", "year": 2018 }, { "authors": [ "Dirk P. Kroese", "T. Brereton", "T. Taimre", "Z. Botev" ], "title": "Why the monte carlo method is so important today", "venue": "Wiley Interdisciplinary Reviews: Computational Statistics,", "year": 2014 }, { "authors": [ "L.I. Kuncheva" ], "title": "Change detection in streaming multivariate data using likelihood detectors", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2013 }, { "authors": [ "David L. Demets K.K. Gordon Lan" ], "title": "Interim analysis: The alpha spending function approach", "venue": "Statistics in Medicine,", "year": 1994 }, { "authors": [ "Erwan Lecarpentier", "Emmanuel Rachelson" ], "title": "Non-stationary markov decision processes: a worstcase approach using model-based reinforcement learning", "venue": "NeurIPS 2019,", "year": 2019 }, { "authors": [ "Kimin Lee" ], "title": "Context-aware dynamics model for generalization in model-based rl", "venue": null, "year": 2020 }, { "authors": [ "Chao-Wen Lu", "Marion R. Reynolds Jr." ], "title": "Cusum charts for monitoring an autocorrelated process", "venue": "Journal of Quality Technology,", "year": 2001 }, { "authors": [ "Robert Lund", "Xiaolan L. Wang", "Qi Qi Lu", "Jaxk Reeves", "Colin Gallagher", "Yang Feng" ], "title": "Changepoint Detection in Periodic and Autocorrelated Time Series", "venue": "Journal of Climate,", "year": 2007 }, { "authors": [ "Thodoris Lykouris", "Vahab Mirrokni", "Renato Paes Leme" ], "title": "Bandits with adversarial scaling", "venue": null, "year": 2020 }, { "authors": [ "Tatsuya Matsushima", "Hiroki Furuta", "Y. Matsuo", "Ofir Nachum", "Shixiang Gu" ], "title": "Deploymentefficient reinforcement learning via model-based offline optimization", "venue": "ArXiv, abs/2006.03647,", "year": 2020 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "Proceedings of Machine Learning Research,", "year": 2016 }, { "authors": [ "Subhojyoti Mukherjee", "Odalric-Ambrym Maillard" ], "title": "Distribution-dependent and time-uniform bounds for piecewise i.i.d bandits", "venue": "arXiv preprint arXiv:1905.13159,", "year": 2019 }, { "authors": [ "Susan A Murphy", "Mark J van der Laan", "James M Robins" ], "title": "Marginal mean models for dynamic regimes", "venue": "Journal of the American Statistical Association,", "year": 2001 }, { "authors": [ "Ofir Nachum", "Michael Ahn", "Hugo Ponte", "Shixiang (Shane) Gu", "Vikash Kumar" ], "title": "Multi-agent manipulation via locomotion using hierarchical sim2real", "venue": "PMLR, 100:110–121,", "year": 2020 }, { "authors": [ "Jerzy Neyman", "Egon Sharpe Pearson", "Karl Pearson" ], "title": "On the problem of the most efficient tests of statistical hypotheses", "venue": "Philosophical Transactions of the Royal Society of London,", "year": 1933 }, { "authors": [ "Peter C. O’Brien", "Thomas R. Fleming" ], "title": "A multiple testing procedure for clinical trials", "venue": "Biometrics, 35(3):549–556,", "year": 1979 }, { "authors": [ "E.S. 
Page" ], "title": "Continuous Inspection Schemes", "venue": "Biometrika, 41(1-2):100–115,", "year": 1954 }, { "authors": [ "Fabio Pardo", "Arash Tavakoli", "Vitaly Levdik", "Petar Kormushev" ], "title": "Time limits in reinforcement learning", "venue": "CoRR, abs/1712.00378,", "year": 2017 }, { "authors": [ "V.V. Petrov" ], "title": "Sums of Independent Random Variables", "venue": "Nauka,", "year": 1972 }, { "authors": [ "S.J. Pocock" ], "title": "Group sequential methods in the design and analysis of clinical trials", "venue": "Biometrika, 64 (2):191–199,", "year": 1977 }, { "authors": [ "R. Tyrrell Rockafellar", "Stanislav Uryasev" ], "title": "Optimization of conditional value-at-risk", "venue": "Journal of Risk,", "year": 2000 }, { "authors": [ "Thomas P. Ryan" ], "title": "Statistical Methods for Quality Improvement", "venue": "Wiley; 3rd Edition,", "year": 2011 }, { "authors": [ "E. Todorov", "T. Erez", "Y. Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "A. Wald" ], "title": "Sequential tests of statistical hypotheses", "venue": "Annals of Mathematical Statistics,", "year": 1945 }, { "authors": [ "James Westgard", "Torgny Groth", "T Aronsson", "C Verdier" ], "title": "Combined shewhart-cusum control chart for improved quality control in clinical chemistry", "venue": "Clinical chemistry, 23:1881–7,", "year": 1977 }, { "authors": [ "S.S. Wilks" ], "title": "The large-sample distribution of the likelihood ratio for testing composite hypotheses", "venue": "Ann. Math. Statist., 9(1):60–62,", "year": 1938 }, { "authors": [ "S.M. Williams" ], "title": "Quality control: an application of the cusum", "venue": "BMJ: British medical journal,", "year": 1992 }, { "authors": [ "Yao Xie", "David Siegmund" ], "title": "Weak change-point detection using temporal correlation", "venue": null, "year": 2011 }, { "authors": [ "E. Yashchin" ], "title": "On the analysis and design of cusum-shewhart control schemes", "venue": "IBM Journal of Research and Development,", "year": 1985 }, { "authors": [ "Tianhe Yu", "Garrett Thomas", "Lantao Yu", "Stefano Ermon", "James Zou", "Sergey Levine", "Chelsea Finn", "Tengyu Ma" ], "title": "Mopo: Model-based offline policy optimization, 2020", "venue": null, "year": 2020 }, { "authors": [ "Xingyu Zhao" ], "title": "Assessing the safety and reliability of autonomous vehicles from road testing", "venue": "ISSRE,", "year": 2019 }, { "authors": [ "Shiyu Zhou", "Nong Jin", "Jionghua (Judy) Jin" ], "title": "Cycle-based signal monitoring using a directionally variant multivariate control chart system", "venue": "IIE Transactions,", "year": 2005 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) algorithms have recently demonstrated impressive success in a variety of sequential decision-making problems (Badia et al., 2020; Hessel et al., 2018). While most RL works focus on the maximization of rewards under various conditions, a key issue in real-world RL tasks is the safety and reliability of the system (Dulac-Arnold et al., 2019; Chan et al., 2020), arising in both offline and online settings.\nIn offline settings, comparing the agent performance in different environments is important for generalization (e.g., in sim-to-real and transfer learning). The comparison may indicate the difficulty of the problem or help to select the right learning algorithms. Uncertainty estimation, which could help to address this challenge, is currently considered a hard problem in RL, in particular for modelfree methods (Yu et al., 2020).\nIn online settings, where a fixed, already-trained agent runs continuously, its performance may be affected (gradually or abruptly) by changes in the controlled system or its surroundings, or when reaching new states beyond the ones explored during the training. Some works address the robustness of the agent to such changes (Lecarpentier & Rachelson, 2019; Lee et al., 2020). However, noticing the changes may be equally important, as it allows us to fall back into manual control, send the agent to re-train, guide diagnosis, or even bring the agent to halt. This is particularly critical in real-world problems such as health care and autonomous driving (Zhao et al., 2019), where agents are required to be fixed and stable: interventions in the policy are often limited or forbidden (Matsushima et al., 2020), but any performance degradation should be detected as soon as possible.\nMany sequential statistical tests exist for detection of mean degradation in a random process. However, common methods (Page, 1954; Lan, 1994; Harel et al., 2014) assume independent and identically distributed (i.i.d) samples, while in RL the feedback from the environment is usually both highly correlated over consecutive time-steps, and varies over the life-time of the task (Korenkevych et al., 2019). This is demonstrated in Fig. 1.\nA possible solution is to apply statistical tests to large blocks of time-steps assumed to be i.i.d (Ditzler et al., 2015). Since many RL applications consist of repeating episodes, such a blocks-partition can be applied in a natural way (Colas et al., 2019). However, this approach requires complete episodes for change detection, while a faster response is often required. Furthermore, naively ap-\nplying a statistical test on the accumulated feedback (e.g., sum of rewards) from complete episodes, ignores the dependencies within the episodes and may miss vital information, leading to highly sub-optimal tests.\nIn this work, we devise an optimal test for detection of degradation of the rewards in an episodic RL task (or in any other episodic signal), based on the covariance structure within the episodes. Even in absence of the assumptions that guarantee its optimality, the test is still asymptotically superior to the common approach of comparing the mean (Colas et al., 2019). The test can detect changes and drifts in both the offline and the online settings defined above. In addition, for the online settings, we suggest a novel Bootstrap mechanism to control the False Alarm Rate (BFAR) through adjustment of test thresholds in sequential tests of episodic signals. 
The suggested procedures rely on the ability to estimate the correlations within the episodes, e.g., through a ”reference dataset” of episodes.
Since the test is applied directly to the rewards, it is model-free in the following senses: the underlying process is not assumed to be known, to be Markov, or to be observable at all (as opposed to other works, e.g., Banerjee et al. (2016)), and we require no knowledge about the process or the running policy. Furthermore, as the rewards are simply treated as episodic time-series, the test can be similarly applied to detect changes in any episodic signal.
We demonstrate the new procedures in the environments of Pendulum (OpenAI), HalfCheetah and Humanoid (MuJoCo; Todorov et al., 2012). BFAR is shown to successfully control the false alarm rate. The covariance-based degradation-test detects degradation faster and more often than three alternative tests – in certain cases by orders of magnitude.
Section 3 formulates the offline setup (individual tests) and the online setup (sequential tests). Section 4 introduces the model of an episodic signal and derives an optimal test for degradation in such a signal. Section 5 shows how to adjust the test for online settings and control the false alarm rate. Section 6 describes the experiments, Section 7 discusses related work and Section 8 summarizes.
To the best of our knowledge, we are the first to exploit the covariance between rewards in the post-training phase to test for changes in RL-based systems. The contributions of this paper are (i) a new framework for model-free statistical tests on episodic (non-i.i.d) data with trusted reference-episodes; (ii) an optimal test (under certain conditions) for degradation in episodic data; and (iii) a novel bootstrap mechanism that controls the false alarm rate of sequential tests on episodic data." }, { "heading": "2 PRELIMINARIES", "text": "Reinforcement learning and episodic framework: A Reinforcement Learning (RL) problem is usually modeled as a decision process, where a learning agent has to repeatedly make decisions that affect its future states and rewards. The process is often organized as a finite sequence of time-steps (an episode) that repeats multiple times in different variants, e.g., with different initial states. Common examples are board and video games (Brockman et al., 2016), as well as more realistic problems such as repeating drives in autonomous driving tasks.
Once the agent is fixed (which is the case in the scope of this work), the rewards of the decision process essentially reduce to a (decision-free) random process {Xt}nt=1, which can be defined by its PDF (f{Xt}nt=1 : Rn → [0,∞)). {Xt} usually depend on each other: even in the popular Markov Decision Process (Bellman, 1957), where the dependence goes only a single step back, long-term correlations may still carry information if the states are not observable by the agent.
Hypothesis tests: Consider a parametric probability function p(X|θ) describing a random process, and consider two different hypotheses H0, HA determining the value (simple hypothesis) or allowed values (complex hypothesis) of θ. When designing a test to decide between the hypotheses, the basic metrics for the test efficacy are its significance P(not reject H0|H0) = 1 − α and its power P(reject H0|HA) = β. A statistical hypothesis test with significance 1 − α and power β is said to be optimal if any test with at least as high significance 1 − α̃ ≥ 1 − α has no greater power β̃ ≤ β.
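As a concrete illustration of the episodic framework above (and of the ”reference dataset” estimation mentioned earlier), the following is a minimal NumPy sketch of how episode-level statistics can be estimated and tiled into the moments of an arbitrary-length signal. The toy data and all names are hypothetical, and the computed quantities anticipate the µ0, Σ0 formalized in Section 4.
```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 1000, 50
# Hypothetical reference dataset: N complete episodes of length T,
# rewards[i, t] = reward of episode i at time-step t.
rewards = rng.normal(size=(N, T)).cumsum(axis=1)  # toy correlated episodes

# Episodes are i.i.d, so per-episode moments are estimated across episodes.
mu0 = rewards.mean(axis=0)               # shape (T,)
sigma0 = np.cov(rewards, rowvar=False)   # shape (T, T)

def signal_moments(n, mu0, sigma0):
    # Moments of a signal of n steps: mu repeats mu0 periodically, and
    # Sigma holds Sigma0 blocks on its diagonal (last block cropped).
    T = len(mu0)
    K, tau0 = divmod(n, T)
    mu = np.concatenate([np.tile(mu0, K), mu0[:tau0]])
    sigma = np.zeros((n, n))
    for k in range(K):
        sigma[k * T:(k + 1) * T, k * T:(k + 1) * T] = sigma0
    if tau0 > 0:
        sigma[K * T:, K * T:] = sigma0[:tau0, :tau0]
    return mu, sigma

mu, sigma = signal_moments(120, mu0, sigma0)  # n = 120 ends mid-episode here
```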
The likelihood of the hypothesis H : θ ∈ Θ given data X is defined as L(H|X) = supθ∈Θ p(X|θ). According to the Neyman-Pearson Lemma (Neyman et al., 1933), a threshold-test on the likelihood ratio LR(H0, HA|X) = L(H0|X)/L(HA|X) is optimal. In a threshold-test, the threshold is uniquely determined by the desired significance level α, though it is often difficult to calculate for a given α.
In many practical applications, a hypothesis test is repeatedly applied as the data change or grow, a procedure known as a sequential test. If the null hypothesis H0 is true, and any individual hypothesis test falsely rejects H0 with some probability α, then the probability that at least one of the multiple tests will reject H0 is α0 > α, termed the family-wise type-I error (or false alarm rate when associated with frequency). See Appendix K for more details about hypothesis testing and sequential tests in particular.
Common approaches for sequential tests, such as CUSUM (Page, 1954; Ryan, 2011) and α-spending functions (Lan, 1994; Pocock, 1977), usually require strong assumptions such as independence or normality, as further discussed in Appendix F." }, { "heading": "3 PROBLEM SETUP", "text": "In this work, we consider two setups where detecting performance deterioration is important – sequential degradation-tests and individual degradation-tests. The individual tests, in addition to their importance in (offline) settings such as sim-to-real and transfer learning, are used in this work as building-blocks for the (online) sequential tests.
Both setups assume a fixed agent that was previously trained, and aim to detect whenever the agent performance begins to deteriorate, e.g., due to environment changes. The ability to notice such changes is essential in many real-world problems, as explained in Section 1.
Setup 1 (Individual degradation-test). We consider a fixed trained agent (the policy must be fixed but is not necessarily optimal), whose rewards in an episodic environment (with episodes of length T) were previously recorded for multiple episodes (the reference dataset). The agent runs in a new environment for n time-steps (both n < T and n ≥ T are valid). The goal is to decide whether the rewards in the new environment are smaller than in the original environment or not. If the new environment is identical, the probability of a false alarm must not exceed α.
Setup 2 (Sequential degradation-test). As in Setup 1, we consider a fixed trained agent with recorded reference data of multiple episodes. This time the agent keeps running in the same environment, and at a certain point in time its rewards begin to deteriorate, e.g., due to changes in the environment. The goal is to alert to the degradation as soon as possible. As long as the environment has not changed, the probability of a false alarm must not exceed α0 during a run of h̃ episodes.
Note that while in this work the setups focus on degradation, they can be easily modified to look for any change (as positive changes may also indicate the need for further training, for example)." }, { "heading": "4 OPTIMIZATION OF INDIVIDUAL DEGRADATION-TESTS", "text": "To tackle the problem of Setup 1, we first define the properties of an episodic signal and the general assumptions regarding its degradation.
Definition 4.1 (T-long episodic signal). Let n, T ∈ N, and write n = KT + τ0 (for non-negative integers K, τ0 with τ0 ≤ T).
A sequence of real-valued random variables {Xt}nt=1 is a T-long episodic signal if its joint probability density function can be written as
f_{\{X_t\}_{t=1}^{n}}(x_1, \dots, x_n) = \left[ \prod_{k=0}^{K-1} f_{\{X_t\}_{t=1}^{T}}(x_{kT+1}, \dots, x_{kT+T}) \right] \cdot f_{\{X_t\}_{t=1}^{\tau_0}}(x_{KT+1}, \dots, x_{KT+\tau_0}) \qquad (1)
(where an empty product is defined as 1). We further denote µ0 := E[(X1, ..., XT)⊤] ∈ RT and Σ0 := Cov((X1, ..., XT)⊤, (X1, ..., XT)) ∈ RT×T.
Note that the episodic signal consists of i.i.d episodes, but is not assumed to be independent or identically-distributed within the episodes. For simplicity we focus on one-dimensional episodic signals, although a generalization to multidimensional signals is straight-forward (see Appendix G).
In the analysis below we assume that both µ0 and Σ0 are known. In practice, this can be achieved either through detailed domain knowledge, or by estimation from the recorded reference dataset of Setup 1, assuming it satisfies Eq. (1). The estimation errors decrease as O(1/√N) with the number N of reference episodes, and are distributed according to the Central Limit Theorem (for means) and the Wishart distribution (K. V. Mardia & Bibby, 1979) (for covariance). While in this work we use up to N = 10000 reference episodes, Appendix E shows that N = 300 reference episodes are sufficient for reasonable results in HalfCheetah, for example. Note that correlations estimation has already been discussed in several other RL works (Alt et al., 2019).
Fig. 1 demonstrates the estimation of mean and covariance parameters for a trained agent in the environment of HalfCheetah, from a reference dataset of N = 10000 episodes. This also demonstrates the non-trivial correlations structure in the environment. According to Fig. 1b, the variance in the rewards varies and does not seem to reach stationarity within the scope of an episode. Fig. 1c shows the autocorrelation function ACF(t2 − t1) = corr(t1, t2) for different reference times t1. It is evident that the correlations last for hundreds of time-steps, and depend on the time t1 rather than merely on the time-difference t2 − t1. This means that the autocorrelation function is not expressive enough for the actual correlations structure.
Once the per-episode parameters µ0 ∈ RT, Σ0 ∈ RT×T are known, the expectations and covariance matrix of the whole signal, µ ∈ Rn and Σ ∈ Rn×n, can be derived directly: µ consists of periodic repetitions of µ0, and Σ consists of copies of Σ0 as T × T blocks along its diagonal. For both parameters, the last repetition is cropped if n is not an integer multiple of T. In other words, by taking advantage of the episodic setup, we can treat the temporal univariate non-i.i.d signal as a multivariate signal with easily-measured mean and covariance – even if the signal is measured in the middle of an episode.
The degradation in the signal X = {Xt}nt=1 is defined through the difference between two hypotheses. The null hypothesis H0 states that X is a T-long episodic signal with expectations µ0 ∈ RT and invertible covariance matrix Σ0 ∈ RT×T. Our first alternative hypothesis – uniform degradation – states that X is a T-long episodic signal with the same covariance Σ0 but smaller expectations: ∃ε ≥ ε0, ∀1 ≤ t ≤ T : (µ)t = (µ0)t − ε. Note that this hypothesis is complex (ε ≥ ε0), where ε0 can be tuned according to the minimal degradation magnitude of interest. In fact, Theorem 4.1 shows that the optimal corresponding test is independent of the choice of ε0.
Theorem 4.1 (Optimal test for uniform degradation).
Define the uniform-degradation weighted-mean s_unif(X) := W · X, where W := 1⊤ · Σ−1 ∈ Rn (and 1 is the all-1 vector). If the distribution of X is multivariate normal, then a threshold-test on s_unif is optimal.
Proof Sketch. According to the Neyman-Pearson Lemma (Neyman et al., 1933), a threshold-test on the likelihood-ratio (LR) between H0 and HA is optimal. Since HA is complex, the LR is a minimum over ε ∈ [ε0, ∞). Lemma 1 shows that ∃s0 : s_unif ≥ s0 ⇒ ε = ε0 and s_unif ≤ s0 ⇒ ε = ε(s_unif). The rest of the proof in Appendix J substitutes ε in both domains of s_unif to prove monotonicity of the LR in s_unif, from which we can conclude monotonicity in s_unif over all of R.
Following Theorem 4.1, we define the Uniform Degradation Test (UDT) to be a threshold-test on s_unif, i.e., ”declare a degradation if s_unif < κ” for a pre-defined κ.
Recall that optimality of a test is defined in Section 2 as having maximal power given a significance level. To achieve the significance α required in Setup 1, we apply a bootstrap mechanism that randomly samples episodes from the reference dataset and calculates the corresponding statistic (e.g., s_unif). This yields a bootstrap-estimate of the distribution of the statistic under H0, and the α-quantile of the estimated distribution is chosen as the test-threshold (κ = qα(s_unif|H0)).
Note that Theorem 4.1 relies on a multivariate normality assumption, which is often too strong for real-world applications. Theorem 4.2 guarantees that if we remove the normality assumption, it is still beneficial to look into the episodes instead of considering them as atomic blocks; that is, UDT is still asymptotically better than a test on the simple mean s_simp = (1/n)∑nt=1 Xt. Note that ”asymptotic” refers to the signal length n → ∞ (while T remains constant), and is translated in the sequential setup into a ”very long lookback-horizon h” (rather than very long running time).
Theorem 4.2 (Asymptotic power of UDT). Denote the length of the signal n = K · T, assume a uniform degradation of size ε/√K, and let two threshold-tests τ_simp on s_simp and UDT on s_unif be tuned to have significance α. Then
\lim_{K\to\infty} P\big(\tau_{\mathrm{simp}} \text{ rejects } H_0 \,\big|\, H_A\big) = \Phi\!\left(q^0_\alpha + \frac{\epsilon T}{\sqrt{\mathbf{1}^\top \Sigma_0 \mathbf{1}}}\right) \le \Phi\!\left(q^0_\alpha + \epsilon \sqrt{\mathbf{1}^\top \Sigma_0^{-1} \mathbf{1}}\right) = \lim_{K\to\infty} P\big(\mathrm{UDT} \text{ rejects } H_0 \,\big|\, H_A\big) \qquad (2)
where Φ is the CDF of the standard normal distribution, and q^0_α is its α-quantile.
Proof Sketch. Since the episodes of the signal are i.i.d, both s_simp and s_unif are asymptotically normal according to the Central Limit Theorem. The means and variances of both statistics are calculated in Lemma 2. Calculation of the variance of s_unif relies on writing s_unif as a sum of linear transformations of X (s_unif = ∑ni=1 (Σ−1)i X, where (Σ−1)i denotes the i-th row of Σ−1), and using the relation between Σ and Σ0. Appendix J shows that the inequality between the resulting powers is equivalent to a matrix-form of the means-inequality, and proves it by applying the Cauchy-Schwarz inequality to Σ0^{−1/2}1 and Σ0^{1/2}1.
Motivated by Theorem 4.2, we define G² := (1⊤Σ0^{−1}1)(1⊤Σ0 1)/T² to be the asymptotic power gain of UDT, quantify it, and show that it increases with the heterogeneity of the spectrum of Σ0.
Proposition 4.1 (Asymptotic power gain). G² = 1 + ∑Ti,j=1 w_ij (λi − λj)², where {λi}Ti=1 are the eigenvalues of Σ0 and {w_ij}Ti,j=1 are positive weights.
Proof Sketch. The result can be calculated after diagonalization of Σ0, and the weights {w_ij} correspond to the diagonalizing matrix.
See Appendix J for more details.
Our second alternative hypothesis – partial degradation – states that only a subset of m = p · T of the T time-steps within the episode is decreased by ε. This time, calculation of the optimal test-statistic through the LR yields a minimum over \binom{T}{m} possible subsets of decreased entries, which is computationally heavy. However, Theorem 4.3 shows that if we optimize for small values of ε (where optimality is indeed most valuable), a near-optimal statistic is s_part, which is the sum of the m = p · T smallest time-steps of (X − µ) after a Σ0^{−1}-transformation (see formal definition in Definition I.11). We define the Partial Degradation Test (PDT) to be a threshold-test on s_part with a parameter p.
Theorem 4.3 (Near-optimal test for partial degradation). Assume that X is multivariate normal, and let Pα be the maximal power of a hypothesis test with significance 1 − α. The power of a threshold-test on s_part with significance 1 − α is Pα − O(ε).
Proof Sketch. The expression that is minimized is a sum of two terms. One term is the sum of a subset of entries of Σ−1(X − µ), which is minimized by simply taking the lowest entries (up to the constraint of consistency across episodes, which requires us to sum the rewards per time-step in advance). In Appendix J we bound the second term and its effects on the modified statistic and on the modified test-threshold. We show that the resulting decrease of rejection probability is O(ε)." }, { "heading": "5 BOOTSTRAP FOR FALSE ALARM RATE CONTROL (BFAR)", "text": "For Setup 2, we suggest a sequential testing procedure: run an individual degradation-test every d steps (i.e., F = T/d test-points per episode), and return once any individual test declares a degradation. The tests can run according to Section 4, applied to the h most recent episodes. Multiple tests may be applied at every test-point, e.g., with varying test-statistics {s} or lookback-horizons {h}. This procedure, as implemented for the experiments of Section 6, is described in Fig. 3.
Setup 2 limits the probability of a false alarm to α0 in a run of h̃ episodes. To satisfy this condition, we set a uniform threshold κ on the p-values of the individual tests (i.e., declare once a test returns p-val < κ). The threshold is determined using a Bootstrap mechanism for False Alarm control (BFAR, Algorithm 1).
While bootstrap methods for false alarm control are quite popular, they often rely on the data samples being i.i.d (Kharitonov et al., 2015; Abhishek & Mannor, 2017), which is crucial for the re-sampling to reliably mimic the source of the signal. To address the non-i.i.d signal, we take advantage of the episodic framework and sample whole episodes. We then use the re-sampled sequence to simulate tests on sub-sequences where the first and last episodes may be incomplete, as described below. This allows simulation of sequences of various lengths (including a non-integer number of episodes) without assuming independence, normality, or identical distributions within the episodes.
Algorithm 1: BFAR: Bootstrap for FAR control
Input: reference dataset x ∈ R^{N×T}; statistic functions {s}; lookback-horizons {h1, ..., hmax}; test length h̃ ∈ N; B ∈ N; α0 ∈ (0, 1)
Output: test threshold for individual tests
Initialize P = (1, ..., 1) ∈ [0, 1]^B
for b in 1:B do
    Initialize Y ∈ R^{(hmax+h̃)T}
    for k in 0:(hmax+h̃−1) do
        Sample j uniformly from (1, ..., N)
        Y[kT+1 : kT+T] ← (x_{j,1}, ..., x_{j,T})
    for t in test-points do
        for h in lookback-horizons and s in statistic functions do
            y ← Y[t − hT : t]
            p ← individual_test_pvalue(y vs. x; s)
            P[b] ← min(P[b], p)
Return quantile_{α0}(P)
BFAR samples hmax + h̃ episodes (where hmax is the maximal lookback-horizon) from reference data of N episodes, to simulate sequential data Y. Then individual tests are simulated for every test-point along h̃ episodes, starting after hmax episodes. The minimal p-value determines whether a detection would occur in Y. The whole procedure repeats B times, creating a bootstrap estimate of the distribution of the minimal p-value along h̃ episodes. We choose the test threshold to be the α0-quantile of this distribution, such that a fraction α0 of the bootstrap simulations would raise a false alarm.
Note that the statistic for the tests is given to BFAR as an input, making its choice independent of BFAR. Additional details and time complexity are discussed in Appendices H and L." }, { "heading": "6 EXPERIMENTS", "text": "" }, { "heading": "6.1 METHODOLOGY", "text": "We run experiments in standard Reinforcement Learning environments as described below. For every environment, we use a PyTorch implementation (Kostrikov, 2018) of the standard A2C algorithm (Mnih et al., 2016) to train an agent. We let the trained agent run in the environment for N0 episodes and record its rewards, considered the trusted reference data. We then define several scenarios, and let the agent run for M × N episodes in each scenario (divided later into M = 100 blocks of N episodes). One scenario is named H0 and is identical to the reference run (up to initial-state randomization). The other scenarios are defined per environment, and present environmental changes expected to harm the agent's rewards. The agent is not trained to adapt to these changes, and the goal is to test how long it takes for a degradation-test to detect its degradation.
Individual degradation-tests of length n (Setup 1) are applied for every scenario over the first n time-steps of each block. Sequential degradation-tests (Setup 2) are applied sequentially on the episodes of each block. Since the agent is assumed to run continuously as the environment changes from H0 to an alternative scenario, each block is preceded by a random sample of H0 episodes, as demonstrated in Fig. 3.
BFAR adjusts the test thresholds to have a false alarm with probability α0 = 5% per h̃ = N episodes (where N is the data-block size). Two lookback-horizons h1, h2 are chosen for every environment. The rewards are downsampled by a factor d before applying the tests, intended to reduce the parameter estimation error and the running time of the tests. Table 1 summarizes the setup of the various environments.
The examined degradation-tests are a threshold-test on the simple Mean; CUSUM (Ryan, 2011); Hotelling (Hotelling, 1931); UDT and PDT (with p = 0.9) from Section 4; and a Mixed Degradation Test (MDT) that runs Mean, Hotelling and PDT in parallel – applying all three in every test-point (as permitted in Algorithm 1). Further implementation details are discussed in Appendix D." }, { "heading": "6.2 RESULTS", "text": "We run the tests in the environments of Pendulum (OpenAI), where the goal is to keep a one-dimensional pendulum pointing upwards; HalfCheetah (Todorov et al., 2012), where the goal is for a two-dimensional cheetah to run as fast as possible; and Humanoid, where the goal is for a person to walk without falling.
In each environment we define the scenario ccost-x, in which the control cost is increased to x% of its original value, in addition to scenarios of changed dynamics as specified in Appendix D.
In all the environments the rewards are clearly not independent, identically-distributed or normally-distributed (see Fig. 1 for example). Yet the false alarm rates are close to α0 = 5% per h̃ episodes in all the tests, as demonstrated in Fig. 4 for HalfCheetah, for example. These results for the H0 scenarios indicate that BFAR tunes the thresholds properly in spite of the complexity of the data. Note that BFAR never observed the data of scenario H0, but only the reference data.
In most of the non-H0 scenarios, our tests prove to be more powerful than the standard tests, often by extreme margins. For example, increased control cost in all the environments and additive noise in Pendulum are all 100%-detected by the suggested tests, usually within a few episodes (Fig. 4); whereas Mean, CUSUM and Hotelling have very poor detection rates. Mean did not detect degradation in Pendulum even after the control cost increased from 110% to 300%(!).
Note that we run the tests with two lookback-horizons in parallel, as allowed by BFAR. This proves useful: with +30% control cost in HalfCheetah, for example, the short lookback-horizon allows fast detection of degradation; but with merely +10%, the long horizon is necessary to notice the slight degradation over a large number of episodes. This is demonstrated in Fig. 11 in Appendix C.
The covariance-based tests reduce the weights of the highly-varying (and presumably noisier) time-steps. In HalfCheetah they turn out to be in the later parts of the episode. As a result, in certain scenarios, Mean (which ignores the different variances), CUSUM and Hotelling (which exploit them only in a heuristic way) do better in individual degradation-tests of 100 samples (out of T = 1000) than they do in one or even 10 full episodes. This does not occur in UDT and PDT. Essentially, we see that ignoring the noise variability leads to violation of the principle that more data are better.
In Pendulum the ratio between the variances of different steps may reach 5 orders of magnitude. This phenomenon increases the potential power of the covariance-based tests. For example, when the pole is shortened, negative changes in the highly-weighted time-steps are detected even when the mean of the whole signal increases. This feature allows us to detect slight changes in the environment before they develop into larger changes and cause damage.
On the other hand, a challenging situation arises when certain rewards decrease but the highly-weighted ones slightly increase (as with a longer pole in Pendulum), which strongly violates the assumptions of Section 4. UDT is doomed to falter in such scenarios. PDT proves somewhat robust to this phenomenon since it is capable of focusing on a subset of time-steps, as demonstrated with increased gravity in HalfCheetah (see Fig. 4). However, it cannot overcome the extreme weight differences in Pendulum. The one test that demonstrated robustness to all the tested scenarios, including modified Pendulum length and mass, is MDT. MDT combines Mean, Hotelling and PDT and does not fall far behind any of the three, in any of the scenarios. Hence, it presents excellent results in some scenarios and reasonable results in the others.
Detailed experimental results are available in Appendix C."
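To ground the sequential procedure evaluated in these experiments, the following is a minimal, heavily simplified sketch of a BFAR-style threshold calibration (in the spirit of Algorithm 1), together with the covariance-weighted UDT statistic it thresholds. It is an illustration of the mechanism rather than the implementation used above: a single statistic and a single lookback-horizon, naive bootstrap p-values, windows aligned to episode boundaries, toy data, and hypothetical names throughout.
```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 500, 20
ref = rng.normal(size=(N, T)).cumsum(axis=1)   # toy reference episodes

# UDT weights for one episode: Sigma0^{-1} 1, tiled over full episodes
# since Sigma is block-diagonal with Sigma0 blocks.
sigma0 = np.cov(ref, rowvar=False)
w0 = np.linalg.solve(sigma0, np.ones(T))

def udt_stat(y):
    # s_unif = (1^T Sigma^{-1}) y for a window spanning whole episodes.
    return np.tile(w0, len(y) // T) @ y

def p_value(y, stat, n_boot=100):
    # Naive bootstrap p-value under H0; a low statistic signals degradation.
    k = len(y) // T
    s_null = np.array([stat(ref[rng.integers(0, N, size=k)].ravel())
                       for _ in range(n_boot)])
    return np.mean(s_null <= stat(y))

def bfar_threshold(h, h_tilde, d, alpha0=0.05, B=50):
    # Algorithm 1, simplified to one statistic and one lookback-horizon h:
    # resample h + h_tilde episodes, record the minimal p-value over all
    # test-points, and return the alpha0-quantile over B bootstrap runs.
    min_p = np.ones(B)
    for b in range(B):
        Y = ref[rng.integers(0, N, size=h + h_tilde)].ravel()
        for t in range(h * T, (h + h_tilde) * T + 1, d):
            min_p[b] = min(min_p[b], p_value(Y[t - h * T: t], udt_stat))
    return np.quantile(min_p, alpha0)

kappa = bfar_threshold(h=4, h_tilde=10, d=T)   # one test-point per episode
```
At deployment time, an alarm would be raised at the first test-point whose p-value falls below the calibrated kappa, which is what keeps the family-wise false alarm rate near alpha0 per h_tilde episodes.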
}, { "heading": "7 RELATED WORK", "text": "Training in non-stationary environments has been widely researched, in particular in the frameworks of MAB (Mukherjee & Maillard, 2019; Garivier & Moulines, 2011; Besbes et al., 2014; Lykouris et al., 2020; Alatur et al., 2020; Gupta et al., 2019; Jun et al., 2018), model-based RL (Lecarpentier & Rachelson, 2019; Lee et al., 2020) and general multi-agent environments (Hernandez-Leal et al., 2019). Banerjee et al. (2016) explicitly detect changes in the environment and modify the policy accordingly, but assume that the environment is Markov, fully-observable, and its transition model is known – three assumptions that we avoid and that do not hold in many real-world problems. Safe exploration during training in RL was addressed by Garcia & Fernandez (2015); Chow et al. (2018); Junges et al. (2016); Cheng et al. (2019); Alshiekh (2017). Note that our work refers to changes beyond the scope of the training phase: it addresses the stage where the agent is fixed and required not to train further, in particular not in an online manner. Robust algorithms may prevent degradation in the first place, but when they fail – or when their assumptions are not met – a model-free monitor with minimal assumptions (as the one suggested in this work) is crucial.\nSequential tests were addressed by many over the years. Common approaches rely on strong assumptions such as samples independence (Page, 1954; Ryan, 2011) and normality (Pocock, 1977; O’Brien & Fleming, 1979). Generalizations exist for certain private cases (Lu & Jr., 2001; Xie & Siegmund, 2011), sometimes at cost of alternative assumptions such as known change-size (Lund et al., 2007). Samples independence is usually assumed also in recent works based on numeric approaches (Kharitonov et al., 2015; Abhishek & Mannor, 2017; Harel et al., 2014), and is often justified by consolidating many data samples (e.g., an episode) together as a single sample (Colas et al., 2019). Ditzler et al. (2015) wrote that ”change detection is typically carried out by inspecting i.i.d features extracted from the incoming data stream, e.g., the sample mean”. Certain works address monitoring of cyclic signals (Zhou et al., 2005), but to the best of our knowledge, we are the first to devise an optimal test for mean change in temporal non-i.i.d signals, and bootstrap-based false alarm control for such non-i.i.d signals.\nOur work can be seen in part as converting a univariate temporal episodic signal into a T - dimensional multivariate signal (with incomplete observations in mid-episodes). Many works addressed the problem of changepoint detection in multivariate variables, e.g., using histograms comparison (Boracchi et al., 2018), Hotelling statistic (Hotelling, 1931), and K-L distance (Kuncheva, 2013). Hotelling in particular looks for changed mean under unchanged covariance, similarly to our work. However, it is not derived optimally for mean change detection, and it also inherently ignores the sign of change. Our test is optimal under similar conditions to Hotelling test, is further proved to be robust to the normality assumption, and is shown to perform better in a variety of experiments. We are not aware of any other work that derives an optimal test to either the uniform degradation or the partial degradation complex hypotheses." 
}, { "heading": "8 SUMMARY", "text": "We introduce a novel approach that is optimal (under certain conditions) for detection of changes in episodic signals, exploiting the correlations structure as measured in a reference dataset. In environments of classic control (Pendulum) and MuJoCo (HalfCheetah, Humanoid), the suggested statistical tests detected degradation faster than alternatives, often by orders of magnitude. Certain conditions, such as combination of positive and negative changes in very heterogeneous signals, may cause instability in some of the suggested tests; however, this is shown to be solved by running the new test in parallel to standard tests – with only a small loss of test power.\nWe also introduce BFAR, a bootstrap mechanism that adjusts tests thresholds according to the desired false alarm rate in sequential tests. The mechanism empirically succeeded in providing valid thresholds for various tests in all the environments, in spite of the non-i.i.d data.\nThe suggested approach may contribute to development of more reliable RL-based systems. Future research may: consider different hypotheses, such as a permitted small degradation (instead of H0) or a mix of degradation and improvement (instead of HA); suggest additional stabilizing mechanisms for covariance-based tests; exploit other metrics than rewards for tests on model-based RL systems; and apply comparative tests of episodic signals beyond the scope of drifts detection." } ]
null
null
SP:355303ce20a95719616333e88b1732715e1a9ff7
[ "In this paper, a reweighting technique is proposed to suppress the impact of heteroscedastic label noise in regression model training. The objective function of the regression model training process is composed of a weighted combination of instance-wise training loss. The instance-wise weight is determined by the estimated noise variance based on prior information of the label generation process. The weighting formulation is inspired by the best possible estimator of noisy measurements reaching the Cramer-Rao bound. " ]
In model learning, when the training dataset on which the parameters are optimized and the testing dataset on which the model is evaluated are not sampled from identical distributions, we say that the datasets are misaligned. It is well-known that this misalignment can negatively impact model performance. A common source of misalignment is that the inputs are sampled from different distributions. Another source for this misalignment is that the label generating process used to create the training dataset is imperfect. In this work, we consider this setting and additionally assume that the label generating process is able to provide us with a quantity that captures the role of each label in the misalignment between the datasets, which we consider to be privileged information. Specifically, we consider the task of regression with labels corrupted by heteroscedastic noise and we assume that we have access to an estimate of the variance over each sample. We propose a general approach to include this privileged information in the loss function together with dataset statistics inferred from the mini-batch to mitigate the impact of the dataset misalignment. Subsequently, we propose a specific algorithm for the heteroscedastic regression case, called Batch Inverse-Variance weighting, which adapts inverse-variance weighting for linear regression to the case of neural network function approximation. We demonstrate that this approach achieves a significant improvement in network training performance compared to baselines when confronted with high, input-independent noise.
[]
[ { "authors": [ "Devansh Arpit", "Stanisław Jastrzebski", "Nicolas Ballas", "David Krueger", "Emmanuel Bengio", "Maxinder S. Kanwal", "Tegan Maharaj", "Asja Fischer", "Aaron Courville", "Yoshua Bengio" ], "title": "A closer look at memorization in deep networks", "venue": "URL http://arxiv.org/abs/1706.05394", "year": 2017 }, { "authors": [ "Kaidi Cao", "Yining Chen", "Junwei Lu", "Nikos Arechiga", "Adrien Gaidon", "Tengyu Ma" ], "title": "Heteroskedastic and imbalanced deep learning with adaptive regularization. arXiv:2006.15766 [cs, stat], Jun 2020", "venue": "URL http://arxiv.org/abs/2006.15766", "year": 2006 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. ics.uci.edu/ml", "year": 2017 }, { "authors": [ "Hadi Fanaee-T", "Joao Gama" ], "title": "Event labeling combining ensemble detectors and background knowledge", "venue": "Progress in Artificial Intelligence, pp", "year": 2013 }, { "authors": [ "G.R. Fisher" ], "title": "Maximum likelihood estimators with heteroscedastic errors", "venue": "Revue de l’Institut International de Statistique / Review of the International Statistical Institute,", "year": 1957 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48,", "year": 2016 }, { "authors": [ "Jacob Goldberger", "Ehud Ben-Reuven" ], "title": "Training deep neural-networks using a noise adaptation layer", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv:1801.01290 [cs, stat], Aug 2018", "venue": "URL http://arxiv.org/abs/1801.01290", "year": 2018 }, { "authors": [ "Bo Han", "Quanming Yao", "Xingrui Yu", "Gang Niu", "Miao Xu", "Weihua Hu", "Ivor Tsang", "Masashi Sugiyama" ], "title": "Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "[cs],", "year": 2015 }, { "authors": [ "Daniel Hernández-Lobato", "Viktoriia Sharmanska", "Kristian Kersting", "Christoph H. Lampert", "Novi Quadrianto" ], "title": "Mind the nuisance: Gaussian process classification using privileged noise. arXiv:1407.0179 [cs, stat", "venue": "Jul 2014. URL http://arxiv.org/abs/1407.0179", "year": 2014 }, { "authors": [ "Judy Hoffman", "Saurabh Gupta", "Trevor Darrell" ], "title": "Learning with side information through modality hallucination", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 826–834", "year": 2016 }, { "authors": [ "Simon Jenni", "Paolo Favaro" ], "title": "Deep bilevel learning. arXiv:1809.01465 [cs, stat], Sep 2018", "venue": "URL http://arxiv.org/abs/1809.01465", "year": 2018 }, { "authors": [ "Kenji Kawaguchi", "Leslie Pack Kaelbling", "Yoshua Bengio" ], "title": "Generalization in deep learning. 
arXiv:1710.05468 [cs, stat], Jul 2020", "venue": "URL http://arxiv.org/abs/1710.05468", "year": 2020 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What uncertainties do we need in bayesian deep learning for computer vision", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Jan Kremer", "Fei Sha", "Christian Igel" ], "title": "Robust active label correction", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "John Lambert", "Ozan Sener", "Silvio Savarese" ], "title": "Deep learning under privileged information using heteroscedastic dropout", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8886–8895", "year": 2018 }, { "authors": [ "Yuncheng Li", "Jianchao Yang", "Yale Song", "Liangliang Cao", "Jiebo Luo", "Li-Jia Li" ], "title": "Learning from noisy labels with distillation. arXiv:1703.02391 [cs, stat], Apr 2017", "venue": "URL http://arxiv. org/abs/1703.02391", "year": 2017 }, { "authors": [ "Tongliang Liu", "Dacheng Tao" ], "title": "Classification with noisy labels by importance reweighting", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2016 }, { "authors": [ "Z.P. Liu", "J.P. Castagna" ], "title": "Avoiding overfitting caused by noise using a uniform training mode", "venue": "In IJCNN’99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339),", "year": 1999 }, { "authors": [ "Yueming Lyu", "Ivor W. Tsang" ], "title": "Curriculum loss: Robust learning and generalization against label corruption. arXiv:1905.10045 [cs, stat], Feb 2020", "venue": "URL http://arxiv.org/abs/1905", "year": 1905 }, { "authors": [ "Xingjun Ma", "Yisen Wang", "Michael E. Houle", "Shuo Zhou", "Sarah M. Erfani", "Shu-Tao Xia", "Sudanthi Wijewickrema", "James Bailey" ], "title": "Dimensionality-driven learning with noisy labels. arXiv:1806.02612 [cs, stat], Jul 2018", "venue": "URL http://arxiv.org/abs/1806.02612", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K. Fidjeland", "Georg Ostrovski" ], "title": "Humanlevel control through deep reinforcement learning", "venue": "doi: 10.1038/nature14236", "year": 2015 }, { "authors": [ "Nagarajan Natarajan", "Inderjit S Dhillon", "Pradeep K Ravikumar", "Ambuj Tewari" ], "title": "Learning with noisy labels", "venue": "Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "D.A. Nix", "A.S. Weigend" ], "title": "Estimating the mean and variance of the target probability distribution", "venue": "In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN’94),", "year": 1994 }, { "authors": [ "Valentin Peretroukhin", "Brandon Wagstaff", "Jonathan Kelly" ], "title": "Deep probabilistic regression of elements of so(3) using quaternion averaging and uncertainty injection", "venue": "In CVPR Workshops,", "year": 2019 }, { "authors": [ "Stephan Rasp", "Michael S. Pritchard", "Pierre Gentine" ], "title": "Deep learning to represent subgrid processes in climate models", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "Cosma Rohilla Shalizi" ], "title": "Advanced Data Analysis from an Elementary Point of View, 2019", "venue": "URL https://www.stat.cmu.edu/ ̃cshalizi/ADAfaEPoV/ADAfaEPoV.pdf. 
(Accessed November", "year": 2020 }, { "authors": [ "Yanyao Shen", "Sujay Sanghavi" ], "title": "Learning with bad training data via iterative trimmed loss minimization. arXiv:1810.11874 [cs, stat], Feb 2019", "venue": "URL http://arxiv.org/abs/1810", "year": 2019 }, { "authors": [ "Jun Shu", "Qi Xie", "Lixuan Yi", "Qian Zhao", "Sanping Zhou", "Zongben Xu", "Deyu Meng" ], "title": "Meta-weightnet: Learning an explicit mapping for sample weighting. arXiv:1902.07379 [cs, stat], Sep 2019", "venue": "URL http://arxiv.org/abs/1902.07379", "year": 1902 }, { "authors": [ "Hwanjun Song", "Minseok Kim", "Dongmin Park", "Jae-Gil Lee" ], "title": "Learning from noisy labels with deep neural networks: A survey. arXiv:2007.08199 [cs, stat], Jul 2020", "venue": "URL http://arxiv", "year": 2007 }, { "authors": [ "Yang Song", "Zhifei Zhang" ], "title": "UTKFace, Large Scale Face Dataset, 2017", "venue": "URL https: //susanqq.github.io/UTKFace/. (Accessed June", "year": 2020 }, { "authors": [ "Ryutaro Tanno", "Ardavan Saeedi", "Swami Sankaranarayanan", "Daniel C. Alexander", "Nathan Silberman" ], "title": "Learning from noisy labels by regularized estimation of annotator confusion", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11236–11245", "year": 2019 }, { "authors": [ "Sebastian Thrun", "Wolfram Burgard", "Dieter Fox" ], "title": "Probabilistic Robotics. Intelligent robotics and autonomous agents", "venue": null, "year": 2006 }, { "authors": [ "Grant Van Horn", "Steve Branson", "Ryan Farrell", "Scott Haber", "Jessie Barry", "Panos Ipeirotis", "Pietro Perona", "Serge Belongie" ], "title": "Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection", "venue": "In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Vladimir Vapnik", "Rauf Izmailov" ], "title": "Learning using privileged information: Similarity control and knowledge transfer", "venue": "Journal of Machine Learning Research,", "year": 2015 }, { "authors": [ "Vladimir Vapnik", "Akshay Vashist" ], "title": "A new learning paradigm: Learning using privileged information", "venue": "Neural Networks,", "year": 2009 }, { "authors": [ "Yisen Wang", "Xingjun Ma", "Zaiyi Chen", "Yuan Luo", "Jinfeng Yi", "James Bailey" ], "title": "Symmetric cross entropy for robust learning with noisy labels. arXiv:1908.06112 [cs, stat], Aug 2019", "venue": "URL http://arxiv.org/abs/1908.06112", "year": 1908 }, { "authors": [ "Jun Xie", "Martin Kiefel", "Ming-Ting Sun", "Andreas Geiger" ], "title": "Semantic instance annotation of street scenes by 3d to 2d label transfer", "venue": "In Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Kun Yi", "Jianxin Wu" ], "title": "Probabilistic end-to-end noise correction for learning with noisy labels. arXiv:1903.07788 [cs], Mar 2019", "venue": "URL http://arxiv.org/abs/1903.07788", "year": 1903 }, { "authors": [ "Xingrui Yu", "Bo Han", "Jiangchao Yao", "Gang Niu", "Ivor W. Tsang", "Masashi Sugiyama" ], "title": "How does disagreement help generalization against label corruption? 
arXiv:1901.04215 [cs, stat], May 2019", "venue": "URL http://arxiv.org/abs/1901.04215", "year": 1901 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Under review as a conference paper at ICLR", "venue": "Understanding", "year": 2021 }, { "authors": [ "Adam optimizer (Kingma", "Ba" ], "title": "2017), a learning rate of 0.001 over 20 epochs. A batch size of 256 was used in order to ensure the best performance for the L2 method with noisy labels as well as to reduce the time necessary to the training process", "venue": "BIKE SHARING DATASET A.2.1 DATASET DESCRIPTION", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "In supervised learning, a central assumption is that the samples in the training dataset, used to train the model, and the samples in the testing set, used to evaluate the model, are sampled from identical distributions. Formally, for input x and label y, this assumption implies that ptrain(x, y) = ptest(x, y). This assumption can be decomposed as the product ptrain(x) · ptrain(y|x) = ptest(x) · ptest(y|x), which is true if two conditions are respected:\n1. The features in both datasets are sampled from the same distribution: ptrain(x) = ptest(x). When this is condition is violated, the training dataset is not representative.\n2. The labels in both datasets are sampled from the same conditional distribution: ptrain(y|x) = ptest(y|x). If this condition is violated, the training labels are noisy.\nIn practice, these assumptions are not always respected because gathering representative and precise data (including labels) can be arduous. In this case, the training and testing datasets are misaligned, and the performance of the deployed model may decrease since the training process did not actually optimize the model’s parameters based on the correct data (Arpit et al., 2017; Kawaguchi et al., 2020). One possible reason for misalignment is that there is some uncertainty about the labels in the training set as a result of the labeling process. Since our objective is to optimize the performance of the model compared to ground truth labels, we should consider that the labels in test dataset have no uncertainty, even though it may be impossible to collect such a dataset in practice. As a result, ptest(y|x) is sampled from a Dirac delta function, whereas ptrain(y|x) is not since it encapsulates the uncertainty in the labelling process, which leads to misalignment.\nIn this paper, we propose an algorithm for more efficient model training in the case where we have some information about the sample-wise misalignment. More specifically, we examine the case of\nregression with a deep network where labels are corrupted by heteroscedastic noise. We assume that we have access at least an estimate of the variance of the distribution of the noise that corrupted each label, information that is available if the labels are being generated by some stochastic process that is capable of also jointly reporting uncertainty. We examine how the knowledge of the estimate of the label noise variance can be used to mitigate the effect of the noise on the learning process of a deep neural network. We refer to our method as Batch Inverse-Variance (BIV), which, inspired by information theory, performs a re-weighting using both the the sample-wise variance but also statistics over the entire mini-batch. BIV shows a strong empirical advantage over L2 loss as well as over a simple filtering of the samples based on a threshold over the variance.1\nOur claimed contributions are threefold:\n1. A definition of the problem of learning with information quantifying the misalignment between datasets for the case of heteroscedastic noisy labels in regression.\n2. A general formulation of how to use the mini-batch to infer statistics of the dataset and incorporate this information in the loss function when training on neural networks.\n3. 
We present Batch Inverse-Variance as an instantiation of this framework and show its usefulness when applied to regression tasks with labels corrupted by heteroscedastic noise.
The outline of the paper is as follows: In section 2, we describe the task of regression with heteroscedastic noisy labels and its parallels with learning with privileged information, and we explain the challenges of applying classical heteroscedastic regression methods to stochastic gradient descent. In section 3, we position our work among the existing literature on learning with noisy labels. In section 4, we present a general framework to incorporate information regarding dataset misalignment in the mini-batch loss. We introduce BIV within this framework to tackle heteroscedastic regression. In section 5, we describe the setup for the experiments we made to validate the benefits of using BIV, and we present and analyze the results in section 6." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 HETEROSCEDASTIC NOISY LABELS IN REGRESSION", "text": "Here, we introduce how heteroscedastic noisy labels can be generated in regression and how the variance can be known. Consider an unlabelled dataset of inputs {xi}. To label it, one must apply to each input xi an instance of a label generator which should provide its true label yi. This label generator has access to some features zi correlated to xi. We define LGj : Z → R. When the labelling process is not exact and causes some noise on the label, the noisy label of xi provided by LGj is defined as ỹi,j. Noise on a measured or estimated value is often represented by a Gaussian distribution, based on the central limit theorem, as most noisy processes are the sum of several independent variables. Gaussian distributions are also mathematically practical, although they present some drawbacks as they can only represent unimodal and symmetric noise (Thrun et al., 2006). We model:
ỹi,j = yi + δyi,j with δyi,j ∼ N(0, σ2i,j) (1)
σ2i,j can be a function of zi and LGj, without any assumption on its dependence on one or the other. We finally assume that the label generator is able to provide an estimate of σ2i,j, and is therefore redefined as LGj : Z → R × R≥0. The training dataset is formed of triplets (xi, σ2i,j, ỹi,j), renamed (xk, σ2k, ỹk) for triplet k for simplicity. This setup describes many labelling processes, such as:
Crowd-sourced labelling: In the example case of age estimation from facial pictures, labellers Alice and Bob are given zi = xi, the picture of someone's face, and are asked to estimate the age of that person. Age is harder to estimate for older people (5 and 15 years of age are harder to confuse than 75 and 85), suggesting a correlation between σ2i,j and zi. But Alice and Bob may also have been given different instructions regarding the precision needed, inducing a correlation between σ2i,j and LGj. Finally, there may be some additional interactions between zi and LGj, as for example Alice may know Charlie, recognize him on the picture and label his age with lower uncertainty. Both labellers can provide an estimation of the uncertainty around their labels, for example with a plus-minus range which can be used as a proxy for standard deviation.
Labelling from sensor readings, population studies, or simulations: Imagine you want to build a dataset of pictures xi from a camera on the ground labelled with the position yi of a drone in the sky. To estimate the position of the drone at the moment the picture was taken, you could use state estimation algorithms based on the Bayes' filter (Thrun et al., 2006). These algorithms take as an input zi the measurements of the drone's sensors, and provide a full posterior distribution over the state, sometimes under a Gaussian assumption, as for example with Kalman filters. The uncertainty depends, among other factors, on the precision of the sensors, the observability of a given state, the precision of the dynamic model, and the time since sensor signals were received. Similarly, population-based studies such as polling or pharmaceutical trials have quantified uncertainties based on the quantity and quality of their samples. It is also possible to train on simulators, as in climate sciences (Rasp et al., 2018) or in epidemiology (Alsdurf et al., 2020), and some of them provide the uncertainty of their estimates based on the simulation procedure and the inclusion of real measurements in the model.
Using predictions from a neural network in complex neural architectures: In deep reinforcement learning for example, the critic network learns to predict a value from a state-action pair under the supervision of the heteroscedastic noisy output of a target network plus the reward (Mnih et al., 2015; Haarnoja et al., 2018). While the estimation of the uncertainty of the output of a neural network is not an easy task, it is an active field of research (Gal & Ghahramani, 2016; Peretroukhin et al., 2019). There, zi is the state-action pair at the next step, and LGj the target network being updated over time. The prediction is a mix of aleatoric and epistemic uncertainties as defined by Kendall & Gal (2017), which are dependent on both zi and LGj.
We could not find any current dataset that provides such label uncertainty information for regression. However, as it is precious information, we argue that it should actually be provided when possible. In classification, Xie et al. (2016; 2020) took a step in this direction by providing a “confidence” score from 0 to 255 for each pixel in the KITTI-360 dataset." }, { "heading": "2.2 LEARNING USING PRIVILEGED INFORMATION", "text": "Training with a dataset of triplets (xi, x∗i, yi), where x∗i is only given at training time and not available at test time, fits in the framework of learning using privileged information (LUPI), defined in Vapnik & Vashist (2009) and mainly applied to SVMs. In most works in this field, this privileged information makes the task easier on a sample-to-sample basis. For example, object detection can be improved by adding segmentation masks (Feyereisl et al., 2014) or depth images (Hoffman et al., 2016). Another interpretation of LUPI is to use privileged information as a vector for knowledge transfer between a teacher and a student (Vapnik & Izmailov, 2015). Hernández-Lobato et al. (2014) and Lambert et al. (2018) have made a link between privileged information and uncertainty, using it to evaluate the confidence of the model for a training sample.
More formally, at training time the neural network has access to triplets $(x_k, x^*_k, y_k)$, where $x_k$ is the input, $y_k$ its corresponding label, and $x^*_k$ the additional information with respect to this sample. The objective in LUPI is the same as in classical supervised learning: train the network parameters $\theta$ so that, at test time, and without access to the information $x^*_i$, the expected loss is minimized, i.e.:

$$\theta_{opt} = \arg\min_\theta \; \mathbb{E}_{\{x_i, y_i\} \in D_{test}}\left[\mathcal{L}(f(x_i, \theta), y_i)\right] \qquad (2)$$

where $\mathcal{L}(f(x_i, \theta), y_i)$ is the objective loss function based on the true label and on the network's prediction $f(x_i, \theta)$, for example the L2 distance in the task of regression.

In our work, we have $x^*_i = \sigma^2_i$. In contrast with the usual LUPI setting, $x^*_i$ does not help the task on a sample-to-sample basis, but instead informs about the role of each sample in the misalignment between the datasets due to the noise in the labelling process. The objective, however, is the same: use this privileged information during training to minimize the expected loss at test time." }, { "heading": "2.3 HETEROSCEDASTIC REGRESSION FOR LINEAR MODELS", "text": "The task of heteroscedastic linear regression, where the model is linear, is solved by minimizing a weighted mean squared error (WMSE) with inverse-variance weights, which is the optimal solution as per the Gauss-Markov theorem (Shalizi, 2019):

$$\sum_{i=0}^{n} \frac{(y_i - x_i \cdot \beta)^2}{\sigma^2_i} \qquad (3)$$

where $\beta$ is the vector of parameters used as linear coefficients. This is also the maximum likelihood estimate of $\beta$ (Fisher, 1957).

While the solution to such an optimization is known in closed form for linear regression, $\beta^* = (X^\top W X)^{-1} X^\top W y$ with $W$ the diagonal matrix of inverse variances, several problems appear when adapting it to gradient-based methods on neural networks such as stochastic gradient descent: (1) the learning rate in gradient-based methods impacts the optimization process in multiple ways and should remain controllable by the practitioner regardless of the amount of noise in the samples, to prevent very small or very large gradients from destabilizing the learning process; (2) similarly, near-ground-truth samples should not have a disproportionate effective learning rate with respect to the others, as they risk causing overfitting.

In our work, we propose a method to apply such weights to neural networks while addressing these issues."
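As a reference point for the closed-form solution above, here is a minimal NumPy sketch (our own illustration, not from the paper) of inverse-variance weighted least squares:

```python
import numpy as np

def weighted_least_squares(X, y, sigma2):
    """Closed-form WLS: beta* = (X^T W X)^{-1} X^T W y, with W = diag(1/sigma^2)."""
    w = 1.0 / sigma2              # inverse-variance weights
    XtW = X.T * w                 # broadcasts w across the columns of X^T
    return np.linalg.solve(XtW @ X, XtW @ y)

# toy check: recover beta = [1.5, -2.0] from heteroscedastic noisy labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
sigma2 = rng.uniform(0.01, 4.0, size=200)
y = X @ np.array([1.5, -2.0]) + rng.normal(0.0, np.sqrt(sigma2))
print(weighted_least_squares(X, y, sigma2))
```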
" }, { "heading": "3 RELATED WORK", "text": "Noise on labels amounts to a loss of information. When the noise is significant enough, it leads to overfitting and lower model performance (Liu & Castagna, 1999; Zhang et al., 2017). This effect is more prevalent in small-data settings (Van Horn et al., 2015). Four strategies exist in the literature to tackle this problem: detection, correction, robustness, and re-weighting. Detection consists of identifying noisy labels and ignoring them in the learning process. These methods are often based on the observation that neural networks first fit consistent, non-noisy data (Arpit et al., 2017), thus converging to a higher loss on the noisy samples (Reed et al., 2015; Shen & Sanghavi, 2019). Other methods use several neural networks to co-teach each other (Han et al., 2018; Yu et al., 2019) or dropout to estimate the consistency of the data (Reed et al., 2015). However, in the case of imbalanced training datasets, a higher loss can also be the signature of a non-noisy but rare sample; Cao et al. (2020) address this ambiguity by regularizing different regions of the input space differently. Correction strategies go further: once noise is detected, the noisy labels are changed to probability distributions. Such an operation requires a noise model. Goldberger & Ben-Reuven (2017); Kremer et al. (2018); Ma et al. (2018); Tanno et al. (2019); Yi & Wu (2019) learn it jointly with the parameters, assuming a correlation between the noise and the input, the labels, or both. Robust loss functions are less sensitive to noise. Liu & Castagna (1999) proposed to avoid overfitting due to noise by ignoring samples during training when the prediction error is reasonable. Natarajan et al. (2013) compute the loss assuming knowledge of example-independent mislabelling probabilities in binary classification, and then optimize these hyperparameters with cross-validation. More recent works are based on reverse cross-entropy (Wang et al., 2019) or curriculum loss (Lyu & Tsang, 2020). Others leverage a distillate of the information gathered from a subset of clean labels to guide training with noisy labels (Li et al., 2017). Re-weighting the samples is another efficient method for mitigating noise in datasets. Liu & Tao (2016) estimate the effective label probabilities as well as noise rates for a given input and use these estimates to weigh the samples using importance sampling. Shu et al. (2019) go one step further by learning the weighting function through a meta-learning method. Jenni & Favaro (2018) control overfitting by adjusting sample weights in the training and validation mini-batches, increasing robustness to overfitting on noisy labels.

While most works that address noisy labels consider classification tasks (Song et al., 2020), only some of these strategies can be generalized to regression. Heteroscedastic regression occurs when each label's noise is sampled from a different distribution. Nix & Weigend (1994) tackle this problem in neural networks by jointly training a variance estimator based on the maximum likelihood of an underlying Gaussian model. Kendall & Gal (2017) use the same idea to estimate the aleatoric (input-dependent) uncertainty of the network's prediction, while using dropout as a Bayesian approximation for the epistemic uncertainty (due to the learning process), as in (Gal & Ghahramani, 2016).

Our method tackles heteroscedastic regression in neural networks using a re-weighting approach. The main distinction between our work and most of the related literature is that, while we do not require the noise variance to be a function of the input or of the label, we do assume that we have access to the noise variance, or at least an estimate of it. In addition, we do not seek to regress the variance of the model's prediction. This is significant compared to previous works in both regression and classification, as it changes the loss function and removes the need for a regularizer on the variance prediction." }, { "heading": "4 INCORPORATING PRIVILEGED INFORMATION IN THE LOSS FUNCTION", "text": "In this section, we first present a general operator to incorporate privileged information and inferred dataset statistics into the loss computed at the mini-batch level. Then, we describe our solution, BIV, an instance of this operator for heteroscedastic regression. Finally, we introduce a more basic filtering function which we use as a baseline in our experiments."
}, { "heading": "4.1 GENERAL OPERATOR FOR TRAINING ON THE MINI-BATCH LEVEL", "text": "The value of privileged information about misalignment between the training and testing datasets is often higher when combined with statistics about the datasets. For example, in the case of noisy labels, the uncertainty on each label is only meaningful relative to the information carried by the other samples, similarly to (3).

We propose to incorporate both the privileged information and inferred dataset statistics during the training of neural networks at the level of the mini-batch, as opposed to individual samples. There are two main advantages in doing so. First, if the mini-batch samples are independently and identically sampled from the dataset, they can be used to infer some statistics of the whole dataset; working at the mini-batch level allows this without any pre-processing step over the whole dataset, which is important for tasks such as continuous learning. Second, by focusing on the single step of computing the loss, this approach does not interfere with the variety of other methods used to optimize the learning process, such as regularization, batch normalization, or annealing learning rates.

In general, this approach can be expressed by defining an operator $G$ applied to the objective loss function. For a mini-batch $D_i$ of $K$ sample triplets, we define the loss as:

$$L_{batch}(D_i, \theta) = G\left(x_{1:K}, x^*_{1:K}, y_{1:K}, \mathcal{L}(\cdot, \cdot)\right) \qquad (4)$$

Note that without any privileged information, the operator $G$ is usually the unweighted average of the loss computed over each sample of the batch, which is equivalent to empirical risk minimization." }, { "heading": "4.2 BATCH INVERSE-VARIANCE WEIGHTING FOR HETEROSCEDASTIC NOISY LABELS", "text": "To tackle heteroscedastic regression in neural networks, we follow the intuition of equation (3) and instantiate $G$ as a weighted average with weights $w_k = 1/(\sigma^2_k + \epsilon)$. We introduce the Batch Inverse-Variance (BIV) loss function:

$$L_{batch}(D_i, \theta) = \left( \sum_{k=1}^{K} \frac{1}{\sigma^2_k + \epsilon} \right)^{-1} \sum_{k=1}^{K} \frac{\mathcal{L}(f(x_k, \theta), \tilde{y}_k)}{\sigma^2_k + \epsilon} \qquad (5)$$

Here, the inverse of the sum plays two major roles. It is a normalization constant for the mini-batch, keeping the effective learning rate consistent; consistency is verified since, when $\sigma^2_k$ is identical for every sample, this formulation reduces to empirical risk minimization.

The hyper-parameter $\epsilon$ is effectively a lower bound on the variance. It allows us to incorporate samples with ground-truth labels without completely ignoring the other samples. The choice of $\epsilon$ is a trade-off between regulating the weights of near-ground-truth labels and using BIV at its full capacity. We found that it can be set between 0.01 and 0.1, and we use $\epsilon = 0.1$. More details on $\epsilon$ can be found in Appendix B.1.

These two elements, added to the advantages of computing the loss at the mini-batch level, allow us to overcome the challenges of inverse-variance weighting applied to gradient descent described in Section 2.3. A minimal implementation sketch is given below."
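The following PyTorch-style sketch (ours, not the authors' released code) shows one way to implement the BIV loss (5) inside a training step, with an L2 objective loss:

```python
import torch

def biv_loss(pred, y_tilde, sigma2, eps=0.1):
    """Batch Inverse-Variance loss, equation (5).

    pred, y_tilde, sigma2: 1-D tensors over the mini-batch.
    eps: lower bound on the variance (trade-off discussed in Appendix B.1).
    """
    w = 1.0 / (sigma2 + eps)                  # inverse-variance weights
    per_sample = (pred - y_tilde) ** 2        # objective loss L, here L2
    return (w * per_sample).sum() / w.sum()   # normalized weighted average

# sanity check: with equal variances, BIV reduces to the plain mean (ERM)
pred = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([1.5, 2.0, 2.0])
same = torch.full((3,), 0.7)
assert torch.isclose(biv_loss(pred, y, same), ((pred - y) ** 2).mean())
```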
}, { "heading": "4.3 CUTOFF: FILTERING HETEROSCEDASTIC NOISY LABELS", "text": "As we do not assume any correlation between $x_k$ and $\sigma^2_k$, most correction and re-weighting algorithms presented in Section 3 are not applicable, and most robust loss functions are specifically designed for classification problems. We therefore compare BIV to a detection-and-rejection strategy. In heteroscedastic regression, an important difference from classification with noisy labels is that all labels are corrupted, albeit not at the same scale. Defining which labels to ignore is therefore a matter of putting a threshold on the variance. As we have access to this information, strategies such as the ones used in Section 3 are not necessary. Instead, we simply use an inverse Heaviside step function as a weight in the loss function:

$$w_k = \mathbb{1}_{\sigma^2_k < C} \qquad (6)$$

where the threshold $C$ is a hyper-parameter. Similarly to equation (5), we normalize the loss in the mini-batch by the sum of the weights, equal here to the number of samples considered valid. As this filtering is equivalent to cutting off part of the dataset, we refer to this method as 'Cutoff' and consider it a relevant baseline to compare BIV against." }, { "heading": "5 EXPERIMENTAL SETUP", "text": "To test the validity of the BIV loss (5), we compared its performance with the classical L2 loss as well as the cutoff loss (6) on two datasets. We refer to the ground-truth (GT) baseline as training with L2 on noiseless data, which is the best performance that could be achieved on a dataset.

Unfortunately, we did not find any existing regression dataset where label uncertainty is associated with the samples. We therefore used two UCI datasets (Dua & Graff, 2017) and artificially added noise to them. UTKFace Aligned&Cropped (Song & Zhang, 2017) (UTKF) is a dataset for image-based age prediction. In the Bike Sharing dataset (Fanaee-T & Gama, 2013), the task is to predict the number of bicycles rented in Washington D.C. from structured data containing the date, hour, and weather conditions. For UTKF, a convolutional neural network was used to predict the age, while a simple multi-layer perceptron was used for Bike Sharing. More details about the datasets and models can be found in Appendix A." }, { "heading": "5.1 NOISE GENERATION", "text": "To produce datasets $\{x_k, \sigma^2_k, \tilde{y}_k\}$ with noise as described in Section 2.1, we use a two-step process which does not assume any correlation between the noise and the input:

1. the noise variance $\sigma^2_k$ is sampled from a distribution $P(\sigma^2)$ which only has support for $\sigma^2 \geq 0$;
2. $\tilde{y}_k$ is sampled from a normal distribution $\mathcal{N}(y_k, \sigma^2_k)$.

$P(\sigma^2)$ has a strong effect on the impact of BIV or Cutoff. For example, if it is a Dirac delta and all variances are the same, BIV reduces to L2. We evaluate BIV on three different types of $P(\sigma^2)$. The average noise variance $\mu_P$ was chosen empirically so that the lowest test loss achieved by L2 is doubled compared to the ground-truth label case: $\mu_P = 2000$ for UTKF and $20000$ for Bike Sharing.

Uniform distribution. The uniform distribution is characterized by its bounds $a, b$. Its expected value $\mu_P$ is the average of its bounds, and its variance is $V = (b-a)^2/12$. As $P$ only has support for $\sigma^2 \geq 0$, the maximum variance $V_{max}$ is attained when $a = 0$ and $b = 2\mu_P$. While such a distribution is not realistic, it is conceptually simple and allows for interesting insights.

"Binary uniform". A more realistic distribution, which can also help us understand the effects of BIV, is when the data is generated from two regimes: low and high noise. We call "binary uniform" a mixture of two uniform distributions balanced by a parameter $p$. With probability $p$, the label is in a low-noise regime: $\sigma^2 \sim U(0, 1)$, with expected value $\mu_l = 0.5$. With probability $1-p$, the label is in a high-noise regime: $\sigma^2 \sim U(a_h, b_h)$, where $a_h$ and $b_h$ are chosen such that the mean is $\mu_h$ and the variance of the high-noise distribution is $V_h \in [0, V_{max}]$. Note that, for a given expected value $\mu_P$ of the whole distribution, $\mu_h$ changes depending on $p$: $\mu_h = (\mu_P - p\mu_l)/(1-p)$. Therefore, the high-noise expected value $\mu_h$ of the noise variance $\sigma^2$ in a distribution with high $p$ is larger than the one for a low $p$, at the same value of $\mu_P$. In other words, a higher $p$ means more chance of being in the low-noise regime, but the high-noise regime is noisier. A sketch of this two-step generation is given below.
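As an illustration, here is a minimal sketch (our own; parameter values are arbitrary) of the two-step generation above for the uniform and binary-uniform cases:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_uniform_var(n, mu_p):
    # uniform P(sigma^2) at maximum variance: a = 0, b = 2 * mu_p
    return rng.uniform(0.0, 2.0 * mu_p, size=n)

def sample_binary_uniform_var(n, p, mu_p, v_h, mu_l=0.5):
    mu_h = (mu_p - p * mu_l) / (1.0 - p)       # high-noise mean for target mu_p
    half = np.sqrt(3.0 * v_h)                  # U(mu_h - half, mu_h + half) has variance v_h
    low = rng.uniform(0.0, 1.0, size=n)
    high = rng.uniform(mu_h - half, mu_h + half, size=n)
    return np.where(rng.random(n) < p, low, high)

def add_noise(y, sigma2):
    # step 2: draw the noisy label from N(y_k, sigma^2_k)
    return y + rng.normal(0.0, np.sqrt(sigma2))

y = rng.normal(size=1000)
sigma2 = sample_binary_uniform_var(1000, p=0.5, mu_p=2000.0, v_h=0.0)
y_tilde = add_noise(y, sigma2)
```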
Gamma distributions. While the mixture of two uniform distributions ensures support in the low-noise region, it is not continuous. We therefore also use a Gamma distribution with shape parameter $\alpha$; to control the expected value $\mu_P$, we adjust $\beta = \alpha/\mu_P$. For a fixed expected value $\mu_P$, lower $\alpha$ and $\beta$ mean stronger support for low-variance noise, but the tail of the distribution spreads further on the high-noise side. In other words, a lower $\alpha$ means more chance of low-noise samples, but when samples are noisy, their variance is higher. When $\alpha \leq 1$, the highest support is at $\sigma^2 = 0$." }, { "heading": "5.2 EVALUATING THE PERFORMANCE OF THE MODEL", "text": "The objective of BIV is to improve the performance at predicting the true label $y_i$, as in equation (2). While a non-noisy test dataset may not be available in a real application, our aim here is to determine whether BIV performs better than L2, and we therefore measure the performance of the network on ground-truth test data." }, { "heading": "6 EXPERIMENTAL RESULTS AND ANALYSIS", "text": "" }, { "heading": "6.1 FOR L2 LOSS, MEAN VARIANCE IS ALL THAT MATTERS", "text": "Before looking at the results for BIV, we share an interesting insight for the L2 loss with noisy labels which simplifies the analysis of the results. Under the unbiased, heteroscedastic Gaussian-based noise model presented in Section 5.1, the only parameter of the distribution $P(\sigma^2)$ that matters for the performance of the L2 loss is its mean $\mu_P$, which is also the variance of the overall noise distribution. Independently of the distribution type and of the values of $V$, $p$ and $V_h$, or $\alpha$, as long as $\mu_P$ is equal, the L2-trained neural networks reached the same performance. This is shown in Figure 1. For the sake of clarity, all the curves in this section were smoothed using a moving average with a 35-step window, and the shaded areas represent the standard deviation over 10 runs." }, { "heading": "6.2 HIGH AND LOW VARIANCE NOISE REGIMES: BIV ACTS AS A FILTER", "text": "With the binary uniform distribution, the noise is split into two regimes with high or low variances. In this case, our results show that BIV performs better than L2, and in fact similarly to the cutoff loss presented in Section 4.3 with threshold $C = 1$.

Figure 2 compares the test losses on UTKF for different values of $p$ with $V_h = 0$. While the L2 curves are strongly impacted by the noise, both the BIV and cutoff losses lead to better, and very similar, performance for a given $p$. When $p = 0.3$, there is not much information that can be used, and the performance is still impacted by the noise. When $p = 0.9$, there is nearly as much near-ground-truth data as in the noiseless case, and the performance is comparable.

In the case of binary uniform distributions, BIV acts as a filter, cutting off labels which are too noisy to contain any useful information." }, { "heading": "6.3 CONTINUOUS, DECREASING NOISE VARIANCE DISTRIBUTION: THE ADVANTAGE OF BIV", "text": "On Gamma distributions, there is no clear threshold to define which information to use.
When $\alpha \leq 1$, BIV shows a strong advantage compared to both L2 and cutoff. Figure 3 shows the results on both the Bike Sharing and the UTKF datasets for Gamma distributions with $\alpha = 1$.

In both datasets, when the cutoff parameter $C$ is too low ($\mu_P/4$ and $\mu_P/2$), there is not enough data to train the model. When $C$ is too high ($2\mu_P$ and $5\mu_P$), the data is too noisy and the curves come close to the original L2 loss. Even in the best case ($C = \mu_P$), cutoff is not better than BIV. This is because, in contrast to cutoff, BIV is able to extract some information from noisier samples while avoiding overfitting on them.

In Table 1, we present the lowest value of the test loss curves for the different methods with other $\alpha$ parameters for the Gamma distributions over both datasets. BIV consistently leads to the best performance, regardless of $P(\sigma^2)$. The plots showing these runs can be found in Appendix B.3.1. BIV is less sensitive to hyperparameters than cutoff, as it avoids the need to choose the right cutoff parameter for each distribution $P(\sigma^2)$. BIV's own hyperparameter $\epsilon$ can be set between 0.01 and 0.1 for any dataset with a normalized output, as shown in Appendix B.1, and be ready to use. As $\epsilon$ can be seen as a minimal variance, scaling it for other label distributions is straightforward: it suffices to multiply it by the variance of the label distribution.

The benefit of BIV over L2 is clearly higher when $\alpha$ is lower. This is due to an increase in the support for low-variance noise in $P(\sigma^2)$. The more BIV can count on low-noise elements and differentiate them from high-noise ones, the better it can perform. This is consistent with the results of Section 6.2, and with other experiments we have run. For example, when $\alpha > 1$, the highest support of $P$ is not at $\sigma^2 = 0$, and BIV was less able to improve the performance compared to L2.

We also ran the experiment with uniform distributions: the performance is better when the variance $V$ is closer to $V_{max}$ (and $a$ to 0). But even when $V = V_{max}$, as there is less support at low noise variance than for Gamma distributions with $\alpha \leq 1$, the improvement is less important. In all the experiments we ran, BIV consistently performed better than L2 and at least as well as cutoff. More details on these results can be found in Appendix B.2." }, { "heading": "6.4 ROBUSTNESS", "text": "We identified two elements that could impact the performance of BIV: the size of the mini-batches and the accuracy of the noise variance estimation. We tested the robustness of BIV when these factors differ from those of our main experiments.

Size of the mini-batches: In equation (5), each weight is normalized based on the assumption that the distribution of noise variances in the mini-batch is representative of the one in the whole training dataset. While this is less true for smaller mini-batches, our results show that BIV still performs very well in these cases, as presented in Section B.4.1.

Noisy variances: Because the noise variance $\sigma^2_i$ is often estimated, the method needs to be robust to errors in the $\sigma^2_i$'s. A model for the noise of $\sigma^2_i$ can be a Gaussian whose standard deviation is proportional to $\sigma^2_i$. In this case, results show that the effect of moderate to high levels of noise on BIV is not significant.
More details can be seen in Section B.4.2." }, { "heading": "7 CONCLUSION", "text": "We have proposed a mini-batch-based approach to incorporate into the loss function privileged information which quantifies the contribution of each sample to the misalignment between the training and testing datasets. We described how such a setup can occur in the case of regression with heteroscedastic noisy labels. To tackle this problem, we introduced BIV, a method to apply inverse-variance weights in stochastic gradient descent. BIV is able to extract more information from the noisy dataset than the L2 loss or threshold-based filtering approaches, and consistently outperforms them on both structured and unstructured datasets. BIV can improve the performance of supervised learning in many heteroscedastic regression scenarios where the label is generated by a process such as crowd-labelling, sensor-based state estimation, simulation, or complex neural architectures. More generally, the framework of including privileged information quantifying dataset misalignment in the loss function at the mini-batch level could be used to account for other types of misalignment, such as under-represented hidden features or correlations between samples." }, { "heading": "A APPENDIX - DATASETS AND NEURAL NETWORKS", "text": "" }, { "heading": "A.1 UTKFACE", "text": "" }, { "heading": "A.1.1 DATASET DESCRIPTION", "text": "The UTKFace Aligned&Cropped dataset (Song & Zhang, 2017) consists of 20,000 pictures of faces labelled with their age, ranging from 0 to 116 years. We use it in a regression setting: the network must predict the age of a person given a photo of their face. Unless described otherwise, 16,000 images were used for training and 4,000 for testing.

Some images are in black and white and some are in color. The pixel dimension of each image is 200x200.

Both the pixels and the labels were normalized before training, so that their mean is 0 and standard deviation is 1 over the whole dataset. The noise variances were scaled correspondingly, as was the cutoff threshold when applicable." }, { "heading": "A.1.2 NEURAL NETWORK AND TRAINING HYPER-PARAMETERS", "text": "The model that we used was a Resnet-18 (He et al., 2015), not pretrained. It was trained with an Adam optimizer (Kingma & Ba, 2017) and a learning rate of 0.001 over 20 epochs. A batch size of 256 was used in order to ensure the best performance for the L2 method with noisy labels as well as to reduce the time necessary for training." }, { "heading": "A.2 BIKE SHARING DATASET", "text": "" }, { "heading": "A.2.1 DATASET DESCRIPTION", "text": "The Bike Sharing Dataset (Fanaee-T & Gama, 2013) consists of 17,379 samples of structured data. For nearly each hour of the years 2011 and 2012 in the city of Washington D.C., it contains the date, season, year, month, hour, day of the week, a boolean for holidays, a boolean for working days, the weather situation on a scale of 4 (1: clear and beautiful, 4: stormy or snowy), the temperature, the feeling temperature, the humidity, and the wind speed. It also contains the number of casual, registered, and total bike renters for each hour, as recorded by the Capital Bikeshare system.

We use it in a regression setting: the network must predict the total number of bike renters given the time and weather information. Unless described otherwise, 7,000 samples were used for training and 3,379 for testing.
We used fewer samples than available for training because, in the low-data regime, noise has a stronger effect on performance. The minimal test loss achieved with 7,000 noiseless samples was very close to the one with 14,000 samples, hinting that the additional samples did not carry much additional information.

We applied some pre-processing to make the data easier for the network to learn from. First, the date was normalized from a scale between day 1 and day 730 to a scale between 0 and $4\pi$, and we provided the network with the cosine and the sine of this number. This gives the same representation to the same days of the year while keeping the same distance between any two consecutive days, preserving the cyclic nature of the year. A similar idea was applied to hours, normalized from 0 to $2\pi$ instead of 0 to 24, with the cosine and sine given to the network (see the sketch at the end of this appendix). The day of the week, being a category, was given as a one-hot vector of dimension 7. We also removed the season and the month as they were redundant with the date.

Overall, the number of features was 19:

1 Year
2-3 Date (sine and cosine)
4-5 Hour (sine and cosine)
6-12 Day of the week (one-hot vector)
13 Holiday boolean
14 Working day boolean
15 Weather situation
16 Temperature
17 Felt temperature
18 Humidity
19 Wind speed

We observed that the network learned significantly faster and better when provided with this format for the data.

Both the features and the labels were normalized before training, so that their mean is 0 and standard deviation is 1 over the whole dataset. The noise variances were scaled correspondingly, as was the cutoff threshold when applicable." }, { "heading": "A.2.2 NEURAL NETWORK AND TRAINING HYPER-PARAMETERS", "text": "The model that we used was a multi-layer perceptron with 4 hidden layers, the first one with 100 neurons, then 50, 20, and 10. The activation function was ReLU. We did not use any additional technique such as batch normalization, as it did not improve the performance.

The model was trained over 100 epochs on mini-batches of size 256, for reasons similar to those explained in Section A.1.2, using the Adam optimizer with learning rate 0.001."
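As referenced above, here is a minimal sketch (ours) of the cyclical date/hour encoding used to build the time-related features; the raw column names are assumptions made for illustration:

```python
import numpy as np
import pandas as pd

def encode_time_features(df):
    """Cyclical encoding of date and hour, plus one-hot day of week.

    Assumes columns 'dayofyear_730' (1..730 over the two years),
    'hour' (0..23) and 'weekday' (0..6); the names are hypothetical.
    """
    out = pd.DataFrame()
    date = df["dayofyear_730"] / 730.0 * 4.0 * np.pi   # 0..4*pi over 2011-2012
    out["date_cos"], out["date_sin"] = np.cos(date), np.sin(date)
    hour = df["hour"] / 24.0 * 2.0 * np.pi             # 0..2*pi over one day
    out["hour_cos"], out["hour_sin"] = np.cos(hour), np.sin(hour)
    onehot = pd.get_dummies(df["weekday"], prefix="dow")  # one column per weekday present (7 on the full data)
    return pd.concat([out, onehot], axis=1)

# toy usage
df = pd.DataFrame({"dayofyear_730": [1, 365, 730], "hour": [0, 12, 23], "weekday": [0, 3, 6]})
print(encode_time_features(df))
```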
}, { "heading": "B APPENDIX - ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "B.1 THE INFLUENCE OF $\epsilon$", "text": "In this section, we provide experimental results justifying our recommended range of $[10^{-2}, 10^{-1}]$ for $\epsilon$. In the experiments presented in this article, we used $\epsilon = 10^{-1}$. $\epsilon$ must be chosen as part of a trade-off between mitigating the effect of BIV on near-ground-truth labels and keeping its effect on noisy labels. To interpret these results, it is important to remember that $\epsilon$ is added to the variance used in the loss function. When the labels are normalized, which is the case in our work, the noise and its variance for each label are normalized too; the value we recommend for $\epsilon$ should therefore be valid for any normalized set of labels." }, { "heading": "B.1.1 $\epsilon$ FOR BIV WITH NEAR GROUND-TRUTH LABELS", "text": "One of the main problems BIV induces is in the presence of ground-truth or near-ground-truth (NGT) labels. In this case, a near-zero variance can induce a very strong weight, effectively reducing the mini-batch to this single sample and ignoring the other, potentially valid, samples. Here, the hyperparameter $\epsilon$ is key, as it sets a maximal weight and has a stronger relative effect on the NGT labels than on noisier ones.

We tested BIV on the UTKF dataset with a uniform distribution of variance from $a = 0$ to $b = 1$, which is effectively very little noise for the L2 loss but, by allowing NGT labels, can already showcase the influence of $\epsilon$ on the BIV performance. The resulting curves can be seen in Figure 4.

It is clear that a very small $\epsilon$, such as $10^{-6}$ or $10^{-5}$, leads to a significant loss of performance. However, above $10^{-2}$, the performance is as good as the L2 loss. The latter is not surprising when $\epsilon$ is as high as $10^{2}$: in that case the weights are nearly identical and BIV is effectively the same as L2." }, { "heading": "B.1.2 A HIGH $\epsilon$ REDUCES THE ADVANTAGES OF BIV ON NOISY LABELS", "text": "As discussed in the previous section, a higher value of $\epsilon$ makes the weights more similar across samples, and therefore reduces the effect of BIV. We tested BIV with different values of $\epsilon$ on a binary distribution with $p = 0.5$ and $\mu_P = 2000$ on UTKF. In this setup, BIV should have enough data to train correctly when filtering out the labels in the noisy regime, as shown in Section 6.2. The results are shown in Figure 5.

As expected, when $\epsilon$ is very high, the results are very similar to L2. The first effects of BIV can be seen at $\epsilon = 10$, but down to $\epsilon = 1$ it is still not optimal; from $\epsilon = 0.1$, the algorithm filters the data correctly.

Considering the results of Sections B.1.1 and B.1.2, we recommend setting $\epsilon$ between $10^{-2}$ and $10^{-1}$ to balance regulating the importance of near-ground-truth labels against benefiting from the effect of BIV." }, { "heading": "B.2 BIV ON DIFFERENT DISTRIBUTIONS", "text": "" }, { "heading": "B.2.1 UNIFORM DISTRIBUTIONS", "text": "We present in Figure 6 the results of the experiment with uniform distributions in more detail.

As explained in Section 6.3, we observe that BIV and L2 have the same performance when $V = 0$ (and $a = b = \mu_P$). This is to be expected, as all samples have the exact same noise variance and thus the same weights. When $V = V_{max}$ ($a = 0$ and $b = 2\mu_P$), BIV has an advantage, as it is able to differentiate the samples and use the support of low-noise labels. When $V = V_{max}/2$ ($a = 0.293\mu_P$ and $b = 1.707\mu_P$), the difference between the samples is less important, and BIV only does a bit better than L2 on Bike Sharing. On UTKF, the training process has more variability and it is difficult to detect this effect.

In all cases, the benefit from using BIV is less important than with Gamma distributions with $\alpha \leq 1$, where the support on low-noise samples is higher.

In this setting, we also show, in Figure 7, that cutting off the noisy data is not a good strategy, as it always performs worse than L2." }, { "heading": "B.3 BIV ON GAMMA DISTRIBUTIONS", "text": "" }, { "heading": "B.3.1 $\alpha \leq 1$", "text": "As described in Section 6.3, the smaller $\alpha$, the better the performance of BIV and cutoffs. We show in Figures 8 and 9 the curves that led to the numbers in Table 1. BIV consistently outperforms the other methods. The performance of cutoff methods strongly depends on $C$, and the best value of $C$ is not the same for every distribution $P$." }, { "heading": "B.3.2 $\alpha > 1$", "text": "When $\alpha > 1$, the highest support of $P(\sigma^2)$ shifts towards $\mu_P$. This makes the samples less distinguishable for BIV and therefore reduces the benefits of using it. This is shown in Figure 10 on UTKF." }, { "heading": "B.4 ROBUSTNESS OF BIV", "text": "" }, { "heading": "B.4.1 SIZE OF THE MINI-BATCHES", "text": "In equation (5), the normalization constant is computed from the samples in the mini-batch.
If the distribution of the noise variances in the mini-batch is representative of the one in the whole training dataset, the relative weight given to each sample with respect to the others is the same as if the normalization were made over the whole dataset. The larger the mini-batch, the more representative it is. In our experiments, we used a size of 256, which is arguably high. We tested our algorithm with lower batch sizes, from 16 to 128, to see whether this was a critical factor in the performance.

The results are presented in Figure 11. On UTKF, the batch size does not make any significant difference in performance with respect to the number of samples seen, except for slightly steeper overfitting once the best loss has been achieved. On Bike Sharing, a smaller batch size makes training faster with respect to the number of samples, but with a higher minimal loss, for both L2 and BIV. While a larger batch size leads to a lower loss, the effect of BIV compared to the corresponding L2 curve is not compromised by smaller batch sizes.

Two main factors may explain this robustness. First, a mini-batch of size 16 already seems representative enough of the whole dataset for the purpose of normalization. Second, the fact that the mini-batches are populated differently at every epoch improves robustness, as a sample that falls in a non-representative batch at one epoch may not at another. In any case, the size of the mini-batch is not a critical factor for BIV." }, { "heading": "B.4.2 NOISY VARIANCE ESTIMATION", "text": "In many scenarios, the variance $\sigma^2$ from which the noise was sampled is estimated, or inferred from a proxy, and is therefore prone to be noisy itself. We tested the robustness of our method to such variance noise. In this experimental setup, the value given to the BIV algorithm is disturbed by a noise $\delta\sigma^2_i$. We model this noise on $\sigma^2_i$ as sampled from a normal distribution whose standard deviation is proportional to $\sigma^2_i$, scaled by a coefficient of variance disturbance $D_v$:

$$\delta\sigma^2_i \sim \mathcal{N}\left(0, \; D_v \, \sigma^4_i / 9\right) \qquad (7)$$

The division by 9 scales $D_v$ so that, when $D_v = 1$, the event $\delta\sigma^2_i < -\sigma^2_i$ lies beyond 3 standard deviations from the mean of the distribution.

We then compute the noisy variance, which needs to be positive, as $\tilde{\sigma}^2_i = \left|\sigma^2_i + \delta\sigma^2_i\right|$. The noise is therefore biased, but when $D_v \leq 1$ this is negligible, as it happens with probability less than 0.15%.

The results presented in Figure 12 show that, when $D_v \leq 1$, BIV is robust to such noise. While a higher $D_v$ leads to lower performance, the impact is small compared to the effect of BIV. However, when $D_v = 2$, which is an arguably high level of noise and leads to bias as explained previously, the beneficial effect of BIV is significantly affected on Bike Sharing, and completely disappears on UTKF.
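For completeness, a minimal sketch (ours) of the variance-disturbance model (7) used in this robustness test; the standard-deviation scaling is our reading of (7):

```python
import numpy as np

rng = np.random.default_rng(0)

def disturb_variance(sigma2, d_v):
    """Noisy variance estimates: sigma2_tilde = |sigma2 + delta|, cf. eq. (7).

    delta has standard deviation sqrt(d_v) * sigma2 / 3, so that with d_v = 1
    the event delta = -sigma2 sits at 3 standard deviations from the mean.
    """
    delta = rng.normal(0.0, np.sqrt(d_v) * sigma2 / 3.0)
    return np.abs(sigma2 + delta)    # keep the estimate positive

sigma2 = rng.uniform(0.1, 10.0, size=5)
print(disturb_variance(sigma2, d_v=1.0))
```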
" } ]
2020
BATCH INVERSE-VARIANCE WEIGHTING: DEEP HETEROSCEDASTIC REGRESSION
SP:88a54725f8b4e2e8b1876b37b783876ed14a205b
[ "The authors investigate the convergence of the projected Heavy-ball method (and an adaptive variant) for convex problems with convex constraints. The authors prove 4 results: 2 individual (last iterate) convergence rates and 2 rates using averaging. Notably, in their proofs they require an increasing (from 1/2 to 1) momentum parameter and a decreasing stepsize. Finally, the authors present some experimental results." ]
The adaptive stochastic gradient descent (SGD) with momentum has been widely adopted in deep learning as well as convex optimization. In practice, the last iterate is commonly used as the final solution. However, the available regret analysis and the setting of constant momentum parameters only guarantee the optimal convergence of the averaged solution. In this paper, we fill this theory-practice gap by investigating the convergence of the last iterate (referred to as individual convergence), which is a more difficult task than convergence analysis of the averaged solution. Specifically, in the constrained convex cases, we prove that the adaptive Polyak's Heavy-ball (HB) method, in which the step size is only updated using the exponential moving average strategy, attains an individual convergence rate of $O(1/\sqrt{t})$, as opposed to that of $O(\log t/\sqrt{t})$ of SGD, where $t$ is the number of iterations. Our new analysis not only shows how the HB momentum and its time-varying weight help us to achieve the acceleration in convex optimization but also gives valuable hints on how the momentum parameters should be scheduled in deep learning. Empirical results validate the correctness of our convergence analysis in optimizing convex functions and demonstrate the improved performance of the adaptive HB methods in training deep networks.
[ { "affiliations": [], "name": "Wei Tao" }, { "affiliations": [], "name": "Sheng Long" }, { "affiliations": [], "name": "Gaowei Wu" }, { "affiliations": [], "name": "Qing Tao" } ]
[ { "authors": [ "Ahmet Alacaoglu", "Yura Malitsky", "Panayotis Mertikopoulos", "Volkan Cevher" ], "title": "A new regret analysis for adam-type algorithms", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Xi Chen", "Qihang Lin", "Javier Pena" ], "title": "Optimal regularized dual averaging methods for stochastic optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Xiangyi Chen", "Sijia Liu", "Ruoyu Sun", "Mingyi Hong" ], "title": "On the convergence of a class of adam-type algorithms for non-convex optimization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Alexandre Défossez", "L. Bottou", "Francis R. Bach", "Nicolas Usunier" ], "title": "On the convergence of adam and adagrad", "venue": "ArXiv, abs/2003.02395,", "year": 2020 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "John C Duchi" ], "title": "Introductory lectures on stochastic optimization", "venue": "The mathematics of data,", "year": 2018 }, { "authors": [ "Euhanna Ghadimi", "Hamid Reza Feyzmahdavian", "Mikael Johansson" ], "title": "Global convergence of the heavy-ball method for convex optimization", "venue": "In 2015 European Control Conference (ECC),", "year": 2015 }, { "authors": [ "Igor Gitman", "Hunter Lang", "Pengchuan Zhang", "Lin Xiao" ], "title": "Understanding the role of momentum in stochastic gradient methods", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nicholas JA Harvey", "Christopher Liaw", "Yaniv Plan", "Sikander Randhawa" ], "title": "Tight analyses for nonsmooth stochastic gradient descent", "venue": "In Annual Conference on Learning Theory,", "year": 2019 }, { "authors": [ "Chonghai Hu", "Weike Pan", "James T Kwok" ], "title": "Accelerated gradient methods for stochastic optimization and online learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2009 }, { "authors": [ "Prateek Jain", "Dheeraj Nagaraj", "Praneeth Netrapalli" ], "title": "Making the last iterate of sgd information theoretically optimal", "venue": "In Annual Conference on Learning Theory,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "The unusual effectiveness of averaging in gan training", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Chaoyue Liu", "Mikhail Belkin" ], "title": "Accelerating sgd with momentum for over-parameterized learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Mahesh Chandra Mukkamala", "Matthias Hein" ], "title": "Variants of rmsprop and adagrad with logarithmic regret bounds", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Arkadi Semenovich Nemirovsky", "David Borisovich Yudin" ], "title": "Problem complexity and method efficiency in optimization", "venue": "Soviet Mathematics Doklady,", "year": 1983 }, { "authors": [ "Peter Ochs", "Yunjin Chen", "Thomas 
Brox", "Thomas Pock" ], "title": "ipiano: Inertial proximal algorithm for nonconvex optimization", "venue": "SIAM Journal on Imaging Sciences,", "year": 2014 }, { "authors": [ "Antonio Orvieto", "Jonas Köhler", "A. Lucchi" ], "title": "The role of memory in stochastic optimization. ArXiv", "venue": null, "year": 1907 }, { "authors": [ "Boris T Polyak" ], "title": "Some methods of speeding up the convergence of iteration methods", "venue": "USSR Computational Mathematics and Mathematical Physics,", "year": 1964 }, { "authors": [ "Alexander Rakhlin", "Ohad Shamir", "Karthik Sridharan" ], "title": "Making gradient descent optimal for strongly convex stochastic optimization", "venue": "arXiv preprint arXiv:1109.5647,", "year": 2011 }, { "authors": [ "Sashank J Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the convergence of adam and beyond", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Sebastian Ruder" ], "title": "An overview of gradient descent optimization algorithms", "venue": "arXiv preprint arXiv:1609.04747,", "year": 2016 }, { "authors": [ "Othmane Sebbouh", "Robert Mansel Gower", "Aaron Defazio" ], "title": "On the convergence of the stochastic heavy ball method", "venue": "ArXiv, abs/2006.07867,", "year": 2020 }, { "authors": [ "Ohad Shamir" ], "title": "Open problem: Is averaging needed for strongly convex stochastic gradient descent", "venue": "In Anual Conference on Learning Theory, pp", "year": 2012 }, { "authors": [ "Tao Sun", "Dongsheng Li", "Zhe Quan", "Hao Jiang", "Shengguo Li", "Yong Dou" ], "title": "Heavy-ball algorithms always escape saddle points", "venue": "arXiv preprint arXiv:1907.09697,", "year": 2019 }, { "authors": [ "Tao Sun", "Penghang Yin", "Dongsheng Li", "Chun Huang", "L. 
Guan", "Hao Jiang" ], "title": "Non-ergodic convergence analysis of heavy-ball algorithms", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Ilya Sutskever", "James Martens", "George Dahl", "Geoffrey Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Wei Tao", "Zhisong Pan", "Gaowei Wu", "Qing Tao" ], "title": "The strength of nesterov’s extrapolation in the individual convergence of nonsmooth optimization", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Wei Tao", "Zhisong Pan", "Gaowei Wu", "Qing Tao" ], "title": "Primal averaging: A new gradient evaluation step to attain the optimal individual convergence", "venue": "IEEE Transactions on Cybernetics,", "year": 2020 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5-rmsprop, coursera: Neural networks for machine learning", "venue": "University of Toronto, Technical Report,", "year": 2012 }, { "authors": [ "Guanghui Wang", "Shiyin Lu", "Weiwei Tu", "Lijun Zhang" ], "title": "Sadam: A variant of adam for strongly convex functions", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ashia C Wilson", "Rebecca Roelofs", "Mitchell Stern", "Nati Srebro", "Benjamin Recht" ], "title": "The marginal value of adaptive gradient methods in machine learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Tianbao Yang", "Qihang Lin", "Zhe Li" ], "title": "Unified convergence analysis of stochastic momentum methods for convex and non-convex optimization", "venue": "arXiv preprint arXiv:1604.03257,", "year": 2016 }, { "authors": [ "Martin Zinkevich" ], "title": "Online convex programming and generalized infinitesimal gradient ascent", "venue": "In Proceedings of the International Conference on Machine Learning,", "year": 2003 }, { "authors": [ "Fangyu Zou", "Li Shen", "Zequn Jie", "Ju Sun", "Wei Liu" ], "title": "Weighted adagrad with unified momentum", "venue": "arXiv preprint arXiv:1808.03408,", "year": 2018 } ]
[ { "heading": null, "text": "...attains an individual convergence rate of $O(1/\sqrt{t})$, as opposed to that of $O(\log t/\sqrt{t})$ of SGD, where $t$ is the number of iterations. Our new analysis not only shows how the HB momentum and its time-varying weight help us to achieve the acceleration in convex optimization but also gives valuable hints on how the momentum parameters should be scheduled in deep learning. Empirical results validate the correctness of our convergence analysis in optimizing convex functions and demonstrate the improved performance of the adaptive HB methods in training deep networks." }, { "heading": "1 INTRODUCTION", "text": "One of the most popular optimization algorithms in deep learning is the momentum method (Krizhevsky et al., 2012). The first momentum can be traced back to the pioneering work of Polyak's heavy-ball (HB) method (Polyak, 1964), which helps accelerate stochastic gradient descent (SGD) in the relevant direction and dampens oscillations (Ruder, 2016). Recent studies also find that the HB momentum has the potential to escape from local minima and saddle points (Ochs et al., 2014; Sun et al., 2019a). From the perspective of theoretical analysis, HB enjoys a smaller convergence factor than SGD when the objective function is twice continuously differentiable and strongly convex (Ghadimi et al., 2015). In nonsmooth convex cases, with a suitably chosen step size, HB attains an optimal convergence rate of $O(1/\sqrt{t})$ in terms of the averaged output (Yang et al., 2016), where $t$ is the number of iterations.

* Equal contribution
† Corresponding author

To overcome the data-independent limitation of predetermined step size rules, some adaptive gradient methods have been proposed to exploit the geometry of historical data. The first algorithm in this line is AdaGrad (Duchi et al., 2011). The intuition behind AdaGrad is that seldom-updated weights should be updated with a larger step size than frequently-updated weights. Typically, AdaGrad rescales each coordinate and estimates the predetermined step size by a sum of squared past gradient values. As a result, AdaGrad has the same convergence rate as vanilla SGD but enjoys a smaller factor, especially in sparse learning problems. The detailed analysis of AdaGrad (Mukkamala & Hein, 2017) implies that one can derive similar convergence rates for the adaptive variants of the predetermined step size methods without additional difficulties.

Unfortunately, experimental results illustrate that AdaGrad under-performed when applied to training deep neural networks (Wilson et al., 2017). Practical experience has led to the development of adaptive methods that are able to emphasize the more recent gradients. Specifically, an exponential moving average (EMA) strategy was proposed in RMSProp to replace the cumulative sum operation (Tieleman & Hinton, 2012). Adam (Kingma & Ba, 2014), which remains one of the most popular optimization algorithms in deep learning to this day, built upon RMSProp together with updating the search directions via the HB momentum. Generally speaking, gradient-based momentum algorithms that simultaneously update the search directions and learning rates using past gradients are referred to as Adam-type methods (Chen et al., 2019).
These kinds of methods have achieved several state-of-the-art results on various learning tasks (Sutskever et al., 2013).

Compared with HB and AdaGrad, the main novelty of Adam lies in applying EMA to the gradient estimate (first-order) and to the element-wise square of gradients (second-order), with momentum parameter $\beta_{1t}$ and step size parameter $\beta_{2t}$ (see (6)) (Alacaoglu et al., 2020). However, the use of EMA complicates the convergence analysis considerably. For example, in the online setting, Kingma & Ba (2014) offered a proof that Adam would converge to the optimum. Despite its remarkable practicality, Adam suffers from a non-convergence issue. To overcome this disadvantage, several variants such as AMSGrad and AdamNC were proposed (Reddi et al., 2018). Unfortunately, the regret bound of AMSGrad in (Reddi et al., 2018) is $O(\sqrt{\log t}\sqrt{t})$ for nonsmooth convex problems, as opposed to that of $O(\sqrt{t})$ of SGD. On the other hand, EMA uses the current step size in the exponential moving average, while the original HB can use the previous information (Zou et al., 2018); this causes the update to stagnate when $\beta_{1t}$ is very close to 1. Fortunately, such a dilemma does not appear in Polyak's HB method, and a simple proof of the convergence of this kind of Adam in smooth cases has been provided (Défossez et al., 2020).

In this paper, we will focus on the adaptive Polyak's HB method, in which the step size is only updated using EMA. Despite the various reported practical performances of Adam-type methods, there still exist some gaps between theoretical guarantees and empirical success.

• First of all, some important regret bounds have been established to guarantee the performance of online Adam-type algorithms. Nevertheless, the online-to-batch conversion inevitably leads the solution of the induced stochastic algorithm to take the form of an average of all past iterates. In practice, the last iterate is popularly used as the final solution, which has the advantage of readily enforcing the learning structure (Chen et al., 2012). For SGD, the convergence of the last iterate, referred to as individual convergence in (Tao et al., 2020b), was posed as an open problem (Shamir, 2012). Only recently was its optimal individual convergence rate proved to be $O(\log t/\sqrt{t})$ and $O(\log t/t)$ for general and strongly convex problems respectively (Harvey et al., 2019; Jain et al., 2019). Despite enjoying optimal averaging convergence (Yang et al., 2016), as far as we know, the individual convergence of the adaptive HB has not been discussed.

• Secondly, the momentum technique is often claimed as an acceleration strategy in the machine learning community. However, almost all the theoretical analysis is limited to Nesterov's accelerated gradient (NAG) (Nesterov, 1983) method, especially in smooth cases (Hu et al., 2009; Liu & Belkin, 2020), where it accelerates the rate of SGD from $O(1/t)$ to $O(1/t^2)$. While the individual convergence of HB is also considered in some papers (Sebbouh et al., 2020; Sun et al., 2019b), the problems considered there are limited to smooth ones and the derived rates are not optimal in convex cases. It has been shown that NAG is capable of accelerating the individual convergence rate of SGD from $O(\log t/\sqrt{t})$ to $O(1/\sqrt{t})$ (Tao et al., 2020a) in nonsmooth convex cases. Nevertheless, there is still no report of such an acceleration for the adaptive HB.
• Finally, in practice, almost all the momentum and Adam-type algorithms are used with a constant momentum parameter $\beta_{1t}$ (typically between 0.9 and 0.99). In theory, regret guarantees for online Adam require a rapidly decaying $\beta_{1t} \to 0$ schedule, which is also considered in (Sutskever et al., 2013; Orvieto et al., 2019). This gap was recently bridged by deriving the same regret bounds as in (Reddi et al., 2018) with a constant $\beta_{1t}$ (Alacaoglu et al., 2020). In each state-of-the-art deep learning library (e.g., TensorFlow, PyTorch, and Keras), HB is named SGD with momentum, and $\beta_{1t}$ is empirically set to 0.9 (Ruder, 2016). Despite its intuitive role in controlling the number of forgotten past gradients and its guarantee of optimal averaging convergence (Yang et al., 2016), how $\beta_{1t}$ affects individual convergence has not been discussed (Gitman et al., 2019).

The goal of this paper is to close a theory-practice gap in using HB to train deep neural networks as well as to optimize convex objective functions. Specifically,

• By setting $\beta_{1t} = \frac{t}{t+2}$, we prove that the adaptive HB attains an individual convergence rate of $O(1/\sqrt{t})$ (Theorem 5), as opposed to that of $O(\log t/\sqrt{t})$ of SGD. Our proof is different from all the existing analyses of averaging convergence. It not only provides a theoretical guarantee for the acceleration of HB but also clarifies how the momentum and its parameter $\beta_{1t}$ help us to achieve the optimal individual convergence.

• If $0 \leq \beta_{1t} \equiv \beta < 1$, we prove that the adaptive HB attains optimal averaging convergence (Theorem 6). To guarantee optimal individual convergence, Theorem 5 suggests that a time-varying $\beta_{1t}$ can be adopted. Note that $\beta_{1t} = \frac{t}{t+2} \to 1$; thus our new convergence analysis not only offers an interesting explanation of why we usually restrict $\beta_{1t} \to 1$ in practice but also gives valuable hints on how the momentum parameters should be scheduled in deep learning.

We mainly focus on the proof of individual convergence of HB (Theorem 3, Appendix A.1). The analysis of averaging convergence (Theorem 4) is simpler. Their extensions to the adaptive cases are slightly more complex (Theorems 5 and 6), but they are similar to the proof for AdaGrad (Mukkamala & Hein, 2017); the details can be found in the supplementary material." }, { "heading": "2 PROBLEM STATEMENT AND RELATED WORK", "text": "Consider the following optimization problem,

$$\min f(w), \quad \text{s.t. } w \in Q, \qquad (1)$$

where $Q \subseteq \mathbb{R}^d$ is a closed convex set and $f(w)$ is a convex function. Denote by $w^*$ an optimal solution and by $P$ the projection operator onto $Q$. Generally, averaging convergence is defined as

$$f(\bar{w}_t) - f(w^*) \leq \epsilon(t), \qquad (2)$$

where $\bar{w}_t = \frac{1}{t}\sum_{i=1}^{t} w_i$ and $\epsilon(t)$ is the convergence bound in terms of $t$. By contrast, individual convergence is described as

$$f(w_t) - f(w^*) \leq \epsilon(t). \qquad (3)$$

Throughout this paper, we use $g(w_t)$ to denote a subgradient of $f$ at $w_t$. Projected subgradient descent (PSG) is one of the most fundamental algorithms for solving problem (1) (Dimitri P. et al., 2003), and its iteration is

$$w_{t+1} = P[w_t - \alpha_t g(w_t)],$$

where $\alpha_t > 0$ is the step size. To analyze the convergence, we need the following assumption.

Assumption 1. Assume that there exists a number $M > 0$ such that $\|g(w)\| \leq M$ for all $w \in Q$.

It is known that the optimal bound for the nonsmooth convex problem (1) is $O(1/\sqrt{t})$ (Nemirovsky & Yudin, 1983).
PSG can attain this optimal convergence rate in terms of the averaged output, while its optimal individual rate is only $O(\log t/\sqrt{t})$ (Harvey et al., 2019; Jain et al., 2019).

When $Q = \mathbb{R}^d$, the regular HB for solving the unconstrained problem (1) is

$$w_{t+1} = w_t - \alpha_t g(w_t) + \beta_t(w_t - w_{t-1}). \qquad (4)$$

If $0 \leq \beta_t \equiv \beta < 1$, the key property of HB is that it can be reformulated as (Ghadimi et al., 2015)

$$w_{t+1} + p_{t+1} = w_t + p_t - \frac{\alpha_t}{1-\beta} g(w_t), \quad \text{where } p_t = \frac{\beta}{1-\beta}(w_t - w_{t-1}). \qquad (5)$$

Thus its convergence analysis makes almost no difference to that of PSG. In particular, if $\alpha_t \equiv \frac{\alpha}{\sqrt{T}}$, its averaging convergence rate is $O(1/\sqrt{T})$ (Yang et al., 2016), where $T$ is the total number of iterations.

Simply speaking, the regular Adam (Kingma & Ba, 2014) takes the form of

$$w_{t+1} = w_t - \frac{\alpha}{\sqrt{t}} V_t^{-\frac{1}{2}} \hat{g}_t,$$

where $\hat{g}(w_t)$ is an unbiased estimate of $g(w_t)$ and

$$\hat{g}_t = \beta_{1t}\hat{g}_{t-1} + (1-\beta_{1t})\hat{g}(w_t), \quad V_t = \beta_{2t}V_{t-1} + (1-\beta_{2t})\,\mathrm{diag}\left[\hat{g}(w_t)\hat{g}(w_t)^\top\right]. \qquad (6)$$" }, { "heading": "3 INDIVIDUAL CONVERGENCE OF HB", "text": "To solve the constrained problem (1), HB can naturally be reformulated as

$$w_{t+1} = P_Q[w_t - \alpha_t g(w_t) + \beta_t(w_t - w_{t-1})]. \qquad (7)$$

We first prove a key lemma, which extends (5) to the constrained and time-varying cases.

Lemma 1. (Dimitri P. et al., 2003) For $w \in \mathbb{R}^d$ and $w_0 \in Q$, $\langle w - w_0, u - w_0 \rangle \leq 0$ for all $u \in Q$ if and only if $w_0 = P(w)$.

Lemma 2. Let $\{w_t\}_{t=1}^{\infty}$ be generated by HB (7). Let

$$p_t = t(w_t - w_{t-1}), \quad \beta_t = \frac{t}{t+2}, \quad \alpha_t = \frac{\alpha}{(t+2)\sqrt{t}}.$$

Then HB (7) can be reformulated as

$$w_{t+1} + p_{t+1} = P_Q\left[w_t + p_t - \frac{\alpha}{\sqrt{t}} g(w_t)\right]. \qquad (8)$$

Proof. The projection operation can be rewritten as an optimization problem (Duchi, 2018), i.e., $w_{t+1} = P_Q[w_t - \alpha_t g(w_t) + \beta_t(w_t - w_{t-1})]$ is equivalent to

$$w_{t+1} = \arg\min_{w \in Q}\left\{\alpha_t \langle g(w_t), w \rangle + \frac{1}{2}\|w - w_t - \beta_t(w_t - w_{t-1})\|^2\right\}. \qquad (9)$$

Then, for all $w \in Q$, we have

$$\langle w_{t+1} - w_t - \beta_t(w_t - w_{t-1}) + \alpha_t g(w_t), \; w_{t+1} - w \rangle \leq 0.$$

Multiplying the first argument by $t+2$ and noting that $(t+2)\alpha_t = \alpha/\sqrt{t}$, this is

$$\langle w_{t+1} + p_{t+1} - (w_t + p_t) + \frac{\alpha}{\sqrt{t}} g(w_t), \; w_{t+1} - w \rangle \leq 0. \qquad (10)$$

Specifically,

$$\langle w_{t+1} + p_{t+1} - (w_t + p_t) + \frac{\alpha}{\sqrt{t}} g(w_t), \; w_{t+1} - w_t \rangle \leq 0. \qquad (11)$$

From (10) and (11),

$$\langle w_{t+1} + p_{t+1} - (w_t + p_t) + \frac{\alpha}{\sqrt{t}} g(w_t), \; w_{t+1} - w + (t+1)(w_{t+1} - w_t) \rangle \leq 0,$$

i.e.,

$$\langle w_{t+1} + p_{t+1} - (w_t + p_t) + \frac{\alpha}{\sqrt{t}} g(w_t), \; w_{t+1} + p_{t+1} - w \rangle \leq 0.$$

Using Lemma 1, Lemma 2 is proved.

Due to the non-expansive property of $P_Q$ (Dimitri P. et al., 2003), Lemma 2 implies that the convergence analysis for unconstrained problems can be applied to analyze the constrained problems.

Theorem 3. Assume that $Q$ is bounded. Let $\{w_t\}_{t=1}^{\infty}$ be generated by HB (7). Set

$$\beta_t = \frac{t}{t+2} \quad \text{and} \quad \alpha_t = \frac{\alpha}{(t+2)\sqrt{t}}.$$

Then

$$f(w_t) - f(w^*) \leq O\left(\frac{1}{\sqrt{t}}\right).$$

Proof.
According to Lemma 2,

$$\|w^* - (w_{t+1} + p_{t+1})\|^2 \leq \left\|w^* - (w_t + p_t) + \frac{\alpha}{\sqrt{t}} g(w_t)\right\|^2,$$

and, expanding the square (using $w^* - (w_t + p_t) = (w^* - w_t) + t(w_{t-1} - w_t)$),

$$\left\|w^* - (w_t + p_t) + \frac{\alpha}{\sqrt{t}} g(w_t)\right\|^2 = \|w^* - (w_t + p_t)\|^2 + \left\|\frac{\alpha}{\sqrt{t}} g(w_t)\right\|^2 + 2\left\langle\frac{\alpha}{\sqrt{t}} g(w_t), \; w^* - w_t\right\rangle + 2\left\langle\frac{\alpha t}{\sqrt{t}} g(w_t), \; w_{t-1} - w_t\right\rangle.$$

Note that, by convexity,

$$\langle g(w_t), w^* - w_t \rangle \leq f(w^*) - f(w_t), \quad \langle g(w_t), w_{t-1} - w_t \rangle \leq f(w_{t-1}) - f(w_t).$$

Then

$$(t+1)[f(w_t) - f(w^*)] \leq t[f(w_{t-1}) - f(w^*)] + \frac{\sqrt{t}}{2\alpha}\|w^* - (w_t + p_t)\|^2 - \frac{\sqrt{t}}{2\alpha}\|w^* - (w_{t+1} + p_{t+1})\|^2 + \frac{\alpha}{2\sqrt{t}}\|g(w_t)\|^2.$$

Summing this inequality from $k = 1$ to $t$, we obtain

$$(t+1)[f(w_t) - f(w^*)] \leq f(w_0) - f(w^*) + \sum_{k=1}^{t}\frac{\alpha}{2\sqrt{k}}\|g(w_k)\|^2 + \sum_{k=1}^{t}\left[\frac{\sqrt{k}}{2\alpha}\left(\|w^* - (w_k + p_k)\|^2 - \|w^* - (w_{k+1} + p_{k+1})\|^2\right)\right].$$

Note that

$$\sum_{k=1}^{t}\frac{1}{2\sqrt{k}}\|g(w_k)\|^2 \leq \sqrt{t}M^2,$$

and

$$\sum_{k=1}^{t}\left[\frac{\sqrt{k}}{2}\left(\|w^* - (w_k + p_k)\|^2 - \|w^* - (w_{k+1} + p_{k+1})\|^2\right)\right] \leq \frac{1}{2}\|w^* - (w_1 + p_1)\|^2 - \frac{\sqrt{t}}{2}\|w^* - (w_{t+1} + p_{t+1})\|^2 + \sum_{k=2}^{t}\left(\frac{\sqrt{k}}{2} - \frac{\sqrt{k-1}}{2}\right)\|w^* - (w_k + p_k)\|^2.$$

Since $Q$ is a bounded set, there exists a positive number $M_0 > 0$ such that

$$\|w^* - (w_{t+1} + p_{t+1})\|^2 \leq M_0, \quad \forall t \geq 0.$$

Therefore

$$(t+1)[f(w_t) - f(w^*)] \leq f(w_0) - f(w^*) + \alpha\sqrt{t}M^2 + \frac{\sqrt{t}}{2\alpha}M_0.$$

This completes the proof of Theorem 3.

It is necessary to give some remarks about Theorem 3.

• In nonsmooth convex cases, Theorem 3 shows that the individual convergence rate of SGD can be accelerated from $O(\log t/\sqrt{t})$ to $O(1/\sqrt{t})$ via the HB momentum. The proof here clarifies
Assume that Q is a bounded set. Let {wt}∞t=1 be generated by the adaptive HB (Algorithm 1). Denote pt = t(wt −wt−1). Suppose that β1t = tt+2 and 1− 1 t ≤ β2t ≤ 1− γ t for some 0<γ ≤ 1. Then wt+1 + pt+1 = PQ[wt + pt −\nα√ t V̂t −1 ĝ(wt)] (13)\nE[f(wt)− f(w∗)] ≤ O( 1√ t ).\nThe proof of (13) is identical to that of Lemma 2. It is easy to find that (13) is an adaptive variant of (8). This implies that the proof of the second part is similar to that of AdaGrad (Mukkamala & Hein, 2017). When 0 ≤ β1t ≡ β < 1, the adaptive variant of HB (7) is\nwt+1 = PQ[wt − α√ t V − 12 t ĝ(wt) + β(wt −wt−1)]. (14)\nwhere Vt = β2tVt−1 + (1− β2t)diag(ĝ(wt)ĝ(wt)>).\nSimilar to the proof of Theorem 5, we can get the following averaging convergence.\nTheorem 6. Assume that Q is bounded and 0 ≤ β1t ≡ β < 1 in Algorithm 1. Let {wt}∞t=1 be generated by the adaptive HB (Algorithm 1). Suppose that 1− 1t ≤ β2t ≤ 1− γ t for some 0<γ ≤ 1. Denote pt = β1−β (wt −wt−1). Then\nwt+1 + pt+1 = PQ[wt + pt − α (1− β) √ t V̂t −1 ĝ(wt)], E[f( 1 t t∑ k=1 wk)− f(w∗)] ≤ O( 1√ t ).\nIt is necessary to give some remarks about Theorem 5 and Theorem 6.\n• The adaptive HB is usually used with a constant β1t in deep learning. However, according to Theorem 6, the constant β1t only guarantees the optimal data-dependent averaging convergence. The convergence property of the last iterate still remains unknown.\n• In order to assure the optimal individual convergence, according to Theorem 5, β1t has to be time-varying. β1t = tt+2 can explain why we usually restrict β1t → 1 in practice. It also offers a new schedule about the selection of momentum parameters in deep learning." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we present some empirical results. The first two experiments are to validate the correctness of our convergence analysis and investigate the performance of the suggested parameters schedule. For fair comparison, we independently repeat the experiments five times and report the averaged results. The last experiment (Appendix A.4) is to show the effective acceleration of HB over GD in terms of the individual convergence." }, { "heading": "5.1 EXPERIMENTS ON OPTIMIZING GENERAL CONVEX FUNCTIONS", "text": "This experiment is to optimize hinge loss with the l1-ball constraints. Let τ denotes the radius of the l1-ball. For implementation of the l1 projection operation, we use SLEP package1.\nmin f(w), s.t. w ∈ {w : ‖w‖1 ≤ τ}. (15)\nDatasets: A9a, W8a, Covtype, Ijcnn1, Rcv1, Realsim (available at LibSVM2 website).\nAlgorithms: PSG (αt = α√t ), HB (αt = α (t+2) √ t , βt = tt+2 ), NAG (Tao et al., 2020a) and adaptive HB (12) (β1t = tt+2 ).\nThe relative function value f(wt) − f(w∗) v.s. epoch is illustrated in Figure 1. As expected, the individual convergence of the adaptive HB has almost the same behavior as the averaging output of PSG, and the individual output of HB and NAG. 
Since the three stochastic methods have the optimal convergence for general convex problems, we conclude that the stochastic adaptive HB attains the optimal individual convergence.\n1http://yelabs.net/software/SLEP/ 2http://www.csie.ntu.edu.tw/˜cjlin/libsvmtools/datasets/" }, { "heading": "5.2 TRAINING DEEP NEURAL NETWORKS", "text": "These experiments are conducted on 4-layer CNN and ResNet-18 using a server with 2 NVIDIA 2080Ti GPUs.\nDatasets: MNIST (60000 training samples, 10000 test samples), CIFAR10 (50000 training samples, 10000 test samples), and CIFAR100 (50000 training samples, 10000 test samples).\nAlgorithms: Adam (α, β1t ≡ 0.9, β2t ≡ 0.999, = 10−8) (Kingma & Ba, 2014), SGD (αt ≡ α), SGD-momentum (αt ≡ α, βt ≡ 0.9), AdaGrad (αt ≡ α) (Duchi et al., 2011), RMSprop (αt ≡ α, β2t ≡ 0.9, = 10−8) (Tieleman & Hinton, 2012). For our adaptive HB, γ = 0.1 and δ = 10−8. Different from the existing methods, we set β1t = tt+2 and β2t = 1 − γ t in Algorithm 1. Within each epoch, β1t and β2t remain unchanged.\nNote that all methods have only one adjustable parameter α, we choose α from the set of {0.1, 0.01, 0.001, 0.0001} for all experiments. Following (Mukkamala & Hein, 2017) and (Wang et al., 2020), we design a simple 4-layer CNN architecture that consists two convolutional layers (32 filters of size 3 × 3), one max-pooling layer (2 × 2 window and 0.25 dropout) and one fully connected layer (128 hidden units and 0.5 dropout). We also use weight decay and batch normalization to reduce over-fitting. The optimal rate is always chosen for each algorithm separately so that one achieves either best training objective or best test performance after a fixed number of epochs.\nThe loss function is the cross-entropy. The training loss results are illustrated in Figure 2 and 4, and the test accuracy results are presented in Figure 3 and 5. As can be seen, the adaptive HB achieves the improved training loss. Moreover, this improvement also leads to good performance on test accuracy. The experimental results show that our suggested schedule about the momentum parameters could gain improved practical performance even in deep learning tasks." }, { "heading": "6 CONCLUSION", "text": "In this paper, we prove that the adaptive HB method attains an optimal data-dependent individual convergence rate in the constrained convex cases, which bridges a theory-practice gap in using momentum methods to train the deep neural networks as well as optimize the convex functions. Our new analysis not only clarifies how the HB momentum and its time-varying weight β1t = tt+2 help us to achieve the acceleration but also gives valuable hints how its momentum parameters should be scheduled in deep learning. Empirical results on optimizing convex functions validate the\ncorrectness of our convergence analysis and several typical deep learning experiments demonstrate the improved performance of the adaptive HB." }, { "heading": "7 ACKNOWLEDGEMENTS", "text": "This work was supported in part by National Natural Science Foundation of China under Grants (62076252, 61673394, 61976213) and in part by Beijing Advanced Discipline Fund." }, { "heading": "A SUPPLEMENTARY MATERIAL", "text": "" }, { "heading": "A.1 PROOF FOR THEOREM 4", "text": "Let {wt}∞t=1 be generated by HB (7). 
Set\npt = β\n1− β (wt −wt−1) and αt = α√ t .\nThen, ∀w ∈ Q, according to Lemma 1, we have 〈wt+1 −wt − β(wt −wt−1) + αtg(wt),wt+1 −w〉 ≤ 0.\nThis is 〈 1 1− β (wt+1 −wt)− pt + αt 1− β g(wt),wt+1 −w〉 ≤ 0.\ni.e., 〈wt+1 + pt+1 − (wt + pt) +\nαt 1− β g(wt),wt+1 −w〉 ≤ 0 (16)\nSpecifically,\n〈wt+1 + pt+1 − (wt + pt) + αt\n1− β g(wt), β(wt+1 −wt) 1− β 〉 ≤ 0 (17)\nFrom (16) and (17),\n〈wt+1 + pt+1 − (wt + pt) + αt\n1− β g(wt),wt+1 + pt+1 −w〉 ≤ 0.\nUsing Lemma 1, we have\nwt+1 + pt+1 = PQ[wt + pt − αt\n1− β g(wt)].\nThen\n‖w∗ − (wt+1 + pt+1)‖2 ≤‖w∗ − (wt + pt) + αt\n1− β g(wt)‖2\n=‖w∗ − (wt + pt)‖2 + ‖ αt\n1− β g(wt)‖2 + 2〈 αt 1− β g(wt),w ∗ −wt〉\n+2〈 αtβ (1− β)2 g(wt),wt−1 −wt〉\nNote\n〈g(wt),w∗ −wt〉 ≤ f(w∗)− f(wt), 〈g(wt),wt−1 −wt〉 ≤ f(wt−1)− f(wt). Then\n‖w∗ − (wt+1 + pt+1)‖2\n≤‖w∗ − (wt + pt)‖2 + α2t\n(1− β)2 ‖g(wt)‖2\n+ 2αt 1− β [f(w∗)− f(wt)] + 2αtβ (1− β)2 [f(wt−1)− f(wt)].\nRearrange the inequality, we have\n2αt 1− β [f(wt)− f(w∗)] ≤ 2αtβ (1− β)2 [f(wt−1)− f(wt)] + ‖w∗ − (wt + pt)‖2\n−‖w∗ − (wt+1 + pt+1)‖2 + α2t\n(1− β)2 ‖g(wt)‖2.\ni.e.,\nf(wt)− f(w∗) ≤ β\n1− β [f(wt−1)− f(wt)] + 1− β 2αt [‖w∗ − (wt + pt)‖2\n− ‖w∗ − (wt+1 + pt+1)‖2] + αt\n2(1− β) ‖g(wt)‖2.\nSumming this inequality from k = 1 to t, we obtain t∑\nk=1\n[f(wk)− f(w∗)]\n≤ β 1− β [f(w0)− f(wt)] + 1− β 2α1 ‖w∗ − (w1 + p1)‖2\n−1− β 2αt\n‖w∗ − (wt+1 + pt+1)‖2 + t∑\nk=1\nαk 2(1− β) ‖g(wk)‖2\n+ t∑ k=2 ‖w∗ − (wk + pk)‖2( 1− β 2αk − 1− β 2αk−1 ).\ni.e.,\nt∑ k=1 [f(wk)− f(w∗)]\n≤ β 1− β [f(w0)− f(wt)] + 1− β 2α ‖w∗ − (w1 + p1)‖2\n+ t∑ k=2 ‖w∗ − (wk + pk)‖2( (1− β) √ k 2α − (1− β) √ k − 1 2α )\n+ t∑ k=1\nα\n2(1− β) √ k ‖g(wk)‖2.\n(18)\nNote t∑\nk=1\n1\n2 √ k ‖g(wk)‖2 ≤\n√ tM2. (19)\nand since Q is a bounded set, there exists a positive number M0 > 0 such that ‖w∗ − (wt+1 + pt+1)‖2 ≤M0,∀t ≥ 0. (20) From (18)(19)(20) we have,\nt∑ k=1 [f(wk)− f(w∗)] ≤ β 1− β [f(w0)− f(wt)] + (1− β) √ tM0 2α + α √ tM2 1− β .\nBy convexity of f(w), we obtain\nf( 1\nt t∑ k=1 wk)− f(w∗) ≤ β (1− β)t [f(w0)− f(wt)] + (1− β)M0 2α √ t + αM2 (1− β) √ t .\nThis completes the proof of Theorem 4." }, { "heading": "A.2 PROOF FOR THEOREM 5", "text": "Notation. For a positive definite matrix H ∈ Rd×d, the weighted `2-norm is defined by ‖x‖2H = x>Hx. The H-weighted projection PHQ (x) of x onto Q is defined by P H Q (x) = argminy∈Q ‖y− x‖2H . We use g(wk) to denote the subgradient of fk(·) at wk. For the diagonal matrix sequence {Mk}tk=1, we use mk,i to denote the i-th element in the diagonal of Mk. We introduce the notation, g1:k,i = (g1,i, g2,i, .., gk,i) >, where gk,i is the i-th element of g(wk).\nLemma 7. (Mukkamala & Hein, 2017) Suppose that 1− 1t ≤ β2t ≤ 1− γ t for some 0<γ ≤ 1, and t ≥ 1, then d∑ i=1 t∑ k=1 g2k,i√ kvk,i + δ ≤ d∑ i=1 2(2− γ) γ ( √ tvt,i + δ).\nProof for Theorem 5. Without loss of generality, we only prove Theorem 5 in the full gradient setting. It can be extended to stochastic cases using the regular technique in (Rakhlin et al., 2011).\nNote that the projection operation can be rewritten as an optimization problem (Duchi, 2018), i.e., wt+1 = PQ[wt − αtV̂ −1t g(wt) + β1t(wt −wt−1)] is equivalent to\nwt+1 = arg min w∈Q {αt〈V̂ −1t g(wt),w〉+\n1 2 ‖w −wt − β1t(wt −wt−1)‖2}. (21)\nThen, ∀u ∈ Q, we have\n〈wt+1 −wt − βt(wt −wt−1) + αtV̂ −1t g(wt),wt+1 −w〉 ≤ 0.\nThis is 〈wt+1 + pt+1 − (wt + pt) +\nα√ t V̂ −1t g(wt),wt+1 −w〉 ≤ 0. (22)\nSpecifically, 〈wt+1 + pt+1 − (wt + pt) +\nα√ t V̂ −1t g(wt),wt+1 −wt〉 ≤ 0. 
(23)\nFrom (22) and (23),\n〈wt+1 + pt+1 − (wt + pt) + α√ t V̂ −1t g(wt),wt+1 −wt + (t+ 1)(wt+1 −wt)〉 ≤ 0.\ni.e., 〈wt+1 + pt+1 − (wt + pt) +\nα√ t V̂ −1t g(wt),wt+1 + pt+1 −wt〉 ≤ 0.\nUsing Lemma 1, we have\nwt+1 + pt+1 = P V̂t Q [wt + pt − α√ t V̂ −1t g(wt)].\nThen\n‖w∗ − (wt+1 + pt+1)‖2V̂t ≤ ‖w ∗ − (wt + pt) + α√ t V̂ −1t g(wt)‖2V̂t\n= ‖w∗ − (wt + pt)‖2V̂t + ‖ α√ t g(wt)‖2V̂t\n+ 2〈 α√ t g(wt),w ∗ −wt〉+ 2〈 αt√ t g(wt),wt−1 −wt〉.\nNote\n〈g(wt),w∗ −wt〉 ≤ f(w∗)− f(wt), 〈g(wt),wt−1 −wt〉 ≤ f(wt−1)− f(wt).\nThen\n(t+ 1)[f(wt)− f(w∗)] ≤t[f(wt−1)− f(w∗)] + √ t\n2α ‖w∗ − (wt + pt)‖2V̂t\n− √ t\n2α ‖w∗ − (wt+1 + pt+1)‖2V̂t +\nα\n2 √ t ‖g(wt)‖2V̂ −1t .\nSumming this inequality from k = 1 to t, we obtain\n(t+ 1)[f(wt)− f(w∗)] ≤ f(w0)− f(w∗) + t∑\nk=1\nα\n2 √ k ‖g(wk)‖2V̂ −1k\n+ t∑ k=1 [√k 2α (‖w∗ − (wk + pk)‖2V̂k − ‖w ∗ − (wk+1 + pk+1)‖2V̂k) ] .\nUsing Lemma 7, we have t∑\nk=1\nα\n2 √ k ‖g(wk)‖2V̂ −1k ≤ d∑ i=1 α(2− γ) γ ( √ tvt,i + δ).\nNote t∑\nk=1\n[√k 2α (‖w∗ − (wk + pk)‖2V̂k − ‖w ∗ − (wk+1 + pk+1)‖2V̂k) ] =\nd∑ i=1 v̂1,i 2α (w∗i − (w1,i + p1,i))2 − d∑ i=1 √ tv̂t,i 2α (w∗i − (wt+1,i + pt+1,i))2\n+ d∑ i=1 t∑ k=2 1 2α ( √ kv̂k,i − √ k − 1v̂k−1,i)(w∗i − (wk,i + pk,i))2.\n(24)\nSince Q is a bounded set, there exists a positive number M1 > 0 such that\n(w∗i − (wt+1,i + pt+1,i))2 ≤M1,∀t ≥ 0, i = 1, 2, ..., d.\nand vk,i = β2kvk−1,i + (1− β2k)g2k,i as well as β2k ≥ 1− 1k which implies kβ2k ≥ k − 1, we get √ kv̂k,i = √ kvk,i + δ\n= √ kβ2kvk−1,i + k(1− β2k)g2k,i + δ\n≥ √ (k − 1)vk−1,i + δ = √ k − 1v̂k−1,i.\nTherefore t∑\nk=1\n[√k 2α (‖w∗ − (wk + pk)‖2V̂k − ‖w ∗ − (wk+1 + pk+1)‖2V̂k) ] ≤\nd∑ i=1 v̂1,i 2α M1 + d∑ i=1 t∑ k=2 1 2α ( √ kv̂k,i − √ k − 1v̂k−1,i)M1\n= d∑ i=1 v̂1,iM1 2α + d∑ i=1 √ tv̂t,iM1 2α − d∑ i=1 v̂1,iM1 2α\n= M1 2α d∑ i=1 ( √ tvt,i + δ).\n(25)\nSince √ tvt,i = ‖g1:t,i‖, therefore\n(t+ 1)[f(wt)− f(w∗)] ≤f(w0)− f(w∗) + M1 2α d∑ i=1 ( √ tvt,i + δ) + d∑ i=1 α(2− γ) γ ( √ tvt,i + δ)\n=f(w0)− f(w∗) + ( M1 2α + α(2− γ) γ ) d∑ i=1 (‖g1:t,i‖+ δ).\nThis proves\nf(wt)− f(w∗) ≤ O( 1√ t )." }, { "heading": "A.3 PROOF FOR THEOREM 6", "text": "Let {wt}∞t=1 be generated by the adaptive HB (Algorithm 1). Set\npt = β\n1− β (wt −wt−1) and αt = α√ t .\nThen, ∀u ∈ Q, according to Lemma 1, we have\n〈wt+1 −wt − β(wt −wt−1) + αtV̂ −1t g(wt),wt+1 −w〉 ≤ 0. This is\n〈 1 1− β (wt+1 −wt)− pt + αtV̂\n−1 t\n1− β g(wt),wt+1 −w〉 ≤ 0.\ni.e.,\n〈wt+1 + pt+1 − (wt + pt) + αtV̂\n−1 t\n1− β g(wt),wt+1 −w〉 ≤ 0 (26)\nSpecifically,\n〈wt+1 + pt+1 − (wt + pt) + αtV̂\n−1 t\n1− β g(wt), β(wt+1 −wt) 1− β 〉 ≤ 0 (27)\nFrom (26) and (27),\n〈wt+1 + pt+1 − (wt + pt) + αtV̂\n−1 t\n1− β g(wt),wt+1 + pt+1 −w〉 ≤ 0.\nUsing Lemma 1, we have\nwt+1 + pt+1 = P V̂t Q [wt + pt −\nαtV̂ −1 t\n1− β g(wt)].\nAccording to Lemma 2,\n‖w∗ − (wt+1 + pt+1)‖2V̂t\n≤‖w∗ − (wt + pt) + αtV̂\n−1 t\n1− β g(wt)‖2V̂t\n=‖w∗ − (wt + pt)‖2V̂t + ‖ αt 1− β g(wt)‖2V̂t\n+2〈 αt 1− β g(wt),w ∗ −wt〉+ 2〈 αtβ (1− β)2 g(wt),wt−1 −wt〉\nNote\n〈g(wt),w∗ −wt〉 ≤ f(w∗)− f(wt), 〈g(wt),wt−1 −wt〉 ≤ f(wt−1)− f(wt). 
Then\n‖w∗ − (wt+1 + pt+1)‖2V̂t\n≤‖w∗ − (wt + pt)‖2V̂t + α2t (1− β)2 ‖g(wt)‖2V̂ −1t\n+ 2αt 1− β [f(w∗)− f(wt)] + 2αtβ (1− β)2 [f(wt−1)− f(wt)].\nRearrange the inequality, we have 2αt 1− β [f(wt)− f(w∗)]\n≤ 2αtβ (1− β)2 [f(wt−1)− f(wt)] + ‖w∗ − (wt + pt)‖2V̂t\n−‖w∗ − (wt+1 + pt+1)‖2V̂t + α2t (1− β)2 ‖g(wt)‖2V̂ −1t .\ni.e., f(wt)− f(w∗)\n≤ β 1− β [f(wt−1)− f(wt)] + 1− β 2αt [‖w∗ − (wt + pt)‖2V̂t −‖w∗ − (wt+1 + pt+1)‖2V̂t ] + αt 2(1− β) ‖g(wt)‖2V̂ −1t .\nSumming this inequality from k = 1 to t, we obtain\nt∑ k=1 [f(wk)− f(w∗)]\n≤ β 1− β [f(w0)− f(wt)] + 1− β 2α1 ‖w∗ − (w1 + p1)‖2V̂1\n−1− β 2αt ‖w∗ − (wt+1 + pt+1)‖2V̂t + t∑\nk=1\nαk 2(1− β) ‖g(wk)‖2V̂ −1k\n+ d∑ i=1 t∑ k=2 (w∗i − (wk,i + pk,i))2( (1− β)v̂k,i 2αk − (1− β)v̂k−1,i 2αk−1 ).\ni.e., t∑\nk=1\n[f(wk)− f(w∗)]\n≤ β 1− β [f(w0)− f(wt)] + 1− β 2α ‖w∗ − (w1 + p1)‖2V̂1\n+ d∑ i=1 t∑ k=2 (w∗i − (wk,i + pk,i))2 1− β 2α ( √ kv̂k,i − √ k − 1v̂k−1,i)\n+ t∑ k=1\nα\n2(1− β) √ k ‖g(wk)‖2V̂ −1k .\n(28)\nUsing Lemma 7, we have\nt∑ k=1\nα\n2 √ k(1− β) ‖g(wk)‖2V̂ −1k ≤ d∑ i=1 α(2− γ) γ(1− β) ( √ tvt,i + δ) = α(2− γ) γ(1− β) d∑ i=1 (‖g1:t,i‖+ δ). (29)\nand since Q is a bounded set, there exists a positive number M0 > 0 such that\n‖w∗ − (wt+1 + pt+1)‖2 ≤M0,∀t ≥ 0. (30)\nFrom (28)(29)(30) we have,\nt∑ k=1 [f(wk)− f(w∗)]\n≤ β 1− β [f(w0)− f(wt)] + d∑ i=1 (1− β)v̂1,iM0 2α + α(2− γ) γ(1− β) d∑ i=1 (‖g1:t,i‖+ δ)\n+ d∑ i=1 (1− β)v̂t,i √ tM0 2α − d∑ i=1 (1− β)v̂1,iM0 2α .\ni.e., t∑ k=1 [f(wk)− f(w∗)] ≤ β 1− β [f(w0)−f(wt)]+ α(2− γ) γ(1− β) d∑ i=1 (‖g1:t,i‖+δ)+ (1− β)M0 2α d∑ i=1 (‖g1:t,i‖+δ).\nBy convexity of f(wk), we obtain\nf( 1\nt t∑ k=1 wk)−f(w∗) ≤ β (1− β)t [f(w0)−f(wt)]+ α(2− γ) γ(1− β)t d∑ i=1 (‖g1:t,i‖+δ)+ (1− β)M0 2αt d∑ i=1 (‖g1:t,i‖+δ).\nThis completes the proof of Theorem 6." }, { "heading": "A.4 EXPERIMENTS ON OPTIMIZING A SYNTHETIC CONVEX FUNCTION", "text": "A constrained convex optimization problem was constructed in (Harvey et al., 2019) to show that the optimal individual convergence rate of SGD is O( log t√\nt ). We will use example to illustrate the\nacceleration of HB.\nLet Q be unit ball in RT . For i ∈ [T ] and c ≥ 1, define the positive scalar parameters\nai = 1\n8c(T − i+ 1) bi =\n√ i\n2c √ T\nDefine f : Q→ R and hi ∈ RT for i ∈ [T + 1] by\nf(w) = max i∈[T ]\nh>i w where hi,j = aj , 1 ≤ j < i −bj , i = j < T\n0, i < j ≤ T\nObviously, the minimum value of f on the unit ball is non-positive because f(0) = 0. It can be proved f(wT ) ≥ log T32c√T . Set c = 2, the function value f(wt) v.s. iteration is illustrated in Figure 6, where the step size of GD is c√\nt and the parameters of the constrained HB (7) (α = 8) and AdaHB\n(12) (α = 0.08, γ = 0.9, δ = 10−8) are selected according to Theorem 3 and Theorem 5. As expected, the individual convergence of HB is much faster than that of PSG. We thus conclude that HB is an effective acceleration of GD in terms of the individual convergence." } ]
2,021
OPTIMAL CONVERGENCE OF ADAPTIVE POLYAK’S HEAVY-BALL METHODS
SP:26a9ea5bc6af46b1e59b1e34390a1bdb5a660312
[ "In this submission, the authors study the inversion of ReLU networks (where the output of the network is subject to an invertible activation function). This is an important task, for example for inverse problems using generative priors. The authors introduce spark-based conditions for the invertibility of each layer of the network, leveraging sparsity that is induced by ReLUs. The authors also introduce a novel layer wise inversion algorithm and provide provable recovery guarantees in both noisy and noiseless settings. Empirical results demonstrate the superiority of the proposed algorithm relative to baselines for inversion in particular parameter regimes. " ]
Deep generative models (e.g. GANs and VAEs) have been developed quite extensively in recent years. Lately, there has been an increased interest in the inversion of such a model, i.e. given a (possibly corrupted) signal, we wish to recover the latent vector that generated it. Building upon sparse representation theory, we define conditions that rely only on the cardinalities of the hidden layer and are applicable to any inversion algorithm (gradient descent, deep encoder, etc.), under which such generative models are invertible with a unique solution. Importantly, the proposed analysis is applicable to any trained model, and does not depend on Gaussian i.i.d. weights. Furthermore, we introduce two layer-wise inversion pursuit algorithms for trained generative networks of arbitrary depth, where one of them is accompanied by recovery guarantees. Finally, we validate our theoretical results numerically and show that our method outperforms gradient descent when inverting such generators, both for clean and corrupted signals.
[]
[ { "authors": [ "Aviad Aberdam", "Jeremias Sulam", "Michael Elad" ], "title": "Multi-layer sparse coding: the holistic way", "venue": "SIAM Journal on Mathematics of Data Science,", "year": 2019 }, { "authors": [ "David Bau", "Jun-Yan Zhu", "Jonas Wulff", "William Peebles", "Hendrik Strobelt", "Bolei Zhou", "Antonio Torralba" ], "title": "Seeing what a gan cannot generate", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Amir Beck" ], "title": "First-order methods in optimization, volume 25", "venue": null, "year": 2017 }, { "authors": [ "Amir Beck", "Marc Teboulle" ], "title": "A fast iterative shrinkage-thresholding algorithm for linear inverse problems", "venue": "SIAM journal on imaging sciences,", "year": 2009 }, { "authors": [ "Piotr Bojanowski", "Armand Joulin", "David Lopez-Paz", "Arthur Szlam" ], "title": "Optimizing the latent space of generative networks", "venue": "arXiv preprint arXiv:1707.05776,", "year": 2017 }, { "authors": [ "Ashish Bora", "Ajil Jalal", "Eric Price", "Alexandros G Dimakis" ], "title": "Compressed sensing using generative models", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Il Yong Chun", "Jeffrey A Fessler" ], "title": "Convolutional analysis operator learning: acceleration and convergence", "venue": "IEEE Transactions on Image Processing,", "year": 2019 }, { "authors": [ "Jeff Donahue", "Philipp Krähenbühl", "Trevor Darrell" ], "title": "Adversarial feature learning", "venue": "arXiv preprint arXiv:1605.09782,", "year": 2016 }, { "authors": [ "David L Donoho", "Michael Elad" ], "title": "Optimally sparse representation in general (nonorthogonal) dictionaries via `1 minimization", "venue": "Proceedings of the National Academy of Sciences,", "year": 2003 }, { "authors": [ "Michael Elad" ], "title": "Sparse and redundant representations: from theory to applications in signal and image processing", "venue": "Springer Science & Business Media,", "year": 2010 }, { "authors": [ "Simon Foucart", "Holger Rauhut" ], "title": "An invitation to compressive sensing. 
In A mathematical introduction to compressive sensing", "venue": null, "year": 2013 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Paul Hand", "Vladislav Voroninski" ], "title": "Global guarantees for enforcing deep generative priors by empirical risk", "venue": "IEEE Transactions on Information Theory,", "year": 2019 }, { "authors": [ "Paul Hand", "Oscar Leong", "Vlad Voroninski" ], "title": "Phase retrieval under a generative prior", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Wen Huang", "Paul Hand", "Reinhard Heckel", "Vladislav Voroninski" ], "title": "A provably convergent scheme for compressive sensing under random generative priors", "venue": "arXiv preprint arXiv:1812.04176,", "year": 2018 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "arXiv preprint arXiv:1710.10196,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Fabian Latorre", "Armin eftekhari", "Volkan Cevher" ], "title": "Fast and provable admm for learning with generative priors", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Qi Lei", "Ajil Jalal", "Inderjit S Dhillon", "Alexandros G Dimakis" ], "title": "Inverting deep generative models, one layer at a time", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "arXiv preprint arXiv:1802.05957,", "year": 2018 }, { "authors": [ "Andrei Nicolae" ], "title": "Plu: The piecewise linear unit activation function", "venue": "arXiv preprint arXiv:1809.09534,", "year": 2018 }, { "authors": [ "Vardan Papyan", "Yaniv Romano", "Michael Elad" ], "title": "Convolutional neural networks analyzed via convolutional sparse coding", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "Yaniv Romano", "Aviad Aberdam", "Jeremias Sulam", "Michael Elad" ], "title": "Adversarial noise attacks of deep learning architectures: Stability analysis via sparse-modeled signals", "venue": "Journal of Mathematical Imaging and Vision,", "year": 2019 }, { "authors": [ "Viraj Shah", "Chinmay Hegde" ], "title": "Solving linear inverse problems using gan priors: An algorithm with provable guarantees", "venue": "In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Dror Simon", "Aviad Aberdam" ], "title": "Barycenters of natural images constrained wasserstein barycenters for image morphing", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Jeremias Sulam", "Vardan Papyan", "Yaniv Romano", 
"Michael Elad" ], "title": "Multilayer convolutional sparse modeling: Pursuit and dictionary learning", "venue": "IEEE Transactions on Signal Processing,", "year": 2018 }, { "authors": [ "Jeremias Sulam", "Aviad Aberdam", "Amir Beck", "Michael Elad" ], "title": "On multi-layer basis pursuit, efficient algorithms and convolutional neural networks. IEEE transactions on pattern analysis and machine intelligence, 2019", "venue": null, "year": 2019 }, { "authors": [ "Joel A Tropp" ], "title": "Just relax: Convex programming methods for identifying sparse signals in noise", "venue": "IEEE transactions on information theory,", "year": 2006 }, { "authors": [ "Yan Wu", "Mihaela Rosca", "Timothy Lillicrap" ], "title": "Deep compressed sensing", "venue": "arXiv preprint arXiv:1905.06723,", "year": 2019 }, { "authors": [ "Bo Xin", "Yizhou Wang", "Wen Gao", "David Wipf", "Baoyuan Wang" ], "title": "Maximal sparsity with deep networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Jun-Yan Zhu", "Philipp Krähenbühl", "Eli Shechtman", "Alexei A Efros" ], "title": "Generative visual manipulation on the natural image manifold", "venue": "In European Conference on Computer Vision,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "In the past several years, deep generative models, e.g. Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) and Variational Auto-Encoders (VAEs) (Kingma & Welling, 2013), have been greatly developed, leading to networks that can generate images, videos, and speech voices among others, that look and sound authentic to humans. Loosely speaking, these models learn a mapping from a random low-dimensional latent space to the training data distribution, obtained in an unsupervised manner.\nInterestingly, deep generative models are not used only to generate arbitrary signals. Recent work rely on the inversion of these models to perform visual manipulations, compressed sensing, image interpolation, and more (Zhu et al., 2016; Bora et al., 2017; Simon & Aberdam, 2020). In this work, we study this inversion task. Formally, denoting the signal to invert by y ∈ Rn, the generative model as G : Rn0 → Rn, and the latent vector as z ∈ Rn0 , we study the following problem:\nz∗ = argmin z\n1 2 ‖G(z)− y‖22, (1)\nwhere G is assumed to be a feed-forward neural network.\nThe first question that comes to mind is whether this model is invertible, or equivalently, does Equation 1 have a unique solution? In this work, we establish theoretical conditions that guarantee the invertibility of the model G. Notably, the provided theorems are applicable to general non-random generative models, and do not depend on the chosen inversion algorithm.\nOnce the existence of a unique solution is recognized, the next challenge is to provide a recovery algorithm that is guaranteed to obtain the sought solution. A common and simple approach is to draw a random vector z and iteratively update it using gradient descent, opting to minimize Equation 1 (Zhu et al., 2016; Bora et al., 2017). Unfortunately, this approach has theoretical guarantees only in limited scenarios (Hand et al., 2018; Hand & Voroninski, 2019), since the inversion problem is generally non-convex. An alternative approach is to train an encoding neural network that maps images to their latent vectors (Zhu et al., 2016; Donahue et al., 2016; Bau et al., 2019; Simon & Aberdam, 2020); however, this method is not accompanied by any theoretical justification.\nWe adopt a third approach in which the generative model is inverted in an analytical fashion. Specifically, we perform the inversion layer-by-layer, similar to Lei et al. (2019). Our approach is based on the observation that every hidden layer is an outcome of a weight matrix multiplying a sparse\nvector, followed by a ReLU activation. By utilizing sparse representation theory, the proposed algorithm ensures perfect recovery in the noiseless case and bounded estimation error in the noisy one. Moreover, we show numerically that our algorithm outperforms gradient descent in several tasks, including reconstruction of noiseless and corrupted images.\nMain contributions: The contributions of this work are both theoretical and practical. We derive theoretical conditions for the invertiblity of deep generative models by ensuring a unique solution for the inversion problem defined in Equation 1. In short, these conditions rely on the growth of the non-zero elements of consecutive hidden layers by a factor of 2 for trained networks and by any constant greater than 1 for random models. 
Then, by leveraging the inherent sparsity of the hidden layers, we introduce a layerwise inversion algorithm with provable guarantees in the noiseless and noisy settings for fully-connected generators. To the best of our knowledge, this is the first work that provides such guarantees for general (non-random) models, addressing both the conceptual inversion and provable algorithms for solving Equation 1. Finally, we provide numerical experiments, demonstrating the superiority of our approach over gradient descent in various scenarios." }, { "heading": "1.1 RELATED WORK", "text": "Inverting deep generative models: A tempting approach for solving Equation 1 is to use first order methods such as gradient descent. Even though this inversion is generally non-convex, the works in Hand & Voroninski (2019); Hand et al. (2018) show that if the weights are random then, under additional assumptions, no spurious stationary points exist, and thus gradient descent converges to the optimum. A different analysis, given in Latorre et al. (2019), studies the case of strongly smooth generative models that are near isometry. In this work, we study the inversion of general (non-random and non-smooth) ReLU activated generative networks, and provide a provable algorithm that empirically outperforms gradient descent. A close but different line of theoretical work analyzes the compressive sensing abilities of trained deep generative networks (Shah & Hegde, 2018; Bora et al., 2017); however, these works assume that an ideal inversion algorithm, solving Equation 1, exists. Different works Bojanowski et al. (2017); Wu et al. (2019) suggest training procedures that result with generative models that can be easily inverted. Nevertheless, in this work we do not assume anything on the training procedure itself, and only rely on the weights of the trained model.\nLayered-wise inversion: The closest work to ours, and indeed its source of inspiration, is Lei et al. (2019), which proposes a novel scheme for inverting generative models. By assuming that the input signal was corrupted by bounded noise in terms of `1 or `∞, they suggest inverting the model using linear programs layer-by-layer. That said, to assure a stable inversion, their analysis is restricted to cases where: (i) the network weights are Gaussian i.i.d. variables; (ii) the layers expand such that the number of non-zero elements in each layer is larger than the size of the entire layer preceding it; and (iii) that the last activation function is either ReLU or leaky-ReLU. Unfortunately, as mentioned in their work, these three assumptions often do not hold in practice. In this work, we do not rely on the distribution of the weights nor on the chosen activation function of the last layer. Furthermore, we relax the expansion assumption as to rely only on the expansion of the number of non-zero elements. This relaxation is especially needed in the last hidden layer, which is typically larger than the image size.\nNeural networks and sparse representation: In the search for a profound theoretical understanding for deep learning, a series of papers suggested a connection between neural networks and sparse coding, by demonstrating that the forward pass of a neural network is in fact a pursuit for a multilayer sparse representation (Papyan et al., 2017; Sulam et al., 2018; Chun & Fessler, 2019; Sulam et al., 2019; Romano et al., 2019; Xin et al., 2016). 
In this work, we expand this proposition by showing that the inversion of a generative model is based on sequential sparse coding steps." }, { "heading": "2 THE GENERATIVE MODEL", "text": "Notations: We use bold uppercase letters to represent matrices, and bold lowercase letters to represent vectors. The vector wj represents the jth column in the matrix W. Similarly, the vector wi,j represents the jth column in the matrix Wi. The activation function ReLU is the entry-wise operator ReLU(u) = max{u,0}. We denote by spark(W) the smallest number of columns in W that are linearly-dependent, and by ‖x‖0 the number of non-zero elements in x. The mutual\ncoherence of a matrix W is defined as: µ(W) = maxi 6=j |wTi wj|\n‖wi‖2‖wj‖2 . Finally, we define xS and\nWSi as the supported vector and the row-supported matrix according to the set S, and denote by Sc the complementary set of S. Problem Statement: We consider a typical generative scheme G : Rn0 → Rn of the form:\nx0 = z,\nxi+1 = ReLU(Wixi), for all i ∈ {0, . . . , L− 1}, G(z) = φ(WLxL),\n(2)\nwhere xi ∈ Rni , {xi}L−1i=1 are the hidden layers, Wi ∈ Rni+1×ni are the weight matrices (nL+1 = n), x0 = z ∈ Rn0 is the latent vector that is usually randomly selected from a normal distribution, z ∼ N (0, σ2In0), and φ is an invertible activation function, e.g. tanh, sigmoid, or piece-wise linear. Given a sample x = G(z), that was created by the generative model above, we aim to recover its latent vector z. Note that each hidden vector in the model is produced by a ReLU activation, leading to hidden layers that are inherently sparse. This observation supports our approach to study this model utilizing sparse representation theory. In what follows, we use this observation to derive theoretical statements on the invertibility and the stability of this problem, and to develop pursuit algorithms that are guaranteed to restore the original latent vector." }, { "heading": "3 INVERTIBILITY AND UNIQUENESS", "text": "We start by addressing this question: “Is this generative process invertible?”. In other words, when given a signal that was generated by the model, x = G(z∗), we know that a solution z∗ to the inverse problem exists; however, can we ensure that this is the only one? Theorem 1 below (its proof is given in Appendix A) provides such guarantees, which are based on the sparsity level of the hidden layers and the spark of the weight matrices (see Section 2). Importantly, this theorem is not restricted to a specific pursuit algorithm; it can rather be used for any restoration method (gradient descent, deep encoder, etc.) to determine whether the recovered latent vector is the unique solution.\nDefinition 1 (sub-spark). Define the s-sub-spark of a matrix W as the minimal spark of any subset S of rows of cardinality |S| = s, sub-spark(W, s) = min|S|=s spark(WS). Definition 2 (sub-rank). Define the s-sub-rank of a matrix W as the minimal rank over any subset S of rows of cardinality |S| = s, sub-rank(W, s) = min|S|=s rank(WS). Theorem 1 (Uniqueness). Consider the generative scheme described in Equation 2 and a signal x = G(z∗) with a corresponding set of representations {x∗i }Li=1 that satisfy:\n(i) sL = ‖x∗L‖0 < spark(WL) 2 .\n(ii) si = ‖x∗i ‖0 < sub-spark(Wi,si+1) 2 , for all i ∈ {1, . . . , L− 1}.\n(iii) n0 = sub-rank(W0, s1) ≤ s1.\nThen, z∗ is the unique solution to the inverse problem that meets these sparsity conditions.\nTheorem 1 is the first of its kind to provide uniqueness guarantees for general non-statistical weight matrices. 
Moreover, it only requires an expansion of the layer cardinalities as opposed to Huang et al. (2018); Hand & Voroninski (2019) and Lei et al. (2019) that require dimensionality expansion that often does not hold for the last layer (typically n < nL).\nA direct corollary of Theorem 1 is in the case of random matrices. In such case, the probability of heaving n linearly dependent columns is essentially zero (Elad, 2010, Chapter 2). Hence, the conditions of Theorem 1 become:\n(i) sL < n+ 1\n2 . (ii) si <\nsi+1 + 1\n2 . (iii) s1 ≥ n0. (3)\nIn fact, since singular square matrices have Lebesgue measure zero, this corollary holds for almost all set of matrices.\nIn practice, to allow for a sufficient increase in the cardinalities of the hidden layers, their dimensions should expand as well, excluding the last layer. For example, if the dimensions of the hidden layers increase by a factor of 2, as long as the hidden layers preserve a constant percentage of non-zero elements, Theorem 1 holds almost surely. Notably, this is the common practice in various generative architectures, such as DC-GAN Radford et al. (2015) and PGAN Karras et al. (2017).\nNevertheless, in the random setting, we can further relax the above conditions by utilizing a theorem by Foucart & Rauhut (2013). This theorem considers a typical sparse representation model with a random dictionary and states that a sparse representation is unique as long as its cardinality is smaller than the signal dimension. Therefore, as presented in Theorem 2, in the random setting the cardinality across the layers need to grow only by a constant, i.e. si < si+1 and sL < n.\nTheorem 2 (Uniqueness for Random Weight Matrices). Assume that the weight matrices comprise of random independent and identically distributed entries (say Gaussian). If the representations of a signal x = G(z∗) satisfy:\n(i) sL = ‖xL‖0 < n.\n(ii) si = ‖xi‖0 < si+1, for all i ∈ {1, . . . , L− 1}.\n(iii) s1 = ‖x1‖0 ≥ n0,\nthen, with probability 1, the inverse problem has a unique solution that meets these conditions.\nThe above theorem states that to ensure a unique global minimum in the stochastic case, the number of nonzero elements should expand by only a single parameter. The proof of this theorem follows the same protocol as Theorem 1’s proof, while replacing the spark-based uniqueness (Donoho & Elad, 2003) with Foucart & Rauhut (2013). As presented in Section 6.1, these conditions are very effective in predicting whether the generative process is invertible or not, regardless of the recovery algorithm used." }, { "heading": "4 PURSUIT GUARANTEES", "text": "In this section we provide an inversion algorithm supported by reconstruction guarantees for the noiseless and noisy settings. To reveal the potential of our approach, we first discuss the performance of an Oracle, in which the true supports of all the hidden layers are known, and only their values are missing. This estimation can be performed by a sequence of simple linear projections on the known supports. Note that already in the first step of estimating xL, we can realize the advantage of utilizing the inherent sparsity of the hidden layers. Here, the reconstruction error of the Oracle is proportional to sL = ‖xL‖0, whereas solving a least square problem, as suggested in Lei et al. (2019), results with an error that is proportional to nL. For more details see Appendix B. Algorithm 1 Layered Basis-Pursuit Input: y = G(z) + e ∈ Rn, where ‖e‖2 ≤ , and sparsity levels {si}Li=1. 
First step: x̂L = argminx 12 ∥∥φ−1(y)−WLx∥∥22 + λL ‖x‖1, with λL = 2` . Set ŜL = Support(x̂L) and L = (3+ √ 1.5) √ sL minj‖wL,j‖2 ` . General step: For any layer i = L− 1, . . . , 1 execute: 1. x̂i = argminx 1 2 ∥∥∥x̂Ŝi+1i+1 −WŜi+1i x∥∥∥2 2 + λi ‖x‖1, with λi = 2 i+1. 2. Set Ŝi = Support(x̂i) and i = (3+ √ 1.5) √ si minj ∥∥∥∥wŜi+1i,j ∥∥∥∥ 2 i+1. Final step: Set ẑ = argminz 12 ∥∥∥x̂Ŝ11 −WŜ10 z∥∥∥2 2 .\nIn what follows, we propose to invert the model by solving sparse coding problems layer-by-layer, while leveraging the sparsity of all the intermediate feature vectors. Specifically, Algorithm 1 describes a layered Basis-Pursuit approach, and Theorem 3 provides reconstruction guarantees for this algorithm. The proof of this theorem is given in Appendix C. In Corollary 1 we provide guarantees for this algorithm when inverting non-random generative models in the noiseless case.\nDefinition 3 (Mutual Coherence of Submatrix). Define µs(W) as the maximal mutual coherence of any submatrix of W with s rows, µs(W) = max|S|=s µ(WS).\nTheorem 3 (Layered Basis-Pursuit Stability). Suppose that y = x + e, where x = G(z) is an unknown signal with known sparsity levels {si}Li=1, and ‖e‖2 ≤ . Let ` be the Lipschitz constant of φ−1 and define L+1 = ` . If in each midlayer i ∈ {1, . . . , L}, si < 13µsi+1 (Wi) , then,\n• The support of x̂i is a subset of the true support, Ŝi ⊆ Si;\n• The vector x̂i is the unique solution for the basis-pursuit; • The midlayer’s error satisfies ‖x̂i − xi‖2 < i, where i = (3+ √ 1.5) √ si\nminj ∥∥∥∥wŜi+1i,j ∥∥∥∥ 2 i+1.\n• the recovery error on the latent space is upper bounded by\n‖ẑ− z‖2 < ` √ ϕ L∏ i=1 (3 + √ 1.5) √ sj minj ∥∥∥wŜi+1i,j ∥∥∥ 2 , where ϕ = λmin ( (WŜ10 ) TWŜ10 ) > 0. (4)\nCorollary 1 (Layered Basis-Pursuit – Noiseless Case). Let x = G(z) with sparsity levels {si}Li=1, and assume that si < 1/3µsi+1(Wi) for all i ∈ {1, . . . , L}, and that ϕ = λmin((W Ŝ1 0 )\nTWŜ10 ) > 0. Then Algorithm 1 recovers the latent vector ẑ = z perfectly." }, { "heading": "5 THE LATENT-PURSUIT ALGORITHM", "text": "While Algorithm 1 provably inverts the generative model, it only uses the non-zero elements xŜi+1i+1 to estimate the previous layer xi. Here we present the Latent-Pursuit algorithm, which expands the Layered Basis-Pursuit algorithm by imposing two additional constraints. First, the Latent-Pursuit sets inequality constraints, WS c i+1\ni xi ≤ 0, that emerge from the ReLU activation. Second, recall that the ReLU activation constrains the midlayers to have nonnegative values, xi ≥ 0. Furthermore, we refrain from inverting the activation function directly φ−1 since practically, this inversion might be unstable, e.g. when using tanh. In what follows, we describe each of the three parts of the proposed algorithm: (i) the image layer; (ii) the middle layers; and (iii) the first layer.\nStarting with the inversion of the last layer, i.e. the image layer, we need to solve\nxL = argmin x\n1 2 ‖y − φ(WLx)‖22 + λL1 Tx, s. t. x ≥ 0, (5)\nwhere 1Tx represents an `1 regularization term under the nonnegative constraint. Assuming that φ is smooth and strictly monotonic increasing, this problem is a smooth convex function with separable constraints, and therefore, it can be solved using a projected gradient descent algorithm. In particular, we employ FISTA (Beck & Teboulle, 2009), as described in Algorithm 5 in Appendix D.\nWe move on to the middle layers, i.e. estimating xi for i ∈ {1, . . . , L− 1}. 
Here, both the approximated vector and the given signal are assumed to result from a ReLU activation function. This leads us to the following problem:\nxi = argmin x\n1\n2 ∥∥∥xŜi+1 −WŜi x∥∥∥2 2 + λi1 Tx, s. t. x ≥ 0, WŜ c i x ≤ 0 (6)\nwhere Ŝ = Ŝi+1 is the support of the output of the layer to be inverted, and Ŝc = Ŝci+1 is its complementary. To solve this problem we introduce an auxiliary variable a = WS c\ni x, leading to the following augmented Lagrangian form:\nmin x,a,u\n1\n2 ∥∥xSi+1 −WSi x∥∥22 + λi1Tx + ρi2 ∥∥∥a−WSci x + u∥∥∥22 , s. t. x ≥ 0, a ≤ 0. (7) This optimization problem could be solved using ADMM (Boyd et al., 2011), however, it would require inverting a matrix of size ni × ni, which might be costly. Alternatively, we employ a more general method, called alternating direction proximal method of multipliers (Beck, 2017, Chapter 15), in which a quadratic proximity term, 12 ∥∥x− x(k)∥∥ Q , is added to Equation 7. By setting\nQ = αI−WSi T WSi +βI−ρiWS c i T WS c i , with α+β ≥ λmax(WSi T WSi +ρiW Sc i T WS c\ni ), (8) we get that Q is positive semidefinite. This leads to the Linearized-ADMM algorithm in Algorithm 2 which is guaranteed to converge to the optimal solution of Equation 7 (see details in Appendix D).\nWe now recovered all the hidden layers, and only the latent vector z is left to be estimated. For this inversion step we adopt a MAP estimator utilizing the fact that z is drawn from a normal distribution:\nz = argmin z\n1\n2 ∥∥xS1 −WS0 z∥∥22 + γ2 ‖z‖22 , s. t. WSc0 z ≤ 0, (9) with γ > 0. This problem can be solved by the Linearized-ADMM algorithm described above (see details in Appendix D), except for the update of x in Algorithm 2, which becomes:\nz(k+1) ← 1 α+ β + γ\n( (α+β)z(k)−WS0 T (WS0 z (k)−xS1 )−ρ1WS c 0 T (WS c 0 z (k)−a(k)−u(k)) ) .\n(10)\nOnce the latent vector z and all the hidden layers {xi}Li=1 are recovered, we propose an optional step to improve the final estimation. In this step, which we refer to as debiasing, we freeze the recovered supports and only optimize over the non-zero values in an end-to-end fashion. This is equivalent to computing the Oracle, only here the supports are not known, but rather estimated using the proposed pursuit. Algorithm 3 provides a short description of the entire proposed inversion method.\nAlgorithm 2 Latent Pursuit: Midlayer Inversion Initialization: x(0) ∈ Rni , u(0),a(0) ∈ Rsi+1 , ρi > 0, and α, β satisfying Equation 8. Until converged: for k = 0, 1, . . . execute:\n1. x(k+1) ← ReLU ( x(k) − 1\nα+β WSi T (WSi x (k) −\nxSi+1)− ρiα+βW Sc i T (WS c i x (k)−a(k)−u(k))− λi α+β\n) .\n2. a(k+1) ← −ReLU ( u(k) −WS c i x (k+1) ) .\n3. u(k+1) ← u(k) + a(k+1) −WS c i x (k+1).\nAlgorithm 3 The Latent-Pursuit Algorithm\nInitialization: Set λi > 0 and ρi > 0. First step: Estimate xL, i.e. solve Equation 5 using Algorithm 5. Middle step: For layers i = L − 1, . . . , 1, estimate xi using Algorithm 2. Final step: Estimate z using Algorithm 2 but with the x-step of Equation 10. Debiasing (optional): Set z ← argminz 1 2 ∥∥∥y − φ((∏0i=L WŜi+1i ) z)∥∥∥2 2 ." }, { "heading": "6 NUMERICAL EXPERIMENTS", "text": "We demonstrate the effectiveness of our approach through numerical experiments, where our goal is twofold. First, we study random generative models and show the ability of the uniqueness claim above (Theorem 2) to predict when both gradient descent and our approach fail to invert G as the inversion is not unique. 
In addition, we show that in these random networks and under the conditions of Corollary 1, the latent vector is perfectly recovered by both the Layered Basis-Pursuit and the Latent-Pursuit algorithm. Our second goal is to demonstrate the advantage of Latent-Pursuit over gradient descent for trained generative models, in two settings: noiseless and image inpainting." }, { "heading": "6.1 RANDOM WEIGHTS", "text": "First, we validate the above theorems on random generative models, by considering a framework similar to Huang et al. (2018) and Lei et al. (2019). Here, the generator is composed of two layers:\nx = G(z) = tanh(W2 ReLU(W1z)), (11)\nwhere the dimensions of the network are n = 625, n1 varies between 50 to 1000 and n0 ∈ {100, 200}. The weight matrices W1 and W2 are drawn from an iid Gaussian distribution. We generate 512 signals by feeding the generator with latent vectors from a Gaussian distribution, and then test the performance of the inversion of these signals in terms of SNR for all the layers, using gradient descent, Layered Basis-Pursuit (Algorithm 1), and Latent-Pursuit (Algorithm 3). For gradient descent, we use the smallest step-size from {1e− 1, 1e0, 1e1, 1e2, 1e3, 1e4} for 10, 000 steps that resulted with a gradient norm smaller than 1e − 9. For Layered Basis-Pursuit we use the best λ1 from {1e− 5, 7e− 6, 3e− 6, 1e− 6, 0}, and for Latent-Pursuit, we use λ1 = 0, ρ = 1e− 2 and γ = 0. In Layered Basis-Pursuit and Latent-Pursuit we preform a debiasing step in a similar manner to gradient descent. Figure 2 marks median results in the central line, while the ribbons show 90%, 75%, 25%, and 10% quantiles. In these experiments the sparsity level of the hidden layer is approximately 50%, s1 = ‖x1‖0 ≈ n1 2 , due to the weights being random. In what follows, we split the analysis of Figure 2 to three segments. Roughly, these segments are s1 < n0, n0 < s1 < n, and n < s1 as suggested by the theoretical results given in Theorem 2 and Corollary 1.\nIn the first segment, Figure 2 shows that all three methods fail. Indeed, as suggested by the uniqueness conditions introduced in Theorem 2, when s1 < n0, the inversion problem of the first layer does not have a unique global minimizer. The dashed vertical line in Figure 2 marks the spot where n1 2 = n0. Interestingly, we note that the conclusions in (Huang et al., 2018; Lei et al., 2019), suggesting that large latent spaces cause gradient descent to fail, are imprecise and valid only for fixed hidden layer size. This can be seen by comparing n0 = 100 to n0 = 200. As a direct outcome of our uniqueness study and as demonstrated in Figure 2, gradient descent (and any other algorithm) fails when the ratio between the cardinalities of the layers is smaller than 2. Nevertheless, Figure 2\nexposes an advantage for using our approach over gradient descent. Note that our methods successfully invert the model for all the layers that follow the layer for which the sparsity assumptions do not hold, and fail only past that layer, since only then uniqueness is no longer guaranteed. However, since gradient descent starts at a random location, all the layers are poorly reconstructed.\nFor the second segment, we recall Theorem 3 and in particular Corollary 1. There we have shown that Layered Basis-Pursuit and Latent-Pursuit are guaranteed to perfectly recover the latent vector as long as the cardinality of the midlayer s1 = ‖x1‖0 satisfies n0 ≤ s1 ≤ 1/3µ(W1). 
Indeed, Figure 2 demonstrates the success of these two methods even when s1 ≈ n12 is greater than the worst-case bound 1/3µ(W1). Moreover, this figure validates that Latent-Pursuit, which leverages additional properties of the signal, outperforms Layered Basis-Pursuit, especially when s1 is large. Importantly, while the analysis in Lei et al. (2019) suggests that n has to be larger than n1, in practice, all three methods succeed to invert the signal even when n1 > n. This result highlights the strength of the proposed analysis that leans on the cardinality of the layers rather than their size.\nWe move on to the third and final segment, where the size of hidden layer is significantly larger than the dimension of the image. Unfortunately, in this scenario the layer-wise methods fail, while gradient descent succeeds. Note that, in this setting, inverting the last layer solely is an ambitious (actually, impossible) task; however, since gradient descent solves an optimization problem of a much lower dimension, it succeeds in this case as well. This experiment and the accompanied analysis suggest that a hybrid approach, utilizing both gradient descent and the layered approach, might be of interest. We defer a study of such an approach for future work." }, { "heading": "6.2 TRAINED NETWORK", "text": "To demonstrate the practical contribution of our work, we experiment with a generative network trained on the MNIST dataset. Our architecture is composed of fully connected layers of sizes 20, 128, 392, and finally an image of size 28 × 28 = 784. The first two layers include batchnormalization1 and a ReLU activation function, whereas the last one includes a piecewise linear unit (Nicolae, 2018). We train this network in an adversarial fashion using a fully connected discriminator and spectral normalization (Miyato et al., 2018). We should note that images produced by fully connected models are typically not as visually appealing as ones generated by convolutional architectures. However, since the theory provided here focuses on fully connected models, this setting was chosen for the experimental section, similar to other previous work (Huang et al., 2018; Lei et al., 2019) that study the inversion process.\nNetwork inversion: We start with the noiseless setting and compare the Latent-Pursuit algorithm to the Oracle (which knows the exact support of each layer) and to gradient descent. To invert a signal and compute its reconstruction quality, we first invert the entire model and estimate the latent vector. Then, we feed this vector back to the model to estimate the hidden representations and the reconstructed image. For our algorithm we use ρ = 1e − 2 for all layers and 10, 000 iterations of debiasing. For gradient-descent run, we use 10, 000 iterations, momentum of 0.9 and a step size of 1e− 1 that gradually decays to assure convergence. Overall, we repeat this experiment 512 times. Figure 3a demonstrates the reconstruction error of the latent vector. First, we observe that the performance of our inversion algorithm is on par with those of the Oracle. Moreover, not only does our approach performs much better than gradient descent, but in many experiments the latter fails utterly. In Appendix E.1 we provide reconstruction error for all the layers followed by image samples.\nA remark regarding the run-time of these algorithms is in place. 
Using an Nvidia 1080Ti GPU, the proposed Latent-Pursuit algorithm took approximately 15 seconds per layer to converge for a total of approximately 75 seconds to complete, including the debiasing step, for all 512 experiments. On the other hand, gradient-descent took approximately 30 seconds to conclude.\nImage inpainting: We continue our experiments with image inpainting, i.e. inverting the network and reconstructing a clean signal when only some of its pixels are known. First, we apply a random mask in which 45% of the pixels are randomly concealed. Since the number of known pixels is still larger than the number of non-zero elements in the layer preceding it, our inversion algorithm usually reconstructs the image successfully as suggested by Figure 3b. In this experiment, we perform slightly worse than the Oracle, which is not surprising considering the information disparity between\n1Note that after training, batch-normalization is a simple linear operation.\nthe two. As for gradient descent, we see similar results to the ones received in the non-corrupted setting. Appendix E.2 provides image samples and reconstruction comparison across all layers. Finally, we repeat the above experiment using a deterministic mask that conceals the upper ∼ 45% of each image (13 out of 28 rows). The results of this experiment, which are provided in Figure 3c and Appendix E.3, lead to similar conclusions as in the previous experiment. Indeed, since the model contains fully connected layers, we expect the last two experiments to show comparable results." }, { "heading": "7 CONCLUSIONS", "text": "In this paper we have introduced a novel perspective regarding the inversion of deep generative networks and its connection to sparse representation theory. Building on this, we have proposed novel invertibility guarantees for such a model for both random and trained networks. We have accompanied our analysis by novel pursuit algorithms for this inversion and presented numerical experiments that validate our theoretical claims and the superiority of our approach compared to the more classic gradient descent. We believe that the insights underlining this work could lead to a broader activity which further improves the inversion of these models in a variety of tasks." }, { "heading": "A THEOREM 1: PROOF", "text": "Proof. The main idea of the proof is to show that under the conditions of Theorem 1 the inversion task at every layer i ∈ {1, . . . , L + 1} has a unique global minimum. For this goal we utilize the well-known uniqueness guarantee from sparse representation theory.\nLemma 1 (Sparse Representation - Uniqueness Guarantee Donoho & Elad (2003); Elad (2010)). If a system of linear equations y = Wx has a solution x satisfying ‖x‖0 < spark(W)/2, then this solution is necessarily the sparset possible.\nUsing the above Lemma, we can conclude that if xL obeys ‖xL‖0 = sL < spark(WL)/2, then xL is the unique vector that has at most sL nonzeros, while satisfying the equation φ−1(x) = WLxL.\nMoving on to the previous layer, we can employ again the above Lemma for the supported vector xSLL . This way, we can ensure that xL−1 is the unique sL−1-sparse solution of x SL L = W SL L−1xL−1 as long as\nsL−1 = ‖xL−1‖0 < spark(WSLL−1)\n2 . (12)\nHowever, the condition sL−1 = ‖xL−1‖0 < sub-spark(WL−1,sL)\n2 implies that the above necessarily holds. This way we can ensure that each layer i, i ∈ {1, . . . , L− 1} is the unique sparse solution.\nFinally, in order to invert the first layer we need to solve xS11 = W S1 0 z. 
If W S1 0 has full columnrank, this system either has no solution or a unique one. In our case, we do know that a solution exists, and thus, necessarily, it is unique. A necessary but insufficient condition for this to be true is s1 ≥ n0. The additional requirement sub-rank(W0, s1) = n0 ≤ s1 is sufficient for z to be the unique solution, and this concludes the proof." }, { "heading": "B THE ORACLE ESTIMATOR", "text": "The motivation for studying the recovery ability of the Oracle is that it can reveal the power of utilizing the inherent sparsity of the feature maps. Therefore, we analyze the layer-wise Oracle estimator described in Algorithm 4, which is similar to the layer-by-layer fashion we adopt in both the Layered Basis-Pursuit (Algorithm 1) and in the Latent-Pursuit (Algorithm 3). In this analysis we assume that the contaminating noise is white additive Gaussian.\nThe noisy signal y carries an additive noise with energy proportional to its dimension, σ2n. Theorem 4 below suggests that the Oracle can attenuate this noise by a factor of n0n , which is typically much smaller than 1. Moreover, the error in each layer is proportional to its cardinality σ2si. These results are expected, as the Oracle simply projects the noisy signal on low-dimensional subspaces of known\nAlgorithm 4 The Layered-Wise Oracle Input: y = G(z) + e ∈ Rn, and supports of each layer {Si}Li=1. First step: x̂L = argminx 12\n∥∥φ−1(y)− W̄Lx∥∥22, where W̄L is the column supported matrix WL[:,SL]. Intermediate steps: For any layer i = L−1, . . . , 1, set x̂i = argminx 12 ∥∥∥x̂Si+1i+1 − W̄ix∥∥∥2 2 , where W̄i is the row and column supported matrix Wi[Si+1,Si]. Final step: Set ẑ = argminz 12 ∥∥∥x̂S11 −WS10 z∥∥∥2 2 .\ndimension. That said, this result reveals another advantage of employing the sparse coding approach over solving least squares problems, as the error can be proportional to si rather than to ni.\nTheorem 4 (The Oracle). Given a noisy signal y = G(z)+e, where e ∼ N (0, σ2I), and assuming known supports {Si}Li=1, the recovery errors satisfy 2:\nσ2∏L j=i λmax(W̄ T j W̄j) si ≤ E ‖x̂i − xi‖22 ≤ σ2∏L j=i λmin(W̄ T j W̄j) si, (13)\nfor i ∈ {1, . . . , L}, where W̄i is the row and column supported matrix, Wi[Si+1,Si]. The recovery error bounds for the latent vector are similarly given by:\nσ2∏L j=0 λmax(W̄ T j W̄j) n0 ≤ E ‖ẑ− z‖22 ≤ σ2∏L j=0 λmin(W̄ T j W̄j) n0. (14)\nProof. Assume y = x + e with x = G(z), then the Oracle for the Lth layer is x̂SL = W̄ † Ly. Since y = W̄Lx S L+e, we get that x̂ S L = x S L+ ẽL, where ẽL = W̄ † Le, and ẽL ∼ N (0, σ2(W̄ T LW̄L)\n−1). Therefore, using the same proof technique as in Aberdam et al. (2019), the upper bound on the recovery error in the Lth layer is:\nE ‖x̂L − xL‖22 = σ 2 trace((W̄ T LW̄L) −1) ≤ σ2 sL λmin(W̄ T LW̄L) . (15)\nUsing the same approach we can derive the lower bound by using the largest eigenvalue of W̄TLW̄L. In a similar fashion, we can write x̂Si = x S i + ẽi for all i ∈ {0, . . . , L − 1}, where ẽi = A[i,L]e and A[i,L] , W̄ † iW̄ † i+1 · · ·W̄ † L. 
" }, { "heading": "C THEOREM 3: PROOF", "text": "Proof. We first recall the stability guarantee from Tropp (2006) for the basis-pursuit.

Lemma 2 (Basis Pursuit Stability, Tropp (2006)). Let x* be an unknown sparse representation with known cardinality ‖x*‖_0 = s, and let y = Wx* + e, where W is a matrix with unit-norm columns and ‖e‖_2 ≤ ε. Assume the mutual coherence of the dictionary W satisfies s < 1/(3µ(W)). Let x̂ = argmin_x ½‖y - Wx‖_2² + λ‖x‖_1, with λ = 2ε. Then, x̂ is unique, the support of x̂ is a subset of the support of x*, and

‖x* - x̂‖_∞ < (3 + √1.5)ε.    (17)

In order to use the above lemma in our analysis we need to modify it such that W does not need to be column-normalized and such that the error is ℓ₂- and not ℓ∞-bounded. For the first modification we decompose a general unnormalized matrix W as W̃D, where W̃ is the normalized matrix, w̃_i = w_i/‖w_i‖_2, and D is a diagonal matrix with d_i = ‖w_i‖_2. Using the above lemma we get that

‖D(x* - x̂)‖_∞ < (3 + √1.5)ε.    (18)

Thus, the error in x̂ is bounded by

‖x* - x̂‖_∞ < (3 + √1.5)ε / min_i ‖w_i‖_2.    (19)

Since Lemma 2 guarantees that the support of x̂ is a subset of the support of x*, we can conclude that

‖x* - x̂‖_2 < (3 + √1.5)ε √s / min_i ‖w_i‖_2.    (20)

Under the conditions of Theorem 3, we can use the above conclusion to guarantee that estimating x_L from the noisy input y using Basis-Pursuit must lead to a unique x̂_L such that its support is a subset of that of x_L. Also,

‖x_L - x̂_L‖_2 < ε_L = (3 + √1.5) ε_{L+1} √s_L / min_j ‖w_{L,j}‖_2,    (21)

where w_{L,j} is the jth column of W_L, and ε_{L+1} = ℓε, as φ^{-1}(y) can increase the noise by a factor of ℓ.

Moving on to the estimation of the previous layer, we have that x̂_L^{Ŝ_L} = W_{L-1}^{Ŝ_L} x_{L-1} + e_L, where ‖e_L‖_2 ≤ ε_L. According to the assumptions of Theorem 3, the mutual coherence condition holds, and therefore we get that the support of x̂_{L-1} is a subset of the support of x_{L-1}, that x̂_{L-1} is unique, and that

‖x_{L-1} - x̂_{L-1}‖_2 < ε_{L-1} = (3 + √1.5) ε_L √s_{L-1} / min_j ‖w_{L-1,j}^{Ŝ_L}‖_2.    (22)

Using the same proof technique for all the hidden layers results in

‖x_i - x̂_i‖_2 < ε_i = (3 + √1.5) ε_{i+1} √s_i / min_j ‖w_{i,j}^{Ŝ_{i+1}}‖_2, for all i ∈ {1, . . . , L - 1},    (23)

where w_{i,j}^{Ŝ_{i+1}} is the jth column of W_i^{Ŝ_{i+1}}.

Finally, we have that x̂_1^{Ŝ_1} = W_0^{Ŝ_1} z + e_1, where ‖e_1‖_2 ≤ ε_1. Therefore, if ϕ = λ_min((W_0^{Ŝ_1})^T W_0^{Ŝ_1}) > 0 and

ẑ = argmin_z ½‖x̂_1^{Ŝ_1} - W_0^{Ŝ_1} z‖_2²,    (24)

then

‖ẑ - z‖_2² = e_1^T ((W_0^{Ŝ_1})^T W_0^{Ŝ_1})^{-1} e_1 ≤ ε_1²/ϕ,    (25)

which concludes the guarantees of Theorem 3.

Algorithm 5 Latent Pursuit: Last Layer Inversion
Input: y ∈ ℝ^n, K ∈ ℕ, λ_L ≥ 0, µ ∈ (0, 2/ℓ), φ(·) ℓ-smooth and strictly monotonically increasing.
Initialization: u^(0) ← 0, x_L^(0) ← 0, t^(0) ← 1.
General step: for any k = 0, 1, . . . , K execute the following:
1. g ← W_L^T (φ′(W_L x_L^(k)) ⊙ [φ(W_L x_L^(k)) - y])
2. u^(k+1) ← ReLU(x_L^(k) - µ · (g + λ_L 1))
3. t^(k+1) ← (1 + √(1 + 4 t^(k)²)) / 2
4. x_L^(k+1) ← u^(k+1) + ((t^(k) - 1)/t^(k+1)) (u^(k+1) - u^(k))
Return: x_L^(K)
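For reference, a direct NumPy transcription of Algorithm 5 follows (a sketch under our own naming; φ and its derivative φ′ are supplied by the caller as vectorized functions, and the objective is the last-layer problem stated as Eq. (26) in Appendix D):

```python
import numpy as np

def latent_pursuit_last_layer(W_L, y, phi, dphi, lam, mu, K):
    """FISTA-style solver for min_x 0.5*||y - phi(W_L x)||^2 + lam*1'x, s.t. x >= 0."""
    m = W_L.shape[1]
    u, x, t = np.zeros(m), np.zeros(m), 1.0
    for _ in range(K):
        z = W_L @ x
        g = W_L.T @ (dphi(z) * (phi(z) - y))            # step 1: gradient
        u_next = np.maximum(x - mu * (g + lam), 0.0)    # step 2: projected (ReLU) step
        t_next = (1 + np.sqrt(1 + 4 * t ** 2)) / 2      # step 3: momentum schedule
        x = u_next + ((t - 1) / t_next) * (u_next - u)  # step 4: extrapolation
        u, t = u_next, t_next
    return x
```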
}, { "heading": "D DETAILS ON THE LATENT-PURSUIT ALGORITHM", "text": "Here we provide additional details on the Latent-Pursuit algorithm described in Section 5.\nIn order to estimate the last layer we aim to solve\nxL = argmin x\n1 2 ‖y − φ(WLx)‖22 + λL1 Tx, s. t. x ≥ 0. (26)\nFor this goal we make use of FISTA (Beck & Teboulle, 2009) algorithm as described in Algorithm 5.\nAs describe in Section 5, for estimating the middle layers we aim to solve:\nxi = argmin x\n1\n2 ∥∥∥xŜi+1 −WŜi x∥∥∥2 2 + λi1 Tx, s. t. x ≥ 0, WŜ c i x ≤ 0. (27)\nUsing the auxiliary variable a = WS c i x and the positive semidefinite matrix Q = αI−WSi T WSi + βI− ρiWS c\ni\nT WS c\ni , we get that the Linearized-ADMM aims to solve:\nmin x,a,u\n1\n2 ∥∥xSi+1 −WSi x∥∥22+λi1Tx+ρi2 ∥∥∥a−WSci x + u∥∥∥22+12 ∥∥∥x− x(k)∥∥∥Q , s. t. x ≥ 0, a ≤ 0. (28)\nThis leads to an algorithm that alternates through the following steps:\nx(k+1) ← argmin x\nα\n2 ∥∥∥∥x− (x(k) − 1αWSi T (WSi x(k) − xSi+1) )∥∥∥∥2\n2\n+ λix + (29)\nβ\n2 ∥∥∥∥x− (x(k) − ρiβ WSci T (WSci x(k) − a(k) − u(k)) )∥∥∥∥2\n2\n, s. t. x ≥ 0.\na(k+1) ← argmin a ρi 2 ∥∥∥a−WSci x(k+1) + u(k)∥∥∥2 2 , s. t. a ≤ 0. (30)\nu(k+1) ← u(k) + ( a(k+1) −WS c\ni x (k+1)\n) . (31)\nThus, the Linearized-ADMM algorithm, described in Algorithm 2 is guaranteed to converge to the optimal solution of Equation 7.\nAfter recovering all the hidden layers, we aim to estimate the latent vector z. For this inversion step we adopt a MAP estimator as described in Section 5:\nz = argmin z\n1\n2 ∥∥xS1 −WS0 z∥∥22 + γ2 ‖z‖22 , s. t. WSc0 z ≤ 0, (32) with γ > 0. In fact, this problem can be solved by a similar Linearized-ADMM algorithm described above, expect for the update of x (Equation 29), which becomes:\nz(k+1) ← argmin z\nα\n2 ∥∥∥∥z− (z(k) − 1αWS0 T (WS0 z(k) − xS1 ) )∥∥∥∥2\n2\n+\nβ\n2 ∥∥∥∥z− (z(k) − ρiβ WSc0 T (WSc0 z(k) − a(k) − u(k)) )∥∥∥∥2\n2\n+ γ\n2 ‖z‖22 .\n(33)\nEquivalently, for the latent vector z, the first step of Algorithm 2 is changed to to:\nz(k+1) ← 1 α+ β + γ\n( (α+β)z(k)−WS0 T (WS0 z (k)−xS1 )−ρ1WS c 0 T (WS c 0 z (k)−a(k)−u(k)) ) .\nE INVERSION RESULTS FOR TRAINED NETWORKS\nHere we provide detailed results for the various inversion experiments described in Section 6.2.\nE.1 CLEAN IMAGES\nFigure 4 demonstrates the reconstruction error for all the layers when inverting clean images. In Figures 5 and 6 we demonstrate successful and failure cases of the gradient-descent algorithm and compare them to our approach.\nE.2 RANDOM MASK INPAINTING\nFigures 7-9 demonstrate the performance of our approach compared to gradient descent in terms of SNR and image quality respectively for the randomly-generated mask experiment.\nE.3 NON-RANDOM MASK INPAINTING\nFigures 10-12 demonstrate the performance of our approach compared to gradient descent in terms of SNR and image quality respectively for the non-random mask experiment." } ]
E INVERSION RESULTS FOR TRAINED NETWORKS

Here we provide detailed results for the various inversion experiments described in Section 6.2.

E.1 CLEAN IMAGES

Figure 4 demonstrates the reconstruction error for all the layers when inverting clean images. In Figures 5 and 6 we demonstrate successful and failure cases of the gradient-descent algorithm and compare them to our approach.

E.2 RANDOM MASK INPAINTING

Figures 7-9 demonstrate the performance of our approach compared to gradient descent, in terms of SNR and image quality respectively, for the randomly-generated mask experiment.

E.3 NON-RANDOM MASK INPAINTING

Figures 10-12 demonstrate the performance of our approach compared to gradient descent, in terms of SNR and image quality respectively, for the non-random mask experiment." } ]
2020
null
SP:bba4f71cb381146e980c7cb32dd2510e1bcdb226
[ "The architecture of the tracker is standard siamese. The novelty is at a technical level, modules of the \"cross-guided\" type have been proposed. It does bring an improvement, but not to the state-of-the-art level. There is no significant insight, training, updating novelty or theoretical. Recent short-term trackers output segmentation, the proposed tracker outputs a bounding box." ]
Most traditional Siamese trackers regard the location of the maximum of the response map as the center of the target. However, it is difficult for these traditional methods to calculate the response value accurately in the face of similar objects, deformation, background clutter and other challenges, so obtaining a reliable response map is the key to improving tracking performance. Accordingly, a simple yet effective short-term tracking framework (called SiamCAN), which bridges the information flow between the search branch and the template branch, is proposed in this paper to solve the above problem. Moreover, in order to obtain a more accurate target estimation, an anchor-free mechanism and a specialized training strategy are applied to narrow the gap between the predicted bounding box and the ground truth. The proposed method achieves state-of-the-art performance on four visual tracking benchmarks including UAV123, OTB100, VOT2018 and VOT2019, outperforming the strong baseline, SiamBAN, by 0.327 → 0.331 on VOT2019 and by 0.631 → 0.638 success score and 0.833 → 0.850 precision score on UAV123.
[]
[ { "authors": [ "Mohamed H Abdelpakey", "Mohamed S Shehata", "Mostafa M Mohamed" ], "title": "Denssiam: End-to-end densely-siamese network with self-attention model for object tracking", "venue": "In International Symposium on Visual Computing,", "year": 2018 }, { "authors": [ "Luca Bertinetto", "Jack Valmadre", "Joao F Henriques", "Andrea Vedaldi", "Philip HS Torr" ], "title": "Fullyconvolutional siamese networks for object tracking", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Goutam Bhat", "Martin Danelljan", "Luc Van Gool", "Radu Timofte" ], "title": "Learning discriminative model prediction for tracking", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Zedu Chen", "Bineng Zhong", "Guorong Li", "Shengping Zhang", "Rongrong Ji" ], "title": "Siamese box adaptive network for visual tracking", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Martin Danelljan", "Gustav Hager", "Fahad Shahbaz Khan", "Michael Felsberg" ], "title": "Convolutional features for correlation filter based visual tracking", "venue": "In Proceedings of the IEEE International Conference on Computer Vision Workshops,", "year": 2015 }, { "authors": [ "Martin Danelljan", "Gustav Hager", "Fahad Shahbaz Khan", "Michael Felsberg" ], "title": "Learning spatially regularized correlation filters for visual tracking", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Martin Danelljan", "Goutam Bhat", "Fahad Shahbaz Khan", "Michael Felsberg" ], "title": "Eco: Efficient convolution operators for tracking", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Martin Danelljan", "Goutam Bhat", "Fahad Shahbaz Khan", "Michael Felsberg" ], "title": "Atom: Accurate tracking by overlap maximization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Martin Danelljan", "Luc Van Gool", "Radu Timofte" ], "title": "Probabilistic regression for visual tracking", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Xingping Dong", "Jianbing Shen" ], "title": "Triplet loss in siamese network for object tracking", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Heng Fan", "Haibin Ling" ], "title": "Siamese cascaded region proposal networks for real-time visual tracking", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Peng Gao", "Ruyue Yuan", "Fei Wang", "Liyi Xiao", "Hamido Fujita", "Yan Zhang" ], "title": "Siamese attentional keypoint network for high performance visual tracking", "venue": "Knowledge-based systems,", "year": 2020 }, { "authors": [ "Dongyan Guo", "Jun Wang", "Ying Cui", "Zhenhua Wang", "Shengyong Chen" ], "title": "Siamcar: Siamese fully convolutional classification and regression for visual tracking", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Anfeng He", "Chong Luo", "Xinmei Tian", "Wenjun Zeng" ], "title": "A twofold siamese network for realtime object tracking", "venue": "In Proceedings of the IEEE Conference on Computer 
Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks, 7132–7141", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City,", "year": 2018 }, { "authors": [ "Lianghua Huang", "Xin Zhao", "Kaiqi Huang" ], "title": "Got-10k: A large high-diversity benchmark for generic object tracking in the wild", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2019 }, { "authors": [ "Tao Kong", "Fuchun Sun", "Huaping Liu", "Yuning Jiang", "Jianbo Shi" ], "title": "Foveabox: Beyond anchorbased object detector", "venue": "arXiv preprint arXiv:1904.03797,", "year": 2019 }, { "authors": [ "Matej Kristan", "Ales Leonardis", "Jiri Matas", "Michael Felsberg", "Roman Pflugfelder", "Luka Cehovin Zajc", "Tomas Vojir", "Goutam Bhat", "Alan Lukezic", "Abdelrahman Eldesokey" ], "title": "The sixth visual object tracking vot2018 challenge results", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Matej Kristan", "Jiri Matas", "Ales Leonardis", "Michael Felsberg", "Roman Pflugfelder", "Joni-Kristian Kamarainen", "Luka Cehovin Zajc", "Ondrej Drbohlav", "Alan Lukezic", "Amanda Berg" ], "title": "The seventh visual object tracking vot2019 challenge results", "venue": "In Proceedings of the IEEE International Conference on Computer Vision Workshops,", "year": 2019 }, { "authors": [ "Hei Law", "Jia Deng" ], "title": "Cornernet: Detecting objects as paired keypoints", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Bo Li", "Junjie Yan", "Wei Wu", "Zheng Zhu", "Xiaolin Hu" ], "title": "High performance visual tracking with siamese region proposal network", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Bo Li", "Wei Wu", "Qiang Wang", "Fangyi Zhang", "Junliang Xing", "Junjie Yan" ], "title": "Siamrpn++: Evolution of siamese visual tracking with very deep networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Matthias Mueller", "Neil Smith", "Bernard Ghanem" ], "title": "A benchmark and simulator for uav tracking", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Esteban Real", "Jonathon Shlens", "Stefano Mazzocchi", "Xin Pan", "Vincent Vanhoucke" ], "title": "Youtubeboundingboxes: A large high-precision human-annotated data set for object detection in video", "venue": "In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Joseph Redmon", "Santosh Divvala", "Ross Girshick", "Ali Farhadi" ], "title": "You only look once: Unified, real-time object detection", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { 
"authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing", "venue": null, "year": 2015 }, { "authors": [ "Hamid Rezatofighi", "Nathan Tsoi", "JunYoung Gwak", "Amir Sadeghian", "Ian Reid", "Silvio Savarese" ], "title": "Generalized intersection over union: A metric and a loss for bounding box regression", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Ran Tao", "Efstratios Gavves", "Arnold WM Smeulders" ], "title": "Siamese instance search for tracking", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Zhi Tian", "Chunhua Shen", "Hao Chen", "Tong He" ], "title": "Fcos: Fully convolutional one-stage object detection", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2019 }, { "authors": [ "Ardhendu Shekhar Tripathi", "Martin Danelljan", "Luc Van Gool", "Radu Timofte" ], "title": "Tracking the known and the unknown by leveraging semantic information", "venue": "In BMVC,", "year": 2019 }, { "authors": [ "Jack Valmadre", "Luca Bertinetto", "Joao Henriques", "Andrea Vedaldi", "Philip HS Torr" ], "title": "End-toend representation learning for correlation filter based tracking", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Guangting Wang", "Chong Luo", "Zhiwei Xiong", "Wenjun Zeng" ], "title": "Spm-tracker: Series-parallel matching for real-time visual object tracking", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Guangting Wang", "Chong Luo", "Xiaoyan Sun", "Zhiwei Xiong", "Wenjun Zeng" ], "title": "Tracking by instance detection: A meta-learning approach", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Qiang Wang", "Zhu Teng", "Junliang Xing", "Jin Gao", "Weiming Hu", "Stephen Maybank" ], "title": "Learning attentions: residual attentional siamese network for high performance online visual tracking", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Qilong Wang", "Banggu Wu", "Pengfei Zhu", "Peihua Li", "Wangmeng Zuo", "Qinghua Hu" ], "title": "Eca-net: Efficient channel attention for deep convolutional neural networks", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Sanghyun Woo", "Jongchan Park", "Joon-Young Lee", "In So Kweon" ], "title": "Cbam: Convolutional block attention module", "venue": "In Proceedings of the European conference on computer vision (ECCV),", "year": 2018 }, { "authors": [ "Yi Wu", "Jongwoo Lim", "Ming-Hsuan Yang" ], "title": "Online object tracking: A benchmark", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2013 }, { "authors": [ "Yinda Xu", "Zeyu Wang", "Zuoxin 
Li", "Ye Yuan", "Gang Yu" ], "title": "Siamfc++: Towards robust and accurate visual tracking with target estimation guidelines", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Tianyu Yang", "Pengfei Xu", "Runbo Hu", "Hua Chai", "Antoni B Chan" ], "title": "Roam: Recurrently optimizing tracking model", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Kaihua Zhang", "Lei Zhang", "Ming-Hsuan Yang" ], "title": "Fast compressive tracking", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2014 }, { "authors": [ "Zhipeng Zhang", "Houwen Peng" ], "title": "Deeper and wider siamese networks for real-time visual tracking", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Zhaohui Zheng", "Ping Wang", "Wei Liu", "Jinze Li", "Rongguang Ye", "Dongwei Ren" ], "title": "Distance-iou loss: Faster and better learning for bounding box regression", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Zheng Zhu", "Qiang Wang", "Bo Li", "Wei Wu", "Junjie Yan", "Weiming Hu" ], "title": "Distractor-aware siamese networks for visual object tracking", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Zheng Zhu", "Wei Wu", "Wei Zou", "Junjie Yan" ], "title": "End-to-end flow correlation tracking with spatialtemporal attention", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Visual object tracking is the fundamental task of computer vision, aiming at tracking unknown object of which the information is given by the first frame. Although great progress has been achieved in recent years, a robust tracker is still in desperate demand due to tricky challenge such as scale variation, appearance deformation and similar object with complex background which can deteriorate tracking performance (Wu et al. (2013); Zhang et al. (2014)).\nRecently, Siamese Network based trackers have taken a vital place in SOT field due to its accuracy and speed. Since (Tao et al. (2016)) and (Bertinetto et al. (2016)) introduced Siamese networks in visual tracking, Siamese structure has been adopted as baseline for researchers to design efficient trackers (Li et al. (2018); Zhu et al. (2018a); Zhang & Peng (2019); Li et al. (2019); Xu et al. (2020); Chen et al. (2020)). After siamRPN (Li et al. (2018)) being proposed to gain more accurate anchor boxes, region proposal network has become an essential part of tracker. However, the anchor scales are manual-set which go against the fact that the tracking target is unknown. Besides, the performance of the Siamese based trackers depends greatly on offline training by using massive frame pairs. Therefore, it highly increases the risk of tracking drift when facing significant deformation, similar object distractors, or complex background, due to the undiscriminating feature learned from the target when the category of the target is excluded from the training dataset.\nIn these years, the attention mechanism has become the spotlight in computer vision which inspires the relative works not only in detection task but also in visual tracking (He et al. (2018); Abdelpakey et al. (2018); Wang et al. (2018); Zhu et al. (2018b)). The attention mechanism includes channel attention and spatial attention, the former tends to generate a set of channel-weights for modeling interdependencies between channels while the latter focuses on finding the informative part by utilizing the inter-spatial relationship of features. Considering these benefits, Siamese based trackers try to introduce attention module to distinguish target from complex background. Nevertheless, the performance of these trackers is not satisfactory for exploiting the expressive power of the attention mechanism inappropriately.\nBased on the limitations discussed above, we design a simple Cross-attention Guided Siamese network (SiamCAN) based tracker with anchor-free strategy which performs better than the state-ofthe-art trackers when facing the similar object challenge. SiamCAN takes template channel attention\nto guide the feature extraction of search image by which can strengthen the ability of tracker to overcome distractors and complex backgrounds, performing better than most of Siamese-based trackers, as shown in Figure 1. The main contributions of this work are:\n• We formulate a cross-attention guided Siamese framework (SiamCAN) including crosschannel attention and self-spatial attention. The cross-channel attention builds an interactive bridge between the target template and search frame to share the identical channel weights. The self-spatial attention focuses on the discriminative part of the correlated feature map, which is complementary to the cross-channel attention.\n• The proposed tracker is adaptive box regression, without numerous hyper-parameters setting. 
In order to obtain more accurate bounding boxes, we adopt a proper strategy to utilize the merits of the anchor-free design at the training stage.

• SiamCAN achieves state-of-the-art results on four large tracking benchmarks, including OTB100 (Wu et al. (2013)), UAV123 (Mueller et al. (2016)), VOT2018 (Kristan et al. (2018)) and VOT2019 (Kristan et al. (2019)). The tracker also runs at 35 FPS." }, { "heading": "2 RELATED WORK", "text": "In this section, we briefly review recent Siamese based trackers, anchor-free approaches and the attention mechanism in both the tracking and detection fields." }, { "heading": "2.1 SIAMESE NETWORK BASED TRACKER", "text": "The pioneering works SINT (Tao et al. (2016)) and SiamFC (Bertinetto et al. (2016)) first introduced the Siamese network to the tracking field. Due to its fast speed and light structure, the Siamese network has drawn great attention from the visual tracking community. SiamFC uses a Siamese network to learn the features of both the target template and the search frame, and compares their similarity to find the most confident candidates. Although it tracks fast, it cannot properly handle the scale variation problem by applying several scales of feature map. Inspired by Faster R-CNN (Ren et al. (2015)) from object detection, SiamRPN (Li et al. (2018)) draws on the region proposal network to obtain bounding boxes with more varied scale ratios. Since then, the RPN module has become an essential part of trackers (Zhu et al. (2018a); Zhang & Peng (2019); Li et al. (2019); Dong & Shen (2018); Fan & Ling (2019)). However, the complexity of anchor design makes the performance of these trackers depend greatly on the effect of anchor training." }, { "heading": "2.2 ANCHOR-FREE APPROACHES", "text": "In recent times, anchor-free approaches have developed fast. Anchor-free work can be divided into two categories: the first (Kong et al. (2019); Law & Deng (2018)) aims to estimate the keypoints of objects, while the other (Redmon et al. (2016); Tian et al. (2019)) tends to predict the bounding box for each pixel, which avoids presetting the scale ratios of anchors. Not only is the anchor-free approach popular in the detection field, it is also suitable for target estimation in the tracking field due to its high efficiency. SiamFC++ takes its example from FCOS (Tian et al. (2019)) to design the regression subnetwork and adds a centerness branch to eliminate low-quality samples. SiamCAR (Guo et al. (2020)) additionally changes the basic network structure, merging multi-layer features before correlation. Different from SiamCAR, SiamBAN (Chen et al. (2020)) puts emphasis on the label assignment, which improves tracking performance. Our method differs from the above trackers in its details (Section 4.3)." }, { "heading": "2.3 ATTENTION MECHANISM", "text": "The attention mechanism has been a focus of the detection field on account of its powerful ability to enhance deep CNNs. SE-Net (Hu et al. (2018)) first put forward the mechanism of generating channel weights to direct the learning of channel attention. After that, CBAM (Woo et al. (2018)) utilized both max-pooling and average-pooling to generate merged attention, including channel and spatial attention. Recently, ECA-Net (Wang et al. (2020b)) found that avoiding dimensionality reduction is of great importance for channel attention learning, and proposed a cross-channel interaction strategy which performs better than SE-Net. In the tracking field, recent trackers have begun to equip themselves with the attention mechanism to obtain better performance.
SA Siam (He et al. (2018)) simply combines SE-Net and SiamFC to obtain both discriminative and general features, which boosts tracking performance. RASNet (Wang et al. (2018)) designs residual attention, general attention and channel attention to learn the target feature better. SATIN (Gao et al. (2020)) uses an hourglass network as the backbone and designs a cross-attention module for the exemplar branch that combines channel and spatial attention from shallow and deep layers. However, these trackers only compute attention alongside each branch and neglect the information flow between the branches; as a result, the ability of the attention mechanism cannot be fully utilized.

3 OUR APPROACH

As shown in Figure 2, the proposed framework mainly consists of three components: the feature extraction Siamese network, the cross-attention module, and the anchor-free bounding box regression subnetwork with a foreground classification subnetwork." }, { "heading": "3.1 FEATURE EXTRACTED SIAMESE NETWORK", "text": "Like most Siamese based trackers, we adopt a fully convolutional network without padding, which guarantees accurate location calculation. The feature extraction network is composed of two parts, the template branch and the search branch, which share the same backbone parameters; in this way, the CNN can learn suitable features for the two branches to compute similarity in the subsequent operations. The template branch intends to encode the exemplar feature in the first frame, while the other branch aims to encode the candidate features, which may involve the target, in the follow-up frames. Let the input of the template branch be I_t and the subsequent frames of the search branch be I_s. We feed I_t and I_s into the backbone and obtain the features φ_l(I_t) and φ_l(I_s) from the different l-th backbone layers. Next, the given features are sent to the corresponding branch after a convolution with a neck layer, which reduces the feature channel size to 256, yielding the template feature ψ_t(I_t) and the search feature ψ_s(I_s). At last, we crop the central 7×7 patch from the template feature." }, { "heading": "3.2 CROSS-ATTENTION NETWORK", "text": "The attention mechanism is designed to force CNNs to focus on the parts of great importance, i.e., channel information and spatial information. Channel attention is designed to explore the interdependencies between channels, while spatial attention tends to make CNNs pay more attention to the most critical areas of the feature. Different from (He et al. (2018); Wang et al. (2018)), the channel attention here is used between the two branches rather than being applied as self-attention. Moreover, SATIN (Gao et al. (2020)) designs a module also called cross-attention, but there 'cross' means the combination of different layers, which is different from our method. In this paper, the target branch feature ψ_t(I_t) is sent to global average pooling to get the aggregated feature Y_t, i.e.,

Y_t = (1/WH) Σ_{i=0}^{W,H} ψ_t(I_t).    (1)

Given the aggregated feature, the channel weight is obtained by performing a 1D convolution of size k, i.e.,

V_i = σ(Σ_{j=1}^{k} ω^j y_i^j), y_i^j ∈ Ω_i^k,    (2)

where σ is a Sigmoid function, ω indicates the parameters of the 1D convolution and Ω_i^k indicates the set of k adjacent channels of y_i. To let the search branch learn the information from the target template, we multiply the channel weights with the search feature, i.e.,

ψ̃_s(I_s) = ψ_s(I_s) ∗ V.    (3)
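A minimal PyTorch sketch of this cross-channel attention follows (our own illustration rather than the authors' code; the module name and the kernel size k = 3 are our assumptions):

```python
import torch
import torch.nn as nn

class CrossChannelAttention(nn.Module):
    """Sketch of Eqs. (1)-(3): channel weights computed from the template
    feature modulate the search feature."""
    def __init__(self, k: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, f_template, f_search):
        # f_template, f_search: (B, C, H, W)
        y = f_template.mean(dim=(2, 3))                 # Eq. (1): global average pooling
        v = torch.sigmoid(self.conv(y.unsqueeze(1)))    # Eq. (2): 1D conv across channels
        return f_search * v.transpose(1, 2).unsqueeze(-1)  # Eq. (3): reweight search feature
```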
" }, { "heading": "3.3 CLASSIFICATION AND ANCHOR-FREE REGRESSION SUBNETWORK", "text": "As shown in Figure 2, the correlation feature map is calculated by the depth-wise correlation operation between ψ̃_s(I_s) and ψ_t(I_t), i.e.,

F^{cls}_{w×h×c} = ψ̃_s(I_s) ⋆ ψ_t(I_t),    (4)
F^{reg}_{w×h×c} = ψ̃_s(I_s) ⋆ ψ_t(I_t),    (5)

where ⋆ denotes the depth-wise convolution operation. Then, we apply self-spatial attention to the feature map in order to focus on the discriminative part automatically, i.e.,

F̃^{cls}_{w×h×c} = σ(f([AvgP(F^{cls}_{w×h×c}); MaxP(F^{cls}_{w×h×c})])),    (6)
F̃^{reg}_{w×h×c} = σ(f([AvgP(F^{reg}_{w×h×c}); MaxP(F^{reg}_{w×h×c})])).    (7)

After that, we use two convolution layers with kernel size 1×1 to reduce the number of channels from 256 to 2 and 4, respectively, for each branch, and concatenate the feature maps from the different layers of the backbone with the trainable weights α, i.e.,

P^{cls}_{w×h×2} = Σ_{l=1}^{N} α_l ∗ F̃^{cls}_{l:w×h×2},    (8)
P^{reg}_{w×h×4} = Σ_{l=1}^{N} α_l ∗ F̃^{reg}_{l:w×h×4},    (9)

where N denotes the total number of backbone layers we use. The classification feature map has two channels: one represents the foreground, and the points (i, j) on P^{cls}_{w×h×2}(0, i, j) give the probability scores of the target; the other represents the background, and the points (i, j) on P^{cls}_{w×h×2}(1, i, j) give the probability scores of the background. The regression feature map has four channels, each representing one of the four directional distances from the point's location in the search branch input to the four sides of the bounding box; that is, each point (i, j) in P^{reg}_{w×h×4}(:, i, j) is a vector, which can be denoted as (l, r, t, b).

Classification label and regression label. For anchor based methods, positive and negative samples are classified by the value of the Intersection over Union between the anchor and the groundtruth. In this paper, we use an ellipse and a circle figure region to design the labels for the points (i, j) in the feature map, inspired by (Chen et al. (2020)). The center and axis lengths of the ellipse E1 are set by the groundtruth center (g_xc, g_yc) and the groundtruth size (g_w/2, g_h/2). We also get the circle C2 with radius r = (0.5g_w · 0.5g_h) / ((g_w/2)² + (g_h/2)²)^{1/2}, i.e.,

(B(p_i) - g_xc)² / (g_w/2)² + (B(p_j) - g_yc)² / (g_h/2)² = 1,    (10)
B(p_i)² + B(p_j)² = r²,    (11)

where B denotes the mapping of the location of the point (i, j) in the feature map P^{cls}_{w×h×2} back to the search frame. If the point B(p_i, p_j) falls within the C2 region, it is given a positive label, and if it falls outside the E1 area, it is given a negative label, i.e.,

label = 1, if C2(p_{(i,j)}) < r²; -1, if E1(p_{(i,j)}) > 1; 0, otherwise.    (12)

For the regression branch, the regression targets can be defined by:

d^l_{(i,j)} = p_i - g_{x0}, d^t_{(i,j)} = p_j - g_{y0},    (13)
d^r_{(i,j)} = g_{x1} - p_i, d^b_{(i,j)} = g_{y1} - p_j,    (14)

where (g_{x0}, g_{y0}) and (g_{x1}, g_{y1}) denote the left-top and right-bottom coordinates of the groundtruth.

Loss function. We employ the cross entropy loss to train the classification network. To predict more accurate bounding boxes, we adopt the DIoU loss (Zheng et al. (2020)) to train the regression network, i.e.,

L_reg = 1 - IoU + ρ²(p, p_gt)/c²,    (15)

where ρ(·) is the Euclidean distance, p and p_gt denote the central points of the predicted box and the groundtruth, and c is the diagonal length of the smallest enclosing box covering the two boxes.
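Below is a minimal PyTorch sketch of this DIoU loss for axis-aligned boxes (our own code; the (x1, y1, x2, y2) box format and the function name are our assumptions):

```python
import torch

def diou_loss(pred, target, eps=1e-7):
    """DIoU loss of Eq. (15) for boxes of shape (N, 4) in (x1, y1, x2, y2) format."""
    # intersection area
    x1 = torch.max(pred[:, 0], target[:, 0]); y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2]); y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    # squared center distance rho^2(p, p_gt)
    cp = (pred[:, :2] + pred[:, 2:]) / 2
    ct = (target[:, :2] + target[:, 2:]) / 2
    rho2 = ((cp - ct) ** 2).sum(dim=1)
    # squared diagonal c^2 of the smallest enclosing box
    ex1 = torch.min(pred[:, 0], target[:, 0]); ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2]); ey2 = torch.max(pred[:, 3], target[:, 3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps
    return (1 - iou + rho2 / c2).mean()
```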
For regression branch training, the DIoU loss can optimize the bounding box faster than the GIoU loss (Rezatofighi et al. (2019)). The overall loss function is:

L = λ₁L_cls + λ₂L_reg,    (16)

where the constants λ₁ and λ₂ weight the classification loss and the regression loss. During model training, we simply set λ₁ = 1, λ₂ = 1 without hyper-parameter searching." }, { "heading": "3.4 TRAINING AND INFERENCE", "text": "Training. We train our model using image pairs: a 127×127-pixel template patch and a 255×255-pixel search patch. The training datasets include ImageNet VID (Russakovsky et al. (2015)), COCO (Lin et al. (2014)), YouTube-BoundingBoxes (Real et al. (2017)), ImageNet Det (Real et al. (2017)) and GOT10k (Huang et al. (2019)). Since the number of negative samples is larger than the number of positive samples, we take at most 16 positive samples and 48 negative samples from each search image.

Besides, in order to obtain more accurate regression information, we adopt the DIoU loss to optimize the regression branch.

Inference. We feed the cropped first frame and the subsequent frames to the feature extraction network as the template image and search images. Next, the features are sent to the cross-attention module and passed through the classification branch and the regression branch. After that, we get the classification map. The location of the highest score represents the most probable center of the tracking target. Then, we use the scale change penalty and the cosine window introduced in (Li et al. (2018)) to guarantee smooth movement of the target. According to the location p of the final score, we can get the predicted box B, i.e.,

b_{x1} = p_i - d^{reg}_l, b_{y1} = p_j - d^{reg}_t,    (17)
b_{x2} = p_i + d^{reg}_r, b_{y2} = p_j + d^{reg}_b,    (18)

where d^{reg}_{l,r,t,b} denote the predicted values of the regression targets on the regression map, and (b_{x1}, b_{y1}) and (b_{x2}, b_{y2}) are the top-left and bottom-right corners of the predicted box.
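As an illustration of Eqs. (17)-(18), here is a minimal NumPy sketch of the decoding step (our own code; the stride/offset mapping from feature-map locations to search-frame pixels is an assumption, and the scale penalty and cosine window are omitted):

```python
import numpy as np

def decode_prediction(cls_map, reg_map, stride, offset=0.0):
    """Pick the highest-scoring location and decode its box, per Eqs. (17)-(18).

    cls_map: (2, h, w) scores, channel 0 = foreground;
    reg_map: (4, h, w) with channels ordered (l, r, t, b) as in Section 3.3.
    """
    fg = cls_map[0]                                   # foreground scores
    j, i = np.unravel_index(np.argmax(fg), fg.shape)  # row j, column i of the peak
    p_i, p_j = i * stride + offset, j * stride + offset
    d_l, d_r, d_t, d_b = reg_map[:, j, i]
    return p_i - d_l, p_j - d_t, p_i + d_r, p_j + d_b  # (bx1, by1, bx2, by2)
```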
" }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 IMPLEMENTATION DETAILS", "text": "Our approach is implemented in Python with PyTorch on one RTX 2080Ti. The backbone is a modified ResNet-50 as in (He et al. (2016)), and its weights are pre-trained on ImageNet (Russakovsky et al. (2015)). During the training phase, the model is optimized by stochastic gradient descent (SGD); the total number of epochs is 20 and the batch size is set to 28. For the first 10 epochs, we freeze the parameters of the backbone and only train the head structures; for the last 10 epochs, we unfreeze the last 3 blocks of the backbone to be trained together. Besides, we warm up the training during the first 5 epochs with a learning rate increasing from 0.001 to 0.005, and in the last 15 epochs the learning rate exponentially decays from 0.005 to 0.00005." }, { "heading": "4.2 RESULTS ON THREE BENCHMARKS", "text": "To affirm the effect of our method, we evaluate our tracker's performance against the recent trackers MAML (Wang et al. (2020a)), PriDiMP (Danelljan et al. (2020)), SiamBAN (Chen et al. (2020)), SiamCAR (Guo et al. (2020)), ROAM (Yang et al. (2020)), SiamFC++ (Xu et al. (2020)), STN (Tripathi et al. (2019)), ARTCS (Kristan et al. (2019)), SiamRPN++ (Li et al. (2019)), SATIN (Gao et al. (2020)), ATOM (Danelljan et al. (2019)), DIMP-18 (Bhat et al. (2019)), DaSiamRPN (Zhu et al. (2018a)), SPM (Wang et al. (2019)), ECO (Danelljan et al. (2017)), CFNet (Valmadre et al. (2017)), SiamRPN (Li et al. (2018)), DeepSRDCF (Danelljan et al. (2015a)) and SRDCF (Danelljan et al. (2015b)) on four benchmarks: UAV123 (Mueller et al. (2016)), VOT2018 (Kristan et al. (2018)), VOT2019 (Kristan et al. (2019)) and OTB100 (Wu et al. (2013)) (details in Appendix A.1)." }, { "heading": "4.2.1 RESULTS ON UAV123", "text": "UAV123 contains 123 challenging video sequences, which can be divided into 11 categories according to their attributes. The performance of the tracker is evaluated by two metrics: the precision score and the AUC score. The AUC score reflects the overlap between the predicted bounding box and the ground-truth box, while the precision score relates to the distance between the centers of the predicted bounding box and the ground-truth box. In Figure 3, our method achieves the best performance in precision score, i.e., 0.850, and the second-best AUC score, 0.678. As for the 11 categories of challenges, SiamCAN ranks 1st or 2nd in Scale Variation, Similar Object, Fast Motion and Low Resolution (see Appendix A.2). These results demonstrate that our tracker can handle the similar object and scale change challenges, owing to the learning of the cross-attention subnetwork and the anchor-free mechanism." }, { "heading": "4.2.2 RESULTS ON VOT2018", "text": "VOT2018 consists of 60 challenging videos. The evaluation metric used to rank the performance of trackers is EAO (Expected Average Overlap), which depends on the accuracy and the robustness. As shown in Table 1, our tracker outperforms SiamRPN++ by 2.8 points by introducing the anchor-free regression network. Compared with SiamFC++, we achieve an EAO improvement of 4.5 points. Although our method does not achieve the best EAO, the robustness scores of the listed Siamese-based trackers are higher (worse) than that of our method; in other words, SiamCAN is a robust Siamese-based tracker." }, { "heading": "4.2.3 RESULTS ON VOT2019", "text": "The VOT2019 video sequences are 20% different from VOT2018, adding more fast-motion and similar-object videos. Table 2 reports the evaluation results on VOT2019 compared with recent trackers. We can see that the recently proposed MAML obtains the highest accuracy score, while our SiamCAN surpasses MAML by 2.5 points in terms of EAO. Besides, our robustness score also ranks 2nd." }, { "heading": "4.3 ANALYSIS OF THE PROPOSED METHOD", "text": "Discussion on effective sample selection. The anchor-free method has the weakness that the network may produce low-quality predicted bounding boxes far away from the center of the target, even though the predicted box is accurate. To address this issue, SiamFC++ and SiamCAR introduce a centerness branch to select high-quality samples, forcing the network to focus on the target center, while SiamBAN uses an ellipse figure region to design the labels, which has the same effect. Accordingly, we perform several experiments to find which method performs better. As shown in Table 3, the baseline tracker consists of the cross-attention module and the anchor-free network. The ellipse label does better than the circle label (② vs ①), while the centerness branch with the ellipse label has an even worse effect (② vs ③). Based on the performance of ②, we visualize the tracking of ② and compare it with ④ (details in Figure 4). At the training stage, ② gives positive labels to the points falling within the E2 region, while ④ gives positive labels to the points falling within the C2 region, the more central position. In this respect, the comparison in Table 3 (② vs ④) can be explained.
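A minimal NumPy sketch of the two label-assignment variants compared in Table 3 is given below (our own illustration; the inner ellipse E2 with half-axes (g_w/4, g_h/4) is our assumption in the spirit of SiamBAN, the circle radius follows Eq. (11), and we measure the circle relative to the ground-truth center, which Eq. (11) leaves implicit):

```python
import numpy as np

def assign_labels(points, gt, mode="ellipse"):
    """points: (N, 2) locations B(p_i, p_j) in the search frame; gt = (xc, yc, w, h).
    Returns +1 (positive), -1 (negative, outside E1) or 0 (ignored), per Eq. (12)."""
    xc, yc, w, h = gt
    dx, dy = points[:, 0] - xc, points[:, 1] - yc
    labels = np.zeros(len(points), dtype=np.int64)
    e1 = (dx / (w / 2)) ** 2 + (dy / (h / 2)) ** 2        # Eq. (10): outer ellipse E1
    labels[e1 > 1] = -1                                   # outside E1: negative
    if mode == "circle":
        r = (0.5 * w * 0.5 * h) / np.hypot(w / 2, h / 2)  # radius of C2, Eq. (11)
        labels[dx ** 2 + dy ** 2 < r ** 2] = 1
    else:  # ellipse label: inner ellipse E2 (assumed half-axes w/4, h/4)
        e2 = (dx / (w / 4)) ** 2 + (dy / (h / 4)) ** 2
        labels[e2 < 1] = 1
    return labels
```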
Discussion on components of the tracker. To verify the role of each component of our tracker, we use the component-wise results on VOT2018 to analyze the details. As shown in Table 4, the baseline model obtains an EAO of 0.351; it consists of a regular backbone, i.e., ResNet-50, a classification network and an anchor-free regression network. Exchanging the GIoU loss for the DIoU loss during training yields higher scores, owing to the more accurate predicted bounding boxes (② vs ①). Adding the cross-attention module obtains a large improvement, i.e., 4.3 points on EAO (④ vs ②). This demonstrates that the information interaction between the template branch and the search branch is of great significance. Finally, utilizing self-spatial attention gains the tracker another improvement of 2.4 points on EAO (⑤ vs ④).
Feature visualization. We visualize the features extracted by tracker ①, tracker ③ and tracker ⑤ in Figure 5. On the left side of Figure 5, tracker ① vs tracker ③ shows that the tracker not equipped with the cross-attention module more easily loses the target and focuses on the wrong object when similar objects appear, and the visualized features demonstrate that the cross-attention module enables the tracker to tell the target from similar distractors. On the right side of Figure 5, tracker ⑤ shows the power of the cross-attention module combined with the proper training strategy." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose a simple Siamese-based tracker called SiamCAN, which combines cross-attention and an anchor-free mechanism. The cross-attention module utilizes the target template's channel attention to guide the feature learning of the search frame, bridging the information flow between the branches. The anchor-free regression discards the fussy design of anchors and adjusts the scale ratio of the bounding box automatically. To use them to their fullest potential, we choose an appropriate label assignment strategy and a suitable loss function to boost tracking performance with limited laboratory equipment. Extensive experiments are conducted on four benchmarks, which demonstrate that our tracker, despite its light structure, achieves state-of-the-art performance, especially under the scale variation, background clutter, deformation and similar distractor challenges." }, { "heading": "A APPENDIX", "text": "A.1 EXPERIMENT RESULTS ON OTB100
OTB100 contains 100 challenging video sequences, and the evaluation metrics are the same as for UAV123. In Figure 5, our method achieves the best performance in precision score, i.e., 0.913, and the second-best success score, 0.684. As for the 9 categories of challenges, SiamCAN ranks 1st or 2nd in Deformation, Background Clutters, Scale Variation and Out-of-Plane Rotation. These results demonstrate that our tracker can handle deformation, background clutter, scale variation and out-of-plane rotation (Figure 6).
A.2 MORE EXPERIMENT RESULTS ON UAV123" } ]
2020
null
SP:2385685fee86534706f021a67f2393812f063415
[ "This paper designs a new loss, called SuNCTt, to speed up the convergence of semi-supervised training. Specifically, the loss involves the computation of similarity between anchor and other images with the same class, and the similarity between anchor and other labeled images. It is claimed to be considered as the form of neighborhood component analysis. Together with the standard contrastive learning loss, it only uses less than half the amount of pre-training and computes to match the accuracy of the previous approaches." ]
We investigate a strategy for improving the efficiency of contrastive learning of visual representations by leveraging a small amount of supervised information during pre-training. We propose a semi-supervised loss, SuNCEt, based on noise-contrastive estimation and neighbourhood component analysis, that aims to distinguish examples of different classes in addition to the self-supervised instance-wise pretext tasks. On ImageNet, we find that SuNCEt can be used to match the semi-supervised learning accuracy of previous contrastive approaches while using less than half the amount of pre-training and compute. Our main insight is that leveraging even a small amount of labeled data during pre-training, and not only during fine-tuning, provides an important signal that can significantly accelerate contrastive learning of visual representations.
[]
[ { "authors": [ "Sanjeev Arora", "Hrishikesh Khandeparkar", "Mikhail Khodak", "Orestis Plevrakis", "Nikunj Saunshi" ], "title": "A theoretical analysis of contrastive unsupervised representation learning", "venue": null, "year": 1902 }, { "authors": [ "Philip Bachman", "R Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ekin D Cubuk", "Alex Kurakin", "Kihyuk Sohn", "Han Zhang", "Colin Raffel" ], "title": "Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring", "venue": "arXiv preprint arXiv:1911.09785,", "year": 2019 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin A Raffel" ], "title": "Mixmatch: A holistic approach to semi-supervised learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mathilde Caron", "Ishan Misra", "Julien Mairal", "Priya Goyal", "Piotr Bojanowski", "Armand Joulin" ], "title": "Unsupervised learning of visual features by contrasting cluster assignments", "venue": "arXiv preprint arXiv:2006.09882,", "year": 2020 }, { "authors": [ "Gal Chechik", "Varun Sharma", "Uri Shalit", "Samy Bengio" ], "title": "Large scale online learning of image similarity through ranking", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": null, "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Kevin Swersky", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "Big self-supervised models are strong semi-supervised learners", "venue": "arXiv preprint arXiv:2006.10029,", "year": 2020 }, { "authors": [ "Sumit Chopra", "Raia Hadsell", "Yann LeCun" ], "title": "Learning a similarity metric discriminatively, with application to face verification", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05),", "year": 2005 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Jonathon Shlens", "Quoc V Le" ], "title": "Randaugment: Practical data augmentation with no separate search", "venue": null, "year": 1909 }, { "authors": [ "Emily Denton", "Sam Gross", "Rob Fergus" ], "title": "Semi-supervised learning with context-conditional generative adversarial networks", "venue": "arXiv preprint arXiv:1611.06430,", "year": 2016 }, { "authors": [ "Carl Doersch", "Andrew Zisserman" ], "title": "Multi-task self-supervised visual learning", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Carl Doersch", "Abhinav Gupta", "Alexei A Efros" ], "title": "Unsupervised visual representation learning by context prediction", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Alexey Dosovitskiy", "Jost Tobias Springenberg", "Martin Riedmiller", "Thomas Brox" ], "title": "Discriminative unsupervised feature learning with convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "Yueqi Duan", "Wenzhao Zheng", "Xudong Lin", "Jiwen Lu", "Jie Zhou" ], "title": "Deep adversarial metric learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Mark Everingham", "Luc Van Gool", "Christopher KI Williams", "John Winn", "Andrew Zisserman" ], "title": "The pascal visual object classes (voc) challenge", "venue": "International journal of computer vision,", "year": 2010 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "arXiv preprint arXiv:1803.07728,", "year": 2018 }, { "authors": [ "Jacob Goldberger", "Geoffrey E Hinton", "Sam T Roweis", "Russ R Salakhutdinov" ], "title": "Neighbourhood components analysis", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre H Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap your own latent: A new approach to self-supervised learning", "venue": "arXiv preprint arXiv:2006.07733,", "year": 2020 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Raia Hadsell", "Sumit Chopra", "Yann LeCun" ], "title": "Dimensionality reduction by learning an invariant mapping", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06),", "year": 2006 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": null, "year": 1911 }, { "authors": [ "Olivier J Hénaff", "Aravind Srinivas", "Jeffrey De Fauw", "Ali Razavi", "Carl Doersch", "SM Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": null, "year": 1905 }, { "authors": [ "Peter Henderson", "Jieru Hu", "Joshua Romoff", "Emma Brunskill", "Dan Jurafsky", "Joelle Pineau" ], "title": "Towards the systematic reporting of the energy and carbon footprints of machine learning", "venue": null, "year": 2002 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Saurav Kadavath", "Dawn Song" ], "title": "Using self-supervised learning can improve model robustness and uncertainty", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Elad Hoffer", "Nir Ailon" ], "title": "Deep metric learning using triplet network", "venue": "In International Workshop on Similarity-Based Pattern Recognition,", "year": 2015 }, { "authors": [ "Prannay Khosla", "Piotr Teterwak", "Chen Wang", "Aaron Sarna", "Yonglong Tian", "Phillip Isola", "Aaron Maschinot", "Ce Liu", "Dilip Krishnan" ], "title": "Supervised contrastive learning", "venue": "arXiv preprint arXiv:2004.11362,", "year": 2020 }, { "authors": [ "Alexander Kolesnikov", "Xiaohua Zhai", "Lucas Beyer" ], "title": "Revisiting self-supervised visual representation learning", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", 
"year": 2019 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, University of Toronto,", "year": 2009 }, { "authors": [ "Dong-Hyun Lee" ], "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "venue": "In Workshop on challenges in representation learning, ICML,", "year": 2013 }, { "authors": [ "Junnan Li", "Pan Zhou", "Caiming Xiong", "Richard Socher", "Steven CH Hoi" ], "title": "Prototypical contrastive learning of unsupervised representations", "venue": "arXiv preprint arXiv:2005.04966,", "year": 2020 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "arXiv preprint arXiv:1608.03983,", "year": 2016 }, { "authors": [ "Geoffrey J McLachlan" ], "title": "Discriminant analysis and statistical pattern recognition, volume 544", "venue": null, "year": 2004 }, { "authors": [ "Ishan Misra", "Laurens van der Maaten" ], "title": "Self-supervised learning of pretext-invariant representations", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Masanori Koyama", "Shin Ishii" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander C. 
Berg", "Li Fei-Fei" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Abhinav Shrivastava", "Abhinav Gupta", "Ross Girshick" ], "title": "Training region-based object detectors with online hard example mining", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Kihyuk Sohn" ], "title": "Improved deep metric learning with multi-class n-pair loss objective", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Kihyuk Sohn", "David Berthelot", "Chun-Liang Li", "Zizhao Zhang", "Nicholas Carlini", "Ekin D Cubuk", "Alex Kurakin", "Han Zhang", "Colin Raffel" ], "title": "Fixmatch: Simplifying semi-supervised learning with consistency and confidence", "venue": null, "year": 2001 }, { "authors": [ "Ilya Sutskever", "James Martens", "George Dahl", "Geoffrey Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Yaniv Taigman", "Ming Yang", "Marc’Aurelio Ranzato", "Lior Wolf" ], "title": "Deepface: Closing the gap to human-level performance in face verification", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2014 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "arXiv preprint arXiv:1906.05849,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Chen Sun", "Ben Poole", "Dilip Krishnan", "Cordelia Schmid", "Phillip Isola" ], "title": "What makes for good views for contrastive learning", "venue": "arXiv preprint arXiv:2005.10243,", "year": 2020 }, { "authors": [ "Kilian Q Weinberger", "Lawrence K Saul" ], "title": "Distance metric learning for large margin nearest neighbor classification", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Zhirong Wu", "Alexei A Efros", "Stella X Yu" ], "title": "Improving generalization via scalable neighborhood component analysis", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via nonparametric instance discrimination", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Qizhe Xie", "Zihang Dai", "Eduard Hovy", "Minh-Thang Luong", "Quoc V Le" ], "title": "Unsupervised data augmentation", "venue": "arXiv preprint arXiv:1904.12848,", "year": 2019 }, { "authors": [ "Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Large batch training of convolutional networks", "venue": "arXiv preprint arXiv:1708.03888,", "year": 2017 }, { "authors": [ "Xiaohua Zhai", "Avital Oliver", "Alexander Kolesnikov", "Lucas Beyer" ], "title": "S4l: Self-supervised semisupervised learning", 
"venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2019 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros" ], "title": "Colorful image colorization", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros" ], "title": "Split-brain autoencoders: Unsupervised learning by cross-channel prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 } ]
[ { "heading": null, "text": "We investigate a strategy for improving the efficiency of contrastive learning of visual representations by leveraging a small amount of supervised information during pre-training. We propose a semi-supervised loss, SuNCEt , based on noise-contrastive estimation and neighbourhood component analysis, that aims to distinguish examples of different classes in addition to the self-supervised instancewise pretext tasks. On ImageNet, we find that SuNCEt can be used to match the semi-supervised learning accuracy of previous contrastive approaches while using less than half the amount of pre-training and compute. Our main insight is that leveraging even a small amount of labeled data during pre-training, and not only during fine-tuning, provides an important signal that can significantly accelerate contrastive learning of visual representations." }, { "heading": "1 INTRODUCTION", "text": "Learning visual representations that are semantically meaningful with limited semantic annotations is a longstanding challenge with the potential to drastically improve the data-efficiency of learning agents. Semi-supervised learning algorithms based on contrastive instance-wise pretext tasks learn representations with limited label information and have shown great promise (Hadsell et al., 2006; Wu et al., 2018b; Bachman et al., 2019; Misra & van der Maaten, 2020; Chen et al., 2020a). Unfortunately, despite achieving state-of-the-art performance, these semi-supervised contrastive approaches typically require at least an order of magnitude more compute than standard supervised training with a cross-entropy loss (albeit without requiring access to the same amount of labeled data). Burdensome computational requirements not only make training laborious and particularly timeand energy-consuming; they also exacerbate other issues, making it more difficult to scale to more complex models and problems, and potentially inducing significant carbon footprints depending on the infrastructure used for training (Henderson et al., 2020).\nIn this work, we investigate a strategy for improving the computational efficiency of contrastive learning of visual representations by leveraging a small amount of supervised information during pre-training. We propose a semi-supervised loss, SuNCEt , based on noise-contrastive estimation (Gutmann & Hyvärinen, 2010) and neighbourhood component analysis (Goldberger et al., 2005), that aims at distinguishing examples of different classes in addition to the self-supervised instance-wise pretext tasks. We conduct a case-study with respect to the approach of Chen et al. (2020a) on the ImageNet (Russakovsky et al., 2015) and CIFAR10 (Krizhevsky & Hinton, 2009) benchmarks. We find that using any available labels during pre-training (either in the form of a cross-entropy loss or SuNCEt ) can be used to reduce the amount of pre-training required. Our most notable results on ImageNet are obtained with SuNCEt , where we can match the semi-supervised learning accuracy of previous contrastive approaches while using less than half the amount of pre-training and compute, and require no hyper-parameter tuning.\n::: By ::::::::: combining SuNCEt ::: with ::: the ::::::::: contrastive :::::: SwAV :::::: method\n:: of ::::::::::::::: Caron et al. 
(2020), we also achieve state-of-the-art top-5 accuracy on ImageNet with 10% labels, while cutting the pre-training epochs in half." }, { "heading": "2 BACKGROUND", "text": "The goal of contrastive learning is to learn representations by comparison. Recently, this class of approaches has fueled rapid progress in unsupervised representation learning of images through self-supervision (Chopra et al., 2005; Hadsell et al., 2006; Bachman et al., 2019; Oord et al., 2018; Hénaff et al., 2019; Tian et al., 2019; Misra & van der Maaten, 2020; He et al., 2019; Arora et al., 2019; Chen et al., 2020a; Caron et al., 2020; Grill et al., 2020; Chen et al., 2020b). In that context, contrastive approaches usually learn by maximizing the agreement between representations of different views of the same image, either directly via instance discrimination, or indirectly through cluster prototypes. Instance-wise approaches perform pairwise comparison of input data to push representations of similar inputs close to one another while pushing apart representations of dissimilar inputs, akin to a form of distance-metric learning.

Self-supervised contrastive approaches typically rely on a data-augmentation module, an encoder network, and a contrastive loss. The data-augmentation module stochastically maps an image $x_i \in \mathbb{R}^{3 \times H \times W}$ to a different view. Denote by $\hat{x}_{i,1}, \hat{x}_{i,2}$ two possible views of an image $x_i$, and denote by $f_\theta$ the parameterized encoder, which maps an input image $\hat{x}_{i,1}$ to a representation vector $z_{i,1} = f_\theta(\hat{x}_{i,1}) \in \mathbb{R}^d$. The encoder $f_\theta$ is usually parameterized as a deep neural network with learnable parameters $\theta$. Given a representation $z_{i,1}$, referred to as an anchor embedding, and the representation of an alternative view of the same input $z_{i,2}$, referred to as a positive sample, the goal is to optimize the encoder $f_\theta$ to output representations that enable one to easily discriminate between the positive sample and noise using multinomial logistic regression. This learning by picking out the positive sample from a pool of negatives is in the spirit of noise-contrastive estimation (Gutmann & Hyvärinen, 2010). The noise samples in this context are often taken to be the representations of other images. For example, suppose we have a set of images $(x_i)_{i \in [n]}$ and apply the stochastic data-augmentation to construct a new set with two views of each image, $(\hat{x}_{i,1}, \hat{x}_{i,2})_{i \in [n]}$. Denote by $Z = (z_{i,1}, z_{i,2})_{i \in [n]}$ the set of representations corresponding to these augmented images. Then the noise samples with respect to the anchor embedding $z_{i,1} \in Z$ are given by $Z \setminus \{z_{i,1}, z_{i,2}\}$. In this work, we minimize the normalized temperature-scaled cross-entropy loss (Chen et al., 2020a) for instance-wise discrimination

$$\ell_{\mathrm{inst}}(z_{i,1}) = -\log \frac{\exp(\mathrm{sim}(z_{i,1}, z_{i,2})/\tau)}{\sum_{z \in Z \setminus \{z_{i,1}\}} \exp(\mathrm{sim}(z_{i,1}, z)/\tau)}, \qquad (1)$$

where $\mathrm{sim}(a, b) = a^\top b / (\|a\| \|b\|)$ denotes the cosine similarity and $\tau > 0$ is a temperature parameter.

In typical semi-supervised contrastive learning setups, the encoder $f_\theta$ is learned in a fully unsupervised pre-training phase. The goal of this pre-training is to learn a representation invariant to common data augmentations (cf. Hadsell et al. (2006); Misra & van der Maaten (2020)) such as random crop/flip, resizing, color distortions, and Gaussian blur. After pre-training on unlabeled data, labeled training instances are leveraged to fine-tune $f_\theta$, e.g., using the canonical cross-entropy loss.
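To make the instance-discrimination objective in equation 1 concrete, the following is a minimal PyTorch sketch of the normalized temperature-scaled cross-entropy loss for a batch of paired views; the function name and batching convention are expository assumptions and may differ from any actual implementation.

import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    # z1, z2: [n, d] embeddings of the two views of the same n images
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # [2n, d], unit-norm rows
    sim = z @ z.t() / temperature                       # [2n, 2n] scaled cosine similarities
    # exclude self-similarity from the denominator
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    # the positive for view 1 of image i is view 2 of image i, and vice versa
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

Calling this on the projected embeddings of a mini-batch recovers the per-anchor loss of equation 1 averaged over all $2n$ anchors."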
}, { "heading": "3 METHODOLOGY", "text": "Our goal is to investigate a strategy for improving the computational efficiency of contrastive learning of visual representations by leveraging the available supervised information during pre-training. Here we explore a contrastive approach for utilizing available labels, but we also include additional numerical evaluations with a cross-entropy loss and a parametric classifier in Section 4.\nContrastive approach. Consider a set S of labeled samples operated upon by the stochastic dataaugmentation module. The associated set of parameterized embeddings are given by ZS(θ) = (fθ(x̂))x̂∈S . Let x̂ ∈ S denote an anchor image view with representation z = fθ(x̂) and class label y. By slight overload of notation, denote by Zy(θ) the set of embeddings for images in S with class label y (same class as the anchor z). We define the Supervised Noise Contrastive Estimation (SuNCEt ) loss as\n`(z) = − log ∑\nzj∈Zy(θ) exp(sim(z, zj)/τ)∑ zk∈ZS(θ)\\{z} exp(sim(z, zk)/τ) , (2)\nwhich is then averaged over all anchors 1|S| ∑ z∈ZS(θ) `(z).\nIn each iteration of training we sample a few unlabeled images to compute the self-supervised instance-discrimination loss equation 1, and sample a few labeled images to construct the set S and compute the SuNCEt loss equation 2. We sum these two losses together and backpropagate through the encoder network. By convention, when “sampling unlabeled images,” we actually sample images from the entire training set (labeled and unlabeled). This simple procedure bears some similarity to unsupervised data augmentation (Xie et al., 2019), where a supervised cross-entropy loss and a parametric consistency loss are calculated at each iteration.\nMotivation. We motivate the form of the SuNCEt loss by leveraging the relationship between contrastive representation learning and distance-metric learning. Specifically, the SuNCEt loss can be seen as a form of neighborhood component analysis (Goldberger et al., 2005) with an alternative similarity metric. Consider a classifier that predicts an image’s class based on the similarity of the image’s embedding z to those of other labeled images zj using a temperature-scaled cosine similarity metric d(z, zj) = z\nT zj/(‖z‖‖zj‖τ). Specifically, let the classifier randomly choose one point as its neighbour, with distribution as described below, and adopt the neighbour’s class. Given the query embedding z, denote the probability that the classifier selects point zj ∈ ZS(θ)\\{z} as its neighbour by\np(zj |z) = exp(d(z, zj))∑\nzk∈ZS(θ)\\{z} exp(d(z, zk)) .\nUnder mutual exclusivity (since the classifier only chooses one neighbour) and a uniform prior, the probability that the classifier predicts the class label ŷ equal to some class c, given a query image x with embedding z, is\np(ŷ = c|z) = ∑\nzj∈Zc(θ)\np(zj |z) = ∑\nzj∈Zc(θ) exp(d(z, zj))∑ zk∈ZS(θ)\\{z} exp(d(z, zk)) , (3)\nwhere Zc(θ) ⊂ ZS(θ) is the set of embeddings of labeled images from class c. Minimizing the KL divergence between p(ŷ|z) and the true class distribution (one-hot vector on the true class y), one arrives at the SuNCEt loss in equation 2. Assuming independence between labeled samples, the aggregate loss with respect to all labeled samples S decomposes into the simple sum ∑ z∈ZS(θ) `(z). 
Numerical experiments in Appendix G show that using SuNCEt during pre-training optimizes this aforementioned non-parametric stochastic nearest-neighbours classifier and significantly outperforms inference with the more common K-Nearest Neighbours strategy.

Practical considerations. Rather than directly using the outputs of the encoder $f_\theta$ to contrast samples, we feed the representations into a small multi-layer perceptron (MLP), $h_{\theta_{proj}}$, to project the representations into a lower-dimensional subspace before evaluating the contrastive loss, following Chen et al. (2020a). That is, instead of using $z = f_\theta(\hat{x})$ directly in equation 1 and equation 2, we use $h_{\theta_{proj}}(z) = h_{\theta_{proj}}(f_\theta(\hat{x}))$. The projection network $h_{\theta_{proj}}$ is only used for optimizing the contrastive loss, and is discarded at the fine-tuning phase. In general, adding SuNCEt to a pre-training script only takes a few lines of code. See Listing 2 in Appendix A for the pseudo-code used to compute the SuNCEt loss on a mini-batch of labeled images." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we investigate the computational effects of SuNCEt when combined with the SimCLR self-supervised instance-wise pretext task defined in Section 2 (the SuNCEt loss can certainly be combined with other instance-wise pretext tasks as well). We report results on the ImageNet (Russakovsky et al., 2015) and CIFAR10 (Krizhevsky & Hinton, 2009) benchmarks for comparison with related work. We also examine the combination of SuNCEt with the contrastive SwAV method of Caron et al. (2020) in Section 5, and achieve state-of-the-art top-5 accuracy on ImageNet with 10% labels, while cutting the pre-training epochs in half. All methods are trained using the LARS optimizer (You et al., 2017) along with a cosine-annealing learning-rate schedule (Loshchilov & Hutter, 2016). The standard procedure when evaluating semi-supervised learning methods on these data sets is to assume that some percentage of the data is labeled, and treat the rest of the data as unlabeled. On ImageNet we directly use the same 1% and 10% data splits used by Chen et al. (2020a). On CIFAR10, we create the labeled data sets by independently selecting each point to be in the set of labeled training points with some probability $p$; we run experiments for each $p$ in $\{0.01, 0.05, 0.1, 0.2, 0.5, 1.0\}$.

Architecture & data. The encoder network in our experiments is a ResNet-50. On CIFAR10 we modify the trunk of the encoder following Chen et al. (2020a). While this network may not be optimal for CIFAR10 images, it enables fair comparison with previous work. For the projection network $h_{\theta_{proj}}$ we use an MLP with a single hidden layer; the hidden layer has 2048 units and the output of the projection network is a 128-dimensional real vector. The stochastic data augmentation module employs random cropping, random horizontal flips, and color jitter. On ImageNet, we also make use of Gaussian blur.

Fine-tuning. Upon completion of pre-training, all methods are fine-tuned on the available set of labeled data using SGD with Nesterov momentum (Sutskever et al., 2013). We adopt the same fine-tuning procedure as Chen et al. (2020a). Notably, when fine-tuning, we do not employ weight decay and only make use of basic data augmentations (random cropping and random horizontal flipping).
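As a rough sketch of this fine-tuning recipe (zero-initialized linear classifier per Appendix B, SGD with Nesterov momentum, no weight decay): here encoder, feature_dim, num_classes, labeled_loader, and num_steps are assumed placeholders, and the learning rate shown is the CIFAR10 value from Appendix B.

import torch
import torch.nn.functional as F

classifier = torch.nn.Linear(feature_dim, num_classes)
torch.nn.init.zeros_(classifier.weight)  # linear classifier initialized to zero (cf. Appendix B)
torch.nn.init.zeros_(classifier.bias)

params = list(encoder.parameters()) + list(classifier.parameters())
optimizer = torch.optim.SGD(params, lr=0.05, momentum=0.9, nesterov=True, weight_decay=0.0)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_steps)

for imgs, labels in labeled_loader:  # basic augmentations only: random crop + horizontal flip
    loss = F.cross_entropy(classifier(encoder(imgs)), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()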
Additional details on the fine-tuning procedure are provided in Appendix B." }, { "heading": "4.1 IMAGENET", "text": "Experimental setup. Our default setup on ImageNet makes use of distributed training; we train each run on 64 V100 GPUs and 640 CPU cores. We aggregate gradients using the standard all-reduce primitive and contrast representations across workers using an efficient all-gather primitive. We also synchronize batch-norm statistics across all workers in each iteration to prevent the models from leaking local information to improve the loss without improving representations (cf. Chen et al. (2020a)). We linearly warm up the learning-rate from 0.6 to 4.8 during the first 10 epochs of training and use a cosine-annealing schedule thereafter. We use a momentum value of 0.9, weight decay $10^{-6}$, and temperature 0.1. These hyper-parameters are tuned for SimCLR (Chen et al., 2020a), but we also apply them to the SimCLR + SuNCEt combination.

[Figure caption: ... a ResNet50 pre-trained on ImageNet with access to 10% of the labels. Orange markers depict SimCLR self-supervised pre-training followed by fine-tuning. Blue markers depict the combination of SimCLR + SuNCEt. Using SuNCEt to leverage available labels during pre-training (not only fine-tuning) accelerates convergence and produces better models.]

We use a batch-size of 4096 (8192 contrastive samples) for SimCLR; each worker processes 128 contrastive samples per iteration. When implementing the SimCLR + SuNCEt combination, we aim to keep the per-iteration cost roughly the same as the baseline, so we use a smaller unsupervised batch-size. Specifically, each worker processes 88 unlabeled samples per iteration, and 40 labeled samples (sub-sampling 20 classes in each iteration and sampling 2 images from each of the sub-sampled classes). With 10% of the images labeled, we turn off the SuNCEt loss after epoch 250; with 1% of the images labeled, we turn off the SuNCEt loss after epoch 30. We explore the effect of the switch-off epoch and the supervised batch-size (the fraction of labeled data in the sampled mini-batch) in Appendix D, and find the ImageNet results to be relatively robust to these parameters.

SuNCEt. Figure 1 shows the top-5 accuracy as a function of the amount of pre-training when 10% of the data is labeled. Orange markers denote SimCLR self-supervised pre-training followed by fine-tuning. Blue markers denote the SimCLR + SuNCEt combination followed by fine-tuning. Using SuNCEt to leverage available labels during pre-training accelerates convergence and produces better models with much less compute. The orange shaded region in the right sub-figure explicitly shows the amount of compute saved by using SuNCEt during pre-training. To put these results in the context of our 64 GPU setup, one epoch of SimCLR corresponds to 312 updates per GPU. SimCLR+SuNCEt matches the best SimCLR top-5 accuracy while using only 44% of the compute, and matches the best SimCLR top-1 accuracy while using only 45% of the compute.
It may be possible to push these savings further by optimizing the hyper-parameters for SuNCEt.

Similarly, Figure 2 shows the top-1 accuracy as a function of the amount of pre-training when 10% of the data is labeled. Orange markers denote SimCLR self-supervised pre-training followed by fine-tuning. Blue markers denote the SimCLR + SuNCEt combination followed by fine-tuning. It is interesting to note that the improvements in accuracy are significant under the same training epochs; e.g., in the 10% label setting, with 100 epochs of training we observe a +1.6% improvement (from 61.8% to 63.4%) in top-1 ImageNet accuracy (run-to-run variation is on the order of 0.1-0.2%); similarly, with 500 epochs of training we observe a +1.3% improvement (from 65.5% to 66.7%) in top-1 ImageNet accuracy.

With 1% labeled data we find that SuNCEt matches the best SimCLR 500-epoch top-5 accuracy while using only 81% of the compute, and matches the best SimCLR top-1 accuracy while using only 83% of the compute (note that with 1% labeled data, our 500-epoch re-implementation of SimCLR outperforms the original 1000-epoch results of Chen et al. (2020a)). While these savings are significant when considering the overall cost of performing 500 epochs of pre-training on 64 V100 GPUs, we note that the improvements are slightly more modest compared to the 10% labeled data setting. This observation supports the hypothesis that improvements in convergence can be related to the availability of labeled data during pre-training. Table 1 shows the top-1 and top-5 model accuracies with 10% labeled data (left sub-table) and with 1% labeled data (right sub-table).

Cross-entropy. Next we experiment with leveraging labeled samples during pre-training using a cross-entropy loss and a parametric linear classifier (as opposed to the non-parametric SuNCEt loss). Similarly to the SuNCEt experiments, we use the same hyper-parameters as in Chen et al. (2020a) for pre-training. Figure 3 reports savings with respect to our SimCLR 1000-epoch baseline; the cross-entropy approach matches the best SimCLR 1000-epoch top-1 and top-5 validation accuracy while using only 63% of the compute. These savings are lower than those provided by SuNCEt (which only requires 44% of pre-training to match the best SimCLR top-5 accuracy and 45% of pre-training to match the best SimCLR top-1 accuracy), but are significant nonetheless.

With 1% labeled data, despite low training loss, SimCLR + cross-entropy does not obtain significantly greater than random validation accuracy with the SimCLR hyper-parameters (even if we only leave the cross-entropy term on for 30 epochs to avoid overfitting). With only 12 samples per class in the 1% data setting, it is quite easy to overfit with a cross-entropy loss, suggesting that more fine-grained tuning may be required.
In contrast, recall that we observe 19% compute savings out of the box with SimCLR + SuNCEt in this scenario with the default SimCLR hyper-parameters.

Transfer. Our previous results show that leveraging labeled data during pre-training can result in computational savings. Next we investigate the effect of this procedure on downstream transfer tasks. We evaluate the transfer learning performance of the 500-epoch pre-trained ImageNet models on Pascal VOC07 (Everingham et al., 2010) (11-mAP), and CIFAR10 and CIFAR100 (Krizhevsky & Hinton, 2009) (top-1), using the fine-tuning procedure described in Chen et al. (2020a) (cf. Appendix B for details). Transfer results are reported in Table 2. Using the SuNCEt loss to leverage available labels during pre-training always improves transfer over pure self-supervised pre-training for the same number of epochs. Moreover, on Pascal VOC07, the SimCLR + SuNCEt combination with only 500 epochs of pre-training significantly outperforms 1000 epochs of SimCLR pre-training." }, { "heading": "4.2 CIFAR10", "text": "[...] compute the SuNCEt loss in each iteration. We turn off the SuNCEt loss after the first 100 epochs and revert back to completely self-supervised learning for the remaining 400 epochs of pre-training to avoid overfitting to the small fraction of available labeled data; we explore this point in Appendix E. (The only exception to this rule is the set of experiments where 100% of the training data is labeled, in which case we keep SuNCEt on for the entire 500 epochs. We only observed overfitting on CIFAR10, not ImageNet.)

Results. Figure 4a shows the convergence of SimCLR with various amounts of labeled data, both in terms of epochs (left sub-figure) and in terms of computation (right sub-figure). Both the sample efficiency (left sub-figure) and computational efficiency (right sub-figure) of SimCLR improve with the availability of labeled data, even if labeled data is only used for fine-tuning. Figure 4b shows the convergence of the SimCLR + SuNCEt combination with various amounts of labeled data, both in terms of epochs (left sub-figure) and in terms of computation (right sub-figure). Epochs are counted with respect to the number of passes through the unsupervised data-loader. We observe a similar trend in the SimCLR + SuNCEt combination, where both the sample efficiency and computational efficiency improve with the availability of labeled data. Figure 4c shows the improvement in Top-1 test accuracy throughout training (relative to SimCLR) when using SuNCEt during pre-training. Not only does SuNCEt accelerate training from a sample efficiency point of view, but it also leads to better models at the end of training. Figure 4d teases apart the computational advantages by showing the amount of computation saved by the SimCLR + SuNCEt combination in reaching the best SimCLR accuracy. SuNCEt saves computation for any given amount of supervised samples. With only 1% of the training data labeled, SuNCEt can reach the best SimCLR test accuracy while conserving roughly 50 petaflops of computation and over 10,000 model updates (one model update refers to the process of completing a forward-backward pass, computing the loss, and performing an optimization step). In the best case, SuNCEt, with the same exact hyper-parameters as the self-supervised baseline, only requires 22% of SimCLR pre-training to match the best SimCLR test accuracy. It may be possible to push these savings further by optimizing hyper-parameters for SuNCEt."
}, { "heading": "5 RELATED WORK", "text": "Table 3: ::::::: Validation :::::::: accuracy :: of :: a :::::::: ResNet50 ::::::::: pre-trained ::: on :::::::: ImageNet :::: with :::::: access :: to ::::: 10% :: of :::::: labels.\n:::::::: Contrastive ::::::: methods ::: like ::::::: SimCLR :::::::::::::::: (Chen et al., 2020a) :: and :::::: SwAV ::::::::::::::: (Caron et al., 2020) :: can ::::::: leverage :::::: SuNCEt ::::: during ::::::::: pre-training :: to :::::: surpass :::: their ::::::: baseline ::::::::::::: semi-supervised ::::::: accuracy :: in :::: half ::: the :::::: number :: of ::::::::: pre-training ::::: epochs. :::::::::::::: SuNCEt+SwAV ::: is :::: also ::::::::: competitive :::: with ::::: other ::::::::::::: semi-supervised ::::::::: approaches :::: and ::::::::: outperforms ::::::::::::::::::: FixMatch+RandAugment : in ::::: terms :: of :::: top-5 ::::::: accuracy.\n::::::: Method\n:::::: Epochs : ::::: Top-1 : ::::: Top-5 :\n::::::::: Supervised ::::::::::::::: (Zhai et al., 2019)\n::: 200\n::: 56.4 : ::: 80.4 :\n::::::: NPID++ ::::::::::::::::::::::::::::::::::::::::: (Wu et al., 2018b; Misra & van der Maaten, 2020)\n::: 800\n: – : ::: 81.5 :\n::::: PIRL :::::::::::::::::::::::::: (Misra & van der Maaten, 2020)\n::: 800\n: – : ::: 83.8 :\n:::: UDA :: + :::::::::::: RandAugment ::::::::::::::: (Xie et al., 2019)\n: –\n::: 68.8 : ::: 88.5 :\n:::::::: FixMatch :: + :::::::::::: RandAugment :::::::::::::::: (Sohn et al., 2020)\n::: 300\n::: 71.5 : ::: 80.1 :\n::::::::: SimCLRv2 ::::::::::::::::: (Chen et al., 2020b)\n:::: 1200 : ::: 68.4 : ::: 89.2 :\n::::::: SimCLR ::::::::::::::::: (Chen et al., 2020a)\n:::: 1000 : ::: 65.6 : ::: 87.8 :\n::::::::::::::: SimCLR+SuNCEt :::::: (ours)\n::: 500\n::: 66.7 : ::: 88.2 :\n::::: SwAV ::::::::::::::::: (Caron et al., 2020)\n::: 800\n::: 70.2 : ::: 89.9 :\n::::::::::::: SwAV+SuNCEt :::::: (ours)\n::: 400\n::: 70.8 : ::: 89.9 :\nSelf-supervised learning. There are a number of other self-supervised learning approaches in the literature, besides the instance-discrimination pretext task in SimCLR (Chen et al., 2020a;b). Some non-contrastive approaches learn feature representations by relative patch prediction (Doersch et al., 2015), by solving jigsaws (Noroozi & Favaro, 2016), by applying and predicting image rotations (Gidaris et al., 2018), by inpainting or colorization (Denton et al., 2016; Pathak et al., 2016; Zhang et al., 2016; 2017), by parametric instance-discrimination (Dosovitskiy et al., 2014), and sometimes by combinations thereof (Doersch & Zisserman, 2017; Kolesnikov et al., 2019). Of the contrastive approaches, Contrastive Predictive Coding (CPC) (Oord et al., 2018; Hénaff et al., 2019) compares representations from neighbouring patches of the same image to produce representations with a local regularity that are discriminative of particular samples. Non-Parametric Instance Discrimination (NPID) (Wu et al., 2018b) aims to learn representations that enable each input image to be uniquely distinguished from the others, and makes use of a memory bank to train with many contrastive samples. The NPID training objective offers a non-parametric adaptation of Exemplar CNN (Dosovitskiy et al., 2014). Misra & van der Maaten (2020) generalizes the NPID method as Pretext-Invariant Representation Learning (PIRL) to contrast images both with and without data augmentations, and combine the method with other instance-wise pretext tasks. He et al. 
(2019) proposes Momentum Contrast (MoCo) to build even larger memory banks by using an additional slowly progressing key encoder, thus benefiting from more contrastive samples while avoiding computational issues with large-batch training. Grill et al. (2020) also contrasts representations with those of a slowly progressing target encoder, but eliminates negative samples altogether. There is also recent work (Li et al., 2020), which makes use of EM (McLachlan, 2004) and clustering algorithms for estimating cluster prototypes (Snell et al., 2017), and the recent SwAV method of Caron et al. (2020), which contrasts image representations with random cluster prototypes. We report results for SwAV+SuNCEt in Table 3, trained with the same exact batch-size and learning-rate as for SuNCEt+SimCLR. The results are consistent with the SimCLR experiments in Section 4; we can match the baseline semi-supervised contrastive accuracy with less than half the pre-training epochs.

Semi-supervised learning. Self-supervised learning methods are typically extended to the semi-supervised setting by fine-tuning the model on the available labeled data after completion of self-supervised pre-training. S4L (Zhai et al., 2019) is a recent exception to this general procedure, using a cross-entropy loss during self-supervised pre-training. While Zhai et al. (2019) does not study contrastive approaches, nor the computational efficiency of S4L, it shows that S4L can be combined in a stage-wise approach with other semi-supervised methods such as Virtual Adversarial Training (Miyato et al., 2018), Entropy Regularization (Grandvalet & Bengio, 2006), and Pseudo-Label (Lee, 2013) to improve the final accuracy of their model (see follow-up work (Tian et al., 2020; Hendrycks et al., 2019)). Chen et al. (2020a) reports that the SimCLR approach with self-supervised pre-training and supervised fine-tuning outperforms the strong baseline combination of S4L with other semi-supervised tasks. Other semi-supervised learning methods not based on self-supervised learning include Unsupervised Data Augmentation (UDA) (Xie et al., 2019) and the MixMatch trilogy of work (Berthelot et al., 2019a;b; Sohn et al., 2020). FixMatch (Sohn et al., 2020) makes predictions on weakly augmented images and (when predictions are confident enough) uses those predictions as labels for strongly augmented views of those same images. An additional key feature of FixMatch is the use of learned data augmentations (Berthelot et al., 2019a; Cubuk et al., 2019). Of the non-contrastive methods, FixMatch sets the current state-of-the-art on established semi-supervised learning benchmarks. Note that SwAV+SuNCEt is competitive with the other semi-supervised approaches and outperforms FixMatch+RandAugment in terms of top-5 accuracy (cf. Table 3).

Supervised contrastive loss functions. Supervised contrastive losses have a rich history in the distance-metric learning literature.
Classically, these methods utilized triplet losses (Chechik et al., 2010; Hoffer & Ailon, 2015; Schroff et al., 2015) or max-margin losses (Weinberger & Saul, 2009; Taigman et al., 2014), and required computationally expensive hard-negative mining (Shrivastava et al., 2016) or adversarially generated negatives (Duan et al., 2018) in order to obtain informative contrastive samples that reveal information about the structure of the data. One of the first works to overcome expensive hard-negative mining is that of Sohn (2016), which suggests using several negative samples per anchor. Most similar to the SuNCEt loss is that of Wu et al. (2018a), which investigates neighborhood component analysis (NCA) (Goldberger et al., 2005) in the fully supervised setting. However, their method approximates the NCA loss by storing an embedding tensor for every single image in the dataset, adding non-trivial memory overhead. The SuNCEt loss instead relies on noise-contrastive estimation and does not have this limitation. Another more recent supervised contrastive loss is that proposed in Khosla et al. (2020) for the fully supervised setting; while their proposed method is more computationally draining than training with a standard cross-entropy loss, it is shown to improve model robustness. As mentioned, the SuNCEt loss can be seen as a form of neighborhood component analysis (Goldberger et al., 2005) with an alternative similarity metric. The SuNCEt loss is different from the loss of Khosla et al. (2020, v1). However, after the initial preprint of our work appeared on OpenReview, Khosla et al. (2020, v2, Section 15-Change Log) was updated with an additional contrastive loss of a similar format to SuNCEt. We provide a brief comparison of the loss in Khosla et al. (2020, v1) and SuNCEt using the full set of labeled data in Appendix H. In short, when using the losses in conjunction with SimCLR and a small supervised batch-size, both methods perform similarly. However, when used independently with larger batches and more positive samples per anchor, their performance differs." }, { "heading": "6 CONCLUSION", "text": "This work demonstrates that a small amount of supervised information leveraged during contrastive pre-training (not just fine-tuning) can accelerate convergence.
We posit that new methods and theory rethinking the role of supervision, to not only improve model accuracy but also learning efficiency, are an exciting direction towards addressing the computational limitations of existing methods while utilizing limited semantic annotations." }, { "heading": "A PSEUDO-CODE", "text": "Listing 1: Pseudo-code for the main training script computing SimCLR+SuNCEt when a small fraction of labeled data is available during pre-training.

# -- init image sampler for instance-discrimination
unsupervised_data_loader = ...

# -- init (labeled) image sampler for SuNCEt
supervised_data_loader = ...

for epoch in range(num_epochs):

    for itr, imgs in enumerate(unsupervised_data_loader):

        # -- compute instance-discrimination loss
        z = mlp(encoder(imgs))
        ssl_loss = simclr(z)

        # -- compute supervised-contrastive loss on labeled data
        imgs, labels = next(supervised_data_loader)
        z = mlp(encoder(imgs))
        supervised_loss = suncet(z, labels)

        # -- compute aggregate loss and update encoder & mlp
        loss = supervised_loss + ssl_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        lr_scheduler.step()

Listing 2: Pseudo-code for computing SuNCEt on a given tensor of image embeddings (temperature is a global hyper-parameter).

def suncet(z, labels):

    # -- normalize embeddings: [n x d]
    z = z.div(z.norm(dim=1).unsqueeze(1))

    # -- compute pairwise similarities and zero the diagonal (self-similarity): [n x n]
    n = z.size(0)
    exp_cs = torch.exp(torch.mm(z, z.t()) / temperature) * (1. - torch.eye(n))

    # -- compute loss for each sampled class and accumulate
    loss = 0.
    num_classes = 0
    for l in set(labels.tolist()):

        # -- batch-size of embeddings with class-label ‘l’
        bs_cls = (labels == l).sum()
        num_classes += 1

        # -- numerator: same-class similarities; denominator: all non-self similarities
        pos_cls = torch.sum(exp_cs[labels == l][:, labels == l], dim=1)
        den_cls = torch.sum(exp_cs[labels == l], dim=1)
        loss += - torch.sum(torch.log(pos_cls.div(den_cls))) / bs_cls

    loss /= num_classes
    return loss" }, { "heading": "B ADDITIONAL DETAILS ABOUT FINE-TUNING", "text": "We follow the fine-tuning procedure of Chen et al. (2020a). Upon completion of pre-training, all methods are fine-tuned on the available labeled data using SGD with Nesterov momentum. We do not employ weight decay during fine-tuning, and only make use of basic data augmentations (random cropping and random horizontal flipping). The weights of the linear classifier used to fine-tune the encoder network are initialized to zero. On CIFAR10, models are fine-tuned for 90 epochs. All results are reported on the standard CIFAR10 test set. We use a batch-size of 256, along with a momentum value of 0.9 and an initial learning-rate of 0.05 coupled with a cosine-annealing learning-rate schedule. On ImageNet, in the 10% labeled data setting, models are fine-tuned for 30 epochs; in the 1% labeled data setting, models are fine-tuned for 60 epochs. We use a batch-size of 4096, along with a momentum value of 0.9 and an initial learning-rate of 0.8 coupled with a cosine-annealing learning-rate schedule. All results are reported on the standard ImageNet validation set using a single center-crop." }, { "heading": "C ADDITIONAL DETAILS ABOUT TRANSFER", "text": "We follow the fine-tuning transfer procedure outlined in Chen et al. (2020a). Specifically, we fine-tune the pre-trained model for 20,000 steps using Nesterov momentum. We use a batch-size of 256 and set the momentum value to 0.9.
We perform random resized crops and horizontal flipping, and select the learning rate and weight decay by performing a grid search with 7 logarithmically spaced learning rates between 0.0001 and 0.1 and 7 logarithmically spaced values of weight decay between $10^{-6}$ and $10^{-3}$, as well as no weight decay. We divide the weight decay values by the learning rate." }, { "heading": "D EFFECT OF SUPERVISED BATCH-SIZE ON IMAGENET", "text": "What fraction of our sampled mini-batches should correspond to labeled images for computing the SuNCEt loss? We fix the total number of passes through the labeled data and vary the fraction of labeled data sampled per mini-batch. Therefore, runs that sample less labeled data per mini-batch keep the SuNCEt loss on for more updates, whereas runs that sample more labeled data per mini-batch keep the SuNCEt loss on for fewer updates.

We train a ResNet50 on ImageNet for 500 epochs on 64 V100 GPUs using the SimCLR + SuNCEt combination with the default SimCLR optimization parameters described in Section 4. In one setting, 10% of the images are labeled, and, in the other, 1% of the images are labeled. The left sub-plots in Figure 5 show how the best top-1 and top-5 validation accuracy vary as we change the fraction of labeled data per mini-batch. The right sub-plots show how the compute (petaflops) used to obtain the corresponding models varies as we change the fraction of labeled data per mini-batch. If we only use a small fraction of labeled data in each mini-batch, then the best model accuracy drops. However, in general, the best final model accuracies, and the corresponding computational requirements to obtain said models, are not significantly affected by the fraction of labeled data per mini-batch and the corresponding switch-off epoch." }, { "heading": "E LIMITATIONS ON CIFAR10", "text": "The amount of time that we can leave the SuNCEt loss on without degrading performance on CIFAR10 is positively correlated with the amount of labeled data. To shed light on this limitation, we conduct experiments where we switch off the SuNCEt loss at a certain epoch, and revert to fully self-supervised learning for the remainder of training. All models are trained for a total of 500 epochs; epochs are counted with respect to the number of passes through the unsupervised data loader.

The left subplots in Figure 6 report the final model test accuracy on CIFAR10 as a function of the switch-off epoch, for various percentages of available labeled data. The right subplots in Figure 6 report the amount of petaflops needed to train the corresponding models in the left subplots.

To study the potential accuracy degradation as a function of the switch-off epoch, we first restrict our focus to the left subplots in Figure 6. When 20% or more of the data is labeled (bottom three subplots), the final model accuracy is relatively invariant to the switch-off epoch (lines are roughly horizontal). However, when less labeled data is available, the final model accuracy can degrade if we leave the SuNCEt loss on for too long (top three subplots). The magnitude of the degradation is negatively correlated with the amount of available labeled data (lines become progressively more horizontal from the top subplot to the bottom subplot).

From a computational perspective, it may also be beneficial to turn off the SuNCEt loss at some point, even if leaving it on does not degrade performance.
We hypothesize that once we have squeezed out all the information that we can from the labeled data, it is best to redirect all computational resources to optimizing the (more slowly convergent) self-supervised instance-discrimination task. We see that leaving the SuNCEt loss on for more epochs does not provide any significant improvement in model accuracy (left subplots in Figure 6), but the corresponding computational requirements still increase (right subplots in Figure 6).

Switching off the SuNCEt loss when it has roughly plateaued provides a good strategy for balancing gains in model accuracy with computational costs. We switch off the SuNCEt loss at epoch 100 in all of our CIFAR10 experiments in the main paper (except the experiment with 100% labeled data, where SuNCEt is left on for all 500 epochs of training). Figure 7 depicts the supervised SuNCEt loss during training for various percentages of available labeled data (left subplots), and the self-supervised InfoNCE loss during training for various percentages of available labeled data (right subplots). The SuNCEt loss has roughly plateaued after 100 training epochs (left subplots). Figure 7 also suggests that the rate at which the SuNCEt loss plateaus is negatively correlated with the available amount of labeled data. This observation supports the intuition that one should turn off the SuNCEt loss earlier in training if less labeled data is available (cf. Figure 6). In general, the strategy we adopt is simply to keep the number of passes through the labeled data fixed, meaning that less data will require fewer updates." }, { "heading": "F ADDITIONAL EXPERIMENTS FOR SIMCLR + CROSS-ENTROPY (CIFAR10)", "text": "Experimental setup. Our training setup for SimCLR + cross-entropy on CIFAR10 is identical to that used in Section 4 for SimCLR + SuNCEt. Specifically, we use a single V100 GPU and 10 CPU cores. We use a learning-rate of 1.0, momentum 0.9, weight decay $10^{-6}$, and temperature 0.5.
These hyper-parameters are tuned for SimCLR (Chen et al., 2020a), and we also apply them to the SimCLR + cross-entropy combination.

[Figure 8: CIFAR10 convergence with various percentages of labeled data (1%, 5%, 10%, 20%, 50%, 100%), in terms of training epochs and petaflops. (a) SimCLR test-set convergence with fine-tuning on various percentages of labeled data. (b) SimCLR + cross-entropy test-set convergence with fine-tuning on various percentages of labeled data. (c) SimCLR + cross-entropy improvement in test-set convergence (relative to plain SimCLR) with fine-tuning on various percentages of labeled data. (d) Computation saved by SimCLR + cross-entropy in reaching the best SimCLR test accuracy with fine-tuning on various percentages of labeled data.]

We turn off the cross-entropy loss after the first 100 epochs and revert back to completely self-supervised learning for the remaining 400 epochs of pre-training to avoid overfitting to the small fraction of available labeled data (the only exception to this rule is the set of experiments where 100% of the training data is labeled, in which case we keep SuNCEt on for the entire 500 epochs).

Results. Figure 8a shows the convergence of SimCLR with various amounts of labeled data, both in terms of epochs (left sub-figure) and in terms of computation (right sub-figure). Both the sample efficiency and computational efficiency of SimCLR improve with the availability of labeled data, even if labeled data is only used for fine-tuning.

Figure 8b shows the convergence of the SimCLR + cross-entropy combination with various amounts of labeled data, both in terms of epochs (left sub-figure) and in terms of computation (right sub-figure). Epochs are counted with respect to the number of passes through the unsupervised data-loader. We observe a similar trend in the SimCLR + cross-entropy combination, where both the sample efficiency and computational efficiency improve with the availability of labeled data.

Figure 8c shows the improvement in Top-1 test accuracy throughout training (relative to SimCLR) when using cross-entropy. Similarly to SuNCEt, we see that cross-entropy accelerates training from a sample efficiency point of view, and also leads to better models at the end of training.

Figure 8d shows the amount of computation saved by the SimCLR + cross-entropy combination in reaching the best SimCLR accuracy. There are two x-axes in this figure: the top shows the petaflops saved and the bottom shows the number of model updates saved to reach the best SimCLR test accuracy. Similarly to SuNCEt, cross-entropy saves computation for any given amount of supervised samples. These results provide further evidence for our hypothesis, namely, that leveraging labeled data during self-supervised pre-training can accelerate convergence.
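As a minimal sketch, in the style of Listing 1, of how a parametric cross-entropy head can sit alongside the instance-discrimination loss during pre-training; the classifier head and the switch-off logic shown here are expository assumptions rather than a definitive implementation.

import torch.nn.functional as F

# classifier: an illustrative parametric linear head on top of the encoder
for epoch in range(num_epochs):
    for imgs in unsupervised_data_loader:
        loss = simclr(mlp(encoder(imgs)))    # instance-discrimination term
        if epoch < switch_off_epoch:         # e.g. 100 on CIFAR10, as described above
            imgs_l, labels = next(supervised_data_loader)
            loss = loss + F.cross_entropy(classifier(encoder(imgs_l)), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        lr_scheduler.step()"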
}, { "heading": "G NON-PARAMETRIC INFERENCE", "text": "In Section 3 we showed that, from a theoretical perspective, the SuNCEt loss optimizes a nonparametric classifier based on a type of stochastic nearest neighbours. Here we empirically evaluate this connection on ImageNet by classifying validation images using the inference procedure described in Section 3, and comparing to a K-Nearest Neighbours (KNN) classifier with the same similarity metric.\nWe consider the 10% labeled data setting and use the 400-epoch pre-trained+fine-tuned SimCLR+SuNCEt models to compute image embeddings. Specifically, we classify each point in the validation set by computing the SuNCEt class probabilities in equation 3 with respect to the small set of available labeled training images, and choosing the class with the highest probability. We refer to this non-parameteric inference procedure as SuNCEt -NPI. We employ basic data augmentations (random cropping and random horizontal flipping) to the labeled training images before computing their corresponding embeddings, and apply a single center-crop to the validation images. When performing inference using SuNCEt -NPI, we find it best to use the temperature parameter used during training, τ = 0.1 in this case, and, surprisingly, to also use the image embeddings obtained before the MLP projection head. We also find it best to use the image embeddings obtained before the MLP projection head when using the KNN classifier. We experiment with various values of K for the KNN classifier, and find K = 10 to work best (surprisingly, better than larger values of K).\nTable 4 shows the validation accuracy of these non-parametric classifiers. We consider (i) KNearest Neighbours (K=10); (ii) SuNCEt -NPI (Single-View), where we compute the SuNCEt class probabilities in equation 3 and use one embedding for each available labeled training image; (iii)\nSuNCEt -NPI (Multi-View), where we compute the SuNCEt -NPI class probabilities in equation 3 and use multiple embedding for each available labeled training image. The validation accuracies obtained by using SuNCEt -NPI are significantly greater than those obtained using K-Nearest Neighbours; suggesting that using SuNCEt during pre-training optimizes for the non-parametric stochastic nearest classifier described in Section 3.\nAs a final observation, we find that using multiple views of training images for inference has no significant effect on the classification accuracy; this is likely due to the invariance induced by selfsupervised instance-discrimination. It should also be noted that the accuracies in Table 4 are obtained by comparing the validation images to only the 10% of labeled images used during pre-training. It is almost certainly possible to increase the accuracies for all methods in this table by conducting inference with respect to the entire training set. Moreoever\n:::::::: Moreover, it may be possible to further\nincrease the SuNCEt -NPI accuracies by fine-tuning the pre-trained models using the SuNCEt loss : .\nH ::::::::::::::: CONTRASTIVE :::::::: LOSSES\n:::: Here ::: we ::::: briefly :::::::: compare ::: our SuNCEt ::: loss :: to ::: the ::: loss ::::: Lsupout ::::::::::::::::(Khosla et al., 2020):,:::::which::::was:::::::proposed :: for ::: the :::: fully ::::::::: supervised ::::::: setting. 
The loss $L^{sup}_{out}$ with respect to an anchor $z$ with class label $y$ is given by

$$-\frac{1}{|Z_y(\theta)|} \sum_{z_j \in Z_y(\theta)} \log \frac{\exp(\mathrm{sim}(z, z_j)/\tau)}{\sum_{z_k \in Z_S(\theta) \setminus \{z\}} \exp(\mathrm{sim}(z, z_k)/\tau)},$$

whereas the SuNCEt loss is given by

$$-\log \frac{\sum_{z_j \in Z_y(\theta)} \exp(\mathrm{sim}(z, z_j)/\tau)}{\sum_{z_k \in Z_S(\theta) \setminus \{z\}} \exp(\mathrm{sim}(z, z_k)/\tau)}.$$

When using the losses in conjunction with SimCLR and a small supervised batch-size, both methods perform similarly. However, when used independently with many positive samples per anchor, their performance differs. We train a ResNet50 for 100 epochs on ImageNet using the full set of labeled data, followed by 15 epochs of fine-tuning the entire network weights with a cross-entropy loss. We use the default SimCLR data-augmentations and hyper-parameters (learning rate = 4.8).

Table 5: Validation accuracy of a ResNet50 pre-trained on ImageNet with access to 100% of labels, using the default SimCLR data-augmentations and hyper-parameters (learning rate = 4.8). (Left table): training with SimCLR, using an unsupervised batch-size of 4,096 samples and a supervised batch-size of 1,280 samples. (Right table): training using only the supervised losses and a batch-size of 16k (125 classes, 128 instances per class). When using the losses in conjunction with SimCLR and a small supervised batch-size, both methods perform similarly. However, when using the losses independently with many positive samples per anchor, the SuNCEt loss top-1 validation accuracy is more than +10.0% higher.

(Left table)
Pre-train loss | Top-1 | Top-5
rand. init | 51.8 | 76.1
SimCLR + $L^{sup}_{out}$ | 75.4 | 92.9
SimCLR+SuNCEt (ours) | 75.4 | 93.0

(Right table)
Pre-train loss | Top-1 | Top-5
rand. init | 51.8 | 76.1
$L^{sup}_{out}$ | 64.4 | 86.5
SuNCEt (ours) | 75.6 | 92.8

In the left sub-table in Table 5 we trained both methods jointly with SimCLR, using an unsupervised batch-size of 4,096 samples and a supervised batch-size of 1,280. Both SuNCEt and $L^{sup}_{out}$ perform similarly.
In the right sub-table in Table 5, we train the methods independently with a batch size of 16K: 125 classes and 128 instances per class. This is similar to the experimental setup in Section 4, where each mini-batch contains 128 samples per class (2 sampled by each GPU). We had difficulty getting the Khosla et al. (2020) loss to converge with this many positive samples, even when we made the learning-rate small, so we added a batch-normalization (BN) layer before the final layer of the projection head, and this fixed the issue. We evaluated the SuNCEt loss both with and without the BN layer, and it did not affect performance, so we left it in for the purpose of comparison. The SuNCEt accuracy is more than +10% higher." } ]
2020
null
SP:e3942da570a78a6c9668db22ab5d6ddce52f756f
[ "This paper aims to propose a benchmark for voce-face matching and retrieval problem. As shown by the test confidence analysis, the model is suggested to be evaluated on a large dataset or multiple datasets to avoid the large deviation in the accuracy. A baseline method TriNet and joint matching & retrieval are proposed. Improved results are reported in the experiment section." ]
Cross-modal associations between a person’s voice and face can be learned algorithmically, and this is a useful functionality in many audio and visual applications. The problem can be defined as two tasks: voice-face matching and retrieval. Recently, this topic has attracted much research attention, but it is still in its early stages of development, and evaluation protocols and test schemes need to be more standardized. Performance metrics for different subtasks are also scarce, and a benchmark for this problem needs to be established. In this paper, a baseline evaluation framework is proposed for voice-face matching and retrieval tasks. Test confidence is analyzed, and a confidence interval for estimated accuracy is proposed. Various state-of-the-art performances with high test confidence are achieved on a series of subtasks using the baseline method (called TriNet) included in this framework. The source code will be published along with the paper. The results of this study can provide a basis for future research on voice-face cross-modal learning.
[ { "affiliations": [], "name": "A BENCHMARK" } ]
[ { "authors": [ "R Arandjelovic", "P Gronat", "A Torii", "T Pajdla", "J Sivic" ], "title": "Netvlad: Cnn architecture for weakly supervised place recognition", "venue": "IEEE Transactions on Pattern Analysis & Machine Intelligence, PP", "year": 2017 }, { "authors": [ "Jacob Benesty", "Jingdong Chen", "Emanuël AP Habets" ], "title": "Speech enhancement in the STFT domain", "venue": "Springer Science & Business Media,", "year": 2011 }, { "authors": [ "Qiong Cao", "Li Shen", "Weidi Xie", "Omkar M Parkhi", "Andrew Zisserman" ], "title": "Vggface2: A dataset for recognising faces across pose and age", "venue": "In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG", "year": 2018 }, { "authors": [ "D Manning Christopher", "Raghavan Prabhakar", "Schütze Hinrich" ], "title": "Introduction to information retrieval", "venue": "An Introduction To Information Retrieval,", "year": 2008 }, { "authors": [ "Joon Son Chung", "Arsha Nagrani", "Andrew Zisserman" ], "title": "Voxceleb2: Deep speaker recognition", "venue": null, "year": 2018 }, { "authors": [ "Yandong Guo", "Lei Zhang", "Yuxiao Hu", "Xiaodong He", "Jianfeng Gao" ], "title": "Ms-celeb-1m: A dataset and benchmark for large-scale face recognition", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Raia Hadsell", "Sumit Chopra", "Yann LeCun" ], "title": "Dimensionality reduction by learning an invariant mapping", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06),", "year": 2006 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2015 }, { "authors": [ "Harry Hollien", "G. Paul Moore" ], "title": "Measurements of the vocal folds during changes in pitch", "venue": "Journal of Speech and Hearing Research,", "year": 1960 }, { "authors": [ "Shota Horiguchi", "Naoyuki Kanda", "Kenji Nagamatsu" ], "title": "Face-voice matching using cross-modal embeddings", "venue": "ACM Multimedia Conference on Multimedia Conference,", "year": 2018 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Miyuki Kamachi", "Harold Hill", "Karen Lander", "Eric Vatikiotis-Bateson" ], "title": "Putting the face to the voice’: Matching identity across modality", "venue": "Current Biology,", "year": 2003 }, { "authors": [ "Changil Kim", "Hijung Valentina Shin", "Tae-Hyun Oh", "Alexandre Kaspar", "Mohamed Elgharib", "Wojciech Matusik" ], "title": "On learning associations of faces and voices", "venue": "In Asian Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Chao Li", "Xiaokong Ma", "Bing Jiang", "Xiangang Li", "Xuewei Zhang", "Xiao Liu", "Ying Cao", "Ajay Kannan", "Zhenyao Zhu" ], "title": "Deep speaker: an end-to-end neural speaker embedding system", "venue": "arXiv preprint arXiv:1705.02304,", "year": 2017 }, { "authors": [ "Weiyang Liu", "Yandong Wen", "Zhiding Yu", "Ming Li", "Bhiksha Raj", "Le Song" ], "title": "Sphereface: Deep hypersphere embedding for face recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Lauren W Mavica", "Elan Barenholtz" ], "title": "Matching voice and face identity from static images", "venue": "Journal of Experimental Psychology: Human Perception and 
Performance,", "year": 2013 }, { "authors": [ "Arsha Nagrani", "Joon Son Chung", "Andrew Zisserman" ], "title": "Voxceleb: a large-scale speaker identification", "venue": null, "year": 2017 }, { "authors": [ "Arsha Nagrani", "Samuel Albanie", "Andrew Zisserman" ], "title": "Learnable pins: Cross-modal embeddings for person identity", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Arsha Nagrani", "Samuel Albanie", "Andrew Zisserman" ], "title": "Seeing voices and hearing faces: Crossmodal biometric matching", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Omkar M Parkhi", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep face recognition", "venue": "In bmvc,", "year": 2015 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Harriet MJ Smith", "Andrew K Dunn", "Thom Baguley", "Paula C Stacey" ], "title": "Concordant cues in faces and voices: Testing the backup signal hypothesis", "venue": "Evolutionary Psychology,", "year": 2016 }, { "authors": [ "Harriet MJ Smith", "Andrew K Dunn", "Thom Baguley", "Paula C Stacey" ], "title": "Matching novel face and voice identity using static and dynamic facial images", "venue": "Attention, Perception, & Psychophysics,", "year": 2016 }, { "authors": [ "Randy Thornhill", "Anders Pape Møller" ], "title": "Developmental stability, disease and medicine", "venue": "Biological Reviews,", "year": 1997 }, { "authors": [ "Quan Wang", "Carlton Downey", "Li Wan", "Philip Andrew Mansfield", "Ignacio Lopz Moreno" ], "title": "Speaker diarization with lstm", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Timothy Wells", "Thom Baguley", "Mark Sergeant", "Andrew Dunn" ], "title": "Perceptions of human attractiveness comprising face and voice cues", "venue": "Archives of sexual behavior,", "year": 2013 }, { "authors": [ "Yandong Wen", "Kaipeng Zhang", "Zhifeng Li", "Yu Qiao" ], "title": "A discriminative feature learning approach for deep face recognition", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Yandong Wen", "Mahmoud Al Ismail", "Weiyang Liu", "Bhiksha Raj", "Rita Singh" ], "title": "Disjoint mapping network for cross-modal matching of voices and faces. 2018", "venue": null, "year": 2018 }, { "authors": [ "Xiang Wu", "Ran He", "Zhenan Sun", "Tieniu Tan" ], "title": "A light cnn for deep face representation with noisy labels", "venue": "IEEE Transactions on Information Forensics and Security,", "year": 2018 }, { "authors": [ "Weidi Xie", "Arsha Nagrani", "Joon Son Chung", "Andrew Zisserman" ], "title": "Utterance-level aggregation for speaker recognition in the wild", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Kaipeng Zhang", "Zhanpeng Zhang", "Zhifeng Li", "Qiao Yu" ], "title": "Joint face detection and alignment using multitask cascaded convolutional networks", "venue": "IEEE Signal Processing Letters,", "year": 2016 } ]
[ { "heading": null, "text": "Cross-modal associations between a person’s voice and face can be learned algorithmically, and this is a useful functionality in many audio and visual applications. The problem can be defined as two tasks: voice-face matching and retrieval. Recently, this topic has attracted much research attention, but it is still in its early stages of development, and evaluation protocols and test schemes need to be more standardized. Performance metrics for different subtasks are also scarce, and a benchmark for this problem needs to be established. In this paper, a baseline evaluation framework is proposed for voice-face matching and retrieval tasks. Test confidence is analyzed, and a confidence interval for estimated accuracy is proposed. Various state-of-the-art performances with high test confidence are achieved on a series of subtasks using the baseline method (called TriNet) included in this framework. The source code will be published along with the paper. The results of this study can provide a basis for future research on voice-face cross-modal learning." }, { "heading": "1 INTRODUCTION", "text": "Studies in biology and neuroscience have shown that a person’s appearance is associated with his or her voice (Smith et al., 2016b;a; Mavica & Barenholtz, 2013). Both the facial features and voice–controlling organs of individuals are affected by hormones and genetic information (Hollien & Moore, 1960; Thornhill & Møller, 1997; Kamachi et al., 2003; Wells et al., 2013), and human beings have the ability to recognize this association. For example, when speaking on the phone, we can guess the gender and approximate age of the person on the other end of the line. When watching a TV show without sound, we can also imagine the approximate voice of the protagonist by observing his or her face movements. With the recent advances in deep learning, face recognition models (Wen et al., 2016; Wu et al., 2018; Liu et al., 2017) and speaker recognition models (Wang et al., 2018; Li et al., 2017) have achieved extremely high precision. It is then natural to wonder if the associations between voices and faces could be discovered algorithmically by machines. The research on this problem could benefit many applications such as the synchronization of video faces with talking voices and the generation of faces according to voice.\nIn recent years, much research attention (Wen et al., 2018; Horiguchi et al., 2018; Nagrani et al., 2018a; Kim et al., 2018; Nagrani et al., 2018b) has been paid to voice-face cross-modal learning tasks, which has shown the feasibility of recognizing voice-face associations. This problem is generally formulated as a voice-face matching task and a voice-face retrieval task. The research on this problem is still at an early stage, and a benchmark for this problem still needs to be established. In this paper, we address this issue with the following contributions: 1) Existing methods are all evaluated on a single dataset of about 200 identities with limited tasks. The estimated accuracy always has great deviation due to the high sampling risk existed in cross-modal learning problem. Test confidence interval is proposed for qualifying the statistical significance of experimental results. 2) A solid baseline framework for voice-face matching and retrieval is also proposed. State-of-the-art performances on various voice-face matching and retrieval tasks are achieved on large-scale datasets with a high test confidence." 
}, { "heading": "2 RELATED WORKS", "text": "The existing methods for voice-face cross-modal learning can be classified as classification-based methods and pair-wise loss based methods, as shown in Figure 1. CNN-based networks are normally used to embed the voices and faces into feature vectors. SVHF (Nagrani et al., 2018b) is a prior study on voice-face cross-modal learning that investigated the performance of a CNN-based deep network on this problem. The human baseline for the voice-face matching task is also presented in this paper. DIMNet (Wen et al., 2018) learns a common representation for faces and voices by leveraging their relationships to some covariates such as gender and nationality. For pair-wise loss based methods, a pair or a triplet of vectors is embedded by a voice and face network, and contrastive loss (Hadsell et al., 2006) or triplet loss (Schroff et al., 2015) is used to supervise the learning of the embeddings. Horiguchi et al.’s method (Horiguchi et al., 2018) , Pins (Nagrani et al., 2018a), Kim et al.’s methods (Kim et al., 2018) are all these kind of methods. The aim of pair-wise loss based methods is to make the embeddings of positive pairs closer and the embeddings of negative pairs farther apart. In contrast, the aim of classification-based methods is to separate the embeddings of different classes. Of these two approaches, pair-wise loss based methods are better at distinguishing hard examples because of the characteristics of this approach.\nThere is still no related work which presents a benchmark for voice-face cross-modal learning tasks, which is addressed in detail as follows:\n1) As for evaluation metrics, the reliability of experiments has not been addressed by all previous research. Test confidence is proposed in this paper. With the guidance of test confidence, reliable evaluations can be conducted.\n2) As for tasks, joint matching and joint retrieval tasks established in this paper are not noticed by previous research. Though these tasks are direct extensions of traditional tasks, these very simple extensions can improve the performance of voice-face cross-modal learning dramatically.\n3) As for models, the most similar work to TriNet of this paper is Kim et al.’s method (Kim et al., 2018). Both models use the triplet loss function. The main difference is that TriNet uses L2 normalization and voice-anchored embedding learning to constrain the feature space, because it is difficult to obtain satisfactory results by training directly in a huge Euclidean space. Though L2 normalization is a normal technique, it hasn’t been introduced to the current problem.\n4) As for datasets, currently available voice-face datasets are the data generated by the common speakers of VGGFace (Cao et al., 2018; Parkhi et al., 2015) face recognition dataset and VoxCeleb (Nagrani et al., 2017; Chung et al., 2018) speaker recognition dataset. As shown in Table 1, the voice-face datasets have two versions, Vox-VGG-1 and Vox-VGG-2, which include 1,251 and 5,994 identities, respectively. To the best of our knowledge, only Vox-VGG-1 is used in previous research. Both Vox-VGG-1 and Vox-VGG-2 are used to evaluate the proposed baseline method, TriNet." }, { "heading": "3 TASKS AND EVALUATION", "text": "" }, { "heading": "3.1 TASKS", "text": "1:2 Matching and 1:n Matching. Given an audio and two face candidates (only one of which is from the speaker of the audio), the goal is to find the face that belongs to the speaker. 
The more difficult 1:n matching task is an extension of the 1:2 matching task that increases the number of candidate faces from 2 to n.

Retrieval. Given a “query” voice, the goal of voice-face retrieval is to rank face images according to their relevance to the voice query. This task supplements the matching task: the positions of all retrieved faces also provide useful information for analyzing model performance.

Joint Matching and Joint Retrieval. Instead of a single audio segment or a single face per identity, multiple audio segments and faces can provide more information. Matching and retrieval can be conducted on the mean embeddings of multiple audio segments or images. This is the simplest way to improve the performance of current voice-face matching and retrieval methods. Widespread video resources make the use of multiple faces and voices feasible." }, { "heading": "3.2 TEST CONFIDENCE", "text": "The evaluation criteria for the matching and retrieval tasks are accuracy and mAP (Christopher et al., 2008), respectively. All previous studies (Wen et al., 2018; Nagrani et al., 2018a; Kim et al., 2018; Nagrani et al., 2018b) evaluated their methods on a single dataset of about 200 identities. As shown in the experiments (Section 5.4), the 1:2 matching accuracy tested on multiple datasets with 189 identities varies significantly, from 81% to 87%. The results of all related works that used Vox-VGG-1 for training and testing are therefore unreliable: testing a model on a single small dataset may lead to a large deviation in the accuracy.

In the 1:2 matching task, the accuracy estimated on the sampled data is used to represent the accuracy on the overall population. The estimated accuracy always has a large deviation due to the high sampling risk in the triplet sampling scenario.

Essentially, our aim is to estimate the probability that a single independent triplet is matched correctly. When the dataset and the model are fixed, a single independent sample follows the Bernoulli distribution B(p), and the results of n samples follow the binomial distribution B(n, p). Interval estimation for a binomial distribution can therefore be used to quantify the deviation of the estimated accuracy. Suppose a dataset D can generate up to N triplets, that n sampled triplets are used for testing, and that m of them are matched correctly. Let the sample rate be p = m/n, and let the population rate of correct matches over all N triplets be P. When n is sufficiently large, p is approximately normally distributed, p ∼ N(P, P(1 − P)/n). Standardizing gives u = (p − P)/√(P(1 − P)/n) ∼ N(0, 1). For a significance level α, the confidence interval of p is (p − u_{α/2}·√(p(1 − p)/n), p + u_{α/2}·√(p(1 − p)/n)). Testing a model on multiple datasets is strongly recommended when each dataset is very small: the test can be performed multiple times on datasets of similar scale, the results can be regarded as normally distributed, and a t-test can then be used to estimate the confidence interval of the accuracy."
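As a concrete illustration of the interval above, the following minimal Python sketch computes the confidence interval for an estimated matching accuracy. The function name and the use of scipy.stats are our own choices for illustration and are not part of the proposed framework.

import math
from scipy.stats import norm

def matching_accuracy_ci(m, n, alpha=0.05):
    # p is the sample accuracy over n tested triplets, of which m are correct.
    p = m / n
    u = norm.ppf(1 - alpha / 2)  # u_{alpha/2}; about 1.96 for alpha = 0.05
    half_width = u * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# The same 84.48% sample accuracy yields a much wider interval when it is
# estimated from 10,000 triplets than from 30.72 million triplets.
print(matching_accuracy_ci(m=8448, n=10000))
print(matching_accuracy_ci(m=25952256, n=30720000))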
}, { "heading": "4.1 TRIPLET MINING", "text": "The input triplets for the embedding network need to be mined from the datasets, the number of which is extremely large. In previous research (Nagrani et al., 2018b; Kim et al., 2018; Wen et al., 2018; Horiguchi et al., 2018), discrete triplets are randomly mined to create a single input each time, which will lead to training and test inefficiency. Identity based sampling named as online mining is adopted in this paper, which can greatly improve the training and testing efficiency. In the identity based sampling, a batch of identities is randomly selected first, and then certain number of face images and audios for each identity of the batch are sampled. Triplets are generated based on each batch of identities. Triplet Loss is susceptible to noise which means the direction of network convergence is easy to be changed by few noise samples. Identity based training can effectively handle the disadvantage of Triplet Loss." }, { "heading": "4.2 EMBEDDING CONSTRAINTS AND THE LOSS FUNCTION", "text": "For a specific triplet < v(i), f (i), f (j) >, v(i), and f (i) are from the same identity, and v(i) and f (j) are from different identities. The feature extraction functions for voice and face are defined as Featurev(v) and Featuref (f), respectively, and a fully connected layer is added to form the embedding vectors as embv(v) = s × ‖Wv × Featurev(v) + Bv‖22 and embf (f) = s × ‖Wf × Featuref (f) + Bf‖22. LResNet50 (He et al., 2015) and Thin ResNet34 (Xie et al., 2019) with NetVLAD (Arandjelovic et al., 2017) are networks that perform well on face recognition and speaker recognition tasks, respectively. These two networks are used in this paper for face feature extraction and voice feature extraction.\nAs illustrated in Figure 3a, embedded vectors from the same person will appear in a Euclidean space after a long period of training. Because there are billions of input triples, it is difficult to obtain satisfactory results by directly training in a huge Euclidean space. To deal with this problem, two strategies are adopted in the method proposed in this paper. First, L2 normalization is added to constrain the embedding vectors to a spherical space (Figure 3b). Second, voice-anchored embedding learning is adopted. By freezing the pre-trained voice embedding network, feature vectors\nfrom voice serve as anchors, and the goal of the model is to make positive instances approach each other while keeping the negative instances away (Figure 3c). Examples tend to be distinguished much better and faster in the voice-anchored embedding learning process when used with the L2 constrained space. Triplet loss is adopted in this paper for embedding learning. Suppose d(x, y) indicates Euclidean distance; the loss function is defined as\nLoss = ∑\nv(i),f(i),f(j),i6=j\nmax {d(embv(v(i)), embf (f (i)))−d(embv(v(i)), embf (f (j))) +m, 0}, (1)\nwhere m is a margin to control the distance between positive and negative pairs." }, { "heading": "5 EXPERIMENT", "text": "" }, { "heading": "5.1 BASELINE MODEL SETUP", "text": "Training Details. TriNet was trained on Vox-VGG-2 with 5,994 identities. Face detection based on MTCNN (Zhang et al., 2016) was conducted and all face images were then rescaled to 112×112×3 to form the input for the face embedding networks. Audio preprocessing consisted of 512-point fast Fourier transform, a short-time Fourier transform (Benesty et al., 2011) for each frame, and normalization. 
}, { "heading": "5 EXPERIMENT", "text": "" }, { "heading": "5.1 BASELINE MODEL SETUP", "text": "Training Details. TriNet was trained on Vox-VGG-2 with 5,994 identities. Face detection based on MTCNN (Zhang et al., 2016) was conducted, and all face images were then rescaled to 112×112×3 to form the input for the face embedding networks. Audio preprocessing consisted of a 512-point fast Fourier transform, a short-time Fourier transform (Benesty et al., 2011) for each frame, and normalization. The audio segments used for training were uniformly trimmed to 2.5 s for training efficiency, and the test audio segments were not clipped. The input shape for a k-second audio clip is 257 × (100 × k) × 1. The voice embedding network and face embedding network were pre-trained on VoxCeleb2 and VGGFace2, respectively. The margin m for the triplet loss was set to 1, and the scale s for the L2 normalization was set to 128. The Adam optimizer was adopted in these experiments, and the total number of learning steps was 70k. The learning rates of the final fully connected layer for step < 20k, 20k < step < 40k, 40k < step < 60k, and step > 60k were 10^-3, 10^-4, 10^-5, and 10^-6, respectively. The learning rate of the face embedding network was fixed to 10^-6.

Testing Details. 1) For the 1:2 matching task, a total of 10,000 steps were tested on the baseline, which implies that a total of 30.72 million triplets were tested. Note that the gender of the test triplets in the 1:2 matching task was balanced. 2) For the 1:n matching task, the number of tuples to be sampled is much higher than the number of triplets in 1:2 matching; therefore, we performed this test directly on 10k tuples. This lowers the confidence level, but the results are still useful for comparisons. 3) For the retrieval task, a face database of 500 pictures was constructed from 100 randomly selected identities, and 40 audio queries were constructed for each identity." }, { "heading": "5.2 COMPARISONS ON MATCHING AND RETRIEVAL TASKS", "text": "Comparisons of TriNet and related works on the 1:2 matching task and the retrieval task are shown in Table 2. TriNet achieves state-of-the-art performance on these two main tasks. As shown in Figure 4a, on the 1:n task, the performance of all methods decreases rapidly as n increases; this task remains very difficult. Some TriNet retrieval results that achieve p@1 = 1 are illustrated in Figure 5. The top-ranked faces in each sequence are very similar to the target face." }, { "heading": "5.3 JOINT TASKS PERFORMANCE", "text": "The results of 1:2 joint matching using the mean voice and mean face are shown in Table 3. Two variables, mf and mv, are introduced to represent the number of faces and audio clips used to compute the mean embedding. Various values of mf and mv are tested for 1:2 matching. For retrieval, mv was set to 20 and mf was set to 5. This simple strategy of using multiple faces and voices can further improve the matching accuracy and retrieval mAP. Specifically, when mv = mf = 10, an accuracy of 89.66 ± 0.80% is obtained for TriNet on the 1:2 matching task, which is 5% higher than that of single-voice and single-face matching. This improvement suggests a broad prospect for future research on using video data for cross-modal learning." }, { "heading": "5.4 TEST CONFIDENCE OF DATASETS WITH DIFFERENT SCALES", "text": "Figure 4b shows the fluctuations in the estimated accuracies of TriNet on the 1:2 matching task when 30 repeated random tests were conducted. The numbers of sampled identities for each curve are 100, 189, 500, and 1,000. For a given dataset scale (such as 100 identities), instead of testing the model on a single dataset with 100 identities, 30 randomly sampled sets of 100 identities were used for testing. As shown in the figure, when a small-scale dataset is used, the accuracy of different runs fluctuates substantially.
For large datasets, fluctuations in the test accuracies also exist; however, the test results generalize better, so large datasets are strongly recommended for evaluation." }, { "heading": "5.5 ABLATION STUDY", "text": "There are various options in the baseline model. To determine which options have the greatest impact on performance, we conducted a more detailed ablation study." }, { "heading": "5.5.1 TRAINING SCALE", "text": "The number of training identities used by the baseline is five times that of most related studies. To demonstrate the effect of a larger training dataset on the results, TriNet was also trained on a dataset of 1,000 identities and tested on a dataset with 189 identities. As shown in Table 4, the improvement from training on the large-scale dataset is nearly 1%. The upper limit of the results is similar to that of DIMNet-IG. Adding more identities increases the performance by 0.5%, and it is difficult to further improve the performance by increasing the size of the dataset. In contrast, integrating multiple faces and voices is an effective way to further improve performance." }, { "heading": "5.5.2 PREPROCESSING", "text": "We need to study whether face detection should be used and how large the detection box should be. As the results in Table 5 reveal, without face detection, a large amount of noise is introduced along with a few useful features, and the performance of the baseline model on all matching and retrieval tasks decreases. When the size of the default detection box is increased by a factor of 1.1, better performance is obtained." }, { "heading": "5.5.3 NETWORK STRUCTURE", "text": "As shown in Table 5, deeper CNN structures such as SE-ResNet50 (Hu et al., 2018) and the structure used in DIMNet outperform traditional shallow structures such as VGG-M. An SE-ResNet50 with a squeeze-and-excitation module does not produce better results than the original ResNet50 structure used in the baseline model." }, { "heading": "5.5.4 EMBEDDING CONSTRAINTS", "text": "The effects of using L2 normalization and of freezing the pre-trained networks are analyzed here. As presented in Table 5, using L2 normalization improves the 1:2 matching accuracy by 2% and the retrieval mAP by 2%. In the default configuration, the scale of the metric space is 128. The model performance decreases when the scale is set to 1, which indicates that it is necessary to properly increase the size of the metric space. Freezing the face embedding network reduces the performance, whereas freezing the voice embedding network improves the performance slightly. This is because human voices are related only to some local features of human faces, and similar faces in traditional face recognition tasks do not necessarily have similar voices. Therefore, voice-anchored embedding learning outperforms face-anchored embedding learning. Training efficiency is also improved substantially by freezing the voice network." }, { "heading": "5.5.5 PRE-TRAINING", "text": "As shown in Table 5, when TriNet is pre-trained on the large dataset MS-1B (Guo et al., 2016), its performance is not improved. However, without pre-training, the model's performance is substantially reduced. (Note that in this case, the voice network was not frozen.)" }, { "heading": "6 CONCLUSION", "text": "In this study, a benchmark was established for voice-face matching and retrieval. The contributions of this paper are as follows.
A solid voice-face matching and retrieval baseline method (TriNet) was proposed and tested on large-scale datasets with comprehensive ablation studies. Test confidence was proposed as a metric for quantifying the statistical significance of the experiments. On the 1:2 matching and retrieval tasks, TriNet achieved an accuracy of 84.48% and a mAP of 11%; compared with the best previously published results, this is a 7% improvement in mAP. Using mean face and mean voice embeddings, the matching accuracy and retrieval mAP can be further improved by approximately 5% and 10%, respectively. This improvement suggests a broad prospect for future research on using video data for cross-modal learning." } ]
2020
null
SP:ddd2ae85b54dbb9143d25adf8bb2977732dae29b
[ "This paper proposes a method to learn a continuous latent space via CVAE to represent solutions to routing problems. Combined with differentiable evolution search algorithms, one can search in the learned latent space for solutions to new problem instances at test time. The proposed method is evaluated on two classes of routing problems: TSP and CVRP. Results show better performance in terms of objective values and runtime. They are also competitive with established expert-designed algorithms such as LKH3." ]
Methods for automatically learning to solve routing problems are rapidly improving in performance. While most of these methods excel at generating solutions quickly, they are unable to effectively utilize longer run times because they lack a sophisticated search component. We present a learning-based optimization approach that allows a guided search in the distribution of high-quality solutions for a problem instance. More precisely, our method uses a conditional variational autoencoder that learns to map points in a continuous (latent) search space to high-quality, instance-specific routing problem solutions. The learned space can then be searched by any unconstrained continuous optimization method. We show that, even using a standard differential evolution search strategy, our approach is able to outperform existing purely machine learning based approaches.
[ { "affiliations": [], "name": "VARIATIONAL AUTOENCODERS" }, { "affiliations": [], "name": "André Hottung" }, { "affiliations": [], "name": "Bhanu Bhandari" } ]
[ { "authors": [ "James C Bean" ], "title": "Genetic algorithms and random keys for sequencing and optimization", "venue": "ORSA journal on computing,", "year": 1994 }, { "authors": [ "Irwan Bello", "Hieu Pham", "Quoc V Le", "Mohammad Norouzi", "Samy Bengio" ], "title": "Neural combinatorial optimization with reinforcement learning", "venue": "arXiv preprint arXiv:1611.09940,", "year": 2016 }, { "authors": [ "David Berthelot", "Colin Raffel", "Aurko Roy", "Ian Goodfellow" ], "title": "Understanding and improving interpolation in autoencoders via an adversarial regularizer", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Sourodeep Bhattacharjee", "Robin Gras" ], "title": "Estimation of distribution using population queue based variational autoencoders", "venue": "IEEE Congress on Evolutionary Computation (CEC),", "year": 2019 }, { "authors": [ "Xinyun Chen", "Yuandong Tian" ], "title": "Learning to perform local rewriting for combinatorial optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Junyoung Chung", "Caglar Gulcehre", "KyungHyun Cho", "Yoshua Bengio" ], "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "venue": "arXiv preprint arXiv:1412.3555,", "year": 2014 }, { "authors": [ "Michel Deudon", "Pierre Cournut", "Alexandre Lacoste", "Yossiri Adulyasak", "Louis-Martin Rousseau" ], "title": "Learning heuristics for the TSP by policy gradient", "venue": "In International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research,", "year": 2018 }, { "authors": [ "Unai Garciarena", "Roberto Santana", "Alexander Mendiburu" ], "title": "Expanding variational autoencoders for learning and exploiting latent representations in search distributions", "venue": "In Proceedings of the Genetic and Evolutionary Computation Conference,", "year": 2018 }, { "authors": [ "Rafael Gómez-Bombarelli", "Jennifer N Wei", "David Duvenaud", "José Miguel Hernández-Lobato", "Benjamı́n Sánchez-Lengeling", "Dennis Sheberla", "Jorge Aguilera-Iparraguirre", "Timothy D Hirzel", "Ryan P Adams", "Alán Aspuru-Guzik" ], "title": "Automatic chemical design using a data-driven continuous representation of molecules", "venue": "ACS central science,", "year": 2018 }, { "authors": [ "José Fernando Gonçalves", "Mauricio GC Resende" ], "title": "A parallel multi-population biased randomkey genetic algorithm for a container loading problem", "venue": "Computers & Operations Research,", "year": 2012 }, { "authors": [ "Keld Helsgaun" ], "title": "An extension of the Lin-Kernighan-Helsgaun TSP solver for constrained traveling salesman and vehicle routing problems", "venue": "Roskilde: Roskilde University,", "year": 2017 }, { "authors": [ "Irina Higgins", "Loı̈c Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "John J Hopfield", "David W Tank" ], "title": "Neural computation of decisions in optimization problems", "venue": "Biological cybernetics,", "year": 1985 }, { "authors": [ "André Hottung", "Kevin Tierney" ], "title": "Neural large neighborhood search for the capacitated vehicle routing problem", "venue": "In European Conference on Artificial Intelligence,", "year": 
2020 }, { "authors": [ "Brian Ichter", "James Harrison", "Marco Pavone" ], "title": "Learning sampling distributions for robot motion planning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Wengong Jin", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Junction tree variational autoencoder for molecular graph generation", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Chaitanya K Joshi", "Thomas Laurent", "Xavier Bresson" ], "title": "An efficient graph convolutional network technique for the travelling salesman problem", "venue": null, "year": 1906 }, { "authors": [ "Elias Khalil", "Hanjun Dai", "Yuyu Zhang", "Bistra Dilkina", "Le Song" ], "title": "Learning combinatorial optimization algorithms over graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Wouter Kool", "Herke van Hoof", "Max Welling" ], "title": "Attention, learn to solve routing problems", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Matt J Kusner", "Brooks Paige", "José Miguel Hernández-Lobato" ], "title": "Grammar variational autoencoder", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Renqian Luo", "Fei Tian", "Tao Qin", "Enhong Chen", "Tie-Yan Liu" ], "title": "Neural architecture optimization", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Mohammadreza Nazari", "Afshin Oroojlooy", "Lawrence Snyder", "Martin Takác" ], "title": "Reinforcement learning for solving the vehicle routing problem", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Malte Probst", "Franz Rothlauf", "Jörn Grahl" ], "title": "Scalability of using restricted Boltzmann machines for combinatorial optimization", "venue": "European Journal of Operational Research,", "year": 2017 }, { "authors": [ "Stefan Ropke", "David Pisinger" ], "title": "An adaptive large neighborhood search heuristic for the pickup and delivery problem with time windows", "venue": "Transportation science,", "year": 2006 }, { "authors": [ "Vui Ann Shim", "Kay Chen Tan", "Jun Yong Chia" ], "title": "Probabilistic based evolutionary optimizers in bi-objective travelling salesman problem", "venue": "In Asia-Pacific Conference on Simulated Evolution and Learning,", "year": 2010 }, { "authors": [ "Kihyuk Sohn", "Honglak Lee", "Xinchen Yan" ], "title": "Learning structured output representation using deep conditional generative models", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Rainer Storn", "Kenneth Price" ], "title": "Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces", "venue": "Journal of global optimization,", "year": 1997 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V Le" ], "title": "Sequence to sequence learning with neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Huajin Tang", "Vui Ann 
Shim", "Kay Chen Tan", "Jun Yong Chia" ], "title": "Restricted boltzmann machine based algorithm for multi-objective optimization", "venue": "In IEEE congress on evolutionary computation,", "year": 2010 }, { "authors": [ "Robin Winter", "Floriane Montanari", "Andreas Steffen", "Hans Briem", "Frank Noé", "Djork-Arné Clevert" ], "title": "Efficient multi-objective molecular optimization in a continuous latent space", "venue": "Chemical science,", "year": 2019 }, { "authors": [ "Byoung-Tak Zhang", "Soo-Yong Shin" ], "title": "Bayesian evolutionary optimization using Helmholtz machines", "venue": "In International Conference on Parallel Problem Solving from Nature,", "year": 2000 } ]
[ { "heading": "1 INTRODUCTION", "text": "Significant progress has been made in learning to solve optimization problems via machine learning (ML). Especially for practical applications, learning-based approaches are of great interest because of the high labor costs associated with the development of completely hand-crafted solution approaches. For routing problems such as the traveling salesperson problem (TSP) and the capacitated vehicle routing problem (CVRP), recent ML-based approaches are able to generate good solutions for small problem instances in a fraction of a second (e.g., Kool et al. (2019)). However, in many real-world applications of these problems users gladly accept more computation time for solutions of even higher quality. Recently proposed approaches (e.g., Hottung & Tierney (2020)) address this demand and integrate learning-based components with high-level search procedures. While these approaches offer improved performance over non-search-based methods, they rely on domain knowledge encapsulated in the high-level search procedures.\nIn this work, we present a learning-based optimization approach for routing problems that is able to perform an extensive search for high-quality solutions. In contrast to other approaches, our method does not rely on domain-specific high-level search procedures. Our approach learns an instancespecific mapping of solutions to a continuous search space that can then be searched via any existing continuous optimization method. We use a conditional variational autoencoder (CVAE) that learns to encode a solution to a given instance as a numerical vector and vice versa. Some genetic algorithm variants (e.g., Gonçalves & Resende (2012)) use numerical vectors to represent solutions to combinatorial optimization problems. However, these approaches rely on decoding schemes that are carefully handcrafted by domain experts. In contrast, our approach learns the problem-specific decoding schema on its own, requiring no domain or optimization knowledge on the side of the user.\nThe performance of an optimization algorithm heavily depends on the structure of the fitness landscape of the search space, such as its smoothness. If solutions close to each other in the search space are semantically similar, resulting in a smooth landscape, the employed search algorithm can\niteratively move towards the more promising areas of the search space. It has been observed for some problems that variational autoencoders (VAEs) are capable of learning a latent space in which semantically similar inputs are placed in the same region. This allows, for example, a semantically meaningful interpolation between two points in the latent space (see e.g. Berthelot et al. (2018)). However, it is unclear if this property upholds for a conditional latent space that encodes routing problems. We show experimentally that our CVAE-based approach is indeed capable of learning a latent search space in which neighboring solutions have a similar objective function value. Furthermore, we introduce a novel technique that addresses the issue of symmetries in the latent space and show that it enables our method to match and surpass state-of-the-art ML-based methods. We train our method using high-quality solutions because we aim to learn a latent search space that contains mostly high-quality solutions. Hence, our method usually requires a long offline phase (e.g., to generate solutions using a slow, domain-independent, generic solver). 
However, this offline phase is offset by fast, online solution generation.

We focus on the TSP and the CVRP, which are two of the most well-researched problems in the optimization literature. The TSP is concerned with finding the shortest tour between a set of cities that visits each city exactly once and returns to the starting city. The CVRP describes a routing problem where the routes for multiple vehicles to a set of customers must be planned. All customers have a certain demand of goods, and all vehicles have a maximum capacity that they can carry. All routes must start and end at the depot. The task is to find a set of routes with minimal cost so that the demand of all customers is fulfilled and each customer is visited by exactly one vehicle. We consider the versions of the TSP and CVRP where the distance matrix obeys the triangle inequality.

The contributions of this work are as follows:

• We propose a novel approach that learns a continuous, latent search space for routing problems based on CVAEs.
• We show that our approach is able to learn a well-structured latent search space.
• We show that the learned search space enables a standard differential evolution search strategy to outperform state-of-the-art ML methods." }, { "heading": "2 RELATED WORK", "text": "In Hopfield & Tank (1985), it was first proposed to use an ML-based method to solve a routing problem. The authors use a Hopfield network to solve small TSP instances with up to 30 cities. In Vinyals et al. (2015), pointer networks are proposed and trained to solve TSP instances with up to 50 cities using supervised learning. Bello et al. (2016) extend this idea and train a pointer network via actor-critic reinforcement learning. More recently, graph neural networks have been used to solve the TSP, e.g., a graph embedding network in Khalil et al. (2017), a graph attention network in Deudon et al. (2018), or a graph convolutional network in Joshi et al. (2019). The significantly more complex CVRP was first addressed in Nazari et al. (2018) and Kool et al. (2019), in which a recurrent neural network decoder coupled with an attention mechanism and a graph attention network are used, respectively. While some of these methods use a high-level search procedure (such as beam search), all of them are focused on finding solutions quickly (in under one second). In contrast, our approach is able to exploit a longer runtime (more than one minute for larger instances) to find solutions of better quality.

A couple of approaches use local-search-like algorithms combined with ML techniques to solve routing problems. Chen & Tian (2019) propose to learn an improvement operator that makes small changes to an existing solution. The operator is applied to a solution iteratively to find high-quality solutions for the CVRP. However, with a reported runtime of under half a second for the CVRP with 100 nodes, the method is not focused on performing an extensive search. In Hottung & Tierney (2020), another iterative improvement method for the CVRP is proposed that integrates learned heuristics into a large neighborhood search framework. The method is used to perform an extensive search, with reported runtimes of over one minute for larger instances. In contrast to our method, the high-level large neighborhood search framework contains domain-specific components and is known to perform exceptionally well on routing problems (Ropke & Pisinger, 2006).

Perhaps most similar to our work is the line of research based on Gómez-Bombarelli et al.
(2018), in which the authors use a VAE to learn a continuous latent search space for discovering molecules. They use an additional Gaussian process model that is trained to predict the quality of molecules given their latent search space representation to allow for a gradient-based search. Kusner et al. (2017) and Jin et al. (2018) use a similar setup, but use Bayesian optimization for the search. Winter et al. (2019) propose to use particle swarm optimization to search a learned latent space for new molecules. To a more limited degree, the idea of optimizing in a continuous learned space has also been used for neural architecture optimization (Luo et al., 2018). In contrast to the aforementioned methods, we do not use a separate model to predict solution quality based on the latent representation, because decoding and evaluating solutions in our setting is cheap (compared to molecules or neural network architectures). Furthermore, our approach addresses a fundamentally different problem, because routing problems must be solved with respect to a given context (i.e., a problem instance that describes location coordinates that must be visited), and we hence use a CVAE in this work. Learning a latent space conditioned on a problem instance (with the number of possible instances being practically infinite) is significantly more challenging. Ichter et al. (2018) propose to use CVAEs to learn a latent space conditioned on problem instances to represent solutions to robot motion planning problems. However, they only sample solutions at random from the learned distribution and do not perform a guided search. We show that the learned, structured latent space of our approach enables a guided search that significantly outperforms random sampling.

Different generative models have been used to sample new population members in probabilistic evolutionary algorithms known as estimation of distribution algorithms (e.g., a Helmholtz machine (Zhang & Shin, 2000), a restricted Boltzmann machine (Tang et al., 2010; Shim et al., 2010; Probst et al., 2017), or a VAE (Garciarena et al., 2018; Bhattacharjee & Gras, 2019)). All these methods are focused on how to explore an existing search space using generative models. In contrast, our method is focused on learning the search space itself, leaving the actual search to a generic optimizer." }, { "heading": "3 METHOD", "text": "Our novel approach, called CVAE-Opt, learns a continuous (latent) search space for routing problems that can be searched by any continuous optimization method. It is based on a CVAE that learns to map solutions to routing problem instances to a continuous, n-dimensional space. In contrast to conventional search spaces, the learned latent search space is trained to contain only high-quality solutions.

Autoencoders are neural networks that are used to learn an efficient encoding of data. They consist of an encoder and a decoder network. The encoder learns to reduce an input x to a point z in a low-dimensional space, and the decoder tries to reconstruct the input x based on z. The objective of the training is to minimize the difference between the input x and the output of the decoder, requiring the network to learn an efficient encoding of x. In contrast, VAEs are generative models that do not use a deterministic encoder, but instead an encoder that parameterizes an approximate posterior distribution over z.
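For background, a minimal sketch of one CVAE training step is shown below, assuming a Gaussian approximate posterior with the reparameterization trick. The encoder and decoder here are placeholder callables, and the β-weighted objective anticipates Equation (2) of Section 3.3.

import torch

def cvae_step(encoder, decoder, instance, solution, beta):
    # The encoder parameterizes the approximate posterior q(z | l, s).
    mu, log_var = encoder(instance, solution)
    # Reparameterization trick: sample z while keeping gradients intact.
    z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
    # The decoder returns the log-likelihood of reconstructing s from (l, z).
    log_p = decoder(instance, solution, z)
    # Closed-form KL divergence between q(z | l, s) and the prior N(0, I).
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=-1)
    # Negated beta-weighted objective of Equation (2), to be minimized.
    return (-log_p + beta * kl).mean()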
In our context, we do not want to train the decoder to generate solutions for only a single instance (e.g., a given set of coordinates for the TSP), but instead for all instances of a certain instance type (e.g., all TSP instances with 50 cities). We thus use a CVAE (Sohn et al., 2015), which enables us to learn a latent search space conditioned on the problem instances." }, { "heading": "3.1 VARIATIONAL AUTOENCODER-BASED COMBINATORIAL OPTIMIZATION", "text": "The overall training process of CVAE-Opt is shown in Figure 1a. The stochastic encoder q(z|l, s) receives a problem instance l and a high-quality solution s and outputs an n-dimensional vector z. The decoder p(s|l, z) is given z together with the instance l and outputs a solution s′. One objective of the training is to minimize the difference between the original high-quality solution s and the solution s′ generated by the decoder. While the decoder is powerful enough to construct a good solution based on the instance l alone, it is also given the latent variable z that describes the aspects of the solution s that the decoder cannot reliably infer on its own. The second objective during training is to ensure that high-quality solutions can be generated for values of the latent variable that have not been seen during training. This objective is explained in more detail below.

Figure 1b shows the iterative search process, in which the decoder p(s|l, z) is used together with any unconstrained continuous optimizer to search for solutions to a problem instance l. The unconstrained continuous optimizer navigates the search through the learned latent search space. At each iteration, the optimizer outputs a vector z describing a point in the latent search space. The decoder generates a solution s′ based on z, and the objective function value of s′ is returned to the optimizer. With an effective optimizer and the learned search space, high-quality solutions to l can be found.

Routing problem representation We describe a routing problem instance by a graph G = (V, E), with V = {v0, ..., vn}. The representation of a problem instance l consists of the feature vectors x0, . . . , xn, where xi describes node vi. For the TSP, each node represents a location (e.g., a city), with each two-dimensional feature vector describing the location's coordinates. For the CVRP, the node v0 represents the depot, and all other nodes represent the customers. As in Nazari et al. (2018), each feature vector is four-dimensional and describes the unfulfilled demand of a location, the remaining capacity of the vehicle, and the coordinates of the location. For both problems, a solution s describes a sequence of locations vs0, . . . , vsT (for the TSP, T = n) in which the first location is the starting city (for the TSP) or the depot (for the CVRP). We note that our formalism focuses on routing problems on a Euclidean plane. While we anticipate that our approach will work for other types of combinatorial optimization problems (with adjustment of the input layers), we save showing this for future work."
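A minimal sketch of these instance representations follows; the function names and the ordering of the four CVRP features are our own illustrative choices.

import numpy as np

def tsp_features(coords):
    # TSP: one two-dimensional feature vector (x, y) per node.
    return np.asarray(coords, dtype=np.float32)

def cvrp_features(coords, demands, remaining_capacity):
    # CVRP: four-dimensional feature vectors holding the unfulfilled demand,
    # the remaining vehicle capacity, and the coordinates of each node
    # (node 0 is the depot and has zero demand).
    n = len(coords)
    dem = np.asarray(demands, dtype=np.float32).reshape(n, 1)
    cap = np.full((n, 1), remaining_capacity, dtype=np.float32)
    xy = np.asarray(coords, dtype=np.float32)
    return np.concatenate([dem, cap, xy], axis=1)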
}, { "heading": "3.2 MODEL", "text": "We implement the encoder qφ(z|l, s) and the decoder pθ(s|l, z) using neural networks, with φ and θ denoting the network weights. In earlier work (e.g., Nazari et al. (2018)), routing problems are often modeled as Markov decision processes where a solution is constructed by a sequence of actions (i.e., which node should be visited next). We follow that approach and train our decoder pθ(s|l, z) to select the location that should be added to the solution at each step t ∈ {1, . . . , T}, with the first element of the solution being predefined (TSP: the starting city; CVRP: the depot). As in Nazari et al. (2018), we use a masking schema to prevent the model from selecting actions that would result in an infeasible solution. The probability of the decoder generating a solution s can be decomposed as (Sutskever et al., 2014):

pθ(s|l, z) = ∏_{t=1}^{T} p(st | s0, . . . , st−1; l; z). (1)

Like the decoder, the encoder generates the latent variable z for a solution s0, . . . , sT to a problem instance l sequentially. At step t ∈ {1, . . . , T} it encodes the t-th element of the solution. Similar to Nazari et al. (2018), we allow the input representation to change during encoding and decoding. The input x0,t, . . . , xn,t at time step t can be changed to reflect the new sub-problem defined by the problem instance l and the constraints introduced by the partially constructed solution s0, . . . , st−1. In the following, we omit the index t when referring to the input data of the model to allow for better readability. For the CVRP, we update the demands of the customers and the remaining vehicle capacity based on the decisions of the model in earlier decoding steps. For the TSP, we make no changes to the problem instance representation.

Network architecture The architecture of the encoder and the decoder is shown in Figure 2. Both use a linear embedding layer and an attention mechanism to encode/decode solutions sequentially. Weights are shared between identical components in the encoder and decoder. This allows not only for faster training, but also enforces a shared view of the encoder and decoder on the given problem representation. A more detailed description of the network architecture is given in Appendix A." }, { "heading": "3.3 TRAINING", "text": "The objective of the model training is two-fold. The first objective is to maximize the (log-)likelihood of reconstructing a solution s to an instance l encoded by the encoder qφ(z|l, s) via the decoder pθ(s|l, z). The second objective is to keep the posterior distribution of the encoder close to a given desired probability distribution p(z). We use a standard Gaussian distribution (µ = 0, σ = 1) for p(z) and measure the difference between both distributions with the Kullback–Leibler (KL) divergence. As in β-VAEs (Higgins et al., 2017), we weight the objectives during training using the parameter β:

L(φ, θ, s, l, z, β) = E_{qφ(z|l,s)}[log pθ(s|l, z)] − β · DKL(qφ(z|l, s) || p(z)). (2)

Symmetry breaking Optimization problems commonly admit symmetrical solutions, that is, multiple solutions that represent the same semantic solution but differ in their syntax. For example, for the TSP the solution sequence s0, . . . , sn represents the same solution as the sequence s1, . . . , sn, s0. For the CVRP, the subtours can appear in any order in the solution sequence without changing the underlying solution. This might lead the CVAE to place identical solutions in different regions of the learned latent search space because they are represented by different solution sequences. To force the model to learn a representation of the underlying solution and not the solution sequence, we train the model to reproduce a symmetrical version of the input solution, rather than the exact same solution as the input.
The symmetrical solutions used during training are chosen at random for each epoch." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate CVAE-Opt on datasets of TSP and CVRP instances and compare it to state-of-the-art optimization approaches. We use two different unconstrained continuous optimizers in our experiments: a basic differential evolution (DE) algorithm (Storn & Price, 1997) and random search (RS). In the following, we refer to the two variants of CVAE-Opt as CVAE-Opt-DE and CVAE-Opt-RS. In all experiments, CVAE-Opt is run on a single Nvidia Tesla V100 GPU and a single core of an Intel Xeon 4114 CPU at 2.2 GHz (our implementation of CVAE-Opt is available at https://github.com/ahottung/CVAE-Opt). We evaluate CVAE-Opt on TSP and CVRP instances with 20, 50, and 100 nodes. For each of these six problem classes, we generate instances with identical properties to the instances used in Kool et al. (2019) using the instance generator made available by the authors. We use 93,440 instances for model training, 100 for search validation, and 1,000 for testing the search per problem class." }, { "heading": "4.1 SETUP", "text": "Training For the TSP, we solve all instances to optimality using CONCORDE (Applegate et al., 2006). For the CVRP, we create high-quality solutions using the heuristic solver LKH3 (Helsgaun, 2017). We run LKH3 a single time for each instance with the hyperparameter configuration used in Helsgaun (2017). We train separate models for each instance class for 300 epochs. Every 25 epochs, the model is evaluated by using its decoder in a CVAE-Opt search setting to look for solutions to the 100 validation instances. The search setup (i.e., the hyperparameter configuration) is identical to the one used in the later testing/deployment stage. The model offering the best validation performance is used to search for solutions to the test instances.

The ideal selection of the hyperparameter β depends on the problem class and the search setup. For each problem class, we repeat the training process a small number (<20) of times and pick the model with the best validation search performance. All other search hyperparameters are identical over all training runs and have not been tuned. The training batch size is set to 128, and the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 1e-3 is used.

Search The DE algorithm employed in CVAE-Opt-DE maintains a population of vectors in the learned latent search space that is improved by crossover and mutation. Offspring vectors are created by combining three vectors of the population using vector arithmetic, as described in Storn & Price (1997). We slightly modify the employed DE algorithm to better profit from the parallel computing capabilities of a GPU: instead of generating one offspring solution at a time, we decode and evaluate a batch of solutions per iteration.
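The following Python sketch illustrates such a batched rand/1/bin DE loop. It is a simplified illustration rather than the exact implementation (for instance, it does not exclude the current member when picking the three combination vectors); decode_and_eval is a placeholder that greedily decodes a batch of latent vectors and returns the resulting solution costs as a NumPy array.

import numpy as np

def de_search(decode_and_eval, lo, hi, dim, pop_size=600,
              cr=0.95, f=0.3, iters=300):
    # Initial population sampled uniformly from the bounded latent space.
    pop = np.random.uniform(lo, hi, size=(pop_size, dim))
    cost = decode_and_eval(pop)
    for _ in range(iters):
        # Combine three population vectors per member (Storn & Price, 1997).
        idx = np.array([np.random.choice(pop_size, 3, replace=False)
                        for _ in range(pop_size)])
        mutant = pop[idx[:, 0]] + f * (pop[idx[:, 1]] - pop[idx[:, 2]])
        # Binomial crossover with probability cr.
        mask = np.random.rand(pop_size, dim) < cr
        trial = np.where(mask, mutant, pop)
        trial_cost = decode_and_eval(trial)  # one decoded batch per iteration
        better = trial_cost < cost
        pop[better], cost[better] = trial[better], trial_cost[better]
    return pop[np.argmin(cost)], cost.min()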
In all experiments of CVAE-Opt-DE, we use a DE population size of 600. At each iteration of the DE algorithm, 600 offspring vectors are generated and decoded in one batch. The initial population vectors are sampled uniformly at random from the bounded search space. To determine the bounds, we encode 1,000 separate model validation instances (with the encoder) to points in the latent space. The bounds are then selected so that 99% of the coordinates of the points lie within the bounds. This ensures that the search operates in regions of the latent search space known to the decoder, even if the posterior distribution of the encoder differs substantially from the standard Gaussian distribution. The crossover probability CR and the differential weight F of the DE are set to 0.95 and 0.3, respectively. Solutions are generated greedily by the decoder (i.e., the action with the highest probability value is selected at each step). The search terminates after 300 iterations. We note that we do not tune these hyperparameters and that the reported results can thus likely be improved.

In CVAE-Opt-RS, the latent variables are sampled randomly from a Gaussian distribution. We also evaluated sampling from a uniform distribution using the same bounds as for CVAE-Opt-DE, but observed that this slightly deteriorates performance. All other components of CVAE-Opt-RS (and its hyperparameters) are identical to CVAE-Opt-DE." }, { "heading": "4.2 SYMMETRY BREAKING", "text": "First, we evaluate the effectiveness of our symmetry breaking mechanism. We train five models with symmetry breaking and five models without symmetry breaking for the TSP and the CVRP with 50 and 100 nodes each. In all training runs, β is set to 1e-3. Figure 3 shows the search performance of the models after the final training epoch on the validation instances (in terms of the gap to the solutions obtained via CONCORDE and LKH3). In all cases, our symmetry breaking mechanism leads to a significant performance improvement. For the TSP instances, the mean gap is reduced from 0.09% to 0.04% for instances with 50 nodes, and from 1.23% to 0.37% for instances with 100 nodes. Similarly, for the CVRP, symmetry breaking reduces the mean gap from 3.18% to 0.33% and from 5.66% to 1.67% for instances with 50 and 100 nodes, respectively.

[Figure 3: Symmetry breaking — validation gap (%) with symmetry breaking on/off for TSP50, TSP100, CVRP50, and CVRP100.]" }, { "heading": "4.3 INFLUENCE OF β", "text": "To evaluate the influence of the parameter β, we repeat the training with different β values (again five times per value and problem setting). We only consider TSP and CVRP instances with 100 nodes because the experiments are computationally expensive. Figure 4 shows the performance (gap to CONCORDE and LKH3) of the models after the final training epoch when searching for solutions to the validation instances. We observe that in our setting the best search performance is obtained for β values of 1e-2 and 1e-3 for the TSP and the CVRP, respectively. This is a significant deviation from the β values (> 1) proposed in Higgins et al. (2017). A high β value corresponds to a strong limit on the capacity of the latent information channel. We hypothesize that our extensive search procedure benefits from a latent (search) space that is able to represent many instances.

[Figure 4: Influence of β — validation gap (%) on TSP100 and CVRP100 for β values between 1e-4 and 1e-1.]" }, { "heading": "4.4 STRUCTURE OF THE LEARNED SEARCH SPACE", "text": "The performance of any search algorithm depends on the structure of the search space. Ideally, solutions of similar quality should be placed in similar regions of the search space. We conduct the following experiment to evaluate whether our method learns a (latent) search space in which high-quality solutions can, on average, be found in the proximity of other high-quality solutions: First, we sample 1,000 solutions for a routing problem instance from the learned search space.
The best of these solutions functions as a reference solution. Next, we sample solutions from multiple hyperspheres around the reference solution, only considering points within the defined bounds of the search space. For each hypersphere we sample 100 solutions, and discard all solutions that are identical to the reference solution. We repeat this experiment for each of the 1,000 test instances per problem class. Figure 5 shows the absolute cost difference of the sampled solutions to the reference solution for the TSP and CVRP with 100 nodes (see Appendix B for all results). The results show for all problem classes that, on average, solutions close to the high-quality solutions are also of similar quality (in contrast to solutions farther away), indicating that the search space is well structured.

This experiment also shows that our method successfully learns a search space mostly containing high-quality solutions. Even randomly selected solutions that have a Euclidean distance of five from the high-quality reference solution only have an average absolute cost difference of 1.13 for the TSP and 1.51 for the CVRP (both with 100 nodes). As an illustrative example, Figure 6 shows a learned latent search space for a randomly selected TSP instance with 20 nodes (the search space is only shown along 2 of 100 dimensions). While this visualization is not artificially selected, it does not allow for any generalizable assertions." }, { "heading": "4.5 COMPARATIVE EXPERIMENTS", "text": "TSP For a comparison to the state of the art, we compare CVAE-Opt-DE and CVAE-Opt-RS to the AM approach from Kool et al. (2019). We run the AM approach on the same machine as CVAE-Opt using the code and the models made available by the authors, sampling 500,000 solutions for each instance. Figure 7 shows the performance of all three methods over the course of the search process (with a 95% confidence interval). For instances with 20 nodes all methods achieve a very low (< 0.1) average gap to optimality, albeit the AM method performs slightly worse than the CVAE-based approaches. Instances with 50 and 100 nodes are computationally harder and allow CVAE-Opt-DE to take advantage of its guided search in the learned latent search space. For both instance groups, CVAE-Opt-DE outperforms the AM approach and CVAE-Opt-RS after the first few seconds of the search. This is the case although CVAE-Opt-DE needs significantly more time per sampled solution than the other approaches. Table 1 shows the final results after the completion of the search and additionally compares the performance of CVAE-Opt to CONCORDE, LKH3, and the graph convolutional network approach using beam search and the shortest-tour heuristic (GCN-BS) from Joshi et al. (2019). We note that GCN-BS, in contrast to other evaluated learning-based methods, solves instances in batches (of size 200), making a direct comparison of the runtime difficult.

CVRP First, we compare CVAE-Opt-DE and CVAE-Opt-RS to the AM approach using the same hyperparameters as for the TSP instances. Figure 8 shows the performance of all three methods. Note that we report the absolute cost instead of the gap to optimality because it is not currently computationally feasible to solve our CVRP instances to optimality. For all three instance sizes CVAE-Opt-DE outperforms both other methods given similar runtime.
We note that the significant performance difference between CVAE-Opt-RS and CVAE-Opt-DE is the most unbiased confirmation that our approach is able to learn a well-structured search space. If the learned search space had no meaningful structure, we would expect both approaches to have similar performance.

Table 1 shows additional results comparing both CVAE-Opt implementations to LKH3, NLNS (Hottung & Tierney, 2020), and NeuRewriter (Chen & Tian, 2019). We run all approaches except for NeuRewriter on the same machine as CVAE-Opt. For NLNS, we use 10 cores and limit the runtime to the time needed by CVAE-Opt-DE. For NeuRewriter, we report the results obtained by the authors (we thus mark the results with a star) on instances with identical properties. CVAE-Opt-DE finds better solutions than NeuRewriter on all instance sizes (due to its much longer runtime) and comes close to the performance of LKH3 and NLNS (which profit from expert-designed high-level search components that CVAE-Opt does not require) on instances with 20 and 50 customers." }, { "heading": "4.6 GENERALIZATION", "text": "We evaluate the generalization performance of CVAE-Opt-DE and the AM approach by using a model trained on instances with 100 nodes to solve instances with 95, 105, 125 and 150 nodes. We mainly focus on the ability to generalize to larger instances, because using a model trained on small instances to tackle large-scale problems could be a viable option if training on large-scale instances is too computationally expensive. The results are shown in Table 2. Note that for the CVRP, we report the gap to LKH3 to allow for better comparability of the results over the different instance sizes. For TSP and CVRP instances with 95, 100 and 105 nodes there is no notable performance difference, which shows the ability of our model to generalize well to instances that are slightly different from the training instances. This is an important aspect for the application of our method in practice. For instances with 125 and 150 nodes the performance is significantly worse. We note that impaired performance on instances that differ substantially from the instances seen during training is to be expected. However, this does not severely limit the applicability of our method because there are many scenarios in which the distribution of encountered instances does not change frequently." }, { "heading": "4.7 ABLATION STUDY", "text": "We replace the learned decoding schema in CVAE-Opt-DE with a handcrafted decoder from the literature to further evaluate to what extent learning plays a role in CVAE-Opt's performance. Opt-DE implements the decoding schema proposed by Bean (1994) while adopting all other components of our learning-based method. The decoder of Opt-DE takes in a vector z ∈ [0, 1]^n that defines a permutation of the n nodes of a problem instance, which is constructed by sorting the nodes according to their corresponding entry in z, i.e., node v_i corresponds to entry z_i. A tour is constructed by trying to visit the nodes in the order of the permutation. For the CVRP we use the same masking schema as for CVAE-Opt to avoid illegal tours. We limit the search time of Opt-DE to the time needed by CVAE-Opt-DE and note that the handcrafted decoder is significantly faster than the learned decoder. The results are shown in Table 3. CVAE-Opt-DE outperforms Opt-DE on the TSP and CVRP for all instance sizes, with the difference being especially visible on larger problems."
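To make the Opt-DE baseline of Section 4.7 concrete, here is a minimal Python sketch of the random-key decoding schema of Bean (1994) for the TSP: the tour is simply the argsort permutation of z. The instance format, cost function, and the omission of the CVRP masking are our simplifications, not the authors' code.

import numpy as np

def random_key_decode(z):
    # Bean (1994) random-key decoding: sort node indices by their key z_i.
    # z: vector in [0, 1]^n; returns a permutation of {0, ..., n-1}.
    return np.argsort(z)

def tour_cost(tour, coords):
    # Total length of the closed tour through the 2-D node coordinates.
    ordered = coords[tour]
    diffs = np.diff(np.vstack([ordered, ordered[:1]]), axis=0)
    return float(np.linalg.norm(diffs, axis=1).sum())

rng = np.random.default_rng(0)
coords = rng.random((20, 2))   # stand-in TSP instance with 20 nodes
z = rng.random(20)             # a candidate vector proposed by the DE search
print(tour_cost(random_key_decode(z), coords))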
}, { "heading": "5 CONCLUSION", "text": "We presented CVAE-Opt, a method that uses a variational autoencoder to learn a mapping of routing problem solutions to points in a continuous (latent) search space. The learned space can be searched by any basic unconstrained continuous optimizer. The approach provides an interface between optimization and machine learning techniques, allowing traditional continuous optimization methods to search in a learned space. We show that our approach is able to learn a well-structured search space that enables a guided search by a high-level, domain independent continuous optimizer. On TSP and CVRP instances, CVAE-Opt significantly outperforms state-of-the-art ML-based approaches. In the future, we will further investigate the properties of the learned search space and evaluate recent extensions to the VAE framework." }, { "heading": "ACKNOWLEDGMENTS", "text": "The computational experiments in this work have been performed using the Bielefeld GPU Cluster." }, { "heading": "A NETWORK ARCHITECTURE DETAILS", "text": "A.1 ENCODER\nThe encoder encodes a routing problem solution s0, . . . , sT sequentially. The model is given the input features of the nodes xst−1 and xst at decoding step t ∈ {1, . . . , T} separately from the problem representation of all nodes x0, . . . , xn. For each of the inputs of the problem representation x0, . . . , xn, an embedding hi is created using a linear transformation that is applied to all inputs separately and identically. For the separate input features of the nodes xst−1 and xst , a different linear transformation is applied in a similar fashion to generate the embeddings h′st−1 and h ′ st . All learned embeddings have a dimensionality of dh, and we set dh to 128 for all trained models.\nThe first recurrent neural network module RNN 1 receives the embedding h′st−1 of the previously visited node in the solution s at each step t. The output hR1 contains information on the first t − 1 elements of the solution. We implement all recurrent neural networks in the model as gated recurrent neural networks (Chung et al., 2014).\nAll embeddings are used by the attention layer Att to compute a single dh-dimensional context vector c that describes all relevant embeddings h0, . . . , hn. The relevance of each input is determined based on the current encoding state given by hR1 . To compute the context vector c, first the ndimensional alignment vector ā is computed that describes the relevance of each input:\nā = softmax (uH0 , ..., u H n ), (3)\nwhere uHi = z A tanh(WA[hi;h R1 ]). (4) Here, zA is a vector and WA is a matrix with trainable parameters and “;” is used to describe the concatenation of two vectors. Based on the alignment vector ā, the context vector c is generated:\nc = n∑ i=0 āihi. (5)\nThe context vector c is then used by the recurrent neural network module RNN 2, which is the main encoding component of the encoder. At each step t it is given the embedding of the t-th node in the solution sequence xs1 , . . . , xsT in addition to c. Its output h\nR2 in the last iteration T encodes the complete sequence xs1 , . . . , xsT and is used in two separate linear transformations to calculate the dh-dimensional vectors µ and σ. These vectors parameterize a multivariate normal distribution from which the latent variable z is sampled using the reparameterization trick (Kingma & Welling, 2014).\nA.2 DECODER\nThe architecture of the decoder is based on the model proposed in Nazari et al. (2018). 
A.2 DECODER

The architecture of the decoder is based on the model proposed in Nazari et al. (2018). At each step t the model uses a pointer mechanism (Vinyals et al., 2015) to point towards the node that should be visited next. The decoder uses the same embedding, attention mechanism, and recurrent neural network RNN_1 as the encoder. The weights for these components are shared by the encoder and the decoder. In addition to the inputs required to calculate c, the decoder also gets the latent variable z as an input. The concatenation [z, c, x_{s_{t-1}}] is transformed by a linear layer to a d_h-dimensional vector c′. This vector provides the context to the pointing mechanism that calculates the output distribution over all actions based on the node embeddings h_0, . . . , h_n:

p_θ(a_t|π_t) = softmax(u_0, ..., u_n), (6)

where u_i = z^B tanh(W^B [h_i; c′]), (7)

and the vector z^B and the matrix W^B contain trainable parameters." }, { "heading": "B SEARCH SPACE STRUCTURE ANALYSIS FOR ALL PROBLEM SIZES", "text": "Figure 9 shows the absolute cost difference and the Euclidean distance of the sampled solutions to the reference solution (i.e., the best solution found in a random search of 1,000 solutions) for all problem classes." } ]
2021
null
SP:2fbbc4ff1a587e2239a4f5b8672dd310d0124e39
[ "This paper proposes to use natural gradient instead of standard gradient to optimize a regularized objective with the regularization being the Wasserstein distance between the so-called behaviour distributions for the previous policy and new policy. It then combines this Wasserstein gradient descent with Policy Gradient and Evolutionary Strategies. Experiments conducted in OpenAI and Roboschool show some promising results for this combination." ]
A novel optimization approach is proposed for application to policy gradient methods and evolution strategies for reinforcement learning (RL). The procedure uses a computationally efficient Wasserstein natural gradient (WNG) descent that takes advantage of the geometry induced by a Wasserstein penalty to speed optimization. This method follows the recent theme in RL of including a divergence penalty in the objective to establish a trust region. Experiments on challenging tasks demonstrate improvements in both computational cost and performance over advanced baselines.
[ { "affiliations": [], "name": "REINFORCEMENT LEARNING" }, { "affiliations": [], "name": "Ted Moskovitz" }, { "affiliations": [], "name": "Michael Arbel" }, { "affiliations": [], "name": "Ferenc Huszar" }, { "affiliations": [], "name": "Arthur Gretton" } ]
[ { "authors": [ "Shun-ichi Amari" ], "title": "Neural learning in structured parameter spaces - natural riemannian gradient", "venue": "Advances in Neural Information Processing Systems", "year": 1997 }, { "authors": [ "M Arbel", "A Gretton", "W Li", "G Montufar" ], "title": "Kernelized wasserstein natural gradient", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Krzysztof Choromanski", "Aldo Pacchiano", "Jack Parker-Holder", "Yunhao Tang", "Deepali Jain", "Yuxiang Yang", "Atil Iscen", "Jasmine Hsu", "Vikas Sindhwani" ], "title": "Provably robust blackbox optimization for reinforcement learning", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Edoardo Conti", "Vashisht Madhavan", "Felipe Petroski Such", "Joel Lehman", "Kenneth Stanley", "Jeff Clune" ], "title": "Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Edoardo Conti", "Vashisht Madhavan", "Felipe Petroski Such", "Joel Lehman", "Kenneth Stanley", "Jeff Clune" ], "title": "Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Aude Genevay", "Marco Cuturi", "Gabriel Peyré", "Francis Bach" ], "title": "Stochastic optimization for large-scale optimal transport", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Sham M Kakade" ], "title": "A natural policy gradient", "venue": "In Advances in neural information processing systems,", "year": 2002 }, { "authors": [ "Solomon Kullback", "Richard A Leibler" ], "title": "On information and sufficiency", "venue": "The annals of mathematical statistics,", "year": 1951 }, { "authors": [ "Seong Jae Lee", "Zoran Popović" ], "title": "Learning behavior styles with inverse reinforcement learning", "venue": "ACM transactions on graphics (TOG),", "year": 2010 }, { "authors": [ "Wuchen Li" ], "title": "Geometry of probability simplex via optimal transport. arXiv:1803.06360 [math], March 2018", "venue": "URL http://arxiv.org/abs/1803.06360", "year": 2018 }, { "authors": [ "Wuchen Li", "Guido Montufar" ], "title": "Natural gradient via optimal transport", "venue": "[cs, math], March 2018a. URL http://arxiv.org/abs/1803.07033", "year": 2018 }, { "authors": [ "Wuchen Li", "Guido Montufar" ], "title": "Ricci curvature for parametric statistics via optimal transport", "venue": "[cs, math, stat], July 2018b. URL http://arxiv.org/abs/1807", "year": 2018 }, { "authors": [ "Wuchen Li", "Jiaxi Zhao" ], "title": "Wasserstein information matrix. 
arXiv:1910.11248 [cs, math, stat], November 2019", "venue": "URL http://arxiv.org/abs/1910.11248", "year": 2019 }, { "authors": [ "Horia Mania", "Aurelia Guy", "Benjamin Recht" ], "title": "Simple random search of static linear policies is competitive for reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Elliot Meyerson", "Joel Lehman", "Risto Miikkulainen" ], "title": "Learning behavior characterizations for novelty search", "venue": "In Proceedings of the Genetic and Evolutionary Computation Conference 2016,", "year": 2016 }, { "authors": [ "Aldo Pacchiano", "Jack Parker-Holder", "Yunhao Tang", "Anna Choromanska", "Krzysztof Choromanski", "Michael I Jordan" ], "title": "Learning to score behaviors for guided policy optimization", "venue": null, "year": 2019 }, { "authors": [ "Martin L. Puterman" ], "title": "Markov decision processes: discrete stochastic dynamic programming", "venue": null, "year": 2010 }, { "authors": [ "Tim Salimans", "Jonathan Ho", "Xi Chen", "Szymon Sidor", "Ilya Sutskever" ], "title": "Evolution Strategies as a Scalable Alternative to Reinforcement Learning", "venue": "arXiv e-prints,", "year": 2017 }, { "authors": [ "John Schulman", "Sergey Levine", "Philipp Moritz", "Michael I. Jordan", "Pieter Abbeel" ], "title": "Trust region policy optimization", "venue": "CoRR, abs/1502.05477,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Richard S. Sutton", "Andrew G. Barto" ], "title": "Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018", "venue": "URL http://incompleteideas.net/book/the-book-2nd.html", "year": 2018 }, { "authors": [ "Cedric Villani" ], "title": "Optimal Transport: Old and New", "venue": "Springer-Verlag Berlin,", "year": 2016 } ]
[ { "heading": null, "text": "A novel optimization approach is proposed for application to policy gradient methods and evolution strategies for reinforcement learning (RL). The procedure uses a computationally efficient Wasserstein natural gradient (WNG) descent that takes advantage of the geometry induced by a Wasserstein penalty to speed optimization. This method follows the recent theme in RL of including a divergence penalty in the objective to establish a trust region. Experiments on challenging tasks demonstrate improvements in both computational cost and performance over advanced baselines." }, { "heading": "1 INTRODUCTION", "text": "Defining efficient optimization algorithms for reinforcement learning (RL) that are able to leverage a meaningful measure of similarity between policies is a longstanding and challenging problem (Lee & Popović, 2010; Meyerson et al., 2016; Conti et al., 2018b). Many such works rely on similarity measures such as the Kullback-Leibler (KL) divergence (Kullback & Leibler, 1951) to define procedures for updating the policy of an agent as it interacts with the environment. These are generally motivated by the need to maintain a small variation in the KL between successive updates in an off-policy context to control the variance of the importance weights used in fthe estimation of the gradient. This includes work by Kakade (2002) and Schulman et al. (2015), who propose to use the Fisher Natural Gradient (Amari, 1997) as a way to update policies, using local geometric information to allow larger steps in directions where policies vary less; and the work of Schulman et al. (2017), which relies on a global measure of proximity using a soft KL penalty to the objective. While those methods achieve impressive performance, and the choice of the KL is well-motivated, one can still ask if it is possible to include information about the behavior of policies when measuring similarity, and whether this could lead to more efficient algorithms. Pacchiano et al. (2019) provide a first insight into this question, representing policies using behavioral distributions which incorporate information about the outcome of the policies in the environment. The Wasserstein Distance (WD) (Villani, 2016) between those behavioral distributions is then used as a similarity measure between their corresponding policies. They further propose to use such behavioral similarity as a global soft penalty to the total objective. Hence, like the KL penalty, proximity between policies is measured globally, and does not necessarily exploit the local geometry defined by the behavioral embeddings.\nIn this work, we show that substantial improvements can be achieved by taking into account the local behavior of policies. We introduce new, efficient optimization methods for RL that incorporate the local geometry defined by the behavioral distributions for both policy gradient (PG) and evolution strategies (ES) approaches. Our main contributions are as follows:\n1- We leverage recent work in (Li & Montufar, 2018a;b; Li, 2018; Li & Zhao, 2019; Chen & Li, 2018) which introduces the notion of the Wasserstein Information Matrix to define a local behavioral similarity measure between policies. This allows us to identify the Wasserstein Natural Gradient (WNG) as a key ingredient for optimization methods that rely on the local behavior of policies. To enable efficient estimation of WNG, we build on the recent work of Arbel et al. 
(2020), and further extend it to cases where the re-parameterization trick is not applicable, but only the score function of the model is available.

2- This allows us to introduce two novel methods: Wasserstein natural policy gradients (WNPG) and Wasserstein natural evolution strategies (WNES), which use the local behavioral structure of policies through WNG and can be easily incorporated into standard RL optimization routines. When combined in addition with a global behavioral similarity such as a WD penalty, we show substantial improvement over using the penalty alone without access to local information. We find that such WNG-based methods are especially useful on tasks in which initial progress is difficult.

3- Finally, we demonstrate, to our knowledge, the first in-depth comparative analysis of the FNG and WNG, highlighting a clear interpretable advantage of using WNG over FNG on tasks where the optimal solution is deterministic. This scenario arises frequently in ES and in policy optimization for MDPs (Puterman, 2010). This suggests that WNG could be a powerful tool for this class of problems, especially when reaching accurate solutions quickly is crucial.

In Section 2, we present a brief review of policy gradient approaches and the role of divergence measures as regularization penalties. In Section 3 we introduce the WNG and detail its relationship with the FNG and the use of Wasserstein penalties, and in Section 4 we derive practical algorithms for applying the WNG to PG and ES. Section 5 contains our empirical results.

(∗ Denotes equal contribution. Correspondence: ted@gatsby.ucl.ac.uk.)" }, { "heading": "2 BACKGROUND", "text": "Policy Gradient (PG) methods directly parametrize a policy πθ, optimizing the parameter θ using stochastic gradient ascent on the expected total discounted reward R(θ). An estimate ĝk of the gradient of R(θ) at θk can be computed by differentiating a surrogate objective L(θ), which often comes in two flavors, depending on whether training is on-policy (left) or off-policy (right):

L(θ) = Ê[log πθ(a_t|s_t) Â_t], or L(θ) = Ê[(πθ(a_t|s_t) / πθk(a_t|s_t)) Â_t]. (1)

The expectation Ê is an empirical average over N trajectories τ_i = (s_{i1}, a_{i1}, r_{i1}, ..., s_{iT}, a_{iT}, r_{iT}) of state-action-rewards obtained by simulating from the environment using πθk. The scalar Â_t is an estimator of the advantage function and can be computed, for instance, using

Â_t = r_t + γ V(s_{t+1}) − V(s_t), (2)

where γ ∈ [0, 1) is a discount factor and V is the value function, often learned as a parametric function via temporal difference learning (Sutton & Barto, 2018). Reusing trajectories can reduce the computational cost at the expense of increased variance of the gradient estimator (Schulman et al., 2017). Indeed, performing multiple policy updates while using trajectories from an older policy πθold means that the current policy πθ can drift away from the older policy. On the other hand, the objective is obtained as an expectation under πθ, for which fresh trajectories are not available. Instead, the objective is estimated using importance sampling (by re-weighting the old trajectories according to importance weights πθ/πθold). When πθ is too far from πθold, the importance weight can have a large variance. This can lead to a drastic degradation of performance if done naïvely (Schulman et al., 2017).
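As a concrete illustration of Equations (1) and (2), here is a minimal numpy sketch of the off-policy surrogate with importance weights and the one-step advantage estimate; the toy data and the assumption that V = 0 beyond the horizon are ours, not part of the paper.

import numpy as np

def advantages(rewards, values, gamma=0.99):
    # Eq. (2): A_t = r_t + gamma * V(s_{t+1}) - V(s_t); V = 0 past the horizon.
    v_next = np.append(values[1:], 0.0)
    return rewards + gamma * v_next - values

def off_policy_surrogate(logp_new, logp_old, adv):
    # Eq. (1), off-policy flavor: mean of importance ratio times advantage.
    ratio = np.exp(logp_new - logp_old)
    return np.mean(ratio * adv)

rng = np.random.default_rng(0)
T = 100
rewards = rng.normal(size=T)
values = rng.normal(size=T)
logp_old = rng.normal(size=T) - 1.0
logp_new = logp_old + 0.05 * rng.normal(size=T)
print(off_policy_surrogate(logp_new, logp_old, advantages(rewards, values)))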
KL-based policy optimization (PO) aims at addressing these limitations.

KL-based PO methods ensure that the policy does not change substantially between successive updates, where change is measured by the KL divergence between the resulting action distributions. The general idea is to add either a hard KL constraint, as in TRPO (Schulman et al., 2015), or a soft constraint, as in PPO (Schulman et al., 2017), to encourage proximity between policies. In the first case, TRPO recovers the FNG with a step-size further adjusted using line search to enforce the hard constraint. The FNG permits larger steps in directions where the policy changes the least, thus reducing the number of updates required for optimization. In the second case, the soft constraint leads to an objective of the form:

maximize_θ L(θ) − β Ê[KL(πθk(·|s_t), πθ(·|s_t))]. (3)

The KL penalty prevents the updates from deviating too far from the current policy πθk, thereby controlling the variance of the gradient estimator. This allows making multiple steps with the same simulated trajectories without degradation of performance. While both methods take into account the proximity between policies as measured using the KL, they do not take into account the behavior of such policies in the environment. Exploiting such information can greatly improve performance.

Behavior-Guided Policy Optimization. Motivated by the idea that policies can differ substantially as measured by their KL divergence but still behave similarly in the environment, Pacchiano et al. (2019) recently proposed to use a notion of proximity in behavior between policies for PO. Exploiting similarity in behavior during optimization allows taking larger steps in directions where policies behave similarly despite having a large KL divergence. To capture a sense of global behavior, they define a behavioral embedding map (BEM) Φ that maps every trajectory τ to a behavior variable X = Φ(τ) belonging to some embedding space E. The behavior variable X provides a simple yet meaningful representation of each trajectory τ. As a random variable, X is distributed according to a distribution qθ, called the behavior distribution. Examples of Φ include simply returning the final state of a trajectory (Φ(τ) = s_T) or its concatenated actions (Φ(τ) = [a_0, . . . , a_T]). Proximity between two policies πθ and πθ′ is then measured using the Wasserstein distance between their behavior distributions qθ and qθ′. Although the KL could also be used in some cases, the Wasserstein distance has the advantage of being well-defined even for distributions with non-overlapping support, therefore allowing more freedom in choosing the embedding Φ (see Section 3.1). This leads to a penalized objective that regulates behavioral proximity:

maximize_θ L(θ) − (β/2) W2(qθk, qθ), (4)

where β ∈ R is a hyper-parameter controlling the strength of the regularization. To compute the penalty, Pacchiano et al. (2019) use an iterative method from Genevay et al. (2016). This procedure is highly accurate when the Wasserstein distance changes slowly between successive updates, as ensured when β is large. At the same time, larger values for β also mean that the policy is updated using smaller steps, which can impede convergence. An optimal trade-off between the rate of convergence and the precision of the estimated Wasserstein distance can be achieved using an adaptive choice of β, as done in the case of PPO (Schulman et al., 2017).
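To make the behavioral embedding map concrete, here is a tiny sketch of the two example choices of Φ mentioned above; the trajectory container is a stand-in, not the authors' data structure.

import numpy as np

def bem_final_state(trajectory):
    # Phi(tau) = s_T: the last state visited.
    return trajectory["states"][-1]

def bem_concat_actions(trajectory):
    # Phi(tau) = [a_0, ..., a_T]: all actions, flattened into one vector.
    return np.concatenate(trajectory["actions"])

rng = np.random.default_rng(0)
traj = {"states": [rng.normal(size=4) for _ in range(10)],
        "actions": [rng.normal(size=2) for _ in range(10)]}
X1 = bem_final_state(traj)      # 4-dimensional behavior variable
X2 = bem_concat_actions(traj)   # 20-dimensional behavior variable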
For a finite value of β, the penalty accounts for global proximity in behavior and doesn't explicitly exploit the local geometry induced by the BEM; exploiting that local geometry can further improve convergence. We introduce an efficient method that explicitly exploits the local geometry induced by the BEM through the Wasserstein Natural Gradient (WNG), leading to gains in performance at a reduced computational cost. When global proximity is important to the task, we show that using the Wasserstein penalty in Equation (4) and optimizing it using the WNG yields more efficient updates, thus converging faster than simply optimizing Equation (4) using standard gradients." }, { "heading": "3 THE WASSERSTEIN NATURAL GRADIENT", "text": "The Wasserstein natural gradient (WNG) (Li & Montufar, 2018a;b) corresponds to the steepest-ascent direction of an objective within a trust region defined by the local behavior of the Wasserstein-2 distance (W2). The W2 between two nearby densities qθ and qθ+u can be approximated by computing the average cost of moving every sample X from qθ to a new sample X′ approximately distributed according to qθ+u using an optimal vector field of the form ∇_x f_u(x), so that X′ = X + ∇_x f_u(X) (see Figure 6). Optimality of ∇_x f_u is defined as a trade-off between accurately moving mass from qθ to qθ+u and reducing the transport cost measured by the average squared norm of ∇_x f_u:

sup_{f_u} ∇_θ E_{qθ}[f_u(X)]^⊤ u − (1/2) E_{qθ}[‖∇_x f_u(X)‖²], (5)

where the optimization is over a suitable set of smooth real-valued functions on E. Hence, the optimal function f_u solving Equation (5) defines the optimal vector field ∇_x f_u(x). Proposition 1 makes this intuition more precise and defines the Wasserstein Information Matrix.

Proposition 1 (Adapted from Definition 3 of Li & Zhao (2019)) The second-order Taylor expansion of W2 between two nearby parametric probability distributions qθ and qθ+u is given by

W2²(qθ, qθ+u) = u^⊤ G(θ) u + o(‖u‖²), (6)

where G(θ) is the Wasserstein Information Matrix (WIM), with components in a basis (e_1, ..., e_p):

G_{j,j′}(θ) = E_{qθ}[∇_x f_j(X)^⊤ ∇_x f_{j′}(X)]. (7)

The functions f_j solve Equation (5) with u chosen as e_j. Moreover, for any given u, the solution f_u to Equation (5) satisfies E_{qθ}[‖∇_x f_u(X)‖²] = u^⊤ G(θ) u.

When qθ and qθ+u are the behavioral embedding distributions of two policies πθ and πθ+u, the function f_u allows us to transport behavior from a policy πθ to a behavior as close as possible to πθ+u with the least cost. We thus refer to f_u as the behavioral transport function. The function f_u determines how hard it is to change behavior locally from policy πθ in a direction u, thus providing a tool to find update directions u with either maximal or minimal change in behavior.

Probing all directions in a basis (e_1, ..., e_p) of parameters allows us to construct the WIM G(θ) in Equation (7), which summarizes proximity in behavior along all possible directions u using u^⊤ G(θ) u = E_{qθ}[‖∇_x f_u(X)‖²]. For an objective L(θ), such as the expected total reward of a policy, the Wasserstein natural gradient (WNG) is then defined as the direction u that locally increases L(θ + u) the most with the least change in behavior as measured by f_u. Formally, the WNG is related to the usual Euclidean gradient g = ∇_θ L(θ) by

g^W = argmax_u 2 g^⊤ u − u^⊤ G(θ) u. (8)

From Equation (8), the WNG can be expressed in closed form in terms of G(θ) and g as g^W = G^{-1}(θ) g. Hence, WNG ascent is simply performed using the update equation θ_{k+1} = θ_k + λ g^W_k.
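Given an estimate of G(θ) and the Euclidean gradient g, the WNG step implied by Equation (8) is a linear solve; the sketch below adds a small damping term eps*I for numerical stability, which is our assumption rather than part of Equation (8).

import numpy as np

def wng_step(theta, g, G, lr=0.1, eps=1e-6):
    # g^W = G^{-1} g (Eq. 8); eps*I keeps the solve well-posed when G is
    # estimated from samples and may be (nearly) singular.
    g_w = np.linalg.solve(G + eps * np.eye(G.shape[0]), g)
    return theta + lr * g_w

rng = np.random.default_rng(0)
p = 5
A = rng.normal(size=(p, p))
G = A @ A.T                    # stand-in for an estimated WIM
g = rng.normal(size=p)         # stand-in Euclidean gradient
theta = wng_step(np.zeros(p), g, G)

Section 4 replaces this explicit solve with a low-rank kernel estimator.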
We’ll see in Section 4 how to estimate WNG efficiently without storing or explicitly inverting the matrix G. Next, we discuss the advantages of using WNG over other methods." }, { "heading": "3.1 WHY USE THE WASSERSTEIN NATURAL GRADIENT?", "text": "To illustrate the advantages of the WNG, we consider a simple setting where the objective is of the form L(θ) = E_{qθ}[ψ(x)], with qθ being a gaussian distribution. The optimal solution in this example is a deterministic point mass located at the global optimum x⋆ of the function ψ(x). This situation arises systematically in the context of ES when using a gaussian noise distribution with learnable mean and variance. Moreover, the optimal policy of a Markov Decision Process (MDP) is necessarily deterministic (Puterman, 2010). Thus, despite its simplicity, this example allows us to obtain closed-form expressions for all methods while capturing a crucial property of many RL problems (deterministic optimal policies) which, as we will see, results in differences in performance.

Wasserstein natural gradient vs Fisher natural gradient While Figure 1 (c) shows that both methods seem to reach the same solution, a closer inspection of the loss, as shown in Figure 1 (d) and (e) for two different parameterizations of qθ, shows that the FNG is faster at first, then slows down to reach a final error of 10^-4. On the other hand, WNG is slower at first, then transitions suddenly to an error of 10^-8. The optimal solution being deterministic, the variance of the gaussian qθ needs to shrink to 0. In this case, the KL blows up, while the W2 distance remains finite. As the natural gradient methods are derived from those two divergences (Proposition 2 of Appendix B), they inherit the same behavior. This explains why, unlike the WNG, the FNG doesn't achieve the error of 10^-8. Beyond this example, when the policy πθ is defined only implicitly using a generative network, as in Tang & Agrawal (2019), the FNG and KL penalty are ill-defined since πθk and πθk+1 might have non-overlapping supports. However, the WNG remains well-defined (see Arbel et al. (2020)) and allows for more flexibility in representing policies, such as with behavioral embeddings.

Wasserstein penalty vs Wasserstein natural gradient The Wasserstein penalty of Equation (4) encourages global proximity between updates qθk. For small values of the penalty parameter β, the method behaves like standard gradient descent (Figure 1 (a)). As β increases, the penalty encourages more local updates and thus incorporates more information about the local geometry defined by qθ. In fact, it recovers the WNG direction (Proposition 2 of Appendix B), albeit with an infinitely small step-size, which is detrimental to convergence of the algorithm. To avoid slowing down, there is an intricate balance between the step-size and the penalty β that needs to be maintained (Schulman et al., 2017). All of these issues are avoided when directly using the WNG, as shown in Figure 1 (a), which performs the best and tolerates the widest range of step-sizes (Figure 1 (f)). Moreover, when using the log-diagonal parameterization as in Figure 1 (d, a), WNGD (in red) achieves an error of 1e-8, while the W2 penalty achieves a larger error of order 1e-0 for various values of β. When using the diagonal parameterization instead, as shown in Figure 1 (e), both methods achieve a similar error of 1e-6. This discrepancy in performance highlights the robustness of WNG to the parameterization of the model.

Combining WNG and a Wasserstein penalty.
The global proximity encouraged by a W2 penalty can be useful on its own, for instance, to explicitly guarantee policy improvement as in (Pacchiano et al., 2019, Theorem 5.1). However, this requires estimating the W2 at every iteration, which can be costly. Using WNG instead of the usual gradient can yield more efficient updates, thus reducing the number of times W2 needs to be estimated. The speed-up can be understood as performing second-order optimization on the W2 penalty, since the WNG arises precisely from a second-order expansion of the W2 distance, as shown in Section 3 (see also Example 2 in Arbel et al. (2020))." }, { "heading": "4 POLICY OPTIMIZATION USING BEHAVIORAL GEOMETRY", "text": "We now present practical algorithms to exploit the behavioral geometry induced by the embeddings Φ. We begin by describing how to efficiently estimate the WNG.

Efficient estimation of the WNG can be performed using kernel methods, as shown in Arbel et al. (2020) in the case where the re-parametrization trick is applicable. This is the case if, for instance, the behavioral variable is the concatenation of actions X = [a_0, ..., a_T] and the actions are sampled from a gaussian with mean and variance parameterized by a neural network, as is often done in practice for real-valued actions. Then X can be expressed as X = B_θ(Z), where B_θ is a known function and Z is an input sample consisting of the concatenation of states [s_0, ..., s_T] and the gaussian noise used to generate the actions. However, the proposed algorithm is not readily applicable if, for instance, the behavioral variable X is a function of the reward.

We now introduce a procedure that extends the previous method to more general cases, including those where only the score ∇_θ log qθ is available without an explicit re-parametrization trick. The core idea is to approximate the functions f_{e_j} defining G(θk) in Equation (7) using linear combinations of user-specified basis functions (h_1(x), ..., h_M(x)):

f̂_{e_j}(x) = Σ_{m=1}^{M} α^j_m h_m(x). (9)

The number M controls the computational cost of the estimation and is typically chosen on the order of M = 10. The basis can be chosen to be data-dependent using kernel methods. More precisely, we use the same approach as in Arbel et al. (2020), where we first subsample M data-points Y_m from a batch of N variables X_n and M indices i_m from {1, ..., d}, where d is the dimension of X_n. Then, each basis function can be of the form h_m(x) = ∂_{i_m} K(Y_m, x), where K is a positive semi-definite kernel, such as the gaussian kernel K(x, y) = exp(−‖x − y‖²/σ²). This choice of basis allows us to provide guarantees for the functions f_j in terms of the batch size N and the number of basis points M (Arbel et al., 2020, Theorem 7). Plugging each f̂_j into the transport cost problem of Equation (5) yields a quadratic problem of dimension M in the coefficients α^j:

maximize_{α^j} 2 J_{·,j} α^j − (α^j)^⊤ L α^j,

where L is a square matrix of size M × M independent of the index j and J is a Jacobian matrix of shape M × p with rows given by J_{m,·} = ∇_θ E_{qθk}[h_m(X)].
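To illustrate this kernel construction, here is a small numpy sketch that subsamples basis points, forms h_m(x) = ∂_{i_m} K(Y_m, x) for a gaussian kernel, and assembles the matrices C and L used in Algorithm 3 (L = C C^⊤ / N); the finite-difference derivatives and sampling details are simplifications, not the authors' implementation.

import numpy as np

def gaussian_kernel_grad(y, x, sigma=1.0):
    # Gradient of K(y, x) = exp(-||x - y||^2 / sigma^2) with respect to x.
    k = np.exp(-np.sum((x - y) ** 2) / sigma ** 2)
    return -2.0 * (x - y) / sigma ** 2 * k

def build_C_and_L(X, M=10, sigma=1.0, rng=None):
    # X: (N, d) batch of behavioral embeddings.
    # Basis h_m(x) = d/dx_{i_m} K(Y_m, x), with (Y_m, i_m) subsampled from the batch.
    if rng is None:
        rng = np.random.default_rng(0)
    N, d = X.shape
    Y = X[rng.integers(0, N, size=M)]
    idx = rng.integers(0, d, size=M)
    # C[m, (n, i)] = d/dx_i h_m(X_n); computed numerically here for brevity.
    eps = 1e-5
    C = np.zeros((M, N * d))
    for m in range(M):
        for n in range(N):
            for i in range(d):
                e = np.zeros(d)
                e[i] = eps
                up = gaussian_kernel_grad(Y[m], X[n] + e, sigma)[idx[m]]
                dn = gaussian_kernel_grad(Y[m], X[n] - e, sigma)[idx[m]]
                C[m, n * d + i] = (up - dn) / (2 * eps)
    L = C @ C.T / N
    return C, L

X = np.random.default_rng(1).normal(size=(32, 3))
C, L = build_C_and_L(X, M=5)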
There are two expressions for J, depending on the applicability of the re-parametrization trick or the availability of the score:

J_{m,·} = Ê_{qθ}[∇_x h_m(X) ∇_θ B_θ(Z)] or J_{m,·} = Ê_{qθ}[∇_θ log qθ(X) h_m(X)]. (10)

Computing J can be done efficiently for moderate size M by first computing a surrogate vector V of size M whose Jacobian recovers J using automatic differentiation software:

V_m = Ê_{qθ}[h_m(X_n)], or V_m = Ê_{qθ}[log qθ(X_n) h_m(X_n)]. (11)

The optimal coefficients α^j are then simply expressed as α = L† J. Plugging the optimal functions into the expression of the Wasserstein Information Matrix (Equation (7)) yields a low-rank approximation of G of the form Ĝ = J^⊤ L† J. By adding a small diagonal perturbation matrix εI, it is possible to efficiently compute (Ĝ + εI)^{-1} ĝ using a generalized Woodbury matrix identity, which yields an estimator for the Wasserstein natural gradient:

ĝ^W = (1/ε) (ĝ − J^⊤ (J J^⊤ + εL)† J ĝ). (12)

The pseudo-inverse is only computed for a matrix of size M. Using the Jacobian-vector product, Equation (12) can be computed without storing large matrices G, as shown in Algorithm 3.

Algorithm 1: Wasserstein Natural Policy Gradient
1: Input: Initial policy π_{θ0}
2: for iteration k = 1, 2, ... do
3:   Obtain N rollouts {τ_n}_{n=1}^{N} of length T using policy π_{θk}
4:   Compute the loss L(θk) in a forward pass
5:   Compute the gradient ĝk in the backward pass on L(θk)
6:   Compute the behavioral embeddings {X_n = Φ(τ_n)}_{n=1}^{N}
7:   Compute the WNG ĝ^W_k using Algorithm 3 with samples {X_n}_{n=1}^{N} and gradient estimate ĝk
8:   Update the policy using θ_{k+1} = θk + λ ĝ^W_k
9: end for

Wasserstein Natural Policy Gradient (WNPG). It is possible to incorporate local information about the behavior of a policy in standard algorithms for policy gradient, as summarized in Algorithm 1. In its simplest form, one first needs to compute the gradient ĝk of the objective L(θk) using, for instance, the REINFORCE estimator computed using N trajectories τ_n. The trajectories are then used to compute the BEMs, which are fed as input, along with the gradient ĝk, to get an estimate of the WNG ĝ^W_k. Finally, the policy can be updated in the direction of ĝ^W_k. Algorithm 1 can also be used in combination with an explicit W2 penalty to control non-local changes in the behavior of the policy, thus ensuring a policy improvement property as in (Pacchiano et al., 2019, Theorem 5.1). In that case, WNG enhances convergence by acting as a second-order optimizer, as discussed in Section 3.1. The standard gradient ĝk in Algorithm 1 is then simply replaced by the one computed in (Pacchiano et al., 2019, Algorithm 3). In Section 5, we show that this combination, which we call behavior-guided WNPG (BG-WNPG), leads to the best overall performance.

Wasserstein Natural Evolution Strategies (WNES). ES treats the total reward observed on a trajectory under policy πθ as a black-box function L(θ) (Salimans et al., 2017; Mania et al., 2018; Choromanski et al., 2020). Evaluating it under N policies whose parameters θ̃^n are gaussian perturbations centered around θk and with variance σ can give an estimate of the gradient of L(θk):

ĝk = (1/(N σ)) Σ_{n=1}^{N} (L(θ̃^n) − L(θk)) (θ̃^n − θk). (13)

Instead of directly updating the policy using Equation (13), it is possible to encourage either proximity or diversity in behavior using the embeddings X_n = Φ(τ_n) of the trajectories τ_n generated for each perturbed policy π_{θ̃^n}.
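For reference, a minimal numpy sketch of the vanilla ES estimator of Equation (13), with a quadratic stand-in for the black-box total reward; the estimator tracks the gradient up to a scale absorbed by the step size, and none of this is the authors' ES implementation.

import numpy as np

def es_gradient(L, theta, N=64, sigma=0.02, rng=None):
    # Eq. (13): (1/(N*sigma)) * sum_n (L(theta_n) - L(theta)) * (theta_n - theta),
    # with theta_n = theta + sigma * eps_n and eps_n ~ N(0, I).
    if rng is None:
        rng = np.random.default_rng(0)
    base = L(theta)
    g = np.zeros_like(theta)
    for _ in range(N):
        theta_n = theta + sigma * rng.normal(size=theta.shape)
        g += (L(theta_n) - base) * (theta_n - theta)
    return g / (N * sigma)

L = lambda th: -np.sum(th ** 2)   # stand-in black-box total reward
g_hat = es_gradient(L, np.ones(10))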
Those embeddings can be used as input to Algorithm 3 (see appendix), along with Equation (13), to estimate ĝ^W_k, which captures similarity in behavior. The algorithm remains unchanged except for the estimation of the Jacobian J of Equation (10), which becomes

J_{m,·} = (1/(N σ)) Σ_{n=1}^{N} h_m(X_n) (θ̃^n − θk). (14)

The policy parameter can then be updated using an interpolation between ĝk and the WNG ĝ^W_k, i.e.,

∆θk ∝ (1 − δ) ĝk + δ ĝ^W_k, (15)

with δ ≤ 1, which can also be negative. Positive values for δ encourage proximity in behavior, the limit case being δ = 1, where a full WNG step is taken. Negative values encourage repulsion and therefore need to be compensated by ĝk to ensure overall policy improvement. Algorithm 2 summarizes the whole procedure, which can be easily adapted from existing ES implementations by calling a variant of Algorithm 3. In particular, it can also be used along with an explicit W2 penalty, in which case the proposed algorithm in Pacchiano et al. (2019) is used to estimate the standard gradient ĝk of the penalized loss. Then the policy is updated using Equation (15) instead of ĝk. We refer to this approach as behavior-guided WNES (BG-WNES).

Algorithm 2: Wasserstein Natural Evolution Strategies
1: Input: Initial policy π_{θ0}, α > 0, δ ≤ 1
2: for iteration k = 1, 2, ... do
3:   Sample ε_1, . . . , ε_N ∼ N(0, I)
4:   Perform rollouts {τ_n}_{n=1}^{N} of length T using the perturbed parameters {θ̃^n = θk + σ ε_n}_{n=1}^{N} and compute the behavioral embeddings {X_n = Φ(τ_n)}_{n=1}^{N}
5:   Compute the gradient estimate ĝk of L(θk) using Equation (13) and the trajectories {τ_n}_{n=1}^{N}
6:   Compute the Jacobian matrix J appearing in Algorithm 3 using Equation (14)
7:   Compute the WNG ĝ^W_k using Algorithm 3, with samples {X_n}_{n=1}^{N} and the computed ĝk and J
8:   Update the policy using Equation (15)
9: end for" }, { "heading": "5 EXPERIMENTS", "text": "We now test the performance of our estimators for both policy gradients (PG) and evolution strategies (ES) against their associated baseline methods. We show that in addition to improved computational efficiency, our approach can effectively utilize the geometry induced by a Wasserstein penalty to improve performance, particularly when the optimization problem is ill-conditioned. Further experimental details can be found in the appendix, and our code is available online at https://github.com/tedmoskovitz/WNPG.

Policy Gradients. We first apply WNPG and BG-WNPG to challenging tasks from OpenAI Gym (Brockman et al., 2016) and Roboschool (RS). We compare performance against behavior-guided policy gradients (BGPG) (Pacchiano et al., 2019), PPO with a clipped surrogate objective (Schulman et al., 2017) (PPO (Clip)), and PG with no trust region (None). From Figure 2, we can see that BGPG outperforms the corresponding KL-based method (PPO) and vanilla PG, as also demonstrated in the work of Pacchiano et al. (2019). Our method (WNPG) matches or exceeds the final performance of BGPG on all tasks. Moreover, combining both (BG-WNPG) produces the largest gains on all environments. Final mean rewards are reported in Table 1. It is also important to note that WNG-based methods appear to offer the biggest advantage on tasks where initial progress is difficult. To investigate this further, we computed the Hessian matrix at the end of training for each task and measured the ratios of its largest eigenvalue to each successive eigenvalue (Figure 3). Larger ratios indicate ill-conditioning, and it is significant that WNG methods produce the greatest improvement on the environments with the poorest conditioning. This is consistent with the findings in Arbel et al. (2020), which showed WNG to perform most favorably compared to other methods when the optimization problem is ill-conditioned, and implies a useful heuristic for gauging when WNG-based methods are most useful for a given problem.

Evolution Strategies. To test our estimator for WNES, as well as BG-WNES, we applied our approach to the environment introduced by Pacchiano et al. (2019), designed to test the ability of behavior-guided learning to succeed despite deceptive rewards.
During the task, the agent receives a penalty proportional to its distance from a goal, but a wall is placed directly in the agent's path (Figure 7). This barrier induces a local maximum in the objective: a naïve agent will simply walk directly towards the goal and get stuck at the barrier. The idea is that the behavioral repulsion fostered by applying a positive coefficient to the Wasserstein penalty (β > 0) will encourage the agent to seek novel policies, helping it to eventually circumvent the wall. As in Pacchiano et al. (2019), we test two agents, a simple point and a quadruped. We then compare our method with vanilla ES as described by Salimans et al. (2017), ES with gradient norm clipping, BGES (Pacchiano et al., 2019), and NSR-ES (Conti et al., 2018a). In Figure 4, we can see that WNES and BG-WNES improve over the baselines for both agents. To test that the improvement shown by BG-WNES wasn't simply a case of additional "repulsion" supplied by the WNG to BGES, we also tested BGES with an increased β = 0.75, compared to the default of 0.5. This resulted in a decrease in performance, attesting to the unique benefit provided by the WNES estimator.

Computational Efficiency. We define the computational efficiency of an algorithm as the rate with which it accumulates reward relative to its runtime. To test the computational efficiency of our approach, we plotted the total reward divided by wall-clock time obtained by each agent for each task (Fig. 5). Methods using a WNG estimator were the most efficient on each task for both PG and ES agents. On several environments used for the policy gradient tasks, the added cost of BG-WNPG reduced its efficiency, despite having the highest absolute performance." }, { "heading": "6 CONCLUSION", "text": "Explicit regularization using divergence measures between policy representations has been a common theme in recent work on policy optimization for RL. While prior works have previously focused on the KL divergence, Pacchiano et al. (2019) showed that a Wasserstein regularizer over behavioral distributions provides a powerful alternative framework. Both approaches implicitly define a form of natural gradient, depending on which divergence measure is chosen. Through the introduction of WNPG and WNES, we demonstrate that directly estimating the natural gradient of the un-regularized objective can deliver greater performance at lower computational cost. These algorithms represent novel extensions of previous work on the WNG to problems where the reparameterization trick is not available, as well as to black-box methods like ES. Moreover, using the WNG in conjunction with a WD penalty allows the WNG to take advantage of the local geometry induced by the regularization, further improving performance. We also provide a novel comparison between the WNG and FNG, showing that the former has significant advantages on certain problems. We believe this framework opens up a number of avenues for future work. Developing a principled way to identify useful behavioral embeddings for a given RL task would allow practitioners to get the highest benefit from WNPG and WNES. From a theoretical perspective, it would be useful to characterize the convergence boost granted by the combination of explicit regularization and the corresponding natural gradient approach." }, { "heading": "ACKNOWLEDGMENTS", "text": "The authors would like to thank Jack Parker-Holder for sharing his code for BGPG and BGES, as well as colleagues at Gatsby for useful discussions."
}, { "heading": "A BACKGROUND", "text": "A.1 POLICY OPTIMIZATION\nAn agent interacting with an environment form a system that can be described by a state variable s belonging to a state space S. In the Markov Decision Process (MDP) setting, the agent can interact with the environment by taking an action a from a set of possible actions A given the current state s of the system. As a consequence, the system moves to a new state s′ according to a probability transition function P (s′|a, s) which describes the probability of moving to state s′ given the previous state s and action a. The agent also receives a partial reward r which can be expressed as a possibly randomized function of the new state s′, r = r(s′). The agent has access to a set of possible policies πθ(a|s) parametrized by θ ∈ Rp and that generates an action a given a current state s. Thus, each policy can be seen as a probability distribution conditioned a state s. Using the same policy induces a whole trajectory of state-action-rewards τ = (st, at, rt)t≥0 which can be viewed as a sample from a trajectory distribution Pθ defined over the space of possible trajectories τ . Hence, for a given random trajectory τ induced by a policy πθ, the agent receives a total discounted reward R(τ) := ∑∞ t=1 γ\nt−1r(st) with discount factor 0 < γ < 1. This allows to define the value function as the expected total reward conditioned on a particular initial state s:\nVθ(st) = EPθ|st [ ∞∑ l=1 γl−1r(sl+t) ] . (16)\nWhen the gradient of the score function ∇ log πθ(a|s) is available, the policy gradient theorem allows us to express the gradient ofR(θ):\n∇θR(θ) = EPθ [ ∞∑ t=0 γt∇ log πθ(at|st)Aθ(st, at) ]\n(17)\nwhere the expectation is taken over trajectories τ under Pθ and Aθ(s, a) represents the advantage function which can be expressed in terms of the value function Vθ(s) in terms of\nAθ(st, at) = Est+1|st,at [r(st+1) + γVθ(st+1)]− Vθ(st). The agent seeks an optimal policy πθ? that maximizes the expected total reward under the trajectory distribution: R(θ) = EPθ [R(τ)]." }, { "heading": "B WASSERSTEIN NATURAL GRADIENT", "text": "Connection to the Fisher natural gradient and proximal methods. Both WNG and FNG are obtained from a proximity measure between probability distributions:\nProposition 2 Let D(θ, θ′) be either the KL-divergence KL(πθ, πθ′) or the Wasserstein-2 distance between the behavioral distributions W2(qθ, qθ′) and let gD be either the FNG gF or WNG gW ,\nthen\ngDk = lim β→+∞ arg max u β\n( L(θk + β−1u)− L(θk)− β 2 D ( θk, θk + β −1u ))\n(18)\nEquation (18) simply states that the both WNG and FNG arise as limit cases of penalized objectives provided the strength of the penalty β diverges to infinity and the step-size is shrank proportionally to β−1. An additional global rescaling by β of the total objective prevents it from collapsing to 0. Intuitively, performing a Taylor expansion of Equation (18) recovers an equation similar to Equation (8). Equation (18) shows that using a penalty that encourages global proximity between successive policies, it is possible to recover the local geometry of policies (captured by the local ) by increasing the strength of the penalty using appropriate re-scaling. This also informally shows why both natural gradients are said to be invariant to re-parametrization (Arbel et al., 2020, Proposition 1), since both KL and W2 remains unchanged if qθ is parameterized in a different way." 
}, { "heading": "C ALGORITHM FOR ESTIMATING WNG", "text": "Algorithm 3: Efficient Wasserstein Natural Gradient\n1: Input mini-batch of samples {Xn}Nn=1 distributed according to qθ, gradient direction ĝ, basis functions h1, ..., hM , regularization parameter . 2: Output Wasserstein Natural gradient ĝW 3: Compute a matrix C of shape M ×Nd using Cm,(n,i) ← ∂ihm(Xn). 4: Compute similarity matrix L← 1NCCT . 5: Compute surrogate vector V using Equation (11). 6: for iteration= 1, 2, ...M do 7: Use automatic differentiation on Vm to compute Jacobian matrix J in Equation (10). 8: end for 9: Compute a matrix D of shape M ×M using D ← JJ> + L.\n10: Compute a vector b of size M using b← Jĝ. 11: Solve linear system of size M : b← solve (D, b) 12: Return ĝW ← 1 (ĝ − J>b)" }, { "heading": "D ADDITIONAL EXPERIMENTAL DETAILS", "text": "D.1 POLICY GRADIENT TASKS\nWe conserve all baseline and shared hyperparameters used by Pacchiano et al. (2019). More precisely, for each task we ran a hyperparameter sweep over learning rates in the set {1e-5, 5e-5, 1e-4, 3e-4}, and used the concatenation-of-actions behavioral embedding Φ(τ) = [a0, a1, . . . , aT ] with the base network implementation the same as Dhariwal et al. (2017).\nThe WNG hyperparameters were also left the same as in Arbel et al. (2020). Specifically, the number of basis points was set as M = 5, the reduction factor was bounded in the range [0.25, 0.75], and ∈ [1e-10, 1e5].\nD.2 EVOLUTION STRATEGIES TASKS\nAs with the policy gradient tasks, we conserved all baseline and shared hyperparameters used by Pacchiano et al. (2019). Specifically, for the point task, we set the learning rate to be η = 0.1, the standard deviation of the noise to be σ = 0.01, the rollout length H was 50 time steps, and the behavioral embedding function to be the last state Φ(τ) = sH . For the quadruped task we set η = 0.02, σ = 0.02, H = 400, and Φ(τ) = ∑H t=0 rt (∑t i=0 ei ) (reward-to-go encoding; see Pacchiano et al. (2019) for more details). Both tasks used 1000-dimensional random features and embeddings from the n = 2 previous policies to compute the WD.\nFor WNG, the same hyperparameters were used as in the policy gradient tasks.\nD.3 EXPERIMENTAL SETTING OF FIGURE 1\nThe Objective We consider a function ψ(x) is the sum of sinc functions over all dimensions of x ∈ R100\nψ(x) = 100∑ i=1 sin(xi) xi − 1 (19)\nSuch function is highly non-convex and admits multiple bad local minima with the global minimum of ψ(x) reached for x? = 0. However, we do not make use of this information during optimization. To alleviate the non-convexity of this loss, we consider a gaussian relaxation objectiveL(θ) obtained by taking the expectation of ψ(x) over the 100 dimensional vector x w.r.t. to a gaussian qθ with parameter vector θ. Thus the objective function to be optimized is a function of θ:\nL(θ) = Eqθ [ψ(x)] (20)\nThe parameter vector θ is of the form θ = (µ, v), where µ is the mean of the gaussian qθ and v is a vector in R100 parameterizing the covariance matrix Σ of the gaussian qθ. We will later consider two parameterizations for the covariance matrix.\nThe minimal value of L(θ) is reached when the gaussian qθ is degenerate with Σ = 0 and mean µ = x? = 0. Hence, the mean parameter of the global minimum of L(θ) recover the global optimum of ψ.\nParameterization of the gaussian We choose the covariance matrix of the gaussian to be diagonal and consider two parameterizations for the covariance matrix Σ: diagonal and log-diagonal. 
For the diagonal parameterization, the covariance is Σ_ii = v_i, and for the log-diagonal one we set Σ_ii = exp(2 v_i).

Optimization methods We consider different optimization methods using the same objective L(θ). For the penalty methods, we use the closed-form expressions for both the Wasserstein distance and the KL, which are available explicitly in the case of gaussians.

For the natural gradient methods (WNG) and (FNG), we use the closed-form expressions, which are also available in the gaussian case. We denote them as ∇^W L(θ) for WNG and ∇^F L(θ) for FNG, and express them in terms of the euclidean/standard gradient ∇L(θ):

• Diagonal parameterization:
  – WNG: ∇^W_v L(θ) = 4Σ ∇_v L(θ), ∇^W_µ L(θ) = ∇_µ L(θ) (21)
  – FNG: ∇^F_v L(θ) = 2Σ² ∇_Σ L(θ), ∇^F_µ L(θ) = Σ ∇_µ L(θ) (22)
• Log-diagonal parameterization:
  – WNG: ∇^W_v L(θ) = Σ^{-1} ∇_v L(θ), ∇^W_µ L(θ) = ∇_µ L(θ) (23)
  – FNG: ∇^F_v L(θ) = 0.5 ∇_v L(θ), ∇^F_µ L(θ) = Σ ∇_µ L(θ) (24)

Training details Training is run for up to 4000 gradient iterations, with λ = 0.9 and β = 0.1 unless they are varied." } ]
2021
null
SP:c3835de54da82e1b07406d118aca719082367ffb
[ "Inspired by the observations of feedforward inhibition in the brain, the authors propose a novel ANN architecture that respects Dale’s rule (DANN). They provide two improvements for training DANNs: better initialization and update scaling for synaptic weights. As a result, they empirically demonstrate that DANNs perform no worse than the ANNs that do not respect Dale’s rule." ]
The units in artificial neural networks (ANNs) can be thought of as abstractions of biological neurons, and ANNs are increasingly used in neuroscience research. However, there are many important differences between ANN units and real neurons. One of the most notable is the absence of Dale’s principle, which ensures that biological neurons are either exclusively excitatory or inhibitory. Dale’s principle is typically left out of ANNs because its inclusion impairs learning. This is problematic, because one of the great advantages of ANNs for neuroscience research is their ability to learn complicated, realistic tasks. Here, by taking inspiration from feedforward inhibitory interneurons in the brain we show that we can develop ANNs with separate populations of excitatory and inhibitory units that learn just as well as standard ANNs. We call these networks Dale’s ANNs (DANNs). We present two insights that enable DANNs to learn well: (1) DANNs are related to normalization schemes, and can be initialized such that the inhibition centres and standardizes the excitatory activity, (2) updates to inhibitory neuron parameters should be scaled using corrections based on the Fisher Information matrix. These results demonstrate how ANNs that respect Dale’s principle can be built without sacrificing learning performance, which is important for future work using ANNs as models of the brain. The results may also have interesting implications for how inhibitory plasticity in the real brain operates.
[ { "affiliations": [], "name": "Jonathan Cornford" }, { "affiliations": [], "name": "Damjan Kalajdzievski" }, { "affiliations": [], "name": "Marco Leite" }, { "affiliations": [], "name": "Amélie Lamarquette" }, { "affiliations": [], "name": "Dimitri M. Kullmann" }, { "affiliations": [], "name": "Blake Richards" } ]
[ { "authors": [ "Daniel J Amit", "C Campbell", "KYM Wong" ], "title": "The interaction space of neural networks with sign-constrained synapses", "venue": "Journal of Physics A: Mathematical and General,", "year": 1989 }, { "authors": [ "Bassam V Atallah", "William Bruns", "Matteo Carandini", "Massimo Scanziani" ], "title": "Parvalbuminexpressing interneurons linearly transform cortical responses to visual stimuli", "venue": null, "year": 2012 }, { "authors": [ "Helen C Barron", "Tim P Vogels", "Timothy E Behrens", "Mani Ramaswami" ], "title": "Inhibitory engrams in perception and memory", "venue": "Proceedings of the National Academy of Sciences,", "year": 2017 }, { "authors": [ "Sergey Bartunov", "Adam Santoro", "Blake Richards", "Luke Marris", "Geoffrey E Hinton", "Timothy Lillicrap" ], "title": "Assessing the scalability of biologically-motivated deep learning algorithms and architectures", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sven Behnke" ], "title": "Hierarchical neural networks for image interpretation, volume 2766", "venue": null, "year": 2003 }, { "authors": [ "Xavier Bouthillier", "Christos Tsirigotis", "François Corneau-Tremblay", "Pierre Delaunay", "Reyhane Askari", "Dendi Suhubdy", "Michael Noukhovitch", "Dmitriy Serdyuk", "Arnaud Bergeron", "Peter Henderson", "Pascal Lamblin", "Mirko Bronzi", "Christopher Beckham" ], "title": "Oríon - asynchronous distributed hyperparameter optimization, October 2019", "venue": "URL https://doi.org/10.5281/zenodo", "year": 2019 }, { "authors": [ "Rui Costa", "Ioannis Alexandros Assael", "Brendan Shillingford", "Nando de Freitas", "Tim Vogels" ], "title": "Cortical microcircuits as gated-recurrent neural networks", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Peter W Donhauser", "Sylvain Baillet" ], "title": "Two distinct neural timescales for predictive speech", "venue": "processing. Neuron,", "year": 2020 }, { "authors": [ "John Carew Eccles" ], "title": "From electrical to chemical transmission in the central nervous system: the closing address of the sir henry dale centennial symposium cambridge", "venue": "september", "year": 1975 }, { "authors": [ "Mario Galarreta", "Shaul Hestrin" ], "title": "Spike transmission and synchrony detection in networks of gabaergic", "venue": "interneurons. 
Science,", "year": 2001 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Hua Hu", "Jian Gan", "Peter Jonas" ], "title": "Fast-spiking, parvalbumin+ gabaergic interneurons: From cellular design to microcircuit function", "venue": null, "year": 2014 }, { "authors": [ "Alessandro Ingrosso", "LF Abbott" ], "title": "Training dynamically balanced excitatory-inhibitory networks", "venue": "PloS one,", "year": 2019 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Jeffry S Isaacson", "Massimo Scanziani" ], "title": "How inhibition shapes cortical activity", "venue": null, "year": 2011 }, { "authors": [ "Alexander JE Kell", "Daniel LK Yamins", "Erica N Shook", "Sam V Norman-Haignere", "Josh H McDermott" ], "title": "A task-optimized neural network replicates human auditory behavior, predicts brain responses, and reveals a cortical processing", "venue": "hierarchy. Neuron,", "year": 2018 }, { "authors": [ "Tim Christian Kietzmann", "Patrick McClure", "Nikolaus Kriegeskorte" ], "title": "Deep neural networks in computational neuroscience", "venue": "BioRxiv, pp", "year": 2018 }, { "authors": [ "Jonas Kubilius", "Martin Schrimpf", "Kohitij Kar", "Rishi Rajalingham", "Ha Hong", "Najib Majaj", "Elias Issa", "Pouya Bashivan", "Jonathan Prescott-Roy", "Kailyn Schmidt" ], "title": "Brain-like object recognition with high-performing shallow recurrent anns", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Dimitri M Kullmann", "Karri P Lamsa" ], "title": "Long-term synaptic plasticity in hippocampal interneurons", "venue": "Nature Reviews Neuroscience,", "year": 2007 }, { "authors": [ "Qianli Liao", "Tomaso Poggio" ], "title": "Bridging the gaps between residual learning, recurrent neural networks and visual cortex", "venue": "arXiv preprint arXiv:1604.03640,", "year": 2016 }, { "authors": [ "Timothy P Lillicrap", "Adam Santoro", "Luke Marris", "Colin J Akerman", "Geoffrey Hinton" ], "title": "Backpropagation and the brain", "venue": "Nature Reviews Neuroscience,", "year": 2020 }, { "authors": [ "Joana Lourenço", "Angela Michela De Stasi", "Charlotte Deleuze", "Mathilde Bigot", "Antonio Pazienti", "Andrea Aguirre", "Michele Giugliano", "Srdjan Ostojic", "Alberto Bacci" ], "title": "Modulation of coordinated activity across cortical layers by plasticity of inhibitory synapses", "venue": "Cell reports,", "year": 2020 }, { "authors": [ "James Martens" ], "title": "New insights and perspectives on the natural gradient method", "venue": "arXiv preprint arXiv:1412.1193,", "year": 2014 }, { "authors": [ "James Martens", "Roger Grosse" ], "title": "Optimizing neural networks with kronecker-factored approximate curvature", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Chris J McBain", "Tamas F Freund", "Istvan Mody" ], "title": "Glutamatergic synapses onto hippocampal interneurons: precision timing without lasting plasticity", "venue": "Trends in neurosciences,", "year": 1999 }, { "authors": [ "Jonathan A Michaels", "Stefan Schaffelhofer", "Andres Agudelo-Toro", "Hansjörg" ], "title": 
"Scherberger. A neural network model of flexible grasp movement generation", "venue": "bioRxiv, pp", "year": 2019 }, { "authors": [ "Thomas Miconi" ], "title": "Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive", "venue": "tasks. Elife,", "year": 2017 }, { "authors": [ "Sun Minni", "Li Ji-An", "Theodore Moskovitz", "Grace Lindsay", "Kenneth Miller", "Mario Dipoppa", "Guangyu Robert Yang" ], "title": "Understanding the functional and structural differences across excitatory and inhibitory neurons. 2019", "venue": null, "year": 2019 }, { "authors": [ "Aran Nayebi", "Daniel Bear", "Jonas Kubilius", "Kohitij Kar", "Surya Ganguli", "David Sussillo", "James J DiCarlo", "Daniel L Yamins" ], "title": "Task-driven convolutional recurrent models of the visual system", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Christopher Parisien", "Charles H Anderson", "Chris Eliasmith" ], "title": "Solving the problem of negative synaptic weights in cortical models", "venue": "Neural computation,", "year": 2008 }, { "authors": [ "Alexandre Payeur", "Jordan Guerguiev", "Friedemann Zenke", "Blake Richards", "Richard Naud" ], "title": "Burstdependent synaptic plasticity can coordinate learning in hierarchical circuits", "venue": "bioRxiv,", "year": 2020 }, { "authors": [ "Frédéric Pouille", "Antonia Marin-Burgin", "Hillel Adesnik", "Bassam V Atallah", "Massimo Scanziani" ], "title": "Input normalization by global feedforward inhibition expands cortical dynamic range", "venue": "Nature neuroscience,", "year": 2009 }, { "authors": [ "Frederic Pouille", "Oliver Watkinson", "Massimo Scanziani", "Andrew J Trevelyan" ], "title": "The contribution of synaptic location to inhibitory gain control in pyramidal cells", "venue": "Physiological reports,", "year": 2013 }, { "authors": [ "Blake A Richards", "Timothy P Lillicrap", "Philippe Beaudoin", "Yoshua Bengio", "Rafal Bogacz", "Amelia Christensen", "Claudia Clopath", "Rui Ponte Costa", "Archy de Berker", "Surya Ganguli" ], "title": "A deep learning framework for neuroscience", "venue": "Nature neuroscience,", "year": 2019 }, { "authors": [ "João Sacramento", "Rui Ponte Costa", "Yoshua Bengio", "Walter Senn" ], "title": "Dendritic cortical microcircuits approximate the backpropagation algorithm", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Emilio Salinas", "Terrence J Sejnowski" ], "title": "Impact of correlated synaptic input on output firing rate and variability in simple neuronal models", "venue": "Journal of neuroscience,", "year": 2000 }, { "authors": [ "Martin Schrimpf", "Jonas Kubilius", "Ha Hong", "Najib J Majaj", "Rishi Rajalingham", "Elias B Issa", "Kohitij Kar", "Pouya Bashivan", "Jonathan Prescott-Roy", "Kailyn Schmidt" ], "title": "Brain-score: Which artificial neural network for object recognition is most brain-like? BioRxiv", "venue": null, "year": 2018 }, { "authors": [ "Eduardo Serrano", "Thomas Nowotny", "Rafael Levi", "Brian H Smith", "Ramón Huerta" ], "title": "Gain control network conditions in early sensory coding", "venue": "PLoS Comput Biol,", "year": 2013 }, { "authors": [ "Bryan A Seybold", "Elizabeth AK Phillips", "Christoph E Schreiner", "Andrea R Hasenstaub" ], "title": "Inhibitory actions unified by network", "venue": "integration. 
Neuron,", "year": 2015 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "H Francis Song", "Guangyu R Yang", "Xiao-Jing Wang" ], "title": "Training excitatory-inhibitory recurrent neural networks for cognitive tasks: a simple and flexible framework", "venue": "PLoS computational biology,", "year": 2016 }, { "authors": [ "Nai-Wen Tien", "Daniel Kerschensteiner" ], "title": "Homeostatic plasticity in neural development", "venue": "Neural development,", "year": 2018 }, { "authors": [ "Robin Tremblay", "Soohyun Lee", "Bernardo Rudy" ], "title": "Gabaergic interneurons in the neocortex: from cellular properties to circuits", "venue": null, "year": 2016 }, { "authors": [ "Nicolas X Tritsch", "Adam J Granger", "Bernardo L Sabatini" ], "title": "Mechanisms and functions of gaba co-release", "venue": "Nature Reviews Neuroscience,", "year": 2016 }, { "authors": [ "James CR Whittington", "Rafal Bogacz" ], "title": "Theories of error back-propagation in the brain", "venue": "Trends in cognitive sciences,", "year": 2019 }, { "authors": [ "Nathan R Wilson", "Caroline A Runyan", "Forea L Wang", "Mriganka Sur" ], "title": "Division and subtraction by distinct cortical inhibitory networks in vivo", "venue": null, "year": 2012 }, { "authors": [ "Daniel LK Yamins", "James J DiCarlo" ], "title": "Using goal-driven deep learning models to understand sensory cortex", "venue": "Nature neuroscience,", "year": 2016 }, { "authors": [ "Daniel LK Yamins", "Ha Hong", "Charles F Cadieu", "Ethan A Solomon", "Darren Seibert", "James J DiCarlo" ], "title": "Performance-optimized hierarchical models predict neural responses in higher visual cortex", "venue": "Proceedings of the National Academy of Sciences,", "year": 2014 }, { "authors": [ "He" ], "title": "2015) and considering the response of the layer at a single location, we use", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years, artificial neural networks (ANNs) have been increasingly used in neuroscience research for modelling the brain at the algorithmic and computational level (Richards et al., 2019; Kietzmann et al., 2018; Yamins & DiCarlo, 2016). They have been used for exploring the structure of representations in the brain, the learning algorithms of the brain, and the behavioral patterns of humans and non-human animals (Bartunov et al., 2018; Donhauser & Baillet, 2020; Michaels et al., 2019; Schrimpf et al., 2018; Yamins et al., 2014; Kell et al., 2018). Evidence shows that the ability of ANNs to match real neural data depends critically on two factors. First, there is a consistent correlation between the ability of an ANN to learn well on a task (e.g. image recognition, audio perception, or motor control) and the extent to which its behavior and learned representations match real data (Donhauser & Baillet, 2020; Michaels et al., 2019; Schrimpf et al., 2018; Yamins et al., 2014; Kell et al., 2018). Second, the architecture of an ANN also helps to determine how well it can match real brain data, and generally, the more realistic the architecture the better the match (Schrimpf et al., 2018; Kubilius et al., 2019; Nayebi et al., 2018). Given these two factors, it is important for neuroscientific applications to use ANNs that have as realistic an architecture as possible, but which also learn well (Richards et al., 2019; Kietzmann et al., 2018; Yamins & DiCarlo, 2016).\nAlthough there are numerous disconnects between ANNs and the architecture of biological neural circuits, one of the most notable is the lack of adherence to Dale’s principle, which states that a neuron releases the same fast neurotransmitter at all of its presynaptic terminals (Eccles, 1976). Though there are some interesting exceptions (Tritsch et al., 2016), for the vast majority of neurons in\n†Corresponding author: blake.richards@mcgill.ca\nadult vertebrate brains, Dale’s principle means that presynaptic neurons can only have an exclusively excitatory or inhibitory impact on their postsynaptic partners. For ANNs, this would mean that units cannot have a mixture of positive and negative output weights, and furthermore, that weights cannot change their sign after initialisation. In other words, a unit can only be excitatory or inhibitory. However, most ANNs do not incorporate Dale’s principle.\nWhy is Dale’s principle rarely incorporated into ANNs? The reason is that this architectural constraint impairs the ability to learn—a fact that is known to many researchers who have tried to train such ANNs, but one that is rarely discussed in the literature. However, when we seek to compare ANNs to real brains, or use them to explore biologically inspired learning rules (Bartunov et al., 2018; Whittington & Bogacz, 2019; Lillicrap et al., 2020), ideally we would use a biologically plausible architecture with distinct populations of excitatory and inhibitory neurons, and at the same time, we would still be able to match the learning performance of standard ANNs without such constraints.\nSome previous computational neuroscience studies have used ANNs with separate excitatory and inhibitory units (Song et al., 2016; Ingrosso & Abbott, 2019; Miconi, 2017; Minni et al., 2019; Behnke, 2003), but these studies addressed questions other than matching the learning performance of standard ANNs, e.g. 
they focused on typical neuroscience tasks (Song et al., 2016), dynamic balance (Ingrosso & Abbott, 2019), biologically plausible learning algorithms (Miconi, 2017), or the learned structure of networks (Minni et al., 2019). Importantly, what these papers did not do is develop means by which networks that obey Dale’s principle can match the performance of standard ANNs on machine learning benchmarks, which has become an important feature of many computational neuroscience studies using ANNs (Bartunov et al., 2018; Donhauser & Baillet, 2020; Michaels et al., 2019; Schrimpf et al., 2018; Yamins et al., 2014; Kell et al., 2018).
Here, we develop ANN models with separate excitatory and inhibitory units that are able to learn as well as standard ANNs. Specifically, we develop a novel form of ANN, which we call a “Dale’s ANN” (DANN), based on feed-forward inhibition in the brain (Pouille et al., 2009). Our novel approach is different from the standard solution, which is to create ANNs with separate excitatory and inhibitory units by constraining whole columns of the weight matrix to be all positive or negative (Song et al., 2016). Throughout this manuscript, we refer to this standard approach as “ColumnEi” models. We have departed from the ColumnEi approach in our work because it has three undesirable attributes. First, constrained weight matrix columns impair learning because they limit the potential solution space (Amit et al., 1989; Parisien et al., 2008). Second, modelling excitatory and inhibitory units with the same connectivity patterns is biologically misleading, because inhibitory neurons in the brain tend to have very distinct connectivity patterns from excitatory neurons (Tremblay et al., 2016). Third, real inhibition can act in both a subtractive and a divisive manner (Atallah et al., 2012; Wilson et al., 2012; Seybold et al., 2015; Pouille et al., 2013), which may provide important functionality.
Given these considerations, in DANNs, we utilize a separate pool of inhibitory neurons with a distinct, more biologically realistic connectivity pattern, and a mixture of subtractive and divisive inhibition (Fig. 1). This loosely mimics the fast feedforward subtractive and divisive inhibition provided by fast-spiking interneurons in the cortical regions of the brain (Atallah et al., 2012; Hu et al., 2014; Lourenço et al., 2020). In order to get DANNs to learn as well as standard ANNs, we also employ two key insights:
1. It is possible to view this architecture as being akin to normalisation schemes applied to the excitatory input of a layer (Ba et al., 2016; Ioffe & Szegedy, 2015; Wu & He, 2018), and we use this perspective to motivate DANN parameter initialisation.
2. It is important to scale the inhibitory parameter updates based on the Fisher information matrix, in order to balance the impact of excitatory and inhibitory parameter updates, similar in spirit to natural gradient approaches (Martens, 2014).
Altogether, our principal contribution is a novel architecture that obeys Dale’s principle, and that we show can learn as well as standard ANNs on machine learning benchmark tasks. This provides the research community with a new modelling tool that will allow for more direct comparisons with real neural data than traditional ANNs allow, but which does not suffer from learning impairments. 
Moreover, our results have interesting implications for inhibitory plasticity, and provide a means for future research into how excitatory and inhibitory neurons in the brain interact at the algorithmic level." }, { "heading": "2 BIOLOGICALLY INSPIRED NETWORKS THAT OBEY DALE’S PRINCIPLE", "text": "" }, { "heading": "2.1 MODEL DEFINITION", "text": "Our design for DANNs takes inspiration from the physiology of feedforward inhibitory microcircuits in the neocortex and hippocampus. Based on these circuits, and an interpretation of layers in ANNs as corresponding to brain regions, we construct DANNs with the following architectural constraints:
1. Each layer of the network contains two distinct populations of units, an excitatory and an inhibitory population.
2. There are far fewer inhibitory units than excitatory units in each layer, just as there are far more excitatory neurons than inhibitory neurons (∼ 5-10 times) in cortical regions of the brain (Tremblay et al., 2016; Hu et al., 2014).
3. As in real neural circuits where only the excitatory populations project between regions, here only excitatory neurons project between layers, and both the excitatory and inhibitory populations of a layer receive excitatory projections from the layer below.
4. All of the synaptic weights are strictly non-negative, and inhibition is enforced via the activation rules for the units (eq. 1).
5. The inhibitory population inhibits the excitatory population through a mixture of subtractive and divisive inhibition.
This constrained architecture is illustrated in Figure 1.
Formally, we define the network as follows. Input to the network is received as a vector of positive scalar values x ∈ R^{d}_{+}, which we consider to be the first excitatory population. Each hidden layer, ℓ, is comprised of a vector of excitatory units h_ℓ ∈ R^{n_e}_{+} and inhibitory units h^I_ℓ ∈ R^{n_i}_{+}, in line with constraint (1) above. (We will drop the layer index when it is unnecessary for clarity.) Note, for the first layer (ℓ = 1), we have h_ℓ = x and n_e = d. Next, based on constraint (2) we set n_e ≫ n_i, and use 10% inhibitory units as default. Following constraint (3), both the excitatory and inhibitory units receive inputs from the excitatory units in the layer below (h_{ℓ−1}), but the inhibitory units do not project between layers. Instead, excitatory units receive inputs from the inhibitory units of the same layer. In line with constraint (4), we have three sets of strictly non-negative synaptic weights, one for the excitatory connections between layers, W^{EE}_ℓ ∈ R^{n_e × n_e}_{+}, one for the excitatory projection to the inhibitory units, W^{IE}_ℓ ∈ R^{n_i × n_e}_{+}, and one for the inhibitory projections within a layer, W^{EI}_ℓ ∈ R^{n_e × n_i}_{+}. Finally, per constraint (5), we define the impact of the inhibitory units on the excitatory units as comprising both a subtractive and a divisive component:
    h_ℓ = f(z_ℓ),    z_ℓ = (g_ℓ / γ_ℓ) ⊙ (z^E_ℓ − W^{EI}_ℓ h^I_ℓ) + β_ℓ    (1)
    where  z^E_ℓ = W^{EE}_ℓ h_{ℓ−1},   h^I_ℓ = f^I(z^I_ℓ) = f^I(W^{IE}_ℓ h_{ℓ−1}),   γ_ℓ = W^{EI}_ℓ (e^{α_ℓ} ⊙ h^I_ℓ)
where for each layer ℓ, β_ℓ ∈ R^{n_e} is a bias, g_ℓ ∈ R^{n_e}_{+} controls the gain, γ_ℓ is the divisive inhibitory term, and α_ℓ ∈ R^{n_i} is a parameter that controls the strength of this divisive inhibition. Here ⊙ denotes elementwise multiplication (Hadamard product), and the exponential function and division are applied elementwise. In the rest of this manuscript we set f to be the rectified linear function (ReLU). 
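To make the layer computation concrete, here is a minimal PyTorch sketch of the forward pass in Equation (1). It is our own illustration rather than the authors' reference implementation: the module name, batch handling, and the placeholder non-negative weight draws at construction are our choices (the Section 3 initialisation replaces the placeholders).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DANNLayer(nn.Module):
    """One DANN layer implementing Equation (1) for a batch of inputs h >= 0."""

    def __init__(self, d_in, ne, ni):
        super().__init__()
        # Strictly non-negative weights (constraint 4); inhibition comes from the
        # activation rule, not from negative weights. torch.rand is a placeholder.
        self.WEE = nn.Parameter(torch.rand(ne, d_in))   # excitatory -> excitatory
        self.WIE = nn.Parameter(torch.rand(ni, d_in))   # excitatory -> inhibitory
        self.WEI = nn.Parameter(torch.rand(ne, ni))     # inhibitory -> excitatory
        self.alpha = nn.Parameter(torch.zeros(ni))      # divisive strength (log scale)
        self.g = nn.Parameter(torch.ones(ne))           # gain
        self.beta = nn.Parameter(torch.zeros(ne))       # bias

    def forward(self, h):                               # h: (batch, d_in)
        zE = h @ self.WEE.T                             # excitatory drive z^E
        hI = h @ self.WIE.T                             # linear inhibitory units h^I
        sub = hI @ self.WEI.T                           # subtractive term W^EI h^I
        gamma = (torch.exp(self.alpha) * hI) @ self.WEI.T   # divisive term gamma
        return F.relu(self.g / gamma * (zE - sub) + self.beta)
```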
Though a ReLU function is not a perfect match to the input-output properties of real neurons, it captures the essential rectification operation performed by neurons in physiologically realistic low activity regimes (Salinas & Sejnowski, 2000). In this paper, we model the inhibitory units as linear (i.e. f^I(z^I) = z^I) since they receive only positive inputs and have no bias, and therefore their activation would always be in the linear part of the ReLU function. Although we make this modelling choice mainly for mathematical simplicity, there is some biological justification, as the resting membrane potential of the class of fast-spiking interneurons most related to our model is relatively depolarised and their spike outputs can follow single inputs one-to-one (Hu et al., 2014; Galarreta & Hestrin, 2001). In future work, for example in which inhibitory connections are included between inhibitory units, we expect that the use of nonlinear functions for inhibitory units will be important." }, { "heading": "3 PARAMETER INITIALISATION FOR DALE’S ANNS", "text": "In biology, excitation and inhibition are balanced (Isaacson & Scanziani, 2011), and we use this biological property to derive appropriate weight initialisation for DANNs. First we initialise excitatory parameters from an exponential distribution with rate parameter λ_E, W^{EE} iid∼ Exp(λ_E), and then inhibitory parameters are initialised such that excitation and subtractive inhibition are balanced, i.e. E[z^E_k] = E[(W^{EI} z^I)_k], ∀k. This can be achieved in a number of ways (see appendix C.2). In line with biology, we choose to treat excitatory weights onto inhibitory and excitatory units the same, and sample W^{IE} iid∼ Exp(λ_E) and set W^{EI} ← 1/n_i. We note that for a DANN layer with a single inhibitory neuron, e.g. at an output layer with 10 excitatory neurons, the noise inherent in sampling a single weight vector may result in a poor match between the excitatory and inhibitory inputs, so in this case we initialise W^{IE} as (1/n_e) Σ_{j=1}^{n_e} w^{EE}_{j,:} explicitly (where w^{EE}_{j,:} is the j-th row of W^{EE}).
Next, we consider the relationship between this initialisation approach and normalisation schemes (Ba et al., 2016; Ioffe & Szegedy, 2015). Normalisation acts to both center and scale the unit activities in a network such that they have mean zero and variance one. The weight initialisation given above will produce centered activities at the start of training. We can also draw a connection between the divisive inhibition and standardisation if we assume that the elements of x are sampled from a rectified normal distribution, x iid∼ max(0, N(0, σ^2_{ℓ−1})). Under this assumption, the mean and standard deviation of the excitatory input are proportional (see Appendix D). For example, if we consider the relationship c · E[z^E_k] = Var(z^E_k)^{1/2} for each unit k, we get the scalar proportionality constant c = √(2π − 1)/√d, as:
    E[z^E_k] = d · E[w^{EE}] E[x] = d · E[w^{EE}] σ_{ℓ−1}/√(2π)
    Var(z^E_k) = d · Var(w^{EE}) (E[x^2] + Var(x)) = d · Var(w^{EE}) σ^2_{ℓ−1} (2π − 1)/(2π)    (2)
with expectation over the data and the parameters, and where w^{EE}, x refer to any element of W^{EE}, x. Therefore, since E[W^{EE}]^2 = Var(W^{EE}) for weights drawn from an exponential distribution, we have
    c = Var(z^E_k)^{1/2} / E[z^E_k] = √(2π − 1)/√d    (3)
This proportionality means that centering and standardisation operations can be performed using the same neurons. 
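A sketch of this initialisation applied to the layer module sketched above, assuming the rate λ_E = √(d(2π − 1))/√(2π) derived in Appendix D.1; the function name and the use of torch.distributions are our choices.

```python
import math
import torch

def init_dann_layer(layer):
    """Initialise a DANNLayer so inhibition centres and standardises the
    excitatory input at the start of training (Section 3)."""
    ne, d = layer.WEE.shape
    ni = layer.WEI.shape[1]
    lam_E = math.sqrt(d * (2 * math.pi - 1)) / math.sqrt(2 * math.pi)
    exp = torch.distributions.Exponential(lam_E)
    with torch.no_grad():
        layer.WEE.copy_(exp.sample(layer.WEE.shape))
        if ni == 1:
            # Single inhibitory unit: set W^IE to the mean excitatory row
            # explicitly, avoiding a noisy excitation/inhibition mismatch.
            layer.WIE.copy_(layer.WEE.mean(dim=0, keepdim=True))
        else:
            layer.WIE.copy_(exp.sample(layer.WIE.shape))
        layer.WEI.fill_(1.0 / ni)       # balances E[z^E_k] = E[(W^EI z^I)_k]
        # e^alpha <- c = sqrt(2*pi - 1)/sqrt(d), so dividing by gamma
        # approximates dividing z^E by its standard deviation (Equation (3)).
        layer.alpha.fill_(math.log(math.sqrt(2 * math.pi - 1) / math.sqrt(d)))
        layer.g.fill_(1.0)
        layer.beta.fill_(0.0)
```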
For DANNs, eα will dictate the expected standard deviation of the layer’s activation z, as it controls the proportionality between subtractive and divisive inhibition for each inhibitory unit. If eα is set to c, then the divisive inhibition approximates dividing zE by its standard deviation, as E[zEk ] · c = E[wEIk,:(eα zIk)] = E[γk]. We note that due to the proportionality between the mean and standard deviation of zE, other values of eα will also control the layer’s variance with depth. However, given these considerations, we initialise eα ← √ 2π − 1/ √ d, thereby achieving standardisation at initialisation. We find that these initialisation schemes enable DANNs to learn well. We next turn to the question of how to perform parameter updates in DANNs in order to learn well." }, { "heading": "4 PARAMETER UPDATES FOR DALE’S ANNS", "text": "Unlike a layer in a column constrained network, whose affine function is restricted by sign constrained columns, a layer in a DANN is not restricted in its potential function space. This is because excitatory inputs to a layer can still have an inhibitory impact via feedforward inhibition. However, the inhibitory interneuron architecture of DANN layers introduces disparities in the degree to which updates to different parameters affect the layer’s output distribution. This can be seen intuitively, for example if a single element of WIE is updated, this has an effect on each element of z. Similarly, an update to wEIij will change zi depending on the alignment of x and all of the j\nth inhibitory unit’s weights. Therefore, instead of using the euclidean metric to measure distance between parameter settings, we employ an alternative approach. Similar to natural gradient methods, we use an approximation of the Kullback-Leibler divergence (KL divergence) of the layer’s output distribution for our metric. In order to help ensure that both excitatory and inhibitory parameter updates have similar impacts on the KL divergence, we scale the updates using correction terms derived below. We provide an extended derivation of these scaling factors in the Appendix E.\nGiven a probability distribution parameterized by some vector θ, a second order approximation to the KL divergence for a change to the parameters θ is\nDKL [ P (y|x;θ) ‖P (y|x;θ + δ) ] ≈ 1\n2 δTF (θ)δ (4)\nF (θ) = E x∼P (x),y∼P (y|x;θ)\n[ ∂ logP (y|x;θ)\n∂θ\n∂ logP (y|x;θ) ∂θ\nT ]\n(5)\nWhere F (θ) is the Fisher Information matrix (or just the Fisher). In order to calculate the Fisher for the parameters of a neural network, we must interpret the network’s outputs in a probabilistic manner. One approach is to view a layer’s activation as parameterising a conditional distribution from the natural exponential family P (y|x;θ) = P (y|z), independent in each coordinate of y|z (similar to a GLM, and as done in Ba et al. (2016)). The log likelihood of such a distribution can be written as1\nlogP (y|x;θ) = y · z− η(z) φ + c(y, φ) (6)\nE[y|x;θ] = f(z) = η′(z) Cov(y|x;θ) = diag(φf ′(z)) (7)\nwhere f(z) is the activation function of the layer, and φ, η, c define the particular distribution in the exponential family. Note that here we are taking η′(z) and f ′(z) to denote ∂η∂z and ∂f ∂z , respectively.\nIn our networks, we have used softmax activation functions at the output and ReLU activation functions in the hidden layers. In this setting, the log likelihood of the output softmax probability layer would only be defined for a one-hot vector y and would correspond to φ = 1, c(y, φ) = 0, and η(z) = log( ∑ i e zi). 
For the ReLU activation functions, the probabilistic model corresponds to a tobit regression model, in which y is a censored observation of a latent variable ŷ ∼ N (z, diag(φf ′(z))). In this case, one could consider either the censored or pre-censored latent random variable, depending on modelling preference. As it fits well with the above framework we analyze the pre-censored random variable ŷ, i.e. f(z) = z in equation 6. Returning to the general case, where we consider layer’s activation as parameterising a conditional distribution from the natural exponential family, the fisher of a layer is:\nF (θ) = E x∼P (x),y∼P (y|x;θ)\n[ ∂z\n∂θ (y − η′(z)) φ (y − η′(z)) φ T ∂z ∂θ\nT ]\n(8)\n= E x∼P (x)\n[ ∂z\n∂θ diag(f ′(z)) φ ∂z ∂θ\nT ]\n(9)\n1Note the general form of the exponential family is logP (y|z) = z·T (y)−η(z) φ\n+ c(y, φ), but here we only consider distributions from the natural exponential family, where T (y) = y, as this includes distributions of interest for us, such as Normal and Categorical, and also common distributions including Exponential, Poisson, Gamma, etc.\nTo estimate the approximate KL divergence resulting from the simple case of perturbing an individual parameter θ̃ ∈ θ of a single-layer DANN, we only need to consider the diagonal entries of the Fisher:\nDKL [ Pθ ‖Pθ+δθ̃ ] ≈ δ 2\n2φ ne∑ k E x∼P (x) [ f ′(zk) (∂zk ∂θ̃ )2] (10)\nwhere δθ̃ represents a 1-hot vector corresponding to θ̃ multiplied by a scalar δ. We now consider the approximate KL divergence after updates to a single element of WEE , WIE, WEI and α:\nDKL [ Pθ ‖Pθ+δ\nWEE ij\n] ≈ δ 2 2φ E [ f ′(zi)( gi γi xj) 2 ] (11)\nDKL [ Pθ ‖Pθ+δ\nWIE ij\n] ≈ δ 2\n2φ ne∑ k E [ f ′(zk)( gk γk xj) 2(wEIki aki) 2 ] (12)\nDKL [ Pθ ‖Pθ+δ\nWEI ij\n] ≈ δ 2\n2φ d∑ n E [ f ′(zi)( gi γi xn) 2(wIEjnaij) 2 ] (13)\n+ δ2\n2φ d∑ n 6=m E [ f ′(zi)( gi γi )2xnxmw IE jnw IE jm(aij) 2 ]\nDKL [ Pθ ‖Pθ+δαi ] = δ2\n2φ ne∑ k d∑ j E [ f ′(zk)( gk γk xj) 2wEIkiw IE ij (aki − 1)2 ] (14)\n+ δ2\n2φ ne∑ k d∑ n 6=m E [ f ′(zk)( gk γk )2xnxmw IE inw IE im(w EI ki ) 2(aki − 1)2 ]\nWhere akj = eαj\nγk (zEk − (WEIzI)k) + 1, and expectations are over the data, Ex∼P (x).\nTherefore, as a result of the feedforward inhibitory architecture of DANNs, for a parameter update δ, the effect on the model’s distribution will be different depending on the updated parameter-type. While the exact effect depends on the degree of co-variance between terms, the most prevalent differences between and within the excitatory and inhibitory parameter-types are the sums over layer input and output dimensions. For example, an inhibitory weight update of δ to wIEij is expected to change the model distribution approximately ne times more than an excitatory weight update of δ to wEEij . In order to balance the impact of updating different parameter-types, we update DANN parameters after correcting for these terms: updates to WIE were scaled by √ ne −1, WEI by d−1 and α by (d √ ne) −1. As a result, inhibitory unit parameters updates are scaled down relative to excitatory parameter updates. 
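A sketch of applying these correction factors to the gradients between the backward pass and the optimizer step; the function name and the assumption that the layer exposes WEE, WIE, WEI, and alpha attributes (as in the earlier sketch) are ours.

```python
import math

def scale_inhibitory_grads(layer):
    """Apply the Section 4 correction factors to inhibitory-parameter gradients."""
    ne, d = layer.WEE.shape
    if layer.WIE.grad is not None:
        layer.WIE.grad.mul_(1.0 / math.sqrt(ne))          # W^IE: sqrt(ne)^-1
    if layer.WEI.grad is not None:
        layer.WEI.grad.mul_(1.0 / d)                      # W^EI: d^-1
    if layer.alpha.grad is not None:
        layer.alpha.grad.mul_(1.0 / (d * math.sqrt(ne)))  # alpha: (d sqrt(ne))^-1
```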
This leads to an interesting connection to biology, because while inhibitory neuron plasticity is well established, the rules and mechanisms governing synaptic updates are different from excitatory cells (Kullmann & Lamsa, 2007; Kullmann et al., 2012), and historically interneuron synapses were thought to be resistant to long-term weight changes (McBain et al., 1999).\nNext, we empirically verified that our heuristic correction factors captured the key differences between parameter-types in their impact on the KL divergence. To do this we compared parameter gradients before and after correction, to parameter gradients multiplied by an approximation of the diagonal of the Fisher inverse for each layer (which we refer to as Fisher corrected gradients), see Appendix F.3. The model was trained for 50 epochs on MNIST, and updated using the Fisher corrected gradients. Throughout training, we observed that the heuristic corrected gradients were more aligned to the Fisher corrected gradients than the uncorrected gradients were (Fig. 2). Thus, our derived correction factors help to balance the impact of excitatory and inhibitory updates on the network’s behaviour. Below, we demonstrate that these corrections are key to getting DANNs to learn well." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "Having derived appropriate parameter initialisation and updates for DANNs, we now explore how they compare to traditional ANNs and ColumnEi models on simple benchmark datasets. In brief, we find that column constrained models perform poorly, failing even to achieve zero training-set error, whereas DANNs perform equivalently to traditional ANNs." }, { "heading": "5.1 IMPLEMENTATION DETAILS", "text": "All models were composed of 4 layers: in general 3 hidden layers of dimension 500 with a ReLU activation function followed by a softmax output with 10 units, and all experiments were run for 50 epochs with batch size 32. Unless stated, for DANNs and ColumnEi models, 50 inhibitory units were included per hidden layer. For DANN models, the softmax output layer was constructed with one inhibitory unit. For ColumnEi models, each hidden layer’s activation is z = Wx where 500 columns of W were constrained to be positive and 50 negative (therefore for ColumnEi models h` was of dimension 550). ColumnEi layer weights were initialised so that variance did not scale with depth and that activations were centered (see Appendix C.1 for further details). All benchmark datasets (MNIST, Kuzushiji MNIST and Fashion MNIST) were pre-processed so that pixel values were in [0, 1]. Learning rates were selected according to validation error averaged over 3 random seeds, after a random search (Orion; Bouthillier et al. (2019), log uniform [10, 1e-5], 100 trials, 10k validation split). Selected models were then trained on test data with 6 random seeds. Plots show mean training error per epoch, and mean test set error every 200 updates over random seeds. Tables show final error mean ± standard deviation. For further implementation details and a link to the accompanying code see Appendix F.\nNote that because our goal in this paper is not to achieve state-of-the-art performance, we did not apply regularisation techniques, such as dropout and weight decay, or common modifications to stochastic gradient descent (SGD). Instead the goal of the experiments presented here was simply to determine whether, in the simplest test case scenario, DANNs can learn better than ColumnEi models and as well as traditional ANNs." 
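For comparison, a sketch of a ColumnEi layer as described above, with sign-constrained columns re-imposed after each update. The class name is ours, and the exponential rates are placeholders rather than the exact values from Appendix C.1; only the ratio n_e/n_i between the means of positive and negative columns (Equation (22)) is reproduced.

```python
import torch
import torch.nn as nn

class ColumnEiLinear(nn.Module):
    """Layer with sign-constrained columns: z = W x with W = [W+, W-]."""

    def __init__(self, ne, ni, n_out):
        super().__init__()
        w_pos = torch.distributions.Exponential(1.0).sample((n_out, ne))
        # E[w-] = E[w+] * ne / ni centres the activations (Equation (22)).
        w_neg = torch.distributions.Exponential(1.0).sample((n_out, ni)) * (ne / ni)
        self.W = nn.Parameter(torch.cat([w_pos, -w_neg], dim=1))
        self.ne = ne

    def forward(self, x):                   # x: (batch, ne + ni)
        return x @ self.W.T

    def clip_signs(self):
        # Re-impose the column sign constraints after each parameter update.
        with torch.no_grad():
            self.W[:, :self.ne].clamp_(min=0.0)
            self.W[:, self.ne:].clamp_(max=0.0)
```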
}, { "heading": "5.2 COMPARISON OF DANNS TO COLUMN-EI MODELS AND MLPS", "text": "We first compared model performance on the MNIST dataset (Fig 3). We observed that ColumnEi models generalised poorly, and failed to achieve 0 % training error within the 50 epochs. This confirms the fact that such models cannot learn as well as traditional ANNs. In contrast, we observed that DANNs performed equivalently to multi-layer perceptrons (MLPs), and even generalised marginally better. This was also the case for ColumnEi and DANN models constructed with more inhibitory units (Supp. Fig. 6, 100 inhibitory units per layer). In addition, performance was only slightly worse for DANNs with one inhibitory unit per layer. These results show that DANN performance generalizes to different ratios of excitatory-to-inhibitory units. We also found that not correcting parameter updates using the corrections derived from the Fisher significantly impaired optimization, further verifying the correction factors (Fig 3).\nNext, we compared DANN performance to MLPs trained with batch and layer normalization on more challenging benchmark datasets (Fig 4). Again we found that DANNs performed equivalently to these standard architectures, whereas ColumnEi models struggled to achieve acceptable performance.\nWe also explored methods for improving DANN performance (Appendix F.4). First, in order to maintain the positive DANN weight constraint, if after a parameter update a weight was negative, we reset it to zero, i.e. θ ← max(0,θ), and as a result the actual update is no longer that suggested by SGD. We therefore experimented with temporarily reducing the learning rate whenever this parameter clipping would reduce the cosine of the angle made between the gradient and actual updates below a certain constraint (see Appendix F.4). Second, we note that the divisive inhibition term, γ, appears in the denominator of the weight gradients (Appendix E.2) and, therefore, if γ becomes small, the gradients will become large, potentially resulting in inappropriate parameter updates. We therefore wondered if constraining the gradient norm would be particularly effective for DANNs. We tested both of these modifications to DANNs trained on Fashion MNIST (Supp. Fig. 5). However, we found that they provided no observable improvement, indicating that the loss landscape and gradients were well behaved over optimization.\nFinally, we provide an analysis and preliminary experiments detailing how the DANN architecture described above may be extended to recurrent and convolutional neural networks in future work (Appendix B). In brief, we unroll recurrent networks over time and place inhibition between both network layers and timesteps, corresponding to fast feedforward and local recurrent inhibition, respectively. For convolutional architectures, we can directly apply the DANN formulation to activation maps if inhibitory and excitatory filters are of the same size and stride. Supporting this, we found that a DANN version of VGG16 (Simonyan & Zisserman, 2014) converged equivalently to a standard VGG16 architecture (Supp.Fig.7).\nAltogether, our results demonstrate that: (1) the obvious approach to creating ANNs that obey Dale’s principle (ColumnEi models) do not learn as well as traditional ANNs, (2) DANNs learn better than ColumnEi models and as well as traditional ANNs, (3) DANN learning is significantly improved by taking appropriate steps to scale updates in excitatory and inhibitory units appropriately." 
}, { "heading": "6 DISCUSSION", "text": "Here we presented DANNs, a novel ANN architecture with separate inhibitory and excitatory units. We derived appropriate parameter initialisation and update rules and showed experimentally that, unlike ANNs where some columns are simply constrained to be positive or negative, DANNs perform equivalently to traditional ANNs on benchmark datasets. These results are important as they are, as far as we know, the first example of an ANN architecture that fully adheres to Dale’s law without sacrificing learning performance. However, our results also raise an interesting question: why does nature employ Dale’s principle? After all, we did not see any improvement over normal ANNs in our experiments. There are two possible hypotheses. First, it is possible that Dale’s principle represents an evolutionary local minima, whereby early phylogenetic choices led to constraints on the system that were difficult to escape via natural selection. Alternatively, Dale’s principle may provide some computational benefit that we were unable to uncover given the specific tasks and architectures we used here. For example, it has been hypothesized that inhibition may help to prevent catastrophic forgetting (Barron et al., 2017). We consider exploring these questions an important avenue for future research.\nThere are a number of additional avenues for future work building upon DANNs, the most obvious of which are to further extend and generalize DANNs to recurrent and convolution neural networks (see Appendix B). It would also be interesting to explore the relative roles of subtractive and divisive inhibition. While subtractive inhibition is required for the unconstrained functional space of DANN layers, divisive inhibition may confer some of the same optimisation benefits as normalisation schemes. A related issue would be to explore the continued balance of excitation and inhibition during optimization, because while DANNs are initialised such that these are balanced, and inhibition approximates normalisation schemes, the inhibitory parameters are updated during training, and the model is free to diverge from this initialisation. As a result, the distribution of layer activations may be unstable over successive parameter updates, potentially harming optimization. In the brain, a variety of homeostatic plasticity mechanisms stabilize neuronal activity. For example, reducing excitatory input naturally results in a reduction in inhibition in real neural circuits (Tien & Kerschensteiner, 2018). It would therefore be interesting to test the inclusion of a homeostatic loss to encourage inhibition to track excitation throughout training. Finally, we note that while fast feedforward inhibition in the mammalian cortex was the main source of inspiration for this work, future investigations may benefit from drawing on a broader range of neurobiology, for example by incorporating principles of invertebrate neural circuits, such as the mushroom bodies of insects (Serrano et al., 2013).\nIn summary, DANNs sit at the intersection of a number of programs of research. First, they are a new architecture that obeys Dale’s principle, but which can still learn well, allowing researchers to more directly compare trained ANNs to real neural data (Schrimpf et al., 2018; Yamins et al., 2014). 
Second, DANNs contribute towards computational neuroscience and machine learning work on inhibitory interneurons in ANNs, and in general towards the role of inhibitory circuits and plasticity in neural computation (Song et al., 2016; Sacramento et al., 2018; Costa et al., 2017; Payeur et al., 2020; Atallah et al., 2012; Barron et al., 2017). Finally, the inhibition in DANNs also has an interesting connection to normalisation methods used to improving learning in deep networks (Ioffe & Szegedy, 2015; Wu & He, 2018; Ba et al., 2016). As DANNs tie these distinct programs of research together into a single model, we hope they can serve as a basis for future research at the intersection of deep learning and neuroscience." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We would like to thank Shahab Bakhtiari, Luke Prince, and Arna Ghosh for their helpful comments on this work. This work was supported by grants to BAR, including a NSERC Discovery Grant (RGPIN-2020-05105), an Ontario Early Career Researcher Award (ER17-13-242), a Healthy Brains, Healthy Lives New Investigator Start-up (2b-NISU-8), and funding from CIFAR (Learning in Machines and Brains Program, Canada CIFAR AI Chair), the Wellcome Trust and by the Medical Research Council (UK). Additionally, DK was supported by the FRQNT Strategic Clusters Program (2020-RS4-265502-UNIQUE)." }, { "heading": "A SUPPLEMENTARY RESULTS", "text": "" }, { "heading": "SUPPLEMENTARY MATERIAL", "text": "" }, { "heading": "B EXTENSION OF DANNS TO OTHER ARCHITECTURES", "text": "Here we discuss how our results and analysis of fully-connected feedforward Dale’s ANNs may be applied to convolutional and recurrent neural networks." }, { "heading": "B.1 EXTENSION TO CONVOLUTIONAL NEURAL NETWORKS", "text": "Consider the response of a standard convolutional layer of n output channels with filters of size k× k at a single position j over m input channels:\nzj = Wxj + b (15)\nHere, W is a n× k2m matrix whose rows correspond to the kernel weights of each output channel, and the vector xj of length k2m contains the values over the n input channels for the spatial location i. Concatenating each input location xj as the columns of a matrix X, the full output of the convolutional layer over all input locations can be expressed as Z = WX + b, where b is broadcast over the columns of Z. We can readily make an equivalent DANN formulation for a convolution layer by assuming the same kernel size and stride for excitatory and inhibitory filter-sets WEE and WEI:\nzj = g\nγ (WEExj −WEIWIExj) + β,\nγ = WEI(eα WIExj) (16)\nHere the inhibitory channels are mapped to each excitatory output channel by WIE for subtractive inhibition, and are first scaled by eα for divisive inhibition. For parameter initialisation, by following the approach of He et al. (2015) and considering the response of the layer at a single location, we use the same initialisations as those derived in section 3, but where the input dimension d is the product of kernel size and input channels, k2m. 
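A sketch of the convolutional response in Equation (16), using unfold to expose each k × k input patch as a vector x_j; the unfold-based implementation, the stride-1/no-padding setting, and the function name are our choices.

```python
import torch
import torch.nn.functional as F

def dann_conv2d(x, WEE, WIE, WEI, alpha, g, beta, k):
    """Equation (16) applied at every valid location (stride 1, no padding).
    x: (B, m, H, W); WEE: (n, k*k*m); WIE: (ni, k*k*m); WEI: (n, ni);
    alpha: (ni,); g, beta: (n,)."""
    B, _, H, W = x.shape
    cols = F.unfold(x, k)                            # (B, k*k*m, L): patches x_j
    zE = WEE @ cols                                  # (B, n, L) excitatory maps
    zI = WIE @ cols                                  # (B, ni, L) inhibitory maps
    sub = WEI @ zI                                   # subtractive term
    gamma = WEI @ (torch.exp(alpha)[:, None] * zI)   # divisive term
    z = g[:, None] / gamma * (zE - sub) + beta[:, None]
    return torch.relu(z).reshape(B, -1, H - k + 1, W - k + 1)
```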
Next, the correction factors to parameters updates apply as in section 4 as the KL divergence is summed over each valid input location j, which results in approximately the same multiplicative factor for each parameter, but does not change the approximate relative differences between parameter types:\nDKL [ Pθ ‖Pθ+δθ̃ ] ≈ ∑ j\n( δ2\n2φ n∑ i E x∼P (x) [ f ′(zi,j) (∂zi,j ∂θ̃ )2]) (17)\nwhere we consider the full response Z of the layer over all valid kernel locations.\nIn order to confirm our extension to convolutional neural networks we conducted preliminary experiments with DANN versions of convolutional neural networks as described above. Below, we show results of training a standard VGG16 architecture, and a DANN version of the VGG16 architecture (Supp. Fig. 7) on CIFAR-10. As can be seen, the DANN network trains approximately as well as the standard VGG16 model.\nBoth control and DANN VGG16 architectures were trained on CIFAR-10 with stochastic gradient descent with batch size 128, without dataset augmentation, dropout, or batch normalisation layers. Best model learning rates (control - 0.089 , DANN - 0.03458) were selected after a random search according to average final validation error over random seeds, and conditional on all seeds beginning to converge within 5 epochs (convergence defined as validation error < 90%). The random search was performed with learning rates sampled from a log-uniform [1e-4,1] distribution, 3 seeds per trial, 60 trials, 150 epochs, and with a 10k validation split. Final epoch test error over 6 random seeds was 21.08± 0.811 for the control VGG16 model, and 21.178± 0.348 for the DANN-VGG16 model with constrained weights. VGG16 models were adapted from code here2, and for the DANN VGG16 model we used 10 inhibitory filters per 64 excitatory filters, and 10% inhibitory units in the fully connected layers." }, { "heading": "B.2 EXTENSION TO RECURRENT NEURAL NETWORKS", "text": "We can readily make a connection between the fully-connected Dales ANNs described in Section 2.1 and recurrent neural networks (RNNs) by considering the similarities between depth and time. As has been previously noted, a shallow RNN unrolled over time can be expressed as a deep neural network with weight sharing (Liao & Poggio, 2016).\nht = f(zt) zt = gt γt Ŵht−1 + β (18)\nwhere Ŵ = WEE −WEIWEI, γt = WEI(eα WEIht−1)\nwhere in this simple case, recurrent processing steps over time are applied to the input x = h0. In this view, layer depth corresponds to time, and inhibition between layers corresponds to fast feedback inhibition.\nWe note that if there are a sequence of inputs coming at each time-step, xt, then this formulation can still hold, but with a simple modification to incorporate the time-varying inputs. Specifically, we need to add additional input weights, Û:\nht = f(zt) zt = gt γt Ŵht−1 + gx γx Ûxt + β (19)\n2https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py\nwhere Ŵ = WEE −WEIWEI, γt = WEI(eαt WEIht−1) Û = UEE −UEIUEI, γx = UEI(eαx UEIxt)\nAll of the existing DANN approaches developed above can be applied to this case." }, { "heading": "C PARAMETER INITIALISATIONS", "text": "In this section we provide further details regarding parameter initialisations.\nThroughout we assume that the elements of the input, x, to a layer ` are iid and also the output of a layer `− 1 whose pre-activations were distributed N (0, σ2`−1). 
Therefore x will follow a rectified normal distribution:\nE[x] = ∫ ∞ 0 x · e −x2/2σ2`−1 σ`−1 √ 2π dx = σ`−1√ 2π\nE[x2] = ∫ ∞ 0 x2 · e −x2/2σ2`−1 σ`−1 √ 2π dx = σ2`−1 2\nVar(x) = E[x2]− E[x]2 = σ2`−1 π − 1\n2π\nVar(x) + E[x]2 = σ2`−1 2π − 1\n2π\n(20)\nwhere here, and throughout the text, non-indexed non-bold to refers to any element of a vector or matrix, e.g E[x] refers to the expectation of any element of x, Var(x) refers to the variance of any element of x, etc. In addition, for all models we draw positively constrained weights iid from exponential distributions, and make use of the following properties for w ∼ Exp(λ)\nE[w] = 1\nλ\nVar(w) = 1\nλ2 = E[w]2\nE[w2] = 2!\nλ2 = 2Var(w) = 2E[w]2\n(21)" }, { "heading": "C.1 COLUMN CONSTRAINED EI MODELS AND WEIGHT INITIALISATION", "text": "Here we provide detail on the parameter initialisation of column constrained models. Layer activations are z = Wx where columns of W are constrained to be positive or negative. Therefore, for convenience, let us denote W = [W+,W−], and x = [xE ,xI ], and we assume xEi ,x I j are iid ∀i, j. Note for this model, ne + ni = d, the input dimensionality. As for DANN models, throughout training we preserve the sign constraints of the weights by resetting weights using rectification around zero, i.e. W+ ← max(0,W+), W− ← min(0,W−). At initialisation for the column constrained model for each layer we require E[zk] = 0, Var(zk) = σ 2 `−1.\nE[zk] = neE[w+]E[x]− niE[w−]E[x] neE[w+]E[x] = niE[w−]E[x]\nE[w−] = E[w+] ne ni\n(22)\nWhere w+, w− refer to any element of W+,W−.\nVar(zk) = ne∑ i Var(w+kix E i ) + ni∑ j Var(w−kjx I j )\n= neVar(w +x) + niVar(w −x) = ne ( E[w+]2Var(x) + Var(w+)E[x]2 + Var(w+)Var(x) ) + ni ( E[w−]2Var(x) + Var(w−)E[x]2 + Var(w−)Var(x) ) (23)\nAs weights are drawn from an exponential distribution, Var(w+) = E[w+]2, we have\nVar(zk) = neE[w+]2(2Var(x) + E[x]2) + niE[w−]2(2Var(x) + E[x]2) = neE[w+]2(E[x2] + Var(x)) + niE[w−]2(E[x2] + Var(x)) = (E[x2] + Var(x))(neE[w+]2 + niE[w−]2)\n= σ2`−1( 2π − 1\n2π )E[w+]2(ne + n2e ni )\n(24)\nTherefore E[w+] = 1/( 2π−12π )(ne + n2e ni )\nNote that as the input to the network is all positive, the first weight matrix has no negative columns. We therefore use the bias vector to center the activations of the first layer (in other layers it is initialised to zeros).\nE[zk] = neE[w+]E[x] + βk (25)\nTherefore we initialise all elements of β to −neE[w+]E[x]\nC.2 INITIALISATION OF DANN INHIBITORY WEIGHTS FOR BALANCED EXCITATION AND SUBTRACTIVE INHIBITION\nHere provide details of inhibitory parameter initialisation such that E[zEk ] = E[(WEIzI)k], for WEE iid∼ Exp(λE).\nE[zEk ] = E[ d∑ i wEEki xi] = d 1 λE E[x]\nE[(WEIzI)k] = E[ ni∑ j wEIkj d∑ i wEIji xi] = niE[wEI]dE[wIE]E[x]\n(26)\nThese expectaions are equal when both sets of excitatory weights are drawn from the same distribution, WIE\niid∼ Exp(λE) and WEI ← 1/ni. Or alternatively, inhibitory weights can both drawn from the same distribution, WIE,WEI iid∼ Exp( √ λEni). 
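A quick numerical check of this balance condition under the second option above (both inhibitory weight sets drawn from Exp(√(λ_E n_i))); the layer sizes, rate, and rectified-normal input here are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
d, ne, ni, lam_E = 784, 500, 50, 2.0                    # illustrative sizes and rate
x = np.maximum(0.0, rng.standard_normal((10000, d)))    # rectified normal input

WEE = rng.exponential(1.0 / lam_E, size=(ne, d))                 # Exp(lam_E)
WIE = rng.exponential(1.0 / np.sqrt(lam_E * ni), size=(ni, d))   # Exp(sqrt(lam_E*ni))
WEI = rng.exponential(1.0 / np.sqrt(lam_E * ni), size=(ne, ni))

zE = x @ WEE.T                    # excitatory drive z^E
sub = (x @ WIE.T) @ WEI.T         # subtractive inhibition W^EI z^I
print(zE.mean(), sub.mean())      # approximately equal, as derived above
```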
Note, that although the above always holds in expectation, in the case of a multiple inhibitory units we can apply the law of large numbers to conclude that the subtractive inhibition and excitatory input will be approximately equal.\nNote that while this initialisation is general to different settings of λE, we initialise λE ← √ d(2π − 1)/ √ 2π (see section D.1).\n———————–" }, { "heading": "D PROPORTIONAL RELATIONSHIP BETWEEN EXCITATORY INPUT MEAN AND STANDARD DEVIATION", "text": "Here we provide further details regarding the proportionality between zE’s mean and standard deviation. This proportionality constant depends on which statistic or distribution that is of interest for activations (e.g. layer-statistics or unit batch-statistics as in layer and batch normalisation)." }, { "heading": "D.1 UNIT STATISTICS OVER DATA AND PARAMETER DISTRIBUTIONS", "text": "As discussed in the main text, if we consider c · E[zEk ] = Var(zEk )1/2 for a unit k, with expectation over the data and parameters, c = √ 2π − 1/ √ d:\nE[zEk ] = d · E[wEE]E[x]\n= d · E[wEE]σ`−1√ 2π\nVar(zEk ) = Var( d∑ i wEEki xi)\n= d ·Var(wEEx) = d ·Var(wEE)E[x2] + d ·Var(x)E[wEE]2\n= d ·Var(wEE)(E[x2] + Var(x))\n= d ·Var(wEE)σ2`−1 2π − 1\n2π\n(27)\nWhere E[wEE]2 = Var(wEE) for weights drawn from an exponential distribution. Therefore E[zEk ] · c = √ Var(zEk )\nc = √ 2π − 1√ d\n(28)\nAdditionally, we see that for Var(zEk ) = σ 2 `−1 the variance of the distribution that elements of W EE are drawn from should be\nVar(wEE) = 2π\nd · (2π − 1) (29)\nand so we can set λE ← √ d(2π − 1)/ √ 2π, for Var(zEk ) = σ 2 `−1." }, { "heading": "D.2 UNIT STATISTICS OVER THE DATA DISTRIBUTION", "text": "If instead we consider a unit k, with excitatory weights wEEk and expectation and variance taken only over the data we have the approximation:\nE[zEk ] = E[x] d∑ i wEEki\n≈ d · E[x]E[wEE]\n= d · σ`−1√ 2π E[wEE]\n(30)\nLikewise the variance over the data can be approximated as\nVar(zEk ) = Var(x) d∑ i (wEEki ) 2\n≈ d ·Var(x) · E[(wEE)2]\n= d · σ2`−1 π − 1\n2π · 2 · E[wEE]2\n(31)\nTherefore\nEx∼p(x)[z E k ] · c = √ Varx∼p(x)[z E k ]\nc ≈ √\n2π − 2√ d\n(32)" }, { "heading": "D.3 LAYER STATISTICS OVER THE DATA AND PARAMETER DISTRIBUTIONS", "text": "Alternatively we can consider the mean and standard deviation of the layer statistics µzE , σzE as calculated if one was to apply layer normalisation to zE. Here again, these statistics are proportionally related, but with the constant √ π/ √ d.\nIf we were to apply layer normalisation to zE, the layer statistics would be as follows:\nz = g\nσzE (zE−µzE)+β µzE =\n1\nne ne∑ j zEj = 1 ne ne∑ j wEEj,: x σ 2 zE =\n1\nne − 1 ne∑ j (zEj −µzE)2\n(33)\nWe now derive the relationship that the expectation of layer statistics are proportionally related by E[µzE ] · √ π/ √ d = E(σ2zE) 1/2. The expectation of E[µzE ] is straightforward:\nE[µzE ] = d · E[wEE] · E[x] (34)\nTurning to the derivation of E[σ2zE ]:\nE[σ2zE ] = E[ 1\nne − 1 ne∑ i (zEi − µzE)2] (35)\n= 1\nne − 1 ne∑ i E[(wEEi,: x− 1 ne ne∑ j wEEj,: x) 2] (36)\n= 1\nne − 1 ne∑ i E[(ẑi)2] (37)\nwhere we have defined ẑi = wEEi,: x− 1ne ∑ne j w EE j,: x. We can obtain E[(ẑi)2] by deriving E[ẑi] and Var(ẑi). As\nŵij = w EE ij −\n1\nne ne∑ k wEEkj = 1 ne ne∑ k=1,k 6=i (wEEij − wEEkj ) (38)\nwe see that E[ŵij ] = 0, and therefore E[ẑi] = 0. 
For the variance Var(ẑ_i) we start with Var(ŵ_{ij}):\nVar(ŵ_{ij}) = (1/n_e²) Var(Σ_{k=1, k≠i}^{n_e} (w^EE_{ij} − w^EE_{kj}))\n(39)\n= (1/n_e²) (Σ_{k=1, k≠i}^{n_e} Var(w^EE_{ij} − w^EE_{kj}) + Σ_{k,k′=1; k,k′≠i; k≠k′}^{n_e} Cov(w^EE_{ij} − w^EE_{kj}, w^EE_{ij} − w^EE_{k′j}))\n(40)\n= (1/n_e²) ((n_e − 1) · 2Var(w^EE) + (n_e − 1)(n_e − 2) Var(w^EE))\n(41)\n= ((n_e − 1)/n_e) Var(w^EE)\n(42)\nFor i ≤ n_e we calculate Var(ẑ_i), keeping in mind that for i ≤ n_e, j ≤ d the x_j are iid, and equation (38) shows that the ŵ_{ij} are iid in the j-th coordinate, so we see that\nVar(ẑ_i) = Var(Σ_{j=1}^d ŵ_{ij} x_j) = d Var(ŵ x)\n(43)\nRemembering the values of E[ŵ], Var(ŵ), that E[X²] = Var(X) + E[X]², and that for independent X, Y, Var(XY) = Var(X)Var(Y) + Var(X)E[Y]² + Var(Y)E[X]², we have\nVar(ẑ_i) = d(Var(ŵ)Var(x) + Var(ŵ)E[x]² + Var(x)E[ŵ]²)\n(44)\n= d(Var(ŵ)E[x²] + Var(x)E[ŵ]²) = d Var(ŵ)E[x²]\n(45)\n= d((n_e − 1)/n_e) Var(w^EE) E[x²]\n(46)\nNow, putting these terms together, we can derive E[σ²_{z^E}]:\nE[σ²_{z^E}] = (1/(n_e − 1)) Σ_{i=1}^{n_e} E[(ẑ_i)²]\n(47)\n= (1/(n_e − 1)) Σ_{i=1}^{n_e} Var(ẑ_i)\n(48)\n= d · Var(w^EE) · E[x²]\n(49)\nTherefore, returning to E[µ_{z^E}] · c = E[σ²_{z^E}]^{1/2}, and keeping in mind that the variance of an exponential random variable is its mean squared,\nc = (d · Var(w^EE)E[x²])^{1/2} / (d · E[w^EE] · E[x]) = √(E[x²]) / (√d · E[x])\n(50)\nWe have assumed that x follows a rectified normal distribution. Therefore E[x] = σ_{ℓ−1}/√(2π) and E[x²] = σ²_{ℓ−1}/2, resulting in:\nc = √π/√d\n(51)\nWe note that for a DANN layer with a single inhibitory unit, µ_{z^E} = z^I, as W^IE ← (1/n_e) Σ_{j=1}^{n_e} w^EE_{j,:} and W^EI ← 1. Therefore DANN divisive inhibition, γ, can be made equivalent in expectation to the layer standard deviation at initialisation if e^α ← c. These calculations also apply to the case of multiple interneurons if one makes the approximation µ_{z^E} ≈ (W^EI W^IE x)_i for any i." }, { "heading": "E PARAMETER UPDATES AND FISHER INFORMATION MATRIX", "text": "" }, { "heading": "E.1 LAYER FISHER INFORMATION MATRIX", "text": "We view a layer's activation as parameterising a conditional distribution from the exponential family, P(y|x; θ) = P(y|z), independent in each coordinate of y|z:\nlog P(y|x; θ) = (y · z − η(z))/φ + c(y, φ)\n(52)\nE[y|x; θ] = f(z) = η′(z), Cov(y|x; θ) = diag(φ f′(z))\n(53)\nwhere f(z) is the activation function of the layer, and φ, η, c define the particular distribution in the exponential family. Note that we take η′(z), f′(z) to denote ∂η/∂z, ∂f/∂z.\nF(θ) is defined as:\nF(θ) = E_{x∼P(x), y∼P(y|x;θ)}[(∂ log P(y|x;θ)/∂θ)(∂ log P(y|x;θ)/∂θ)^T]\n(54)\nAs\n∂ log P(y|x;θ)/∂θ = (∂z/∂θ) ∂/∂z [(y · z − η(z))/φ + c(y, φ)] = (1/φ)(∂z/∂θ)(y − ∂η/∂z)\n(55)\nwe have\nF(θ) = E_{x,y}[(∂z/∂θ)((y − η′(z))/φ)((y − η′(z))/φ)^T (∂z/∂θ)^T]\n(56)\n= E_{x∼P(x)}[(∂z/∂θ) E_{y∼P(y|x;θ)}[((y − η′(z))/φ)((y − η′(z))/φ)^T | x; θ] (∂z/∂θ)^T]\n(57)\n= E_{x∼P(x)}[(∂z/∂θ)(Cov[y|x;θ]/φ²)(∂z/∂θ)^T]\n(58)\n= E_{x∼P(x)}[(∂z/∂θ)(diag(f′(z))/φ)(∂z/∂θ)^T]\n(59)\nwhere we recognise that the covariance matrix is diagonal:\nCov[y|x;θ] = diag(Var(y_1|x;θ), ..., Var(y_{n_e}|x;θ))\nTo analyse the approximate KL divergence resulting from the simple case of perturbing individual parameters of a single-layer DANN, we only need to consider the diagonal entries of the Fisher:\nD_KL[P_θ ‖ P_{θ+δ_θ̃}] ≈ (1/2) δ_θ̃^T E_{x∼P(x)}[(∂z/∂θ)(diag(f′(z))/φ)(∂z/∂θ)^T] δ_θ̃\n(60)\n= (δ²/(2φ)) E_{x∼P(x)}[(∂z/∂θ̃) diag(f′(z)) (∂z/∂θ̃)^T]\n(61)\n= (δ²/(2φ)) E_{x∼P(x)}[(f′(z) ⊙ ∂z/∂θ̃)^T (∂z/∂θ̃)]\n(62)\n= (δ²/(2φ)) Σ_{k=1}^{n_e} E_{x∼P(x)}[f′(z_k)(∂z_k/∂θ̃)²]\n(63)\nwhere δ_θ̃ represents a 1-hot vector corresponding to θ̃, multiplied by a scalar δ."
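As a concrete check of the quadratic approximation in (63), the following Python sketch (ours, for illustration) compares the exact KL divergence under a small perturbation of one weight with the diagonal-Fisher estimate, for a single unit with a Bernoulli (sigmoid) output, i.e. f(z) = σ(z), φ = 1, f′(z) = σ(z)(1 − σ(z)):

import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

d = 5
w = rng.normal(size=d)              # a single output unit: z = w.x
delta, j = 1e-2, 2                  # small perturbation of weight j
w_pert = w.copy(); w_pert[j] += delta

x = rng.normal(size=(200000, d))
p = sigmoid(x @ w)                  # P(y=1|x; theta)
q = sigmoid(x @ w_pert)             # P(y=1|x; theta + delta)

# Exact KL between the Bernoulli conditionals, averaged over x.
kl = np.mean(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q)))

# Approximation (63): delta^2/(2 phi) * E[f'(z) (dz/dw_j)^2], with f'(z) = p(1-p).
kl_approx = 0.5 * delta**2 * np.mean(p * (1 - p) * x[:, j] ** 2)

print(kl, kl_approx)                # the two quantities should closely agree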
}, { "heading": "E.2 DERIVATIVES", "text": "Here we provide derivatives for DANN layer activations with respect to the different parameter groups. The equations for the layer activation can be written\nz = g\nγ (zE −WEIzI) + β\nwhere zE = WEEx zI = WIEx γ = WEI(eα zI) (64)\nNote ∂zk ∂wEEij = ∂zk ∂wEIij = 0 for k 6= i.\n∂zi ∂wEEij = ∂ ∂wEEij (gi γi (zEi − (WEIzI)i) + βi )\n(65)\n= gi γi ∂ ∂wEEij (zEi ) (66) = gi γi xj (67)\n∂zi ∂wEIij = ∂ ∂wEIij (gi γi (zEi − (WEIzI)i) + βi )\n(68)\n= − gi γ2i ∂γi ∂wEIij (zEi − (WEIzI)i)− gi γi zIj (69) = − gi γ2i eαjzIj(z E i − (WEIzI)i)− gi γi zIj (70)\n= −gi γi zIj (eαj γi (zEi − (WEIzI)i) + 1 )\n(71)\n= − d∑ k gi γi wIEjkxk (eαj γi (zEi − (WEIzI)i) + 1 )\n(72)\nIn contrast ∂zk ∂wEIij , ∂zk ∂αj 6= 0 for k 6= i.\n∂zk ∂wIEij = ∂ ∂wIEij (gk γk (zEk − (WEIzI)k) + βk )\n(73)\n= − gk γ2k ∂γk ∂wIEij (zEk − (WEIzI)k)− gk γk wEIki xj (74) = − gk γ2k eαiwEIki xj(z E k − (WEIzI)k)− gk γk wEIki xj (75)\n= −gk γk wEIki xj (eαi γk (zEk − (WEIzI)k) + 1 )\n(76)\n∂zi ∂αj = ∂ ∂αj (gi γi (zEi − (WEIzI)i) + βi )\n(77)\n= − gi γ2i ∂γi ∂αj (zEi − (WEIzI)i) (78) = − gi γ2i wEIij e αjzIj(z E i − (WEIzI)i) (79)\n= − d∑ k gi γi wEIij w IE jkxk eαj γi (zEi − (WEIzI)i) (80)\n∂zi ∂gi = 1 γi (zEi − (WEIzI)i) (81)\n∂zi ∂bi = 1 (82)\n(83)" }, { "heading": "E.3 APPROXIMATE KL DIVERGENCE FOR WEIGHT UPDATES", "text": "If we consider an update to an element ij of WEE the approximate KL divergence is\nDKL [ Pθ ‖Pθ+δ\nWEE ij\n] ≈ δ 2\n2φ ne∑ k E x∼P (x)\n[ f ′(zk)\n( ∂zk ∂wEEij\n)2]\n= δ2\n2φ E x∼P (x)\n[ f ′(zi)(\ngi γi xj) 2\n] (84)\nas ∂zk ∂wEEij = 0 for k 6= i.\nIn contrast, for an update to an element ij of WIE we sum over ne terms, as ∂zk∂wIEij 6= 0 for k 6= i.\nDKL [ Pθ ‖Pθ+δ\nWIE ij\n] ≈ δ 2\n2φ ne∑ k E x∼P (x)\n[ f ′(zk)\n( ∂zk ∂wIEij\n)2]\n= δ2\n2φ ne∑ k E x∼P (x) [ f ′(zk) ( − gk γk wEIki xjaki )2]\n= δ2\n2φ ne∑ k E x∼P (x) [ f ′(zk)( gk γk xj) 2(wEIki aki) 2\n] (85)\nwhere akj = eαj\nγk (zEk − (WEIzI)k) + 1.\nFor an update δWEIij , while ∂zk ∂wEIij = 0 for k 6= i, the derivative contains a zIj term, so there is instead a squared sum over d terms.\nDKL [ Pθ ‖Pθ+δ\nWEI ij\n] ≈ δ 2\n2φ ne∑ k E x∼P (x)\n[ f ′(zk)\n( ∂zk ∂wEIij\n)2]\n= δ2\n2φ E x∼P (x)\n[ f ′(zi)\n( ∂zi ∂wEIij\n)2]\n= δ2\n2φ E x∼P (x)\n[ f ′(zi) ( − gi γi zIjaij )2] = δ2\n2φ E x∼P (x)\n[ f ′(zi)(z I j)\n2( gi γi )2(aij) 2 ] = δ2\n2φ E x∼P (x)\n[ f ′(zi)(\nd∑ n wIEj,nxn) 2( gi γi )2(aij) 2\n]\n= δ2\n2φ E x∼P (x) f ′(zi)( d∑ n (wIEjn) 2(xn) 2 + d∑ n 6=m wIEjnw IE jmxnxm ) ( gi γi )2(aij) 2 = δ2\n2φ d∑ n E x∼P (x) [ f ′(zi)(w IE jn) 2(xn) 2( gi γi )2(aij) 2 ]\n+ δ2\n2φ d∑ n 6=m E x∼P (x) [ f ′(zi)w IE jnw IE jmxnxm( gi γi )2(aij) 2 ] (86)\nFinally, for alpha\nDKL [ Pθ ‖Pθ+δαi ] ≈ δ 2\n2φ ne∑ k E x∼P (x) [ f ′(zk) (∂zk ∂αi )2]\n= δ2\n2φ ne∑ k E x∼P (x) f ′(zk)(− d∑ j gk γk wEIkiw IE ij xj eαi γk (zEk − (WEIzI)k )2 = δ2\n2φ ne∑ k E x∼P (x) f ′(zk)(− d∑ j gk γk wEIkiw IE ij xj(aki − 1) )2 = δ2\n2φ ne∑ k E x∼P (x) f ′(zk)( d∑ j ( gk γk wEIkiw IE ij xj(aki − 1))2\n+ d∑ n 6=m (( gk γk wEIki ) 2wIEinxnw IE imxm(aki − 1)2 ) = δ2\n2φ ne∑ k d∑ j E x∼P (x) [ f ′(zk)( gk γk wEIkiw IE ij xj(aki − 1))2 ]\n+ δ2\n2φ ne∑ k d∑ n 6=m E x∼P (x) [ f ′(zk)( gk γk wEIki ) 2wIEinxnw IE imxm(aki − 1)2 ] (87)" }, { "heading": "F ALGORITHMS", "text": "Here we provide pseudo-code for implementation details. 
Please see the following link for code: https://github.com/linclab/ltlwdp" }, { "heading": "F.1 PARAMETER INITIALIZATION", "text": "Algorithm 1 Parameter initialization for DANNs\nfor each layer ℓ do\n  require n_e, n_i, d\n  W^EE ∼ Exp(λ_E)\n  if n_i = 1:\n    W^IE ← (1/n_e) Σ_{j=1}^{n_e} w^EE_j\n    W^EI ← 1\n  else:\n    W^IE ∼ Exp(λ_E)\n    W^EI ← 1/n_i\n  end if\n  α ← 1 · log(√(2π − 1)/√d)\n  g, β ← 1\nend for\nwhere the number of excitatory output units is n_e, the number of inhibitory units is n_i, the input dimensionality is d, and λ_E = √(d(2π − 1))/√(2π)." }, { "heading": "F.2 PARAMETER UPDATES", "text": "For DANN parameter updates we used the algorithms detailed below. Note that gradients were corrected as detailed in Section 4 (see Algorithm 3).\nAll of the algorithms below are computed using the loss gradients ∇θ of parameter θ in a given model, computed on a minibatch sample.\nAlgorithm 2 Parameter updates\nRequire: learning rate η, updates ∆θ\nfor each layer ℓ do\n  W^EE ← W^EE − η∆W^EE\n  W^IE ← W^IE − η∆W^IE\n  W^EI ← W^EI − η∆W^EI\n  α ← α − η∆α\n  g ← g − η∆g\n  β ← β − η∆β\n  W^EE ← max(W^EE, 0)\n  W^IE ← max(W^IE, 0)\n  W^EI ← max(W^EI, 0)\n  g ← max(g, 0)\nend for" }, { "heading": "F.3 DANN GRADIENT CORRECTION ALGORITHMS", "text": "For the majority of experiments we scaled gradients using the heuristic correction terms derived in Section 4 (and see Appendix E). In this case we applied the following algorithm before Algorithm 2.\nAlgorithm 3 DANN gradient correction\nfor each layer ℓ do\n  require n_e, n_i, d\n  ∆W^EE ← ∇W^EE\n  ∆W^IE ← (1/√n_e) ∇W^IE\n  ∆W^EI ← (1/d) ∇W^EI\n  ∆α ← (1/(d√n_e)) ∇α\n  ∆g ← ∇g\n  ∆β ← ∇β\nend for\nWe also tested that our heuristic correction factors approximated gradient multiplication by the diagonal of F_t^{−1} for each layer (see Figure 2).\nAlgorithm 4 Gradient correction by approximation of diag(F^{−1})\nRequire: learning rate η, Fisher momentum k, Fisher learning rate λ\nfor each batch (x_t, y_t) do\n  Compute p = softmax(z)\n  Compute cross-entropy loss L(p, y_t)\n  for each layer ℓ do\n    ∇θ_ℓ ← ∂L/∂θ_ℓ\n  end for\n  Sample ŷ ∼ Categorical(p)\n  Compute cross-entropy loss L(p, ŷ)\n  for each layer ℓ do\n    F̂ = (1/(λ · |batch|)) Σ_{i=1}^{|batch|} (∂L/∂θ_ℓ)²_i\n    F_t = kF̂ + (1 − k)F_{t−1}\n    F_t^{−1} = 1/F_t\n    F_t* = F_t^{−1} · 1/‖F_t^{W^EE}‖, where F^{W^EE} denotes the elements of F corresponding to W^EE\n    ∆θ_ℓ ← F_t* ∇θ_ℓ\n  end for\nend for\nHere we note that this update can be considered a very rough diagonal approximation to natural gradient descent. In addition, various efficient approximations to natural gradient descent, such as KFAC (Martens & Grosse, 2015), could not be applied due to the structure of DANNs: the mathematical assumptions of KFAC, which were made for feedforward networks whose activations are matrix multiplications, do not apply." }, { "heading": "F.4 LEARNING RATE SCALING AND GRADIENT NORMALISATION", "text": "We also tested whether constraining the gradient norm and scaling the learning rate based on parameter clipping improved DANN performance. For these experiments we applied the following algorithms.\nAlgorithm 5 Gradient normalisation\nfor each layer ℓ do\n  Require: ∇θ_ℓ, M\n  if ‖∇θ_ℓ‖₂ > M:\n    ∆θ_ℓ ← M · ∇θ_ℓ/‖∇θ_ℓ‖₂\n  else:\n    ∆θ_ℓ ← ∇θ_ℓ\n  end if\nend for\nAlgorithm 6 Learning rate scaling\nfor each layer ℓ do\n  Require: ∇θ_ℓ, M, ξ\n  i ← 1; c ← 0\n  while c < M:\n    η ← ξ^i\n    c ← CosineSimilarity(max(0, θ_ℓ − η∇θ_ℓ), θ_ℓ − η∇θ_ℓ)\n    i ← i + 1\n  end while\nend for\nThe learning rate scaling method temporarily reduces the learning rate whenever parameter clipping would push the cosine of the angle between the gradient update and the actual (clipped) update below a certain threshold.
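As a runnable illustration of Algorithm 6's inner loop, the following NumPy sketch (ours; the threshold M, base learning rate, and decay base ξ are assumed values) temporarily shrinks the learning rate until the clipped update stays aligned with the unclipped gradient step:

import numpy as np

def scaled_update(theta, grad, base_lr=0.1, M=0.99, xi=0.5, max_iter=50):
    # Shrink the learning rate until the rectified (clipped) update keeps
    # cosine similarity at least M with the unclipped step (cf. Algorithm 6).
    lr = base_lr
    for _ in range(max_iter):
        step = theta - lr * grad
        clipped = np.maximum(0.0, step)   # sign constraint for positive parameters
        cos = clipped @ step / (np.linalg.norm(clipped) * np.linalg.norm(step) + 1e-12)
        if cos >= M:
            break
        lr *= xi                          # eta <- xi^i
    return np.maximum(0.0, theta - lr * grad)

theta = np.array([0.05, 0.2, 0.01])
grad = np.array([1.0, -0.5, 2.0])
print(scaled_update(theta, grad))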
When clipped updates would otherwise deviate from the gradient and cause optimization problems, learning rate scaling is a principled way of continuing to follow the direction of the gradient.\nWe also note that this technique can be applied generally to any model whose constraints prevent updates from freely following gradient descent. If the constrained parameter space is an open subset of Euclidean space, and we allow the learning rate to be arbitrarily small (Algorithm 6 with lim_{i→∞} ξ^i = 0), updates will always follow the direction of the gradient." } ]
2021
LEARNING TO LIVE WITH DALE'S PRINCIPLE: ANNS WITH SEPARATE EXCITATORY AND INHIBITORY UNITS
SP:765c8b969d795ab629aa74bc20e8f19558a4e165
[ "This paper proposes learning embeddings for sketch or natural images by training a network that takes in a raster image and outputs and collection of sketch strokes. The architecture consists of a standard CNN encoder followed by an RNN decoder. The authors evaluate their learned embeddings on few-shot classification tasks and explore the the quality of the latent space. They demonstrate that they outperform unsupervised few-shot classification approaches and seem to obtain a latent space that is more aware of long-range structure than those from methods that operate purely in raster space." ]
Sketch drawings are an intuitive visual domain that appeals to human instinct. Previous work has shown that recurrent neural networks are capable of producing sketch drawings of a single or few classes at a time. In this work we investigate representations developed by training a generative model to produce sketches from pixel images across many classes in a sketch domain. We find that the embeddings learned by this sketching model are extremely informative for visual tasks and infer a unique visual understanding. We then use them to exceed state-of-the-art performance in unsupervised few-shot classification on the Omniglot and miniImageNet benchmarks. We also leverage the generative capacity of our model to produce high quality sketches of novel classes based on just a single example.
[ { "affiliations": [], "name": "IMITATING DRAWINGS" } ]
[ { "authors": [ "Antreas Antoniou", "Amos J. Storkey" ], "title": "Assume, augment and learn: Unsupervised few-shot meta-learning via random labels and data augmentation", "venue": null, "year": 1902 }, { "authors": [ "Pablo Arbelaez", "Michael Maire", "Charless C. Fowlkes", "Jitendra Malik" ], "title": "Contour detection and hierarchical image segmentation", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2011 }, { "authors": [ "David Berthelot", "Colin Raffel", "Aurko Roy", "Ian J. Goodfellow" ], "title": "Understanding and improving interpolation in autoencoders via an adversarial regularizer", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ayan Kumar Bhunia", "Yongxin Yang", "Timothy M. Hospedales", "Tao Xiang", "Yi-Zhe Song" ], "title": "Sketch less for more: On-the-fly fine-grained sketch-based image retrieval", "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Nan Cao", "Xin Yan", "Yang Shi", "Chaoran Chen" ], "title": "AI-Sketcher: A deep generative model for producing high-quality sketches", "venue": "In The Thirty-Third AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In 15th European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Yajing Chen", "Shikui Tu", "Yuqi Yi", "Lei Xu" ], "title": "Sketch-pix2seq: a model to generate sketches of multiple categories", "venue": null, "year": 2017 }, { "authors": [ "Peter Dayan", "Geoffrey E Hinton", "Radford M Neal", "Richard S Zemel" ], "title": "The helmholtz machine", "venue": "Neural computation,", "year": 1995 }, { "authors": [ "Sounak Dey", "Pau Riba", "Anjan Dutta", "Josep Llados", "Yi-Zhe Song" ], "title": "Doodle to search: Practical zero-shot sketch-based image retrieval", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Carl Doersch", "Abhinav Gupta", "Alexei A. Efros" ], "title": "Unsupervised visual representation learning by context prediction", "venue": "In IEEE International Conference on Computer Vision, ICCV,", "year": 2015 }, { "authors": [ "Jeff Donahue", "Philipp Krähenbühl", "Trevor Darrell" ], "title": "Adversarial feature learning", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "David H. Douglas", "Thomas K. Peucker" ], "title": "Algorithms for the reduction of the number of points required to represent a digitized line or its caricature", "venue": null, "year": 1973 }, { "authors": [ "Anjan Dutta", "Zeynep Akata" ], "title": "Semantically tied paired cycle consistency for zero-shot sketch-based image retrieval", "venue": null, "year": 1903 }, { "authors": [ "Martin Ester", "Hans-Peter Kriegel", "Jörg Sander", "Xiaowei Xu" ], "title": "A density-based algorithm for discovering clusters in large spatial databases with noise", "venue": "In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, KDD,", "year": 1996 }, { "authors": [ "Kawin Ethayarajh", "D. 
Duvenaud", "Graeme Hirst" ], "title": "Towards understanding linear word analogies", "venue": "In ACL,", "year": 2019 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Dileep George", "Wolfgang Lehrach", "Ken Kansky", "Miguel Lázaro-Gredilla", "Christopher Laan", "Bhaskara Marthi", "Xinghua Lou", "Zhaoshi Meng", "Yi Liu", "Huayan Wang", "Alex Lavin", "D. Scott Phoenix" ], "title": "A generative vision model that trains with high data efficiency and breaks text-based captchas", "venue": "doi: 10.1126/science.aag2612", "year": 2017 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Ian J. Goodfellow" ], "title": "NIPS 2016 tutorial", "venue": "Generative adversarial networks. CoRR,", "year": 2017 }, { "authors": [ "Ian J. Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron C. Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Alex Graves" ], "title": "Generating sequences with recurrent neural networks", "venue": "CoRR, abs/1308.0850,", "year": 2013 }, { "authors": [ "Karol Gregor", "Ivo Danihelka", "Alex Graves", "Danilo Jimenez Rezende", "Daan Wierstra" ], "title": "DRAW: A recurrent neural network for image generation", "venue": "In Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "David Ha", "Douglas Eck" ], "title": "A neural representation of sketch drawings", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Luke B. Hewitt", "Maxwell I. Nye", "Andreea Gane", "Tommi S. Jaakkola", "Joshua B. Tenenbaum" ], "title": "The variational homoencoder: Learning to learn high capacity generative models from few examples", "venue": "In Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence,", "year": 2018 }, { "authors": [ "Irina Higgins", "Loïc Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Geoffrey E. Hinton", "Vinod Nair" ], "title": "Inferring motor programs from images of handwritten digits", "venue": "In Advances in Neural Information Processing Systems", "year": 2005 }, { "authors": [ "Kyle Hsu", "Sergey Levine", "Chelsea Finn" ], "title": "Unsupervised learning via meta-learning", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate", "venue": "shift. 
CoRR,", "year": 2015 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A. Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Siavash Khodadadeh", "Ladislau Bölöni", "Mubarak Shah" ], "title": "Unsupervised meta-learning for few-shot image classification", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Brenden M. Lake", "Ruslan Salakhutdinov", "Joshua B. Tenenbaum" ], "title": "Human-level concept learning through probabilistic program induction", "venue": "Science, 350(6266):1332–1338,", "year": 2015 }, { "authors": [ "Brenden M Lake", "Ruslan Salakhutdinov", "Joshua B Tenenbaum" ], "title": "The omniglot challenge: a 3-year progress report", "venue": "Current Opinion in Behavioral Sciences,", "year": 2019 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Mengtian Li", "Zhe L. Lin", "Radomír Mech", "Ersin Yumer", "Deva Ramanan" ], "title": "Photo-sketching: Inferring contour drawings from images", "venue": "In IEEE Winter Conference on Applications of Computer Vision,", "year": 2019 }, { "authors": [ "M. Liwicki", "H. Bunke" ], "title": "Iam-ondb - an on-line english sentence database acquired from handwritten text on a whiteboard", "venue": "In Eighth International Conference on Document Analysis and Recognition,", "year": 2005 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Leland McInnes", "John Healy", "James Melville" ], "title": "Umap: Uniform manifold approximation and projection for dimension reduction", "venue": "arXiv preprint arXiv:1802.03426,", "year": 2018 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In 14th European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Boris N. Oreshkin", "Pau Rodríguez López", "Alexandre Lacoste" ], "title": "TADAM: task dependent adaptive metric for improved few-shot learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Anubha Pandey", "Ashish Mishra", "Vinay Kumar Verma", "Anurag Mittal", "Hema A. Murthy" ], "title": "Stacked adversarial network for zero-shot sketch based image retrieval", "venue": "In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, WACV,", "year": 2020 }, { "authors": [ "Marc’Aurelio Ranzato", "Christopher S. 
Poultney", "Sumit Chopra", "Yann LeCun" ], "title": "Efficient learning of sparse representations with an energy-based model", "venue": "In Advances in Neural Information Processing Systems", "year": 2006 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Scott E. Reed", "Yutian Chen", "Thomas Paine", "Aäron van den Oord", "S.M. Ali Eslami", "Danilo Jimenez Rezende", "Oriol Vinyals", "Nando de Freitas" ], "title": "Few-shot autoregressive density estimation: Towards learning to learn distributions", "venue": null, "year": 2017 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Ivo Danihelka", "Karol Gregor", "Daan Wierstra" ], "title": "Oneshot generalization in deep generative models", "venue": "In Proceedings of the 33nd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Leo Sampaio Ferraz Ribeiro", "Tu Bui", "John Collomosse", "Moacir Ponti" ], "title": "Sketchformer: Transformerbased representation for sketched structure", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Tim Salimans", "Ian J. Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Patsorn Sangkloy", "Nathan Burnell", "Cusuh Ham", "James Hays" ], "title": "The sketchy database: learning to retrieve badly drawn bunnies", "venue": "ACM Trans. Graph.,", "year": 2016 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard S. Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Jifei Song", "Kaiyue Pang", "Yi-Zhe Song", "Tao Xiang", "Timothy M. Hospedales" ], "title": "Learning to sketch with shortcut cycle consistency", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Aäron van den Oord", "Nal Kalchbrenner", "Lasse Espeholt", "Koray Kavukcuoglu", "Oriol Vinyals", "Alex Graves" ], "title": "Conditional image generation with pixelcnn decoders", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Isabelle Lajoie", "Yoshua Bengio", "Pierre-Antoine Manzagol" ], "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", "venue": "J. Mach. Learn. Res.,", "year": 2010 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Tim Lillicrap", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Saining Xie", "Zhuowen Tu" ], "title": "Holistically-nested edge detection", "venue": "Int. J. Comput. Vis.,", "year": 2017 }, { "authors": [ "Qian Yu", "Feng Liu", "Yi-Zhe Song", "Tao Xiang", "Timothy M. 
Hospedales", "Chen Change Loy" ], "title": "Sketch me that shoe", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Liliang Zhang", "Liang Lin", "Xian Wu", "Shengyong Ding", "Lei Zhang" ], "title": "End-to-end photo-sketch generation via fully convolutional representation learning", "venue": "In Proceedings of the 5th ACM on International Conference on Multimedia Retrieval,", "year": 2015 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A. Efros" ], "title": "Colorful image colorization", "venue": "In 14th European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Ha", "Eck" ], "title": "2018) with no additional changes to produce our stroke labels y. When rasterizing for our input x, we scale, center the strokes then pad the image with 10% of the resolution in that dimension rounded to the nearest integer. The following list of classes were used for training: The Eiffel", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Upon encountering a novel concept, such as a six-legged turtle, humans can quickly generalize this concept by composing a mental picture. The ability to generate drawings greatly facilitates communicating new ideas. This dates back to the advent of writing, as many ancient written languages are based on logograms, such as Chinese hanzi and Egyptian hieroglyphs, where each character is essentially a sketch of the object it represents. We often see complex visual concepts summarized by a few simple strokes.\nInspired by the human ability to draw, recent research has explored the potential to generate sketches using a wide variety of machine learning models, ranging from hierarchical Bayesian models (Lake et al., 2015), to more recent deep autoregressive models (Gregor et al., 2015; Ha & Eck, 2018; Chen et al., 2017) and generative adversarial nets (GANs) (Li et al., 2019). It is a natural question to ask whether we can obtain useful intermediate representations from models that produce sketches in the output space, as has been shown by other generative models (Ranzato et al., 2006; Kingma & Welling, 2014; Goodfellow et al., 2014; Donahue et al., 2017; Doersch et al., 2015). Unfortunately, a hierarchical Bayesian model suffers from prolonged inference time, while other current sketch models mostly focus on producing drawings in a closed set setting with a few classes (Ha & Eck, 2018; Chen et al., 2017), or on improving log likelihood at the pixel level (Rezende et al., 2016). Leveraging the learned representation from these drawing models remains a rather unexplored topic.\nIn this paper, we pose the following question: Can we learn a generalized embedding function that captures salient and compositional features by directly imitating human sketches? The answer is affirmative. In our experiments we develop SketchEmbedNet, an RNN-based sketch model trained to map grayscale and natural image pixels to the sketch domain. It is trained on hundreds of classes without the use of class labels to learn a robust drawing model that can sketch diverse and unseen inputs. We demonstrate salience by achieving state-of-the-art performance on the Omniglot few-shot classification benchmark and visual recognizability in one-shot generations. Then we explore how the embeddings capture image components and their spatial relationships to explore image space compositionality and also show a surprising property of conceptual composition.\nWe then push the boundary further by applying our sketch model to natural images—to our knowledge, we are the first to extend stroke-based autoregressive models to produce drawings of open domain natural images. We train our model with adapted SVG images from the Sketchy dataset (Sangkloy et al., 2016) and then evaluate the embedding quality directly on unseen classes in the mini-ImageNet task for few-shot classification (Vinyals et al., 2016). Our approach is competitive with existing unsupervised few-shot learning methods (Hsu et al., 2019; Khodadadeh et al., 2019; Antoniou & Storkey, 2019) on this natural image benchmark. In both the sketch and natural image domain, we show that by learning to draw, our methods generalize well even across different datasets and classes." 
}, { "heading": "2 RELATED WORK", "text": "In this section we review relevant literature including generating sketch-like images, unsupervised representation learning, unsupervised few-shot classification and sketch-based image retrieval (SBIR).\nAutoregressive drawing models: Graves (2013) use an LSTM to directly output the pen coordinates to imitate handwriting sequences. SketchRNN (Ha & Eck, 2018) builds on this by applying it to general sketches beyond characters. Song et al. (2018); Cao et al. (2019); Ribeiro et al. (2020) all extend SketchRNN through architectural improvements. Chen et al. (2017) change inputs to be pixel images. This and the previous 3 works consider multi-class sketching, but none handle more than 20 classes. Autoregressive models also generate images directly in the pixel domain. DRAW (Gregor et al., 2015) uses recurrent attention to plot pixels; Rezende et al. (2016) extends this to one-shot generation and PixelCNN (van den Oord et al., 2016) generates image pixels sequentially.\nImage processing methods & GANs: Other works produce sketch-like images based on style transfer or low-level image processing techniques. Classic methods are based on edge detection and image segmentation (Arbelaez et al., 2011; Xie & Tu, 2017). Zhang et al. (2015) use a CNN to directly produce sketch-like pixels for face images. Photo-sketch and pix2pix (Li et al., 2019; Isola et al., 2017) propose a conditional GAN to generate images across different style domains. Image processing based methods do not acquire high-level image understanding, as all the operations are in terms of low-level filtering; none of the GAN sketching methods are designed to mimic human drawings on open domain natural images.\nUnsupervised representation learning: In the sketch image domain, our method is similar to the large category of generative models which learn unsupervised representations by the principle of analysis-by-synthesis. Work by Hinton & Nair (2005) operates in a sketch domain and learns to draw by synthesizing an interpretable motor program. Bayesian Program Learning (Lake et al., 2015) draws through exact inference of common strokes but learning and inference are computationally challenging. As such, a variety of deep generative models aim to perform approximate Bayesian inference by using an encoder structure that directly predicts the embedding, e.g., deep autoencoders (Vincent et al., 2010), Helmholtz Machine (Dayan et al., 1995), variational autoencoder (VAE) (Kingma & Welling, 2014), BiGAN (Donahue et al., 2017), etc. Our method is also related to the literature of self-supervised representation learning (Doersch et al., 2015; Noroozi & Favaro, 2016; Gidaris et al., 2018; Zhang et al., 2016), as sketch strokes are part of the input data itself. In few-shot learning (Vinyals et al., 2016; Snell et al., 2017; Finn et al., 2017), recent work has explored unsupervised meta-training. CACTUs, AAL and UMTRA (Hsu et al., 2019; Antoniou & Storkey, 2019; Khodadadeh et al., 2019) all operate by generating pseudo-labels for training.\nSketch-based image retrieval (SBIR): In SBIR, a model is provided a sketch-drawing and retrieves a photo of the same class. The area is split into fine-grained (FG-SBIR) (Yu et al., 2016; Sangkloy et al., 2016; Bhunia et al., 2020) and a zero-shot setting (ZS-SBIR) (Dutta & Akata, 2019; Pandey et al., 2020; Dey et al., 2019). FG-SBIR considers minute details while ZS-SBIR learns high-level cross-domain semantics and a joint latent space to perform retrieval." 
}, { "heading": "3 LEARNING TO IMITATE DRAWINGS", "text": "Here we describe learning to draw through sketch imitation. Our architecture is a generative encoderdecoder model with a CNN encoder for pixel images and an RNN decoder to output vector sketches as shown in Figure 1. Unlike other drawing models that only train on a single or few classes (Ha & Eck, 2018; Chen et al., 2017), SketchEmbedNet is not limited by class inputs and can sketch a wide variety of images. We also introduce a differentiable rasterization function for computing an additional pixel-based training loss.\nInput & output representation Unlike SketchRNN which encodes drawing sequences, we learn an image embedding by mapping pixels to sketches, similar to Chen et al. (2017). Training data for this task (adopted from Ha & Eck (2018)) consists of a tuple (x,y), where x ∈ RH×W×C is the input image and y ∈ RT×5 is the stroke target. T is the maximum sequence length of the stroke data y, and each stroke yt consists of 5 elements, (∆x,∆y, s1, s2, s3). The first 2 elements are horizontal and vertical displacements on the drawing canvas from the endpoint of the previous stroke. The latter 3 elements are mutually exclusive pen states: s1 indicates the pen is on paper for the next stroke, s2 indicates the pen is lifted, and s3 indicates the sketch sequence has ended. y0 is initialized with (0, 0, 1, 0, 0) to start the generative process. Note that no class information is available while training.\nSketchEmbedding as a compositional encoding of images We use a CNN to encode the input image x and obtain the latent space representation z, as shown in Figure 1. To model intra-class variance, z is a Gaussian random variable parameterized by CNN outputs, similar to a VAE (Kingma & Welling, 2014). Throughout this paper, we refer to z as the SketchEmbedding. In typical image representations the embedding is trained to classify object classes, or to reconstruct the input pixels. Here, since the SketchEmbedding is fed into an RNN decoder to produce a sequence of drawing actions, z is additionally encouraged to have a compositional understanding of the object structure, instead of just an unstructured set of pixel features. For example when drawing the legs of a turtle, the model must explicitly generate each leg instance. While pixel-based models suffer from blurriness and in generating the image at once, does not distinguish between individual components such as the legs, body and head. The loss of this component information by pixel models has been observed in GAN literature (Goodfellow, 2017) which we propose is avoided by our sketching task.\nTo accommodate the increased training data complexity by including hundreds of classes, we also upscale the size of our model in comparison to work by Chen et al. (2017); Ha & Eck (2018); Song et al. (2018). The backbone is either a 4-layer CNN (Conv4) (Vinyals et al., 2016) for consistent comparisons in the few-shot setting or a ResNet12 (Oreshkin et al., 2018) which produces better drawing results. In comparison, Chen et al. (2017) only use 2D convolution with a maximum of 8 filters.\nRNN decoder The RNN decoder used in SketchEmbedNet is the same as in SketchRNN (Ha & Eck, 2018). The decoder outputs a mixture density which represents the stroke distribution at each timestep. Specifically, the stroke distribution is a mixture of some hyperparameter M bivariate Gaussians denoting spatial offsets as well as the probability of the three pen states s1−3. 
The spatial offsets ∆ = (∆x, ∆y) are sampled from the mixture of Gaussians, described by: (1) the normalized mixture weights π_j; (2) the mixture means µ_j = (µ_x, µ_y)_j; and (3) the covariance matrices Σ_j. We further reparameterize each Σ_j with its standard deviations σ_j = (σ_x, σ_y)_j and correlation coefficient ρ_{xy,j}. Thus, the stroke offset distribution is p(∆) = Σ_{j=1}^M π_j N(∆ | µ_j, Σ_j).\nThe RNN is implemented using a HyperLSTM (Ha et al., 2017); LSTM weights are generated at each timestep by a smaller recurrent “hypernetwork” to improve training stability. Generation is autoregressive, using z ∈ R^D, concatenated with the stroke from the previous timestep y_{t−1}, to form the input to the LSTM. Stroke y_{t−1} is the ground-truth supervision at train time (teacher forcing), or a sample y′_{t−1} from the mixture distribution output by the model at timestep t − 1." }, { "heading": "3.1 LEARNING", "text": "We train the drawing model in an end-to-end fashion by jointly optimizing three losses: a pen loss L_pen for learning pen states, a stroke loss L_stroke for learning pen offsets, and our proposed pixel loss L_pixel for matching the visual similarity of the predicted and the target sketch:\nL = L_pen + (1 − α)L_stroke + αL_pixel, (1)\nwhere α is a loss weighting hyperparameter. Both L_pen and L_stroke were in SketchRNN, while L_pixel is our novel contribution to stroke-based generative models. Unlike SketchRNN, we do not impose a prior using KL divergence, as we are not interested in unconditional sampling and doing so worsens the quantitative results reported in later sections.\nPen loss The pen-state predictions {s′₁, s′₂, s′₃} are optimized as a simple 3-way classification with the softmax cross-entropy loss, L_pen = −(1/T) Σ_{t=1}^T Σ_{m=1}^3 s_{m,t} log(s′_{m,t}).\nStroke loss The stroke loss maximizes the log-likelihood of the spatial offsets of each ground-truth stroke ∆_t given the mixture density distribution p_t at each timestep: L_stroke = −(1/T) Σ_{t=1}^T log p_t(∆_t).\nPixel loss While pixel-level reconstruction objectives are common in generative models (Kingma & Welling, 2014; Vincent et al., 2010; Gregor et al., 2015), we introduce a pixel-based objective for vector sketch generation. After decoding, a differentiable rasterization function f_raster is used to map the sketch into a pixel image. f_raster transforms a stroke sequence y into a set of 2D line segments (l₀, l₁), (l₁, l₂), ..., (l_{T−1}, l_T), where l_t = Σ_{τ=0}^t ∆_τ. It renders by fixing canvas dimensions, scaling and centering strokes, and then determining pixel intensity based on the L2 distance from each pixel to the lines in the drawing. Further details on f_raster can be found in Appendix A. f_raster is applied to both the prediction y′ and the ground truth y to produce two pixel images. Gaussian blur g_blur(·) is used to reduce strictness before computing the binary cross-entropy loss L_pixel:\nI = g_blur(f_raster(y)), I′ = g_blur(f_raster(y′)), L_pixel = −(1/(HW)) Σ_{i=1}^H Σ_{j=1}^W I_{ij} log(I′_{ij}). (2)\nCurriculum training schedule We find that α (in Equation 1) is an important hyperparameter that impacts both the learned embedding space and the generation quality of SketchEmbedNet. A curriculum training schedule is used, increasing α to prioritize L_pixel relative to L_stroke as training progresses; this makes intuitive sense, as a single drawing can be produced by many different stroke sequences but learning to draw in a fixed manner is easier. While L_stroke promotes reproducing a specific drawing sequence, L_pixel only requires that the generated drawing visually matches the image.
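For concreteness, a compact NumPy sketch (ours; shapes and names are assumptions, and the real model evaluates these losses over full batches and sequences) of how L_stroke and L_pen can be computed at a single timestep given the decoder's mixture parameters:

import numpy as np

def stroke_nll(dxdy, pi, mu, sigma, rho):
    # NLL of a ground-truth offset under a mixture of M bivariate Gaussians
    # (pi: (M,), mu: (M,2), sigma: (M,2), rho: (M,)).
    dx = (dxdy[0] - mu[:, 0]) / sigma[:, 0]
    dy = (dxdy[1] - mu[:, 1]) / sigma[:, 1]
    z = dx**2 + dy**2 - 2 * rho * dx * dy
    norm = 2 * np.pi * sigma[:, 0] * sigma[:, 1] * np.sqrt(1 - rho**2)
    density = np.exp(-z / (2 * (1 - rho**2))) / norm
    return -np.log(np.sum(pi * density) + 1e-12)

def pen_loss(s_true, s_pred):
    # Cross-entropy over the three mutually exclusive pen states.
    return -np.sum(s_true * np.log(s_pred + 1e-12))

M = 3
pi = np.full(M, 1.0 / M)
mu = np.zeros((M, 2)); sigma = np.ones((M, 2)); rho = np.zeros(M)
print(stroke_nll(np.array([0.1, -0.2]), pi, mu, sigma, rho))
print(pen_loss(np.array([1, 0, 0]), np.array([0.7, 0.2, 0.1])))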
Like a human, the model should learn to follow one drawing style (a la paint-by-numbers) before learning to draw freely." }, { "heading": "4 DRAWING IMITATION EXPERIMENTS", "text": "In this section, we introduce our experiments on training SketchEmbedNet using two sketching datasets. The first is based on pure stroke-based drawings, and the second consists of natural image and drawing pairs.\nQuickdraw: Stroke-based image sketching The Quickdraw (Jongejan et al., 2016) dataset consists of 345 classes, each with 70,000 examples, produced by human players participating in the game “Quick, Draw!”. Examples from the Quickdraw dataset are shown in Figure 2b. The input image x is a direct rasterization of the drawing data y. 300 of the 345 classes are randomly selected for training; x is rasterized to a resolution of 28 × 28 and stroke labels y are padded up to length T = 64. Any drawing samples exceeding this length were discarded. Note that this is an unsupervised representation learning approach, as no class information is used by the system. Data processing procedures and class splits are in Appendix G.\nSketchy: Open domain natural image sketching We further extend our stroke-based generation model to open domain natural images. Here, the input is an RGB photo, and the output is a human drawing which does not align with the photo precisely and also does not match the low-level image details. This is a novel setting, as prior efforts by Ha & Eck (2018); Chen et al. (2017); Song et al. (2018) have only applied their sketch RNN models to the Quickdraw dataset or to natural images with only two object classes (shoe/chair) and scrubbed backgrounds (Yu et al., 2016). Learning to sketch open domain natural images is very challenging, as it requires the model to identify the subject and filter out unnecessary details not present in the sketch. At test time, we further challenge our method by evaluating on unseen data distributions, necessitating generalization over natural images.\nFor this task we use the Sketchy dataset (Sangkloy et al., 2016), which consists of ImageNet images paired with vector sketches, for a total of 56k examples after processing. Sketches are stored as SVGs with timestamps preserving their original drawing sequence, which we adapt by sampling paths in this order. Images are also centered, padded and resized to resolution 84 × 84 (see Figure 2a). We fix the maximum sequence length to T = 100 and use all 125 categories, but remove classes that have overlapping child synsets with the test classes of mini-ImageNet (Vinyals et al., 2016). This enables testing on mini-ImageNet without any alterations to the benchmark. Once again, this is an unsupervised learning formulation." }, { "heading": "4.1 RESULTS AND VISUALIZATIONS", "text": "Figure 3 shows drawings conditioned on sketch image inputs. There is little noticeable drop in quality when we sample sketches from unseen classes compared to those the model has seen before. Figure 4 shows examples of sketches generated from natural images. While they are not fine-grained renditions, these sketches clearly demonstrate SketchEmbedNet's ability to capture key components of seen and unseen classes. The model effectively isolates the subject in each natural image and captures the circular and square shapes in the cakes and storefronts respectively.
Even with the challenging lion images, it identifies the silhouette of the lying lion despite low contrast, and appropriately localizes the one on the mountainside.\nUnlike pixel-based auto-encoder models, our sketches do not follow the exact pose of the original strokes, but rather capture a general notion of component groupings. In the basketball example of Figure 3, the lines are not a good pixel-wise match to those in the original image, yet they are placed in sensible relative positions. Weaker examples are presented in the last rows of Figures 3 and 4; regardless, even poorer examples still capture some structural aspects of the original image. Implementation details can be found in Appendix B.\nIn later sections we explore the uses of SketchEmbeddings, keeping the trained models fixed for all downstream tasks." }, { "heading": "5 FEW-SHOT CLASSIFICATION USING SKETCHEMBEDDING", "text": "We would like to assess the benefits of learning to draw by performing few-shot classification with our learned embedding space. Examining performance on discriminative tasks reveals that learning to imitate sketches allows the embeddings to capture salient information about novel object classes. Below we describe our few-shot classification procedure and summarize results on the Omniglot (Lake et al., 2015) and mini-ImageNet (Vinyals et al., 2016) benchmarks.\nComparison to unsupervised few-shot classification In unsupervised few-shot classification, a model is not provided with any class labels during meta-training; only at meta-test time is it given a few labeled examples (\"shots\") of the novel classes. While our model is provided a \"target\" (a sequence of strokes) during training, it is not given any class information. Therefore, we propose that the presented sketch imitation training, though it uses sketch sequences, is comparable to other class-label-free representation learning approaches (Berthelot et al., 2019; Donahue et al., 2017; Caron et al., 2018), and that the learned SketchEmbeddings can be applied to unsupervised few-shot classification methods.\nIn our experiments, we compare to previous unsupervised few-shot learning approaches: CACTUs (Hsu et al., 2019), AAL (Antoniou & Storkey, 2019), and UMTRA (Khodadadeh et al., 2019). These methods create pseudo-labels during meta-training using either clustering or data augmentation. As additional baselines, a Conv-VAE (Kingma & Welling, 2014) and a random CNN are also included, both using the same Conv4 backbone.\nFew-shot experimental setup The CNN encoder of SketchEmbedNet is used as an embedding function, combined with a linear classification head, to perform few-shot classification. The embedding is made deterministic by taking the mean of the random normal latent variable z and discarding the variance parameter from the encoder. Otherwise, the conventional episodic setup for few-shot classification is used; each episode consists of a labeled \"support\" set of N × K (N-way K-shot) embeddings and an unlabeled \"query\" set. The linear classification head is trained on the labeled support set and evaluated on the query set." }, { "heading": "5.1 FEW-SHOT CLASSIFICATION ON OMNIGLOT", "text": "The Omniglot (Lake et al., 2015) dataset contains 50 alphabets and 1623 unique character types, each with 20 examples, and is presented as both a greyscale image and a stroke drawing. We use the same train-test split as Vinyals et al. (2016), along with randomly sampled episodes.
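A minimal sketch of this episodic evaluation, assuming a frozen encoder function that returns the deterministic SketchEmbeddings and using scikit-learn's logistic regression as the linear head (our choice of head implementation, for illustration only):

import numpy as np
from sklearn.linear_model import LogisticRegression

def run_episode(encode, support_x, support_y, query_x, query_y):
    # Fit a linear head on the N*K support embeddings, report query accuracy.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(encode(support_x), support_y)
    return np.mean(clf.predict(encode(query_x)) == query_y)

# Toy stand-in for a 5-way 1-shot episode with 256-d embeddings.
encode = lambda x: x.reshape(len(x), -1)[:, :256]
sx, qx = np.random.randn(5, 256), np.random.randn(75, 256)
sy, qy = np.arange(5), np.repeat(np.arange(5), 15)
print(run_episode(encode, sx, sy, qx, qy))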
Experiments using the more challenging Lake split, where episodes are sampled within an alphabet as proposed by Lake et al. (2015), are in Appendix E; random seed experiments are in Appendix F.\nTo ensure a fair comparison with other few-shot classification models, we use the same convolutional encoder (Conv4) as Vinyals et al. (2016). Results from training only on Omniglot (Lake et al., 2015) are also presented, to demonstrate effectiveness without the larger Quickdraw (Jongejan et al., 2016) dataset. No significant improvements were observed using the deeper ResNet12 (Oreshkin et al., 2018) architecture; additional results are in Appendix I.\nAll of our methods outperform the previous state-of-the-art on the unsupervised Omniglot benchmark (Table 1). The Quickdraw-trained model surpasses supervised MAML (Finn et al., 2017), and is on par with a supervised ProtoNet (Snell et al., 2017) model, especially in the 5-shot settings. Both baselines, a Conv-VAE and a random CNN, perform well compared to other unsupervised methods." }, { "heading": "5.2 FEW-SHOT CLASSIFICATION ON MINI-IMAGENET", "text": "We extend our investigation and assess SketchEmbeddings for the classification of natural images on the mini-ImageNet benchmark (Vinyals et al., 2016). The same CNN encoder model from the natural image sketching task is used, to match the visual domain of the examples we hope to classify.\nThe mini-ImageNet (Vinyals et al., 2016) dataset consists of 100 classes, each with 600 examples. The setup proposed by Ravi & Larochelle (2017) is used, where the classes are split 64-16-20 for training, validation and test. As noted earlier, any examples in the Sketchy dataset that are also present in the mini-ImageNet test set were removed by filtering the synset (and child synset) IDs, ensuring train and test classes are disjoint.\nClassification results on mini-ImageNet are shown in Table 2. Natural image classification is a far more challenging problem. Learning to reconstruct the pixels of an image actually worsens our results; the trained Conv-VAE is outperformed by the VAE with random weights. However, sketch reconstruction is still a valuable task; our models are competitive, and our best model outperforms previous state-of-the-art unsupervised methods in few-shot settings. A full table is in Appendix J; seeding results are in Appendix F." }, { "heading": "5.3 SKETCHING TO LEARN CLASS-IDENTIFIABLE INFORMATION", "text": "Existing sketch works have focused on generating better drawings or unifying sketches with other image domains. We present a new paradigm: using sketching as an auxiliary task to learn visual content. Only by training a drawing model that can sketch general image inputs are we able to transfer the learned understanding to new data distributions. By considering the stroke distribution of the Quickdraw dataset, we are able to interpret image inputs from the separate Omniglot dataset and tackle the few-shot classification task with surprising accuracy.\nWhile the natural image sketching task is challenging and does not always produce high-fidelity results, it still learns useful visual information. By training on the Sketchy dataset, we learn how to draw other data distributions for which no sequential stroke data exists.
Then, by knowing how to sketch this mini-ImageNet data, we are able to produce distinguishable embeddings that enable competitive few-shot classification performance.\nVarying weighting of pixel loss For both settings we sweep the pixel loss coefficient α_max to ablate its impact on model performance on the Omniglot task (Table 3). There is a substantial improvement in few-shot classification when α_max is non-zero. α_max = 0.50 achieves the best results, and the trend goes downward as α_max approaches 1.0, i.e. as the weighting for L_stroke goes to 0.0. This is reasonable, as the training of SketchEmbedNet is more stable under the guidance of ground-truth strokes." }, { "heading": "6 PROPERTIES OF SKETCHEMBEDDINGS", "text": "We hypothesize that reproducing a sketch drawing, rather than taking a pixel-based approach, requires preserving more structural information, due to the sequential RNN generation. By learning in this manner, SketchEmbeddings are aware of spatial properties and the composition of elements in image space. We examine this compositionality through several comparisons of SketchEmbeddings with those generated by a Conv-VAE.\nComponent arrangements We construct examples that contain the same set of objects but in different arrangements, to test sensitivity to component arrangement and composition in image space. We then embed these examples with both generative models and project into 2D space using UMAP (McInnes et al., 2018) to visualize their organization. In the first 2 panels of Figure 5-A, we see that the SketchEmbeddings are easily separated in unsupervised clustering. The rightmost panel of Figure 5-A exhibits non-synthetic classes with duplicated shapes: snowmen with circles and televisions with squares. With these results, we demonstrate the greater component-level awareness of SketchEmbeddings. The 4 rearranged shapes and the nested circle and squares have similar silhouettes that are difficult to differentiate for a conventional pixel loss. To SketchEmbeddings, the canvas offset and different drawing sequence of each shape make them substantially different in embedding space.\nSpatial relationships Drawing also builds awareness of relevant underlying variables, such as spatial relationships between components of the image. We examine the degree to which the underlying variables of angle, distance or size are captured by the embedding, by constructing images that vary along each dimension respectively. The embeddings are again grouped by a 2D projection in Figure 5-B, using the UMAP (McInnes et al., 2018) algorithm. When clustered, the 2D projection of SketchEmbeddings arranges the examples along an axis corresponding to the latent variable, whereas the projection of the Conv-VAE embeddings is visibly non-linear and organized into clusters. This clear axis alignment suggests a greater level of latent variable disentanglement in the SketchEmbeddings.\nConceptual composition Finally, we explore concept space composition using SketchEmbeddings (Figure 5-C) by embedding different Quickdraw examples and then performing arithmetic with the latent vectors. By subtracting a circle embedding from, and adding a square embedding to, the embedding of a snowman composed of stacked circles, we produce stacked boxes. This property of vector arithmetic is reminiscent of language representations, as evidenced in analogies like King - Man + Woman = Queen (Ethayarajh et al., 2019). Our results indicate that this property is captured to a greater degree in SketchEmbeddings than in the pixel-based VAE embeddings.
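In code, the composition experiment is just vector arithmetic on the embeddings followed by decoding; the toy sketch below uses stand-in encode/decode functions (hypothetical; the trained model's encoder mean and RNN decoder would take their place):

import numpy as np

# Stand-ins for the trained encoder/decoder (hypothetical; the real model
# maps images to the mean of z and decodes z with the RNN into strokes).
encode = lambda img: img.flatten()[:256].astype(float)
decode = lambda z: z  # placeholder: the real decoder emits stroke tuples

img_snowman, img_circle, img_square = (np.random.rand(28, 28) for _ in range(3))
z = encode(img_snowman) - encode(img_circle) + encode(img_square)
strokes = decode(z)   # expected to draw stacked boxes with the real model
print(strokes.shape)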
Composing SketchEmbeddings produces decoded examples that appeal to our intuitive conceptual understanding, while the VAE degrades to blurry, fragmented images. We provide more examples of the setup in Figure 5-C, as well as additional classes in Appendix K." }, { "heading": "7 ONE-SHOT GENERATION", "text": "To evaluate the sketches generated by our model, we make qualitative comparisons to other one-shot generative models and quantitatively assess our model through visual classification via a ResNet101 (He et al., 2016). In this section, all models use the ResNet12 (Oreshkin et al., 2018) backbone.\nQualitative comparisons We compare SketchEmbedNet one-shot generations of Omniglot characters with examples from other few-shot (Reed et al., 2017) and one-shot (Rezende et al., 2016) approaches (Figure 6). In the settings shown, none of the models have seen any examples from the character class, or the parent alphabet. Furthermore, the drawer has seen no written characters during training and is trained only on the Quickdraw dataset. Visually, our generated images better resemble the support examples, and the variations by stretching and shifting strokes better preserve the semantics of each character. Generations in pixel space may disrupt strokes and alter the character as perceived by humans. This is especially true for written characters, as they are frequently defined by a specific set of strokes instead of blurry clusters of pixels.\nQuantitative evaluation of generation quality Evaluating generative models is often challenging. Per-pixel metrics like those in Reed et al. (2017); Rezende et al. (2016) may penalize generative variance that still preserves meaning. We propose an Inception Score (Salimans et al., 2016) inspired metric to quantify the class-recognizability and generalization of generated examples. We train two separate ResNet classifiers (He et al., 2016), each on a different set of 45 Quickdraw classes. One set was part of the training set of SketchEmbedNet (referred to as “seen”) and the other set was held out during training (referred to as “unseen”). We then have SketchEmbedNet generate one-shot sketches from each set and have the corresponding classifier predict a class. The accuracy of the classifier on generated examples is compared with its training accuracy in Table 4. For a ResNet classifier, SketchEmbedNet generations are more recognizable for both seen and unseen classes." }, { "heading": "8 CONCLUSION", "text": "Learning to draw is not only an artistic pursuit but drives a distillation of real-world visual concepts. We present a generalized drawing model capable of producing accurate sketches and visual summaries of open-domain natural images. While sketch data may be challenging to source, we show that training to draw either sketch or natural images can generalize to downstream tasks, not only within each domain but also well beyond the training data. More generally, research in this direction may lead to more lifelike image understanding, inspired by how humans communicate visual concepts." }, { "heading": "A RASTERIZATION", "text": "The key enabler of our novel pixel loss for sketch drawings is our differentiable rasterization function f_raster. Sequence-based loss functions such as L_stroke are sensitive to the order of points, while in reality drawings are sequence-invariant.
Visually, a square is a square whether it is drawn clockwise or counterclockwise.\nThe purpose of a sketch representation is to lower the complexity of the data space and decode in a more visually intuitive manner. While it is a necessary departure point, the sequential generation of drawings is not key to our visual representation, and we would like SketchEmbedNet to be agnostic to any specific sequence needed to draw the sketch that is representative of the image input.\nTo facilitate this, we develop our rasterization function fraster, which renders an input sequence of strokes as a pixel image. However, during training, the RNN outputs a mixture of Gaussians at each timestep. To convert this to a stroke sequence, we sample from these Gaussians; this can be repeated to reduce the variance of the pixel loss. We then scale our predicted and ground truth sequences by the properties of the latter before rasterization.\nStroke sampling At the end of sequence generation we have Ns × (6M + 3) parameters: 6 Gaussian mixture parameters and 3 pen states, repeated Ns times, once for each stroke. To obtain the actual drawing we sample from the mixture of Gaussians:

$$\begin{bmatrix} \Delta x_t \\ \Delta y_t \end{bmatrix} = \begin{bmatrix} \mu_{x,t} \\ \mu_{y,t} \end{bmatrix} + \begin{bmatrix} \sigma_{x,t} & 0 \\ \rho_{xy,t}\,\sigma_{y,t} & \sigma_{y,t}\sqrt{1-\rho_{xy,t}^2} \end{bmatrix} \epsilon, \quad \epsilon \sim \mathcal{N}(0, \mathbf{1}_2). \quad (3)$$

After sampling we compute the cumulative sum of every stroke over the timesteps, so that we obtain the absolute displacement from the initial position:

$$\begin{bmatrix} x_t \\ y_t \end{bmatrix} = \sum_{\tau=0}^{t} \begin{bmatrix} \Delta x_\tau \\ \Delta y_\tau \end{bmatrix}. \quad (4)$$

$$y_{t,\mathrm{abs}} = (x_t, y_t, s_1, s_2, s_3). \quad (5)$$

Scaling Each sketch generated by our model begins at (0, 0) and the variance of all strokes in the training set is normalized to 1. On a fixed canvas the image is both very small and localized to the top left corner. We remedy this by computing a scale λ and shift xshift, yshift using the labels y, and apply them to both the prediction y′ as well as the ground truth y. These parameters are computed as:

$$\lambda = \min\left\{ \frac{W}{x_{\max} - x_{\min}},\; \frac{H}{y_{\max} - y_{\min}} \right\}, \quad (6)$$

$$x_{\mathrm{shift}} = \frac{x_{\max} + x_{\min}}{2}\,\lambda, \qquad y_{\mathrm{shift}} = \frac{y_{\max} + y_{\min}}{2}\,\lambda. \quad (7)$$

xmax, xmin, ymax, ymin are the minimum and maximum values of xt, yt from the supervised stroke labels and not the generated strokes. W and H are the width and height in pixels of our output canvas.\nCalculate pixel intensity Finally we are able to calculate the intensity pij of every pixel in our H × W canvas:

$$p_{ij} = \sigma\left[ 2 - 5 \times \min_{t=1\ldots N_s}\left( \mathrm{dist}\big((i,j), (x_{t-1}, y_{t-1}), (x_t, y_t)\big) + (1 - \lfloor s_{1,t-1} \rceil)\, 10^6 \right) \right], \quad (8)$$

where the distance function is the distance of point (i, j) from the line segment defined by the absolute points (xt−1, yt−1) and (xt, yt). We also blow up any distances where s1,t−1 < 0.5, so as to not render any strokes where the pen is not touching the paper.\nB IMPLEMENTATION DETAILS\nWe train our model for 300k iterations with a batch size of 256 for the Quickdraw dataset and 64 for Sketchy due to memory constraints. The initial learning rate is 1e-3, which decays by 0.85 every 15k steps. We use the Adam (Kingma & Ba, 2015) optimizer and clip gradient values at 1.0. σ = 2.0 is used for the Gaussian blur in Lpixel. For the curriculum learning schedule, the value of α is set to 0 initially and increases by 0.05 every 10k training steps, with an empirically obtained cap at αmax = 0.50 for Quickdraw and αmax = 0.75 for Sketchy.\nThe ResNet12 (Oreshkin et al., 2018) encoder uses 4 ResNet blocks with 64, 128, 256, 512 filters respectively and ReLU activations. The Conv4 backbone has 4 blocks of convolution, batch norm (Ioffe & Szegedy, 2015), ReLU and max pool, identical to Vinyals et al. (2016).
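Before continuing with the implementation details, here is a minimal non-differentiable NumPy sketch of the rasterization pipeline in Equations (4)-(8) above; the canvas-centering step and all helper names are our own assumptions, and the paper's actual fraster is differentiable.

```python
import numpy as np

def seg_dist(p, a, b):
    """Distance from point p to the line segment a-b."""
    ab, ap = b - a, p - a
    t = np.clip(ap.dot(ab) / (ab.dot(ab) + 1e-8), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def rasterize(offsets, pen_down, W=64, H=64):
    """Render sampled strokes to a pixel canvas.

    offsets: (Ns, 2) sampled stroke offsets (dx, dy); pen_down: (Ns,) s1 states.
    """
    pts = np.cumsum(offsets, axis=0)                      # Eq. (4): absolute positions
    xmin, ymin = pts.min(axis=0)
    xmax, ymax = pts.max(axis=0)
    lam = min(W / (xmax - xmin + 1e-8), H / (ymax - ymin + 1e-8))   # Eq. (6)
    shift = np.array([xmax + xmin, ymax + ymin]) / 2 * lam          # Eq. (7)
    pts = pts * lam - shift + np.array([W / 2, H / 2])    # our choice: center on canvas
    canvas = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            d = min(seg_dist(np.array([j, i], float), pts[t - 1], pts[t])
                    + (1.0 - round(pen_down[t - 1])) * 1e6   # suppress pen-up segments
                    for t in range(1, len(pts)))
            canvas[i, j] = 1.0 / (1.0 + np.exp(-(2.0 - 5.0 * d)))  # Eq. (8), sigmoid
    return canvas
```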
We select the latent space to be 256 dimensions, the RNN output size to be 1024, and the hypernetwork embedding size to be 64. We use a mixture of M = 30 bivariate Gaussians for the mixture density output of the stroke offset distribution." }, { "heading": "C LATENT SPACE INTERPOLATION", "text": "As in many encoding-decoding models, we evaluate the interpolation of our latent space. We select 4 embeddings at random and use bi-linear interpolation to produce new embeddings. Results are in Figures 7a and 7b.\n(a) Interpolation of classes: power outlet, snowman, jacket, elbow (b) Interpolation of classes: cloud, power outlet, basket, compass\nFigure 7: Latent space interpolations of randomly selected examples\nWe observe that compositionality is also present in these interpolations. In the top row of Figure 7a, the model first plots a third small circle when interpolating between the 2-circle power outlet and the 3-circle snowman. This small circle is treated as a single component that grows as it transitions between classes until its final size in the far right snowman drawing.\nSome other RNN-based sketching models (Ha & Eck, 2018; Chen et al., 2017) experience other classes materializing in interpolations between two unrelated classes. Our model does not exhibit this same behaviour, as our embedding space is learned from more classes and thus does not contain local groupings of classes.\nD EFFECT OF α ON FEW-SHOT CLASSIFICATION\nWe performed additional experiments exploring the impact of our curriculum training schedule for α. The encoding component of our drawing model was evaluated on the few-shot classification task for different values of αmax every 25k iterations during training. A graph is shown in Figure 8 and the full table of all values of αmax is in Table 5.\nE INTRA-ALPHABET LAKE SPLIT\nThe creators of the Omniglot dataset and one-shot classification benchmark originally proposed an intra-alphabet classification task. This task is more challenging than the common Vinyals split, as characters from the same alphabet may exhibit similar stylistics of sub-components that make visual differentiation more difficult. This benchmark has been less explored by researchers; however, we still present the performance of our SketchEmbedding model against evaluations of other few-shot classification models on the benchmark. Results are shown in Table 6.\nUnsurprisingly, our model is outperformed by supervised models, and it falls behind by a more substantial margin than in the Vinyals split. However, our SketchEmbedding approach still achieves respectable classification accuracy overall and greatly outperforms a Conv-VAE baseline." }, { "heading": "F EFFECT OF RANDOM SEEDING ON FEW-SHOT CLASSIFICATION", "text": "The training objective for SketchEmbedNet is to reproduce sketch drawings of the input. This task is unrelated to few-shot classification, so performance may vary given different initializations. We quantify this variance by training our model with 15 unique random seeds and evaluating the performance of the latent space on the few-shot classification tasks.\nWe disregard the per (evaluation) episode variance of our model in each test stage and only present the mean accuracy. We then compute a new confidence interval over the random seeds. Results are presented in Tables 7, 8, 9." }, { "heading": "G DATA PROCESSING", "text": "G.1 QUICKDRAW\nWe apply the same data processing methods as in Ha & Eck (2018) with no additional changes to produce our stroke labels y.
When rasterizing for our input x, we scale and center the strokes, then pad the image with 10% of the resolution in that dimension, rounded to the nearest integer.\nThe following list of classes was used for training: The Eiffel Tower, The Mona Lisa, aircraft carrier, alarm clock, ambulance, angel, animal migration, ant, apple, arm, asparagus, banana, barn, baseball, baseball bat, bathtub, beach, bear, bed, bee, belt, bench, bicycle, binoculars, bird, blueberry, book, boomerang, bottlecap, bread, bridge, broccoli, broom, bucket, bulldozer, bus, bush, butterfly, cactus, cake, calculator, calendar, camel, camera, camouflage, campfire, candle, cannon, car, carrot, castle, cat, ceiling fan, cell phone, cello, chair, chandelier, church, circle, clarinet, clock, coffee cup, computer, cookie, couch, cow, crayon, crocodile, crown, cruise ship, diamond, dishwasher, diving board, dog, dolphin, donut, door, dragon, dresser, drill, drums, duck, dumbbell, ear, eye, eyeglasses, face, fan, feather, fence, finger, fire hydrant, fireplace, firetruck, fish, flamingo, flashlight, flip flops, flower, foot, fork, frog, frying pan, garden, garden hose, giraffe, goatee, grapes, grass, guitar, hamburger, hand, harp, hat, headphones, hedgehog, helicopter, helmet, hockey puck, hockey stick, horse, hospital, hot air balloon, hot dog, hourglass, house, house plant, ice cream, key, keyboard, knee, knife, ladder, lantern, leaf, leg, light bulb, lighter, lighthouse, lightning, line, lipstick, lobster, mailbox, map, marker, matches, megaphone, mermaid, microphone, microwave, monkey, mosquito, motorbike, mountain, mouse, moustache, mouth, mushroom, nail, necklace, nose, octopus, onion, oven, owl, paint can, paintbrush, palm tree, parachute, passport, peanut, pear, pencil, penguin, piano, pickup truck, pig, pineapple, pliers, police car, pool, popsicle, postcard, purse, rabbit, raccoon, radio, rain, rainbow, rake, remote control, rhinoceros, river, rollerskates, sailboat, sandwich, saxophone, scissors, see saw, shark, sheep, shoe, shorts, shovel, sink, skull, sleeping bag, smiley face, snail, snake, snowflake, soccer ball, speedboat, square, star, steak, stereo, stitches, stop sign, strawberry, streetlight, string bean, submarine, sun, swing set, syringe, t-shirt, table, teapot, teddy-bear, tennis racquet, tent, tiger, toe, tooth, toothpaste, tractor, traffic light, train, triangle, trombone, truck, trumpet, umbrella, underwear, van, vase, watermelon, wheel, windmill, wine bottle, wine glass, wristwatch, zigzag, blackberry, power outlet, peas, hot tub, toothbrush, skateboard, cloud, elbow, bat, pond, compass, elephant, hurricane, jail, school bus, skyscraper, tornado, picture frame, lollipop, spoon, saw, cup, roller coaster, pants, jacket, rifle, yoga, toilet, waterslide, axe, snowman, bracelet, basket, anvil, octagon, washing machine, tree, television, bowtie, sweater, backpack, zebra, suitcase, stairs, The Great Wall of China\nG.2 OMNIGLOT\nWe derive our Omniglot tasks from the stroke dataset originally provided by Lake et al. (2015) rather than the image analogues. We translate the Omniglot stroke-by-stroke format to the same one used in Quickdraw. Then we apply the Ramer-Douglas-Peucker (Douglas & Peucker, 1973) algorithm with an epsilon value of 2 and normalize variance to 1 to produce y. We also rasterize our images in the same manner as above for our input x.\nG.3 SKETCHY\nSketchy data is provided as SVG images composed of line paths that are either straight lines or Bezier curves.
To generate stroke data we sample sequences of points from Bezier curves at a high resolution, which we then simplify with RDP, ε = 5. We also eliminate continuous strokes with a short path length or small displacement, to reduce our stroke length and remove small and noisy strokes. Path length and displacement are considered with respect to the scale of the entire sketch.\nOnce again we normalize stroke variance and rasterize our input image in the same manner as above.\nThe following classes were used for training after removing classes overlapping with mini-ImageNet: hot-air_balloon, violin, tiger, eyeglasses, mouse, jack-o-lantern, lobster, teddy_bear, teapot, helicopter, duck, wading_bird, rabbit, penguin, sheep, windmill, piano, jellyfish, table, fan, beetle, cabin, scorpion, scissors, banana, tank, umbrella, crocodilian, volcano, knife, cup, saxophone, pistol, swan, chicken, sword, seal, alarm_clock, rocket, bicycle, owl, squirrel, hermit_crab, horse, spoon, cow, hotdog, camel, turtle, pizza, spider, songbird, rifle, chair, starfish, tree, airplane, bread, bench, harp, seagull, blimp, apple, geyser, trumpet, frog, lizard, axe, sea_turtle, pretzel, snail, butterfly, bear, ray, wine_bottle, elephant, raccoon, rhinoceros, door, hat, deer, snake, ape, flower, car_(sedan), kangaroo, dolphin, hamburger, castle, pineapple, saw, zebra, candle, cannon, racket, church, fish, mushroom, strawberry, window, sailboat, hourglass, cat, shoe, hedgehog, couch, giraffe, hammer, motorcycle, shark" }, { "heading": "H AUTOREGRESSIVE DRAWING MODEL COMPARISONS", "text": "We summarize the key components of SketchEmbedNet in comparison to other autoregressive drawing models in Table 10." }, { "heading": "I FEW-SHOT CLASSIFICATION ON OMNIGLOT – FULL RESULTS", "text": "The full results table for few-shot classification on the Omniglot (Lake et al., 2015) dataset, including the ResNet12 (Oreshkin et al., 2018) model." }, { "heading": "J FEW-SHOT CLASSIFICATION ON MINI-IMAGENET – FULL RESULTS", "text": "The full results table for few-shot classification on the mini-ImageNet dataset, including the ResNet12 (Oreshkin et al., 2018) model and Conv4 models.\nK ADDITIONAL CONCEPTUAL COMPOSITIONALITY" }, { "heading": "L EMBEDDING PROPERTIES OF OTHER BASELINE MODELS", "text": "Here we substantiate the uniqueness of the properties observed in SketchEmbeddings by applying the same experiments to a β-VAE (Higgins et al., 2017) as well as a vanilla autoencoder trained on the same dataset. We also include results of a SketchEmbedNet trained with a KL objective.\nL.1 β-VAE\nThe β-VAE (Higgins et al., 2017) exhibits similar unsupervised clustering in comparison to the Conv-VAE and is generally incapable of distinguishing input images that have different shape compositions but the same overall silhouette (first two examples from the left). In contrast, it is better at distinguishing non-synthetic examples that contain multiple squares or circles (3rd figure). However, it utterly fails the latent variable regression task and does not exhibit any significant conceptual composition in latent space.\nL.2 AUTOENCODER AND SKETCHEMBEDNET-KL\nWe show that the performance of SketchEmbedding embeddings in our experiments in Section 6, which focus on organization in latent space, is not correlated with the KL term. We present both a vanilla autoencoder without the KL objective and a SketchEmbedNet trained with a KL objective.
We observe a drop in overall generation quality in the Conceptual Composition decoding, as is expected with an additional constraint, but maintained performance in the other tasks. Meanwhile, the autoencoder does not demonstrate any marked improvements over the Conv-VAE in the main paper or any other baseline comparison." }, { "heading": "M ADDITIONAL COMPOSITIONALITY MODES", "text": "We provide additional clustering methods, t-SNE (Maaten & Hinton, 2008) and PCA, as well as 2 new experiments that explore the compositionality of our latent SketchEmbedding.\nAdditional clustering methods We include additional t-SNE and PCA results of the experiments in the main paper. These are presented in Figures 13, 14, 15, 16, 17. t-SNE and UMAP are stochastic and do not always produce the same visualization, while PCA is deterministic and prioritizes the most important dimensions.\nAdditional experiments Here we provide different investigations into the compositionality of our learned embedding space that were not present in our main paper. These results are presented in Figures 18 and 19.\nIn Figure 18 we place a square in the center of the example and place a circle above, below or to the sides of it. Once again we find that our SketchEmbedding embedding clusters better than the VAE approach.\nNew examples are generated where each class has a different number of circles. Both the VAE approach and our SketchEmbedding cluster well, and neither appears to learn the count manifold." }, { "heading": "N HYPERNETWORK ACTIVATIONS", "text": "To further explore how our network understands drawings, we examine the relationships between the activations of the hypernetwork of our HyperLSTM (Ha et al., 2017).\nThe hypernetwork determines the weights of the LSTM that generates the drawing at each decoding timestep. These activations are 512-dimensional vectors. We collect the activations from many examples, cluster them in 512-dimensional space and visualize the strokes belonging to each cluster for each example. A full decoding is also rendered where each cluster within an example is assigned a color.\nSingle class: snowman First we explore this clustering using only the snowman class from Quickdraw (Jongejan et al., 2016). We expect substantial reuse of a \"circle\" both within and over many examples. Clustering of the strokes is done with DBSCAN (Ester et al., 1996) and parameter ε = 3.9. Results are in Figure 20. Each row is a separate input; the far left column is the color-coded, composed image, the second is the noise cluster and every subsequent column is a unique cluster.\nWhile cluster re-use is limited, cluster 0 often contains a large, fully enclosed circle. Many other clusters may contain circles or partial strokes with some reuse. Larger, fully composed and coloured sketches are presented in Figure 21.\nMany classes: round objects We repeat the above experiment with a mixture of classes that generally can be expected to contain circles. These classes were circles, snowmen, clocks and cups. The first two classes are frequently composed only of circles, while the latter two are expected to consistently contain other distinct shapes.
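For concreteness, the stroke-activation clustering used in these experiments can be sketched as follows with scikit-learn, using the ε value from the text; the activation-collection step is a hypothetical placeholder, not the paper's actual code.

```python
from sklearn.cluster import DBSCAN

def cluster_strokes(activations, eps=3.9):
    """Cluster per-stroke hypernetwork activations.

    activations: (num_strokes, 512) array collected over many examples.
    Returns one cluster id per stroke; -1 marks DBSCAN's noise cluster.
    """
    return DBSCAN(eps=eps).fit_predict(activations)

# Hypothetical usage:
#   acts = collect_hyper_activations(model, sketches)   # assumed helper
#   labels = cluster_strokes(acts)
# Strokes sharing a label can then be rendered in the same color per example.
```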
Results are presented in Figure 22 and select examples in Figure 23.\nWe observe that the model continues to isolate circles in the first column, and note that it does so even for the cup and clock classes, which are not exclusively circular.\nMany random classes: Finally, we repeat the above clustering with the 45 randomly selected holdout classes from the Quickdraw training process of SketchEmbedding. Results are once again presented in Figure 24 and select examples in Figure 25." } ]
2020
SKETCHEMBEDNET: LEARNING NOVEL CONCEPTS
SP:9f14c6cce4e92d92e0025b6ede2a04a862c3b5a9
[ "The problem of good predictive uncertainty-based out of distribution (OOD) detection is essential for classification systems to be deployed in safety-critical environments. The authors present a method RETO that achieves state-of-the-art performance in a transductive OOD detection setting. Like other predictive uncertainty-based approaches RETO can ultimately be used downstream on problems like active learning or abstaining on OOD samples in combination with selective classification." ]
Machine learning models are often used in practice once they achieve good generalization results on in-distribution (ID) holdout data. To predict test sets in the wild, they should detect samples they cannot predict well. We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios, e.g. when OOD data consists of unseen classes or corrupted measurements. This paper studies how such “hard” OOD scenarios can benefit from tuning the detection method after observing a batch of the test data. This transductive setting is relevant when the advantage of even a slightly delayed OOD detection outweighs the financial cost of additional tuning. We propose a novel method that uses an artificial labeling scheme for the test data and early stopping regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch. We show via comprehensive experiments that our approach is indeed able to significantly outperform both inductive and transductive baselines on difficult OOD detection scenarios, such as unseen classes on CIFAR-10/CIFAR-100, severe corruptions (CIFAR-C), and strong covariate shift (ImageNet vs ObjectNet).
[]
[ { "authors": [ "Julia Angwin", "Jeff Larson", "Surya Mattu", "Lauren Kirchner" ], "title": "Machine bias: There’s software used across the country to predict future criminals. and it’s biased against blacks", "venue": null, "year": 2016 }, { "authors": [ "Sanjeev Arora", "Simon Du", "Wei Hu", "Zhiyuan Li", "Ruosong Wang" ], "title": "Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. volume", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Hyojin Bahng", "Sanghyuk Chun", "Sangdoo Yun", "Jaegul Choo", "Seong Joon Oh" ], "title": "Learning de-biased representations with biased representations", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Andrei Barbu", "David Mayo", "Julian Alverio", "William Luo", "Christopher Wang", "Dan Gutfreund", "Josh Tenenbaum", "Boris Katz" ], "title": "ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Emma Beede", "Elizabeth Baylor", "Fred Hersch", "Anna Iurchenko", "Lauren Wilcox", "Paisan Ruamviboonsuk", "Laura M. Vardoulakis" ], "title": "A human-centered evaluation of a deep learning system deployed in clinics for the detection of diabetic retinopathy", "venue": "In Proceedings of the CHI Conference on Human Factors in Computing Systems,", "year": 2020 }, { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural networks", "venue": "In Proceedings of the 32th International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Tianshi Cao", "Chin-Wei Huang", "David Yu-Tung Hui", "Joseph Paul Cohen" ], "title": "A benchmark of medical out of distribution detection", "venue": "arXiv preprint arXiv:2007.04250,", "year": 2020 }, { "authors": [ "Beidi Chen", "Weiyang Liu", "Zhiding Yu", "Jan Kautz", "Anshumali Shrivastava", "Animesh Garg", "Anima Anandkumar" ], "title": "Angular visual hardness", "venue": null, "year": 1912 }, { "authors": [ "Yining Chen", "Colin Wei", "Ananya Kumar", "Tengyu Ma" ], "title": "Self-training avoids using spurious features under domain shift", "venue": "arXiv preprint arXiv:2006.10032,", "year": 2020 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2009 }, { "authors": [ "Yanwei Fu", "Timothy M. 
Hospedales", "Tao Xiang", "Shaogang Gong" ], "title": "Transductive multi-view zeroshot learning", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2015 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", "venue": "Proceedings of Machine Learning Research,", "year": 2016 }, { "authors": [ "Yonatan Geifman", "Ran El-Yaniv" ], "title": "Selective classification for deep neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Alex Graves" ], "title": "Practical variational inference for neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2011 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Matthias Hein", "Maksym Andriushchenko", "Julian Bitterwolf" ], "title": "Why ReLU networks yield highconfidence predictions far away from the training data and how to mitigate the problem", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Thomas Dietterich" ], "title": "Deep anomaly detection with outlier exposure", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Elyor Kodirov", "Tao Xiang", "Zhenyong Fu", "Shaogang Gong" ], "title": "Unsupervised domain adaptation for zero-shot learning", "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Ananya Kumar", "Tengyu Ma", "Percy Liang" ], "title": "Understanding self-training for gradual domain adaptation", "venue": "arXiv preprint arXiv:2002.11361,", "year": 2020 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Yann Lecun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "In Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Kimin Lee", "Kibok Lee", "Honglak Lee", "Jinwoo Shin" ], "title": "A simple unified framework for detecting out-of-distribution samples and adversarial attacks", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Mingchen Li", "Mahdi Soltanolkotabi", "Samet Oymak" ], "title": "Gradient descent with early stopping is provably robust to label noise for overparameterized neural networks", "venue": "Proceedings of Machine Learning Research,", "year": 2020 }, { "authors": [ "Shiyu Liang", "Yixuan Li", "R. 
Srikant" ], "title": "Enhancing the reliability of out-of-distribution image detection in neural networks", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Alex X. Lu", "Amy X. Lu", "Wiebke Schormann", "David W. Andrews", "Alan M. Moses" ], "title": "The cells out of sample (COOS) dataset and benchmarks for measuring out-of-sample generalization of image classifiers", "venue": "arXiv preprint arXiv:1906.07282,", "year": 2019 }, { "authors": [ "Andrey Malinin", "Mark Gales" ], "title": "Predictive uncertainty estimation via prior networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Eric Nalisnick", "Akihiro Matsukawa", "Yee Whye Teh", "Dilan Gorur", "Balaji Lakshminarayanan" ], "title": "Do deep generative models know what they don’t know", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Radford M. Neal" ], "title": "Bayesian Learning for", "venue": "Neural Networks. Springer-Verlag,", "year": 1996 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y. Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS Workshop on Deep Learning and Unsupervised Feature Learning", "year": 2011 }, { "authors": [ "Yaniv Ovadia", "Emily Fertig", "Jie Ren", "Zachary Nado", "D. Sculley", "Sebastian Nowozin", "Joshua Dillon", "Balaji Lakshminarayanan", "Jasper Snoek" ], "title": "Can you trust your model’s uncertainty? Evaluating predictive uncertainty under dataset shift", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Benjamin Recht", "Rebecca Roelofs", "Ludwig Schmidt", "Vaishaal Shankar" ], "title": "Do cifar-10 classifiers generalize to cifar-10? 2018", "venue": null, "year": 2018 }, { "authors": [ "Benjamin Recht", "Rebecca Roelofs", "Ludwig Schmidt", "Vaishaal Shankar" ], "title": "Do ImageNet classifiers generalize to ImageNet?, 2019", "venue": null, "year": 2019 }, { "authors": [ "Jie Ren", "Peter J. Liu", "Emily Fertig", "Jasper Snoek", "Ryan Poplin", "Mark Depristo", "Joshua Dillon", "Balaji Lakshminarayanan" ], "title": "Likelihood ratios for out-of-distribution detection", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Shiori Sagawa", "Pang Wei Koh", "Tatsunori B. Hashimoto", "Percy Liang" ], "title": "Distributionally robust neural networks", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Clayton Scott", "Gilles Blanchard" ], "title": "Transductive anomaly detection", "venue": "Technical report,", "year": 2008 }, { "authors": [ "Hidetoshi Shimodaira" ], "title": "Improving predictive inference under covariate shift by weighting the loglikelihood function", "venue": "Journal of Statistical Planning and Inference,", "year": 2000 }, { "authors": [ "Ravid Shwartz-Ziv", "Naftali Tishby" ], "title": "Opening the black box of deep neural networks via information", "venue": "arXiv preprint arXiv:1703.00810,", "year": 2017 }, { "authors": [ "Yu Sun", "Xiaolong Wang", "Liu Zhuang", "John Miller", "Moritz Hardt", "Alexei A. 
Efros" ], "title": "Test-time training with self-supervision for generalization under distribution shifts", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Antonio Torralba", "Rob Fergus", "William T. Freeman" ], "title": "80 million tiny images: A large data set for nonparametric object and scene recognition", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 1958 }, { "authors": [ "Ziyu Wan", "Dongdong Chen", "Yan Li", "Xingguang Yan", "Junge Zhang", "Yizhou Yu", "Jing Liao" ], "title": "Transductive zero-shot learning with visual structure constraint", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms, 2017", "venue": null, "year": 2017 }, { "authors": [ "Fatih Furkan Yilmaz", "Reinhard Heckel" ], "title": "Image recognition from raw labels collected without annotators", "venue": "arXiv preprint arXiv:1910.09055,", "year": 2019 }, { "authors": [ "Qing Yu", "Kiyoharu Aizawa" ], "title": "Unsupervised out-of-distribution detection by maximum classifier discrepancy", "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "In Proceedings of the British Machine Vision Conference (BMVC),", "year": 2016 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "arXiv preprint arXiv:1611.03530,", "year": 2016 }, { "authors": [ "Komodakis" ], "title": "2016) for the RGB data sets", "venue": "For fine-tuning,", "year": 2016 }, { "authors": [ "Torralba" ], "title": "2019) argue that classifiers originally trained on CIFAR10 have a statistically significant drop in accuracy when evaluated on CIFAR10v2. Furthermore, Recht et al. (2019) argue that CIFAR10 and CIFAR10v2 come from the same distribution by training a binary classifier", "venue": null, "year": 2019 }, { "authors": [ "Chen" ], "title": "stopping, the training process is halted at different stages for test ID and test OOD samples, as indicated in Figure 5. Recent papers like Shwartz-Ziv", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Modern machine learning (ML) systems can achieve good test set performance and are gaining popularity in many real-world applications - from aiding medical diagnosis (Beede et al., 2020) to making recommendations for the justice system (Angwin et al., 2016). In reality however, some of the data points in a test set could come from a different distribution than the training (in-distribution) data. For example, sampling biases can lead to spurious correlations in the training set (Sagawa et al., 2020), a faulty sensor can produce novel data corruptions (Lu et al., 2019), or new unseen classes can emerge over time, like undiscovered bacteria (Ren et al., 2019). Many of these samples are so different compared to the training distribution that the model does not have enough information to predict their labels but still outputs predictions with high confidence. It is important to identify these out-of-distribution (OOD) samples in the test set and flag them, for example to at least temporarily abstain from prediction (Geifman & El-Yaniv, 2017) and involve a human in the loop.\nTo achieve this, Bayesian methods (Gal & Ghahramani, 2016; Malinin & Gales, 2018) or alternatives such as Deep Ensembles (Lakshminarayanan et al., 2017) try to identify samples on which a given model cannot predict reliably and include. Their aim is to obtain predictive models that simultaneously have low error on in-distribution (ID) data and perform well on OOD detection. Other approaches try to identify samples with low probability under the training distribution, independent of any prediction model, and use, for instance, density estimation (Nalisnick et al., 2019) or statistics of the intermediate layers of a neural network (Lee et al., 2018).\nMost prior work have reported good OOD detection performance, reaching an almost perfect area under the ROC curve (AUROC) value of nearly 1. However these settings generally consider differentiating two vastly different data sets such as SVHN vs CIFAR10. We show that the picture is very different in a lot of other relevant settings. Specifically, for unseen classes within CIFAR10 or for data with strong distribution shifts (e.g. (resized) ImageNet vs ObjectNet (Barbu et al., 2019)), the AUROC of state-of-the-art methods often drops below 0.8.\nAlmost all of these methods assume a setting where at test time, no training is possible and the OOD detection method can only be trained beforehand. This inductive setting allows real-time decision-making and is hence more broadly used. However, in many cases we can indeed do batch predictions, for example when sensor readings come in every second and it is sufficient to make a prediction and decision every few minutes (e.g. automatic irrigation system). In this case we have a batch of unlabeled test data available that we want to predict (and be warned about) that we can use together with the labeled training set to detect the OOD points in the set. We call this the transductive OOD setting (related to but quite different from transductive classification (Vapnik, 1998)). 
Even in an online setting, transductive OOD detection could be very useful (see Section 2.1).\n(How) Can we achieve significantly better OOD detection performance in the transductive setting?\nEven though the transductive setting improves test accuracy in small data settings for tasks such as classification or zero-shot learning, it is unclear how to successfully leverage the simultaneous availability of training and test set in the transductive OOD setting, which is quite distinct from the former problems. A recent concurrent work, Yu & Aizawa (2019), tackles this challenge by encouraging two classifiers to maximally disagree on the test set (i.e. to produce different predictions on test samples). However, this leads to models that disagree to a similar degree on both ID and OOD data, and hence one cannot distinguish between the two, as indicated by the low AUROC in Figure 1. We introduce a new method called Regularized Ensembles for Transductive OOD detection (RETO) for overparameterized models, which heavily uses regularization to make sure that the ensemble disagrees only on the OOD samples in the test set, but not on the ID samples. In summary, our main contributions in this paper are as follows:\n• We experimentally identify many realistic OOD scenarios where SOTA methods achieve a subpar AUROC below 0.84. We hence argue that the field of OOD detection is far from satisfactorily solved, and we expect more methods to be proposed that include these (or other) hard OOD cases as benchmarks.\n• For the transductive OOD detection setting, we propose a new procedure, RETO, that manages to diversify the output of an ensemble only on the OOD portion of the test set and hence achieves significant improvements compared to SOTA methods (see Figure 1), with a relative gain of at least 32%." }, { "heading": "2 REGULARIZED ENSEMBLES FOR TRANSDUCTIVE OOD DETECTION", "text": "Our main goal is to detect samples that are outside of the training distribution; we focus on classification tasks. We are only interested in situations where we can obtain a model that generalizes well given the training data. If the models do not generalize well in-distribution (ID), then the primary task should be to find a better classifier instead. Given a classifier with good generalization, the next challenge becomes to ensure that samples on which the model cannot make confident predictions (e.g. samples that are too far from the training data) are correctly identified. This constitutes the main focus of our work. Recall that in our use case, we have a batch of unlabeled test samples at our disposal. This test set includes a mixture of samples drawn from the training distribution and samples we call OOD per our definition in the previous section. The goal is to distinguish between the ID and the OOD samples in the test set.\nIn this section we propose our method, which uses the more numerous training data as a counterweight that does not allow a sufficiently smooth model to fit an arbitrary label on ID test samples, but only on OOD test samples." }, { "heading": "2.1 TRANSDUCTIVE OOD DETECTION", "text": "In an inductive OOD detection setting, one can only tune a method at training time, and then use it with unchanged parameters on any test set. In contrast, in a transductive setting, the training data is available during test time and it is possible to tune a method using both the training set and the unlabeled test set. We stress that no labels are available for the test data, so it is unknown which test samples are indeed anomalous.
Moreover, we do not assume access to any known OOD samples, unlike some of the inductive methods, which sometimes use OOD data for training or calibration (Lee et al., 2018; Liang et al., 2018; Malinin & Gales, 2018; Cao et al., 2020).\nWhen deployed in the context of classification, transductive and semi-supervised learning methods leverage the unlabeled data to obtain low-dimensional representations that are more effective for the prediction task. A key assumption for the setting to be useful is that the data is related in some way, e.g. the unlabeled data comes from the same distribution as the labeled data. On the other hand, transductive OOD detection differs from the usual transductive classification setting in that the training distribution does not carry information about the OOD data. As a consequence, it is not obvious how to adapt existing semi-supervised methods to work in this different regime.\nSome of the downsides that prevent transductive classification methods from being used more broadly are that, for each test set that we want to predict, we would need access to the training data and to computational resources. Furthermore, they do not allow predictions on the fly in the online setting. For transductive OOD detection, however, we know that the inductive model predicts reliably in-distribution. Hence we can still predict test points on the fly, and only flag OOD samples with a slight delay after receiving a batch of test points. An example for which all these downsides are not limiting should be quite relatable to the reader. For instance, Covid-19 test results play a crucial role in controlling the spread of the virus. Imagine a machine learning model developed and deployed for reliable, fast testing that works well under usual circumstances. If a test pipeline becomes defective, informing the patient of the potentially wrong test result is still crucial, in particular to tell a negatively tested patient to repeat the test or to quarantine. In this case we would also be willing to allow access to labeled training data and computational resources for fine-tuning, as precision is of utmost importance." }, { "heading": "2.2 THE COMPLETE RETO PROCEDURE", "text": "We now provide details on our approach, RETO, outlined in Algorithm 1.\nRecall that we have access to both a labeled training set and the unlabeled test set. We begin by assigning an arbitrary label (selected from the set of labels of the training data) to all the test samples. We train a classifier on the union of the correctly-labeled training set and the arbitrarily-labeled test set. To find the optimal classifier, we search among functions that are known to generalize well on the ID (training) distribution. If the classifiers are smooth enough, they will not be able to fit both the correct labels of the training set and the arbitrary label on the ID test samples, as illustrated in Figure 2 for linear classifiers. However, they will still fit the arbitrary label on the OOD test samples. Using regularization, we ensure that the models we obtain are not too complex. We search inside a function class of regularized functions, Freg, as discussed in more detail in Section 3. We ensemble several such classifiers, where each model fits a different label to the test set. We then use a disagreement statistic and flag as OOD all the points in the test set with high disagreement.
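To make the procedure concrete, the following minimal Python sketch implements the artificial labeling scheme and the average-disagreement test described above and formalized in Algorithm 1; the train_regularized routine (early-stopped training) and the sklearn-style predict_proba interface are assumptions, not the authors' actual code.

```python
import itertools
import numpy as np

def total_variation(p, q):
    """Total variation distance between two arrays of softmax outputs."""
    return 0.5 * np.abs(p - q).sum(axis=-1)

def reto(train_x, train_y, test_x, labels, train_regularized, threshold):
    models = []
    for c in labels:  # one regularized model per artificial label
        x = np.concatenate([train_x, test_x])
        y = np.concatenate([train_y, np.full(len(test_x), c)])
        models.append(train_regularized(x, y))  # early stopping happens inside

    probs = [m.predict_proba(test_x) for m in models]  # (n_test, n_classes) each
    k = len(models)
    # average pairwise TV distance over all ordered pairs i != j
    stat = sum(total_variation(probs[i], probs[j])
               for i, j in itertools.permutations(range(k), 2)) / (k * (k - 1))
    return stat > threshold  # boolean mask: True flags a test point as OOD
```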
To avoid training the ensemble from scratch for each new test batch, it is possible to instead start from pre-trained weights and perform a few iterations of fine-tuning, as detailed in Section 4. We stress that we do not calibrate or train our method on any known OOD data.\nAlgorithm 1: Pseudocode for RETO\nInput: Train set S, Test set T, Ensemble size K, Test statistic threshold t0, Regularized function class Freg, Disagreement metric\nResult: O, i.e. the elements of T which are OOD\nfor c ∈ {y1, ..., yK} do // train K models\n  T^c ← {(x, c) : x ∈ T}\n  f̂c ← Train(S ∪ T^c; Freg)\nO ← ∅\nfor x ∈ T do // run two-sample test\n  if disagreement(f̂y1(x), ..., f̂yK(x)) > t0 then\n    O ← O ∪ {x}\nreturn O\nDetermining OOD samples with RETO. We distinguish between ID and OOD samples using a two-sample statistical test, with the null hypothesis H0 : x ∈ supp P, for a test sample x and where P denotes the training distribution1. Previous baselines have proposed their own choices of the test statistic, which is discussed in detail in Appendix A. For RETO, we use

$$T_{\text{avg-TV}}(x) := \frac{1}{K(K-1)} \sum_{i \neq j} d_{TV}\left(f_i(x), f_j(x)\right),$$

the average pairwise total variation distance between the softmax outputs fi(x), fj(x) ∈ R^|Y| of models i, j ∈ {1, ..., K} in the ensemble, where dTV is the total variation distance. The null hypothesis is rejected for high values of Tavg-TV. Appendix L contains more details about the choice of the test statistic. It follows from the way in which the hypothesis test is stated that true positives are OOD samples that are indeed flagged as OOD, while the false positives are ID samples that are incorrectly predicted as OOD." }, { "heading": "3 WHY RETO CAN WORK WELL", "text": "In this section we provide insights as to why RETO can achieve a higher AUROC for hard OOD settings. We frame RETO as a way of learning diverse ensembles. We show that for RETO to perform well, the members of the ensemble must come from a restricted family of function classes. Finally, we argue that early-stopped neural networks are part of this restricted family.\nEnsemble-based OOD detection The main argument for using an ensemble for OOD detection is that a diverse enough set of models will lead to disagreement only on OOD samples. In other words, the models will produce contradictory predictions on OOD inputs, while giving similar predictions for ID data. In order to get a diverse set of models, Deep Ensembles (Lakshminarayanan et al., 2017) use the stochasticity of the training process. However, the models obtained with this procedure are still very similar and tend to agree on OOD data (Figure 3), especially in hard OOD detection scenarios. Our method uses the additional information available in the unlabeled test set to generate ensembles that are more diverse on the OOD test samples. This approach allows the ensembles to work well even when the ID and OOD data are very similar.\nWe denote the test set as T and consider a partition of it into an (unknown) test ID set TID and an (unknown) test OOD set TOOD. Recall that we do not know which test samples are outliers: if we were given T = TOOD, we could just enforce that different models learn different labels on T by simply assigning one of the labels c ∈ Y to the whole test set to obtain T^c := {(xi, c) : xi ∈ T}, and then training each model in the ensemble on S ∪ T^c, for different c.\n1With a slight abuse of notation, we denote by supp P ⊂ X the support of the marginal distribution PX.
But, obviously, we are given the union of the test ID and test OOD set T = TOOD ∪ TID, without being able to distinguish between the two.\nHow can we encourage an ensemble to disagree (only) on TOOD?\nCould we use the same strategy, and assign an arbitrary label to the entire test set? A neural network can easily learn random labels (Zhang et al., 2016). Hence, if we train a neural network to convergence on S ∪ T^c, the models will also disagree on the test ID data! This is illustrated in Figure 3 - Right. How can we enforce models to have contradictory predictions on TOOD but to agree on TID?" }, { "heading": "3.1 KEY FOR TRANSDUCTIVE OOD DETECTION: REGULARIZATION", "text": "We can remedy this issue with strong regularization of the models in the ensemble. The key intuition is that it is difficult to fit an arbitrary label on ID data that is near the training samples. The signal in the correctly labeled ID points from the training set prevents an arbitrary label from being easily fit on the ID test samples. This is illustrated in Figure 2, for linear classifiers. Conversely, it is easy to learn the arbitrary label on samples that are far enough from the training data, which are exactly the OOD test samples! For instance, when training neural networks with SGD, the arbitrary label will be fit much faster on the OOD test samples than on the ID test samples.\nWhat is the right complexity for the models in the ensemble?\nIn the language of statistical testing, the model complexity should be small enough to limit false positives (i.e. ID samples incorrectly flagged as OOD) and large enough to have enough power (i.e. correctly identify OOD samples). In other words, we want the classifiers to not fit the wrong labels on test ID samples, but only on test OOD samples. We encourage high power by making the models fit different labels on the test set, and reduce false positives by regularizing the function class just enough. Specifically, we constrain our search to a hypothesis class that is just able to learn the labels on OOD, but not on ID test samples.\nControlling false positives We now describe a necessary condition on the complexity of the model class, in order to control the false positive rate of RETO. First of all, we require our models to generalize well in-distribution, since they need to have enough common ground to agree on the test ID samples. If a model has poor generalization, then we should first find a better classifier before concerning ourselves with flagging OOD samples on which it cannot predict well.\nHowever, there may be model classes which generalize well but are still able to fit arbitrary labels on ID data. Hence, we consider the set of regularized functions with low population error, i.e. F⋆ε := {f ∈ Freg : E[I{f(x) ≠ y}] < ε}, where Freg is a restricted function class (e.g. functions parametrized by neural networks regularized via early-stopping). The required amount of regularization is captured in the following condition.\nCondition 3.1 (F⋆ε cannot fit noisy labels). The probability of a point drawn from the ID distribution being misclassified by a function in F⋆ε is at most δ, for a small constant δ > ε.\nIn words, the functions in F⋆ε only disagree on a small set with respect to the marginal distribution of X. As a consequence, with very high probability 1 − (1 − δ)^s we cannot fit a set of s random i.i.d. in-distribution points with the wrong label. This means the functions in F⋆ε
are smooth enough to “ignore” wrong labels on in-distribution samples, as illustrated in Figure 3 - Middle. Note that unrestricted overparameterized models in the large function class F do not satisfy Condition 3.1 due to their capability to fit random labels (Figure 3 - Right).\nNotes on power Note that Condition 3.1 enforces ensemble agreement on ID points to limit the amount of false positives. The power (ensemble disagreement on OOD) depends very much on the support of the OOD samples and its relation to F⋆ε (see Figure 4). If the boundary between the OOD and ID set requires higher complexity than F⋆ε, our model class might have too little power, as illustrated in Figure 4.\nThe empirical OOD detection performance of our method indicates that early-stopping regularization finds just the right complexity: models that are complex enough for many hard OOD detection problems in the image classification tasks that we consider. However, we leave as future work a thorough analysis of the trade-off between the function class size and the detection capabilities of RETO." }, { "heading": "3.2 REGULARIZING NEURAL NETWORKS WITH EARLY STOPPING", "text": "An example of models that satisfy Condition 3.1 is deep neural networks trained with early stopping. The recent results of Yilmaz & Heckel (2019); Arora et al. (2019); Li et al. (2020) suggest that early stopping helps neural networks be more robust to label noise without sacrificing standard accuracy, thus satisfying Condition 3.1. Figure 5 shows the learning curves obtained when fitting a neural network on S, T^c_ID and T^c_OOD for a chosen label c: the training set and the test OOD samples are fit first, and after epoch 50 the predictor starts fitting the wrong label on the test ID set as well. We can also observe that early stopping at the point with the highest accuracy on a validation set (drawn from the same distribution as the training set) captures this phase transition well." }, { "heading": "4 EXPERIMENTS", "text": "In this section we evaluate the OOD detection performance of RETO for deep neural networks on several image data sets. We find that our approach outperforms all baselines on difficult OOD detection settings. In addition, we provide insights into the trade-off between offline detection and the good performance of our algorithm." }, { "heading": "4.1 ID VS OOD SETTINGS", "text": "We report results on two broad types of OOD detection scenarios:\n1. Easy OOD data (most previous benchmarks): ID and OOD samples come from very different distributions (e.g. CIFAR10 vs SVHN). These are the settings usually considered in the OOD detection literature, on which most baselines perform well.\n2. Hard OOD data: We explore two types of more difficult OOD detection tasks: (i) the OOD data is sampled from “novel” classes, e.g. the first 5 digits of SVHN as training, the last 5 digits as OOD; (ii) the test data suffers from semi-strong covariate shift, e.g. the test set contains corrupted samples from the training distribution (e.g. CIFAR10 vs CIFAR10-C (Hendrycks & Dietterich, 2019)2) or samples that violate the spurious correlations present in the training set (e.g. ImageNet vs ObjectNet (Barbu et al., 2019)3).\nAppendix C provides more insight on OOD detection hardness, while Appendix B presents examples of images for the various settings.\n2Both CIFAR10-C and CIFAR100-C contain 15 types of corruptions, at 5 severity levels. We consider corrupted samples with severity 5.\n3ObjectNet (Barbu et al., 2019) contains both novel classes that do not appear in ImageNet, and images from ImageNet classes, with strong distribution shift. We resize both ImageNet and ObjectNet to 32x32 images.
Note that we are not too interested in the practical scenario of covariate shift (Shimodaira, 2000), where the distributions are so close that domain adaptation techniques could perform well.4 In our hard OOD data sets, domain adaptation also leads to unsatisfactory results. For instance, Sun et al. (2020) obtain a classification error of 20.4% on CIFAR10-C at severity 5, compared to the 8.3% error achieved on the CIFAR10 test set. Alternatively, when domain adaptation fails, OOD detection can prompt a system to abstain on samples from the shifted distribution to prevent erroneous predictions.\n4These situations, where domain adaptation performs well, are sometimes more challenging for OOD detection, since it means that the ID and OOD data are more similar. In Appendix G.1 we show that RETO maintains its remarkable performance even on the more difficult CIFAR10-C data set with severity 2.\nApart from using these canonical data sets, we also compare the performance of our method on more realistic data, namely a recently proposed OOD detection benchmark for medical imaging (Cao et al., 2020). The authors collected a suite of data sets that cover the aforementioned categories of difficulty, as detailed in Appendix K." }, { "heading": "4.2 RETO VS. BASELINES", "text": "We compare our method against both inductive and transductive baselines. Importantly, some of the baselines require oracle knowledge of OOD data for training. For example, Outlier Exposure (Hendrycks et al., 2019) uses TinyImages for training as the set of outliers, irrespective of the OOD set used for evaluation. On the other hand, the Mahalanobis baseline (Lee et al., 2018) is tuned on samples from the same OOD distribution as the one seen at test time. We also present a transductive version of this approach, referred to as Mahalanobis-T, on which we elaborate in Appendix A.\nFor all the baselines, we use the default hyperparameters suggested by their authors, and we do not adjust RETO for any of the OOD settings. We defer the details regarding training the models to Appendix A. For evaluation, we use two metrics that are common in the OOD detection literature: the area under the ROC curve (AUROC; larger values are better) and the false positive rate (FPR) at a true positive rate of 95% (FPR@95; smaller values are better)." }, { "heading": "4.3 MAIN RESULTS", "text": "For our method we train ensembles of five ResNet20 (He et al., 2016) networks (results for other architectures are presented in Appendix G). For each model in the ensemble we perform post-hoc early stopping: we train each model for 100 epochs and select the iteration with the lowest validation loss. For all settings, we used a labeled training set of 40,000 samples, a validation set of 10,000 ID samples and an unlabeled test set of 10,000 ID samples and 10,000 OOD samples. We present results for training the models from random initializations, and for fine-tuning pretrained models (pretraining is always performed on the training set for 100 epochs). When using pretrained weights, as few as three epochs of fine-tuning are enough on average to achieve the performance that we report, which is a significant cut in computation cost.
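For reference, both evaluation metrics can be computed from per-sample disagreement scores as in the following minimal scikit-learn sketch; treating OOD as the positive class is our reading of the hypothesis test above.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def ood_metrics(scores, is_ood):
    """AUROC and FPR at 95% TPR, treating OOD as the positive class.

    scores: per-sample test statistic (e.g. Tavg-TV), higher = more OOD-like.
    is_ood: boolean ground-truth labels for the test batch.
    """
    auroc = roc_auc_score(is_ood, scores)
    fpr, tpr, _ = roc_curve(is_ood, scores)
    fpr_at_95 = fpr[np.searchsorted(tpr, 0.95)]  # first threshold with TPR >= 0.95
    return auroc, fpr_at_95
```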
In addition, Appendix D shows the dependence on the ensemble size for RETO and vanilla ensembles.\nTable 1 summarizes the main empirical results. For the corruption data sets, the table shows the average of the AUROC and FPR@95 taken over all corruptions, and the value for the worst-case setting. Appendix G contains a more detailed breakdown of these numbers. In addition to being successful at identifying OOD samples, our method also maintains a good classification performance on the training distribution. The validation accuracy of the early stopped ensembles, averaged over all settings, is only 1.4% smaller than that of vanilla ensembles.\nThe evaluation for the scenarios presented in Table 1 is performed on the same test set that was used for training, as usual in transductive learning. In addition to that, the OOD detection performance of RETO extrapolates well to unseen samples from the same distribution.5 In order to show this, we run experiments in which we compute the AUROC on a hold-out test set drawn from the same ID and OOD distributions as the one used during training. The AUROC on the hold-out test set is within 0.01 of the one calculated on the test set observed during training.\n5This setting is similar, for instance, to the one in the Mahalanobis baseline, which assumes oracle knowledge of the OOD distribution at training time.\nFor the medical OOD detection benchmark we present the average AUROC achieved by some representative baselines in Figure 6a. We refer the reader to Cao et al. (2020) for precise details on the methods. Appendix K contains more results for the medical settings, as well as additional baselines.\nLimitations and trade-offs. Having access to the test set in the transductive setting provides enough information to discriminate well between the ID and the OOD test samples, succeeding when inductive approaches are less effective. In order to bridge the gap between (offline) transductive OOD detection and online anomaly detection, we investigate the impact of the size of the test set on the OOD detection performance. In addition, we also vary the ratio of OOD samples in the test set, i.e. |TOOD| / (|TID| + |TOOD|). Our findings suggest that there is a broad spectrum of values for which RETO maintains a good performance. In the cases when either the size of the test set or the test OOD ratio is small, the OOD detection performance deteriorates to the point where it is comparable to vanilla ensembles, as shown in Figure 6b, where we report the gap in AUROC between RETO and a vanilla ensemble. When there are only a few very diverse OOD test samples, their contribution to the gradient is small. Moreover, if the number of ID test samples is large, fitting a single arbitrary
Scott & Blanchard (2008) proposed to solve the transductive anomaly detection problem simply by discriminating between the ID and OOD distribution with a constraint on the false positive rate. However, it is difficult to assess the predictive uncertainty of a classifier trained on the source set using this binary classifier. The setting we consider, in which one has access to both a labeled and an unlabeled data set, is reminiscent of the problem of semi-supervised learning. Approaches based on self-trained predictors have recently been proposed by Kumar et al. (2020); Chen et al. (2020); Sun et al. (2020) in the context of domain adaptation.\nEnsemble diversity. Some limitations of ensemble-based OOD detection approaches have been highlighted in recent years. First, neural networks tend to make overconfident predictions even on test samples far from the training distribution (Hein et al., 2019). Moreover, the stochasticity of conventional NN training approaches (e.g. random initialization, SGD) is not sufficient to obtain ensembles that are diverse enough to give good uncertainty estimates on OOD samples (Maennel & Țifrea, 2020). Some recent works address this problem by adding explicit regularizers that incentivize model diversity, either in a transductive setting, like MCD (Yu & Aizawa, 2019), or inductively (Bahng et al., 2020), but do not manage to detect OOD samples well in hard scenarios.\nBayesian prediction. One of the important appeals of the Bayesian framework is that it directly provides uncertainty estimates together with the predictions, in the form of a posterior distribution. Approaches like MC-Dropout (Gal & Ghahramani, 2016) or Deep Prior Networks (Malinin & Gales, 2018) have been proposed in the context of OOD detection, but the uncertainty estimates they provide are often inaccurate on OOD samples (Ovadia et al., 2019). Even though Bayesian Neural Networks (Neal, 1996) have seen some progress in recent years (Graves, 2011; Blundell et al., 2015), sampling efficiently from the posterior over parameters remains an important open problem." }, { "heading": "6 CONCLUSIONS", "text": "Reliable OOD detection is essential in order for classification systems to be deployed in safety-critical environments. We present a method that achieves state-of-the-art performance in a transductive OOD detection setting, and which, like other approaches, can ultimately be used to abstain on OOD samples. As future work, we propose a more thorough investigation of the influence of the labeling scheme of the test set on the sample complexity of the method, as well as an analysis of the trade-off governed by the complexity of the model class of the classifiers." }, { "heading": "A EXPERIMENT DETAILS", "text": "A.1 BASELINES\nWe instantiate all baselines with the hyperparameters suggested by the authors for the respective settings (e.g. different hyperparameters for CIFAR10 or ImageNet). For all methods, we use pretrained models provided by the authors when available, and we pre-train our own models when that is not the case. When doing our own pre-training, we always use the parameters described in the original paper. The code published for the Mahalanobis method performs a hyperparameter search automatically for each of the settings on which we ran it.\n• k-Nearest Neighbors: We take k = 8.
For each test sample, we take the average distance to the nearest neighbors in the input (pixel) space, and we use this as the test statistic.\n• Vanilla Ensembles (Lakshminarayanan et al., 2017): We train an ensemble on the training set according to the true labels. For a test sample, we average the models’ probabilities and use the entropy of the resulting distribution as the test statistic. We use ensembles of 5 models, with the same architecture and hyperparameters as the ones used for RETO.\n• Outlier Exposure (Hendrycks et al., 2019): It makes the model’s softmax predictions close to the uniform distribution on the known outliers, while maintaining a good classification performance on the training distribution. We use the WideResNet (Zagoruyko & Komodakis, 2016) for the RGB data sets. For fine-tuning, we use their recommended settings of 10 epochs at learning rate 0.001. For training from scratch, we train for 100 epochs with an initial learning rate of 0.1. When the training dataset is either CIFAR or ImageNet, we use the default WRN parameters of the authors’ code, namely 40 layers, widen-factor 2, droprate 0.3. When the training dataset is SVHN, we use the authors’ recommended parameters of 16 layers, widen-factor 4 and droprate 0.4. All settings use the cosine annealing learning rate scheduler provided with the authors’ code, without any modifications.\n• Deep Prior Networks (DPN) (Malinin & Gales, 2018): A Bayesian method that trains a neural network (Prior Network) to parametrize a Dirichlet distribution over the class probabilities. We train a WRN-28-10 for 100 epochs using SGD with momentum 0.9, with an initial learning rate of 0.01, which is decayed by 0.2 at epochs 50, 70, and 90. For MNIST, we use EMNIST/Letters as OOD for tuning. For all other settings, we use TinyImages as OOD for tuning.\n• Mahalanobis (Lee et al., 2018): It pretrains models on the training data. For a data point, it uses the intermediate representations of each layer as “extracted features”. It then performs binary classification using logistic regression on these extracted features. In the original setting, the classification is done on “training” ID vs “training” OOD samples (which are from the same distribution as the test OOD samples). Furthermore, hyperparameter tuning for the optimal amount of noise is performed on “validation” ID and OOD data. We use the WRN-28-10 architecture, pretrained for 200 epochs. The initial learning rate is 0.1, which is decayed at epochs 60, 120, and 160 by 0.2. We use SGD with momentum 0.9, and the standard weight decay of $5 \times 10^{-4}$.\n• Mahalanobis-Transductive: The methodology proposed by Lee et al. (2018) is very different from the other settings, where we do not have access to samples which are known to be OOD and from the same distribution as the test OOD data. Therefore, we propose a transductive alternative: early-stopped logistic regression is used to distinguish between the training set and the test set (instead of ID vs OOD samples). The early stopping iteration is chosen to minimize the classification errors on a validation set that contains only ID data (recall that we do not assume to know which are the OOD samples).\n• Maximum Classifier Discrepancy (MCD) (Yu & Aizawa, 2019): It is a transductive method that trains two classifiers at the same time and makes them disagree on the test data, while maintaining good classification performance. We use the WRN-28-10 architecture as suggested in the paper.
We did not change the default parameters which came with the authors’ code, so weight decay is $10^{-4}$, and the optimizer is SGD with momentum 0.9. When available (for CIFAR10 and CIFAR100), we use the pretrained models provided by the authors. For the other training datasets, we use their methodology to generate pretrained models: we train a WRN-28-10 for 200 epochs. The learning rate starts at 0.1 and drops by a factor of 10 at 50% and 75% of the training progress, respectively.\nA.2 TRAINING CONFIGURATION FOR REGULARIZED ENSEMBLES\nWhen training RETO with a certain neural network architecture, we use hyperparameters that give the best test accuracy when training a model on the ID training set. We do not perform further hyperparameter tuning for the different OOD data sets on which we evaluate our approach.\nFor MNIST, we train a 3-layer MLP with ReLU activations. Each intermediate layer has 100 neurons. The model is optimized using Adam, with a learning rate of 0.001, for 10 epochs.\nFor CIFAR and ImageNet, we train a WideResNet WRN-28-10. The model is trained using SGD with momentum 0.9; the learning rate starts at 0.1 and is multiplied by 0.2 at epochs 50, 70 and 90. The weights have an $\ell_2$ regularization coefficient of $5 \times 10^{-4}$. We use a batch size of 128 for all scenarios. The hyperparameters have been selected to achieve high accuracy on the CIFAR100 classification problem, thus obtaining an ensemble validation accuracy of 80.8%, where each individual model has between 77% and 78% accuracy. After that we used the same hyperparameters for all settings. For the fine-tuning scenarios, we trained for 10 epochs with a constant learning rate of 0.001 for all scenarios.\nFor the medical datasets, we train a Densenet-121 as the authors do in the original paper. For training from scratch, we do not use random weight initializations, but instead we start with the ImageNet weights provided with TensorFlow. The training configuration is exactly the same as for WRN-28-10, except that we use a batch size of 32 because of GPU memory restrictions, and for fine-tuning we use a constant learning rate of $10^{-5}$.\nB ID AND OOD DATA SETS\nB.1 DATA SETS\nFor evaluation, we use the following image data sets:\n• MNIST (Lecun et al., 1998) and Fashion MNIST (Xiao et al., 2017).\n• SVHN (Netzer et al., 2011).\n• CIFAR10, CIFAR100 (Krizhevsky, 2009), and their corrupted variants (Hendrycks & Dietterich, 2019).\n• ImageNet (Deng et al., 2009) and ObjectNet (Barbu et al., 2019), both resized to 32x32.\nB.2 SAMPLES FOR THE SETTINGS WITH NOVEL CLASSES\nB.3 SAMPLES FROM OBJECTNET\nB.4 SAMPLES FROM CIFAR10-C" }, { "heading": "C OOD DETECTION HARDNESS", "text": "Out-of-distribution detection benchmarks can be assessed based on their difficulty. In what follows, we propose a simple way to evaluate the hardness of an OOD detection setting and provide empirical evidence that shows that the scenarios we looked at are indeed more complicated than some of the common OOD detection benchmarks.\nConsider the task of distinguishing between samples that come from two distributions $P, Q$ with disjoint supports $\mathrm{supp}\,P$, $\mathrm{supp}\,Q$. Let us assign labels according to the distribution the points are coming from: $\mathcal{D} = \{(x_i, y_i) : y_i = -1 \text{ if } x_i \in \mathrm{supp}\,P,\ y_i = 1 \text{ if } x_i \in \mathrm{supp}\,Q\}$. We solve the classification problem by searching for a minimizer of the empirical risk inside a function class $\mathcal{F}$.
The intuition for our measure of hardness is as follows: if it is difficult for a binary classifier to separate samples from $P$ and $Q$, then it will also be difficult to detect test samples from $Q$ as OOD, when only a training set drawn from $P$ is available.\nTo quantify the difficulty of the binary classification problem, we use the area under the training curve, i.e. the curve of the training loss as a function of iterations of the optimization algorithm. The larger the area, the more iterations it takes to converge, which in turn indicates that the classification problem is difficult.\nFormally, we define the hardness of the OOD detection task with respect to a function class $\mathcal{F}$ as: $H_{\mathrm{OODD}}(\mathcal{D};\mathcal{F}) := \int_0^1 \mathcal{L}(f_t)\,dt$,\nwhere $f_t$ is the model after a fraction $t$ of the training epochs are finished.\nFor our task, we start with a VGG model and train it for 30 epochs. In order to approximate the integral, we take the training loss for the whole data set every 5 epochs, and we average these losses.\nNotice that the settings with novel classes and the one with hard covariate shift are generally more difficult than the common benchmarks used in the OOD detection literature.\nApart from the three categories of settings that we introduced in Section 4, we also present numbers for CIFAR10-C and CIFAR100-C with lower-severity corruptions, i.e. severity 2. These scenarios are usually easier to solve with domain adaptation techniques. Nevertheless, in Appendix G we show that RETO performs well on OOD detection on these settings as well. Even though performing OOD detection is redundant here, the good results of our method go to show that it can still work well, even in difficult situations." }, { "heading": "D RESOURCE REQUIREMENTS FOR RETO", "text": "Computational cost Our method can work with training each model in the ensemble from scratch (i.e. random initialization), but it also performs well when fine-tuning a network from pretrained weights. This significantly reduces the inference time for each batch of test data. For the settings we considered, on average, as few as three epochs of fine-tuning are enough to achieve the best performance: the training is stopped early, on average, after three epochs, according to the condition on the validation loss.\nDependence on ensemble size Figure 10 shows that the good performance of RETO does not rely on a large number of models in the ensemble. Unlike vanilla ensembles, our method achieves a high AUROC with as few as 2 models. This is because, in vanilla ensembles, the networks are diverse ‘by chance’, due to the stochasticity of the training procedure. On the other hand, in RETO, our training method actively encourages the ensembles to be diverse, and hence two models will already disagree on OOD data almost as much as five." }, { "heading": "E GENERALIZATION TO HOLD-OUT TEST SET", "text": "In this section we present experiments which show that after training/fine-tuning on a test set with ID and OOD samples, one can also use our method to detect OOD samples from the same distribution that have not been seen during training. Concretely, we use a test set of 5000 ID and 5000 OOD samples to train RETO ensembles (we reiterate that we do not have access to which samples are indeed OOD in the test set). For evaluation, we compute the metrics on a separate data set, with 5000 ID and 5000 OOD samples, where the OOD samples come from the same distribution as the samples seen during training.
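A minimal sketch of this hardness estimate is given below, assuming a standard training loop; `train_one_epoch` and `full_dataset_loss` are hypothetical helpers (one epoch of empirical risk minimization on the P-vs-Q labels, and the loss evaluated over the whole data set, respectively), not functions from the paper.

    def ood_hardness(model, train_one_epoch, full_dataset_loss, epochs=30, every=5):
        # Approximate H_OODD = integral of the training-loss curve over training,
        # by averaging the full-data-set loss every `every` epochs (5 here).
        losses = []
        for epoch in range(1, epochs + 1):
            train_one_epoch(model)                   # one epoch on the P-vs-Q task
            if epoch % every == 0:
                losses.append(full_dataset_loss(model))
        return sum(losses) / len(losses)             # larger value = harder setting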
As revealed in Table 3, the performance does not change substantially when evaluating on the hold-out test set. For the corruption data sets, we report for each metric the average taken over all corruptions (A), and the value for the worst-case setting (W)." }, { "heading": "F REGULARIZED TRAINING/TEST DISCRIMINATOR FOR TRANSDUCTIVE OOD DETECTION", "text": "Scott & Blanchard (2008) suggest that training a binary classifier with bounded false positive rate to distinguish between the training set S and the test set T can successfully separate the OOD samples from the ID samples in the test set. However, this approach does not fall in the category of predictive uncertainty-based OOD detection methods, since it does not also provide a good classifier for the labeled training set.\nWe present in what follows a set of experiments run to check if a similar technique works for the data sets we considered. Early stopping with respect to a validation set that contains only ID samples is enough to obtain good OOD detection performance.\nFor the corruption data sets, the table shows the average of the AUROC taken over all corruptions (A), and the value for the worst-case setting (W)." }, { "heading": "G MORE EXPERIMENTS", "text": "G.1 EXTENDED RESULTS WITH RESNET\nG.2 RESULTS WITH A SMALLER TEST SET\nG.3 RESULTS WITH VGG\nG.4 RESULTS ON MNIST AND FASHIONMNIST\nFor FashionMNIST we chose this particular split (i.e. classes 0,2,3,7,8 vs classes 1,4,5,6,9) because the two partitions are more similar to each other. This makes OOD detection more difficult than the 0-4 vs 5-9 split.\nG.5 MORE RESULTS FOR OUTLIER EXPOSURE\nThe Outlier Exposure method needs access to a set of OOD samples during training. The numbers we report in the rest of the paper for Outlier Exposure are obtained by using the TinyImages data set as the OOD samples that are seen during training. In this section we explore the use of an OOD-train data set that is more similar to the OOD data observed at test time. This is a much easier setting for the Outlier Exposure method: the closer OOD-train is to OOD-test, the easier it will be for the model tuned on OOD-train to detect the test OOD samples.\nIn the table below we focus only on the settings with corruptions. For each corruption type, we use the lower-severity corruption as OOD-train and evaluate on the higher-severity data, and vice versa. We report for each metric the average taken over all corruptions (A), and the value for the worst-case setting (W)." }, { "heading": "H EXPERIMENTS ON CIFAR10V2", "text": "Here we present our results on distinguishing between CIFAR10 (Krizhevsky, 2009) and CIFAR10v2 (Recht et al., 2018), a dataset meant to be drawn from the same distribution as CIFAR10 (generated from the Tiny Images collection of Torralba et al. (2008)). Recht et al. (2018) and Recht et al. (2019) argue that classifiers originally trained on CIFAR10 have a statistically significant drop in accuracy when evaluated on CIFAR10v2. Furthermore, Recht et al. (2019) argue that CIFAR10 and CIFAR10v2 come from the same distribution by training a binary classifier to distinguish between them, and observing that the accuracy obtained is very close to random (50.1% for the randomly initialized models, and 52.9% for models with pre-trained weights).
First, our own binary classifier trained on CIFAR10 vs CIFAR10v2 obtains a test accuracy of 67%, without any hyperparameter tuning (the model is a ResNet20 trained for 200 epochs using SGD with momentum 0.9. The learning rate is decayed by 0.2 at epochs 90, 140, 160 and 180. We use 1600 examples from each dataset for training, and we validate using 400 examples from each dataset).\nFurthermore, our OOD experiments (as shown in Table 10) show that most baselines are able to distinguish between the two datasets, with RETO achieving the highest performance. The methods which require OOD data for tuning (Outlier Exposure and DPN) use CIFAR100 for tuning." }, { "heading": "I DEPENDENCE ON THE TEST SET CONFIGURATION", "text": "" }, { "heading": "J EFFECT OF LEARNING RATE AND BATCH SIZE", "text": "" }, { "heading": "K MEDICAL OOD DETECTION BENCHMARK", "text": "The medical OOD detection benchmark is organized as follows. There are four training (ID) data sets, from three different domains: two data sets with chest X-rays, one with fundus imaging and one with histology images. For each ID data set, the authors consider three different OOD scenarios:\n1. Use case 1: The OOD data set contains images from a completely different domain, similar to our category of easy OOD detection settings.\n2. Use case 2: The OOD data set contains images with various corruptions, similar to our category of hard covariate shift settings.\n3. Use case 3: The OOD data set contains images that are not seen during training due to various selection biases, similar to our category of novel class settings.\nThe authors evaluate a number of methods on all these scenarios. The methods can be roughly categorized as follows:\n1. Data-only methods: Fully non-parametric approaches like kNN.\n2. Classifier-only methods: Methods that use a classifier trained on the training set, e.g. ODIN (Liang et al., 2018), Mahalanobis (Lee et al., 2018). RETO falls into this category as well.\n3. Methods with Auxiliary Models: Methods that use an utoencoder or a generative model, like a Variational Autoencoder or a Generative Adversarial Network. Some of these approaches can be expensive to train and difficult to optimize and tune.\nWe stress the fact that for most of these methods the authors use (known) OOD data during training. Oftentimes the OOD samples observed during training come from a distribution that is very similar to the OOD distribution used for evaluation.\nFor exact details regarding the data sets and the methods used for the benchmark, we refer the reader to the paper Cao et al. (2020).\nWe did not evaluate RETO on the histology image data set due to resource limitations; the data set is much larger than the others.\nIn Figures 14, 15, 16 we present AUROC and AUPR (Area under the Precision Recall curve) for RETO for each of the training data sets, and each of the use cases. Figure 13 presents averages over all settings that we considered, for all the baseline methods in the benchmark." }, { "heading": "L TWO-SAMPLE TEST FOR OOD DETECTION", "text": "Distinguishing between ID and OOD samples can be cast as a two-sample hypothesis test, with H0 : x ∈ suppP and H1 : x /∈ suppP , for a sample x. The various OOD detection methods differ in their choice of the test statistic. For approaches that use ensembles of classifiers, the test statistic should reflect the belief that the models have similar outputs on ID samples, and disagree on OOD samples. For example, Lakshminarayanan et al. 
(2017) propose averaging the softmax outputs of all the models in the ensemble and then taking the maximum or the entropy of the resulting probability vector as the test statistic. For a $K$-model ensemble and an input $x$ this can be written as follows:\n$T_{\text{max-p}}(x) := \max_{i \in [C]} \frac{1}{K} \sum_{k=1}^{K} (f_k(x))_i$, with $f_k(x) \in \mathbb{R}^C$ the softmax output of the $k$-th model.\nAveraging the softmax vectors loses some information about the model predictions, because different initial probability vectors can map to the same averaged vector. In our approach, models are more uncertain on ID samples than on OOD samples, which can make the averaged softmax vector fall at the same location for an ID and an OOD sample. This makes it impossible to distinguish between the two, as illustrated in Figure 17.\nFor neural network ensembles, following a standard training procedure of minimizing the cross-entropy loss leads to models that make confident predictions on both ID and OOD samples, as shown by Hein et al. (2019); Maennel & Țifrea (2020). Consequently, the information lost through averaging is not causing any issues: on ID samples, the models will tend to give the same prediction, while on OOD samples the models tend to disagree, giving different predictions with high confidence.\nHowever, in our case, because of early stopping, the training process is halted at different stages for test ID and test OOD samples, as indicated in Figure 5.\nRecent papers like Shwartz-Ziv & Tishby (2017); Chen et al. (2019) analyze the dynamics of optimizing the cross-entropy loss with SGD. They suggest that there might exist two stages: one in which a good decision boundary is found, and another in which the margin is increased between the representations of inputs from different classes. It is this second stage that also leads to overconfident predictions on both ID and OOD samples. Thus, early stopping may cause the models to be more uncertain on test ID samples than on test OOD samples. This is indeed confirmed in Figure 18.\nTo avoid the problem of information loss described previously, we compute the pairwise total variation distances between the softmax outputs of the models in the ensemble, and we take the average of these distances as our test statistic:\n$T_{\text{avg-TV}}(x) := \frac{2}{K(K-1)} \sum_{1 \le i < j \le K} d_{TV}\left(f_i(x), f_j(x)\right)$" } ]
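For reference, both statistics are straightforward to compute from the stacked softmax outputs of the ensemble. The following is a minimal PyTorch sketch; variable names and shapes are our own conventions.

    import torch

    def t_max_p(probs):                    # probs: (K, B, C) stacked softmax outputs
        # Averaged-max statistic: max class probability of the mean prediction.
        return probs.mean(dim=0).max(dim=-1).values          # shape (B,)

    def t_avg_tv(probs):                   # average pairwise total variation
        K = probs.shape[0]
        tv_sum = 0.0
        for i in range(K):
            for j in range(i + 1, K):
                # d_TV(p, q) = 0.5 * sum_c |p_c - q_c| for discrete distributions
                tv_sum = tv_sum + 0.5 * (probs[i] - probs[j]).abs().sum(dim=-1)
        return 2.0 / (K * (K - 1)) * tv_sum  # (B,); larger = more disagreement = OOD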
2020
null
SP:d3a089d045255fe67d84efc540969b6ce8bb4448
[ "This paper presents a new contrastive audio-visual learning method. Like previous work, they use self-supervision to learn a video feature set by training a network to associate audio and visual \"views\" taken from the same video. Their main contribution is to jointly learn from both \"local\" and \"global\" information. They simultaneously optimize two contrastive objectives. First, there is a global objective, which computes a feature set using a low framerate video, pools over time, and obtains negatives from other different videos. Second, there is a local contrastive objective that uses a higher framerate video, pools over space but not time, and gets negatives from other timesteps of the video. They optimize both losses jointly using a spatially-aware pooling method that provides information from the global pathway to the local pathway. They compute attention by taking dot products between the visual and audio features, and using this attention to pool local visual features (instead of a global pooling)." ]
Contrastive self-supervised learning has delivered impressive results in many audio-visual recognition tasks. However, existing approaches optimize for learning either global representations useful for high-level understanding tasks such as classification, or local representations useful for tasks such as audio-visual source localization and separation. While they produce satisfactory results in their intended downstream scenarios, they often fail to generalize to tasks that they were not originally designed for. In this work, we propose a versatile self-supervised approach to learn audio-visual representations that can generalize to both tasks that require global semantic information (e.g., classification) and tasks that require fine-grained spatio-temporal information (e.g., localization). We achieve this by optimizing two cross-modal contrastive objectives that together encourage our model to learn discriminative global-local visual information given audio signals. To show that our approach learns generalizable video representations, we evaluate it on various downstream scenarios including action/sound classification, lip reading, deepfake detection, and sound source localization.
[]
[ { "authors": [ "Darius Afchar", "Vincent Nozick", "Junichi Yamagishi", "Isao Echizen" ], "title": "Mesonet: a compact facial video forgery detection network", "venue": "IEEE International Workshop on Information Forensics and Security (WIFS),", "year": 2018 }, { "authors": [ "T. Afouras", "J.S. Chung", "A. Senior", "O. Vinyals", "A. Zisserman" ], "title": "Deep audio-visual speech recognition", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "Jean-Baptiste Alayrac", "Adrià Recasens", "Rosalia Schneider", "Relja Arandjelović", "Jason Ramapuram", "Jeffrey De Fauw", "Lucas Smaira", "Sander Dieleman", "Andrew Zisserman" ], "title": "Self-supervised multimodal versatile networks", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Humam Alwassel", "Dhruv Mahajan", "Lorenzo Torresani", "Bernard Ghanem", "Du Tran" ], "title": "Selfsupervised learning by cross-modal audio-video clustering", "venue": "arXiv preprint arXiv:1911.12667,", "year": 2019 }, { "authors": [ "Yuki M Asano", "Mandela Patrick", "Christian Rupprecht", "Andrea Vedaldi" ], "title": "Labelling unlabelled videos from scratch with multi-modal self-supervision", "venue": "arXiv preprint arXiv:2006.13662,", "year": 2020 }, { "authors": [ "Philip Bachman", "R Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mathilde Caron", "Ishan Misra", "Julien Mairal", "Priya Goyal", "Piotr Bojanowski", "Armand Joulin" ], "title": "Unsupervised learning of visual features by contrasting cluster assignments", "venue": "arXiv preprint arXiv:2006.09882,", "year": 2020 }, { "authors": [ "Joao Carreira", "Eric Noland", "Chloe Hillier", "Andrew Zisserman" ], "title": "A short note on the kinetics-700 human action dataset", "venue": null, "year": 1907 }, { "authors": [ "Donatella Castelli", "Pasquale Pagano" ], "title": "Opendlib: A digital library service system", "venue": "Research and Advanced Technology for Digital Libraries,", "year": 2002 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Komal Chugh", "Parul Gupta", "Abhinav Dhall", "Ramanathan Subramanian" ], "title": "Not made for each other-audio-visual dissonance-based deepfake detection and localization", "venue": "arXiv preprint arXiv:2005.14405,", "year": 2020 }, { "authors": [ "J.S. Chung", "A. Zisserman" ], "title": "Lip reading in the wild", "venue": "In Asian Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Joon Son Chung", "Andrew Senior", "Oriol Vinyals", "Andrew Zisserman" ], "title": "Lip reading sentences in the wild", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Soo-Whan Chung", "Joon Son Chung", "Hong-Goo Kang" ], "title": "Perfect match: Improved cross-modal embeddings for audio-visual synchronisation", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Brian Dolhansky", "Russ Howes", "Ben Pflaum", "Nicole Baram", "Cristian Canton Ferrer" ], "title": "The deepfake detection challenge (dfdc) preview dataset", "venue": null, "year": 1910 }, { "authors": [ "A. 
Ephrat", "I. Mosseri", "O. Lang", "T. Dekel", "K Wilson", "A. Hassidim", "W.T. Freeman", "M. Rubinstein" ], "title": "Looking to listen at the cocktail party: A speaker-independent audio-visual model for speech separation", "venue": "arXiv preprint arXiv:1804.03619,", "year": 2018 }, { "authors": [ "Tengda Han", "Weidi Xie", "Andrew Zisserman" ], "title": "Video representation learning by dense predictive coding", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Tengda Han", "Weidi Xie", "Andrew Zisserman" ], "title": "Video representation learning by dense predictive coding", "venue": "In Proceedings of the IEEE International Conference on Computer Vision Workshops,", "year": 2019 }, { "authors": [ "Kensho Hara", "Hirokatsu Kataoka", "Yutaka Satoh" ], "title": "Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet", "venue": null, "year": 2018 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "R. Devon Hjelm", "Philip Bachman" ], "title": "Representation learning with video deep infomax", "venue": "arXiv preprint arXiv:2007.13278,", "year": 2020 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "arXiv preprint arXiv:1808.06670,", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Longlong Jing", "Yingli Tian" ], "title": "Self-supervised spatiotemporal feature learning by video geometric transformations", "venue": "arXiv preprint arXiv:1811.11387,", "year": 2018 }, { "authors": [ "Dahun Kim", "Donghyeon Cho", "In So Kweon" ], "title": "Self-supervised video representation learning with space-time cubic puzzles", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Bruno Korbar", "Du Tran", "Lorenzo Torresani" ], "title": "Cooperative learning of audio and video models from self-supervised synchronization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hildegard Kuehne", "Hueihan Jhuang", "Estı́baliz Garrote", "Tomaso Poggio", "Thomas Serre" ], "title": "Hmdb: a large video database for human motion recognition", "venue": "In ICCV,", "year": 2011 }, { "authors": [ "Yuezun Li", "Siwei Lyu" ], "title": "Exposing deepfake videos by detecting face warping artifacts", "venue": "arXiv preprint arXiv:1811.00656,", "year": 2018 }, { "authors": [ "F. Matern", "C. Riess", "M. Stamminger" ], "title": "Exploiting visual artifacts to expose deepfakes and face manipulations", "venue": "IEEE Winter Applications of Computer Vision Workshops (WACVW),", "year": 2019 }, { "authors": [ "A. Miech", "J. Alayrac", "L. Smaira", "I. Laptev", "J. Sivic", "A. 
Zisserman" ], "title": "End-to-end learning of visual representations from uncurated instructional videos", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Pedro Morgado", "Nuno Vasconcelos", "Ishan Misra" ], "title": "Audio-visual instance discrimination with cross-modal agreement", "venue": "arXiv preprint arXiv:2004.12943,", "year": 2020 }, { "authors": [ "Huy H Nguyen", "Fuming Fang", "Junichi Yamagishi", "Isao Echizen" ], "title": "Multi-task learning for detecting and segmenting manipulated facial images and videos", "venue": "arXiv preprint arXiv:1906.06876,", "year": 2019 }, { "authors": [ "Huy H Nguyen", "Junichi Yamagishi", "Isao Echizen" ], "title": "Capsule-forensics: Using capsule networks to detect forged images and videos", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Mandela Patrick", "Yuki M Asano", "Ruth Fong", "João F Henriques", "Geoffrey Zweig", "Andrea Vedaldi" ], "title": "Multi-modal self-supervision from generalized data transformations", "venue": "arXiv preprint arXiv:2003.04298,", "year": 2020 }, { "authors": [ "Karol J Piczak" ], "title": "Environmental sound classification with convolutional neural networks", "venue": "In International Workshop on Machine Learning for Signal Processing (MLSP),", "year": 2015 }, { "authors": [ "Karol J Piczak" ], "title": "ESC: Dataset for environmental sound classification", "venue": "In Proceedings of the 23rd ACM international conference on Multimedia,", "year": 2015 }, { "authors": [ "Andreas Rossler", "Davide Cozzolino", "Luisa Verdoliva", "Christian Riess", "Justus Thies", "Matthias Nießner" ], "title": "Faceforensics++: Learning to detect manipulated facial images", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Hardik B Sailor", "Dharmesh M Agrawal", "Hemant A Patil" ], "title": "Unsupervised filterbank learning using convolutional restricted boltzmann machine for environmental sound classification", "venue": null, "year": 2017 }, { "authors": [ "Khurram Soomro", "Amir Roshan Zamir", "Mubarak Shah" ], "title": "Ucf101: A dataset of 101 human actions classes from videos in the wild", "venue": "arXiv preprint arXiv:1212.0402,", "year": 2012 }, { "authors": [ "Themos Stafylakis", "Georgios Tzimiropoulos" ], "title": "Combining residual networks with lstms for lipreading", "venue": "arXiv preprint arXiv:1703.04105,", "year": 2017 }, { "authors": [ "Chen Sun", "Fabien Baradel", "Kevin Murphy", "Cordelia Schmid" ], "title": "Contrastive bidirectional transformer for temporal representation learning", "venue": "arXiv preprint arXiv:1906.05743,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "arXiv preprint arXiv:1906.05849,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Chen Sun", "Ben Poole", "Dilip Krishnan", "Cordelia Schmid", "Phillip Isola" ], "title": "What makes for good views for contrastive learning", "venue": "arXiv preprint arXiv:2005.10243,", "year": 2020 }, { "authors": [ "Jiangliu Wang", "Jianbo Jiao", "Linchao Bao", "Shengfeng He", "Yunhui Liu", "Wei Liu" ], "title": "Self-supervised spatio-temporal 
representation learning for videos by predicting motion and appearance statistics", "venue": null, "year": 2019 }, { "authors": [ "Xinshuo Weng", "Kris Kitani" ], "title": "Learning spatio-temporal features with two-stream deep 3d cnns for lipreading", "venue": "arXiv preprint arXiv:1905.02540,", "year": 2019 }, { "authors": [ "Fanyi Xiao", "Yong Jae Lee", "Kristen Grauman", "Jitendra Malik", "Christoph Feichtenhofer" ], "title": "Audiovisual slowfast networks for video recognition", "venue": "arXiv preprint arXiv:2001.08740,", "year": 2020 }, { "authors": [ "Jingyun Xiao", "Shuang Yang", "Yuanhang Zhang", "Shiguang Shan", "Xilin Chen" ], "title": "Deformation flow based two-stream network for lip reading", "venue": "arXiv preprint arXiv:2003.05709,", "year": 2020 }, { "authors": [ "Dejing Xu", "Jun Xiao", "Zhou Zhao", "Jian Shao", "Di Xie", "Yueting Zhuang" ], "title": "Self-supervised spatiotemporal learning via video clip order prediction", "venue": null, "year": 2019 }, { "authors": [ "Xin Yang", "Yuezun Li", "Siwei Lyu" ], "title": "Exposing deep fakes using inconsistent head poses", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Xingxuan Zhang", "Feng Cheng", "Shilin Wang" ], "title": "Spatio-temporal fusion based convolutional sequence learning for lip reading", "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Peng Zhou", "Xintong Han", "Vlad I Morariu", "Larry S Davis" ], "title": "Two-stream neural networks for tampered face detection", "venue": "IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW),", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Self-supervised learning aims to learn representations of data that generalize to a large variety of downstream tasks. Recently, contrastive self-supervised learning (CSL) has achieved impressive results on several computer vision tasks (Oord et al., 2018; Hjelm et al., 2018; He et al., 2020; Chen et al., 2020). In CSL, the choice of “views” determines the types of information that the representation captures (Bachman et al., 2019), as the framework learns representations that focus on the shared information between views. It has been demonstrated that the optimal choice of views depends critically on the downstream task (Tian et al., 2020). Therefore, existing works mainly focus on finding different views tailored for the intended downstream tasks. For example, when tailoring views for action classification, Hjelm & Bachman (2020) extends DIM (Hjelm et al., 2018) to the spatio-temporal setting by assuming that global and local information useful for action classification (i.e, global semantics) should be invariant across time and space within a given video. When dealing with multimodal data, several approaches utilize audio-visual correspondence from videos (Morgado et al., 2020). Such a CSL approach is based on an assumption that information needed for audio/video classification should be shared between the two modalities.\nAlthough they achieve impressive results in their intended downstream tasks, existing approaches often fail to generalize to tasks that they were not originally designed for. For example, in lip reading (Chung & Zisserman, 2016), the desired information is the fine-grained spatio-temporal representation around the mouth. However, if we directly apply existing CSL approaches, the shared information across views is that a there is a face, while the useful information, the lip movements, will be suppressed as they are changing across views from the sample clip.\nMotivated by this, we propose a versatile CSL approach to learn representations that can generalize to both scenarios that require global representations (e.g., classification) and scenarios that require local representations (e.g., localization) (see Fig. 1). Our approach, which we call global-local cross-modal (GLCM) contrastive learning, has four key properties that we assume to be important for our learning objective: 1) observations from the same time span of a video should reflect the same content regardless of modalities; 2) the same observations captured at different time scales can reflect both global and local information; 3) when learning on a local temporal scale, the contrasting\nviews should only share the time-varying information (e.g. only the moving lip) while ignoring globally invariant information; 4) multi-scale (global-local) observations can be trained jointly in a collaborative way so that representations learned at either scale can be reused.\nsidering only the audio-visual features that lie in the same time window as positive pairs; the others are all negative pairs. Finally, we utilize information captured at the global scale (e.g. localizing the source of a sound) to assist efficient learning at the local scale, thus capturing the fourth property.\nWe show that GLCM pretraining learns representations with global and fine-grained spatio-temporal information from audio-visual signals. The learned representations perform effectively on a variety of downstream tasks. 
We evaluate our proposed approach on tasks that needs local spatio-temporal information (i.e lip reading, deep-fake detection, and sound-source localization) and also discriminative tasks that needs global information (i.e. action classification and audio-event classification)." }, { "heading": "2 RELATED WORK", "text": "Contrastive self-supervised learning. CSL has contributed to strong performance on many tasks and in cases produced comparable results to supervised learning (Chen et al., 2020; Caron et al., 2020). Contrastive learning leverage multiple views of the same data (Hjelm & Bachman, 2020; Oord et al., 2018), e.g., multiple perspectives within the same modality (e.g., augmentations of the same image, different frames of a video, etc.) (He et al., 2020; Hjelm & Bachman, 2020; Han et al., 2019a) or perspectives from different modalities (e.g., depth and RGB images, visual and textual signals) (Tian et al., 2019; Sun et al., 2019; Miech et al., 2020; Alayrac et al., 2020). Chen et al. (2020) and Hjelm et al. (2018) show that leveraging local information to perform contrastive learning further improves the performance on image classification tasks. DIM (Hjelm et al., 2018) has been extended to multi-scale (Bachman et al., 2019) and video data Hjelm & Bachman (2020). However, evaluation is still focused on “discriminative” tasks (image classification and video event classification), while there is little evidence that these models will adapt well to the local information.\nAudio-visual representation learning. Several approaches have been proposed to leverage the natural correspondence between audio and visual signals to perform CSL (Asano et al., 2020; Korbar et al., 2018; Alwassel et al., 2019; Morgado et al., 2020; Patrick et al., 2020; Chung et al., 2019). Most existing approaches aim to capture high-level semantic information from observations. It has been empirically demonstrated that such learned information is very effective for “discrimination tasks” (classification). However, in tasks that needs local information the learned representations may not perform well. Xiao et al. (2020a) design their approach by utilizing different temporal scales of the audio and visual data, which encourages the model to capture fine-grained temporal information and hence improves the performance. However, the evaluation was limited to classification tasks. In contrast with previous work, we demonstrate that our approach effectively learns global-local audio-visual representations by evaluating on a variety of downstream tasks.\n3 APPROACH\nWe propose using the audio and visual channels as cross-modal views of video data. As we aim to learn both local and global temporal information, we utilize the same visual sequence processed at different sampling rates to reflect the same observation at different temporal scales. Given that we want each signal to capture complementary views of the same data, we use different encoders to extract the representations from the audio sequence (Ea), subsampled visual sequence (Egv ) and full sampling rate visual sequence (Elv). The question is, then, how to design a contrastive loss to learn representations from these different views. We achieve this goal by jointly training the model using two contrastive losses: global and local. As shown in Fig. 
As shown in Fig. 2, the global loss is computed by contrasting audio signals with the subsampled visual sequence (Sec. 3.1), while the local loss is computed by contrasting audio signals with the visual sequence at the full sampling rate (Sec. 3.2). To jointly train the global and local pathways, we propose a spatially-aware attention pooling mechanism to effectively reuse in the local pathway the information that was captured by the global pathway (Sec. 3.3)." }, { "heading": "3.1 GLOBAL CONTRASTIVE OBJECTIVE", "text": "We design the global contrastive objective to capture slowly changing information with high audio-visual correlation. We use video sequences captured at low sampling rates, which will inevitably lack local temporal information. $E_a$ encodes an audio sequence into an audio embedding $z_a \in \mathbb{R}^{T \times F}$, where $F$ is the frequency dimension and $T$ is the sequence length. After global temporal pooling, it becomes $z_a \in \mathbb{R}^{1 \times F}$. Similarly, we perform global temporal pooling on features encoded by the global visual encoder $E_v^g$, which produces the global visual embedding $z_v^g \in \mathbb{R}^{1 \times H \times W \times C}$. Note that, for the visual features, we perform global pooling only along the temporal dimension while keeping the spatial dimension intact. The reason is that when learning on a global temporal scale, the model has capacity to capture more spatial information. To compute the global contrastive loss, we consider the audio features $z_a$ and the visual features $z_v^g$ that come from the same video sample as positive pairs, while features coming from different video samples are negative pairs. In order to encourage the model to also capture spatial information, we adopt MIL-NCE (Miech et al., 2020) to compute the loss. Specifically, we consider all $H \times W$ spatial grid features in $z_v^g$ as the instances; therefore, instead of just taking a single audio-visual positive pair $z_a \leftrightarrow z_v^g$, the positive pair becomes multiple visual instances, i.e. $z_a \leftrightarrow \{z_v^g[i]\}_{i=1}^{H \times W}$. The loss is then defined as:\n$$\mathcal{L}_g = -\log\left(\frac{\sum_{z_v^g \in P} \exp(z_a^T z_v^g)}{\sum_{z_v^g \in P} \exp(z_a^T z_v^g) + \sum_{z' \in N} \exp(z_a'^{T} z_v'^{g})}\right) \quad (1)$$\nwhere $N$ is a set of negative audio-visual pairs and $P$ is the set of spatial grid features in $z_v^g$." },
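A small PyTorch sketch may make the bag construction in Eq. (1) concrete. This is a minimal illustration under our own tensor-shape conventions, covering only the audio-to-visual direction of the loss; negatives are the spatial cells of the other clips in the batch, and no temperature is used, matching the raw dot products in Eq. (1).

    import torch

    def global_milnce_loss(za, zv):
        # za: (B, F) pooled audio embeddings; zv: (B, HW, F) per-grid visual features.
        B, HW, F = zv.shape
        # Similarity of each audio embedding with every spatial cell of every clip.
        sim = torch.einsum('bf,nkf->bnk', za, zv)              # (B, B, HW)
        logits = sim.reshape(B, B * HW)                        # flatten candidates
        # Positive bag: the HW cells belonging to the matching clip (n == b).
        pos_mask = torch.zeros(B, B, HW, dtype=torch.bool)
        pos_mask[torch.arange(B), torch.arange(B)] = True
        pos_mask = pos_mask.reshape(B, B * HW)
        pos = torch.logsumexp(logits.masked_fill(~pos_mask, float('-inf')), dim=1)
        all_ = torch.logsumexp(logits, dim=1)
        return (all_ - pos).mean()                             # -log(pos / (pos + neg))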
{ "heading": "3.2 LOCAL CONTRASTIVE OBJECTIVE", "text": "We design the local contrastive objective to capture fine-grained spatio-temporal information that is sensitive to temporal changes while being invariant to different modalities. We thus contrast between local audio features $z_a$ and local spatio-temporal visual features $z_v^l$. Specifically, we consider the temporally local audio and visual features that lie in the same time window to be the positive pairs, $z_a[t] \leftrightarrow z_v^l[t]$, where $z[t]$ represents features in the time window $t$. As shown in Figure 2, the video and audio features shaded in the same color refer to those in the same time window. The features in different time windows (e.g. green and orange blocks) are considered as negative pairs even if they are from the same video sample. As such, the shared information between the modalities is principally how the features vary over time. We obtain the local audio features by using the same audio encoder $E_a$ but without global temporal pooling. The local visual features are obtained by feeding the visual sequence with a high sampling rate into the local visual encoder $E_v^l$, which produces the visual features $z_v^l \in \mathbb{R}^{T \times H \times W \times C}$. We then perform spatial pooling while keeping the temporal scale the same; the visual features become $z_v^l \in \mathbb{R}^{T \times 1 \times 1 \times C}$. As the audio channel in a video generally has a higher sampling rate than the visual channel, a visual feature at a single time step will be mapped to multiple audio feature slices. As shown in Figure 2, at time $t_1$, the visual features $z_v^l[t_1]$ (green block) correspond to multiple audio feature slices (green blocks), where the window size $M = 5$ in Figure 2. Specifically, we use a sliding window of size $M$ to map each set of visual features at a given time step to a window of audio feature slices. The positive pair is then a visual feature and the corresponding window of audio feature slices. Once again we use MIL-NCE (Miech et al., 2020) to compute the contrastive loss. The reasoning for applying MIL-NCE here is different than in the case of the global contrastive loss. In the global contrastive loss, we aim to let the network capture spatial information, while in the local contrastive objective, the goal of using MIL-NCE is to mitigate the missing strict temporal mapping problem. The loss is therefore defined as:\n$$\mathcal{L}_l = -\log\left(\frac{\sum_{z_a \in Q} \exp(z_a^T z_v^l)}{\sum_{z_a \in Q} \exp(z_a^T z_v^l) + \sum_{z' \in N} \exp(z_a'^{T} z_v'^{l})}\right) \quad (2)$$\nwhere $Q$ is the set of audio feature slices in the same time window as $z_v^l$, and $N$ is a set of negative audio-visual pairs." }, { "heading": "3.3 SPATIALLY-AWARE ATTENTION POOLING", "text": "As discussed in Sec. 3.1, when computing the global contrastive loss, we focus on the spatial dimension. Therefore, we can utilize spatial information captured from the global pathway to assist the local contrastive loss. Specifically, we use the correlation (i.e., the dot product between the audio feature and each of the visual features in the spatial grid) computed in the global pathway as the attention score; intuitively, this captures the regions of the spatial grid which likely correspond to the source of the sound. For example, in a video of someone talking, the lips will have a relatively higher score when compared to the background, and in a video of someone playing a guitar, the fingers on the guitar will have high scores. We thus use the score as an $H \times W$ attention map (see Figure 2). We utilize this attention map to perform spatial attention pooling on the local visual features at each time step, $P_{\text{atten}}(z_v^l[t])$. Compared with regular spatial average pooling, it helps the network give greater weight to parts within a frame with high audio-visual correspondence. This way, the efficiency of the local contrastive loss can be much improved. We empirically demonstrate the effectiveness of the spatial attention pooling mechanism in Tables 1 and 2." },
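The attention pooling of Sec. 3.3 and the local objective of Eq. (2) can be sketched together as follows. This is again a minimal PyTorch illustration with our own shape conventions; the non-overlapping window mapping `lo:hi` is an assumption consistent with the window of M audio slices described above, and only the visual-to-audio direction of the loss is shown.

    import torch
    import torch.nn.functional as F

    def attention_pool(zl, attn):
        # zl: (B, T, H, W, C) local visual features; attn: (B, H, W) audio-visual
        # correlation scores from the global pathway, used as attention weights.
        w = F.softmax(attn.flatten(1), dim=1).reshape_as(attn)       # (B, H, W)
        return torch.einsum('bthwc,bhw->btc', zl, w)                 # (B, T, C)

    def local_milnce_loss(za, zv, M=5):
        # za: (B, Ta, C) audio slices; zv: (B, T, C) attention-pooled visual features.
        # Positive bag for zv[:, t]: the window of M audio slices mapped to step t;
        # negatives: all other slices of the same clip and of the other clips.
        B, T, C = zv.shape
        losses = []
        for t in range(T):
            lo, hi = t * M, min(za.shape[1], (t + 1) * M)
            sim = torch.einsum('bc,nsc->bns', zv[:, t], za)          # (B, B, Ta)
            logits = sim.reshape(B, -1)
            pos_mask = torch.zeros(B, B, za.shape[1], dtype=torch.bool)
            pos_mask[torch.arange(B), torch.arange(B), lo:hi] = True
            pos_mask = pos_mask.reshape(B, -1)
            pos = torch.logsumexp(logits.masked_fill(~pos_mask, float('-inf')), dim=1)
            losses.append((torch.logsumexp(logits, dim=1) - pos).mean())
        return torch.stack(losses).mean()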
{ "heading": "4 EXPERIMENTS", "text": "Implementation details. We use 3D-ResNet18 (Hara et al., 2018) for our visual encoders ($E_v^g$ and $E_v^l$) and 1D-ResNet18 for our audio encoder, in both cases using Batch Normalization (BN) (Ioffe & Szegedy, 2015). All models are trained end-to-end with the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate $\gamma = 10^{-3}$ after a warm-up period of 500 iterations. We use 16 NVIDIA Tesla P100 GPUs with a batch size of 32 for our experiments. For pretraining, we preprocess video frames by sampling at 10 FPS and applying random cropping, horizontal flipping, gray-scaling, and temporal jittering. We resize video frames to three-channel images of 112 × 112; we set the clip length to 32 frames for the local visual pathway, and a 1/4 sampling rate for the global pathway (8 frames). For the audio channel, we extract mel-spectrograms from the raw waveform using the LibROSA library and get an 80×T matrix with 80 frequency bands; T is proportional to the length of an audio clip. We then segment the mel-spectrogram according to the corresponding video clips to ensure temporal synchrony. We treat the mel-spectrograms as an 80-channel 1D signal. To compute the global MIL-NCE loss, we use features at a 16 × 16 spatial resolution. To compute the local contrastive loss, we adopt a temporal window of size three without overlap.\nDatasets. Many downstream tasks of interest involve human faces (e.g., deepfake detection), speech (e.g., lip reading) and activity (e.g., audio and video classification). Therefore, we use a combination of Kinetics-700 (Carreira et al., 2019) and AVSpeech (Ephrat et al., 2018) for pretraining. Specifically, we randomly select 120K video samples from each dataset, which gives us a dataset of 240K samples in total, which we term K-AV. For comparison with the other state-of-the-art approaches, we pretrain our model on the K-AV dataset, which is at the same scale as the Kinetics-700 dataset. For the ablation study, we pretrain our model on a subset of 15K samples from the K-AV dataset, which we term K-AV-15K. As for downstream tasks, we evaluate our models on action recognition using UCF101 (Soomro et al., 2012) and HMDB51 (Kuehne et al., 2011), and on sound classification using ESC50 (Piczak, 2015b). For lip reading, we evaluate our model on both LRW (Chung & Zisserman, 2016) and LRS2 (Chung et al., 2017). For deepfake detection, we evaluate our model on a subset of DFDC (Dolhansky et al., 2019)." }, { "heading": "4.1 DOWNSTREAM SCENARIOS", "text": "Lip Reading. Visually recognizing a speaker’s utterance is a useful and challenging task. Lip movements for different letters can be visually similar to each other (e.g., b and p, d and t). This requires the learned visual representation to contain fine-grained spatio-temporal information, rather than global semantics. In evaluating our pretrained model on the lip reading task, we focus on investigating whether our approach successfully learns fine-grained spatio-temporal visual information. For a fair comparison with state-of-the-art (SOTA) approaches, we use the same data processing protocol as Zhang et al. (2019). For LRW and LRS2, we detect 68 facial landmarks in each video frame using dlib (Castelli & Pagano, 2002). We use the outer eye and nose tip landmarks to align the detected face in each frame using an affine transform. Finally, an image of size 112×112 is cropped from the aligned face with the lip landmarks at the center. The cropping is such that the lips occupy 1/3 of the image width. During finetuning, we apply random horizontal flipping.
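A rough sketch of this lip-centered cropping step is given below. The landmark indices follow the standard 68-point convention (mouth points 48-67); the scaling rule and helper signature are our assumptions for illustration, not the exact protocol of Zhang et al. (2019).

    import cv2
    import numpy as np

    def lip_centered_crop(aligned_frame, landmarks, out=112):
        # aligned_frame: face image already aligned via the outer-eye/nose-tip
        # affine transform; landmarks: (68, 2) points in the aligned frame.
        lips = landmarks[48:68]                       # mouth landmarks (68-pt scheme)
        cx, cy = lips.mean(axis=0)                    # crop centre = lip centre
        lip_w = lips[:, 0].max() - lips[:, 0].min()
        scale = (out / 3.0) / max(lip_w, 1.0)         # lips span ~1/3 of crop width
        M = np.float32([[scale, 0.0, out / 2.0 - scale * cx],
                        [0.0, scale, out / 2.0 - scale * cy]])
        return cv2.warpAffine(aligned_frame, M, (out, out))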
Our approach outperforms these by a large margin and demonstrates the effectiveness of our proposed approach.\nDeepfake Detection. Our hypothesis here is that deepfakes tend to be characterized by audio-visual inconsistencies such as a misalignment between lip motions and audio, unnatural facial and lip appearance/movements or asymmetry between facial regions such as the left and right eyes. Such artifacts could be detected through local spatio-temporal features. We use our pretrained model and finetune it on the DFDC dataset and evaluate performance using video-wise Area Under the Curve (AUC). We follow the same data preprocessing protocol as in SOTA approaches for this task. We perform face detection to crop the face region in each video frame. We concatenate the global and local visual features that are produced by our pretrained global and local visual encoders, and finetune the pretrained encoders with the whole model. The results are shown in Table 2. For a fair comparison, we use the same training and test video sets as Chugh et al. (2020). Among all the compared approaches, Chugh et al. (2020) and Mittal et al. use both visual and audio sequences, while the other approaches use only the visual sequences for detection. As we can see, when using only visual signal, our approach outperforms all previous SOTA approaches (AUC=96.7). We also compare our model with the other SOTA self-supervised approaches (shown in blue color). Again, our model outperform the best benchmark by a large margin (90.1 vs. 85.3).\nAction and Sound Classification. To evaluate performance in learning discriminative global spatiotemporal representations, we use our pretrained model for action and sound classification. For action classification, we finetune both pretrained global and local visual encoders by concatenating the global and local representations. For audio classification, we finetune our pretrained audio encoder\nEa with the audio classification model. To evaluate on action and audio classification, we compare both our models that were pretrained on Kinetics-700 and K-AV-240K with SOTA approaches pretrained on a dataset of the same scale (Kinetics). We find that on all three benchmarks, i.e. UCF101, HMDB51 and ESC50, our approach achieves a new SOTA (91.2% on UCF101, 61.9% on HMDB51 and 80.1% on ESC50) - see Table 3." }, { "heading": "4.2 ABLATION AND ANALYSIS", "text": "The importance of global-local contrast for local information needed tasks. Here, we want to investigate specifically how the pretraining pretext task impacts local information needed for downstream tasks (i.e., lip reading and deepfake detection) and compare our task with those used in other work. We pretrain our model and the other state-of-the-art self-supervised pretraining approaches with the same backbone (3DResNet-18) and pretrain dataset (K-AV-15K). After pretraining, we finetune each model on the downstream benchmarks follow the same protocol. The results are listed in Table 4. As we can see that, when we only vary the pretext task during pretraining, our models outperform all the other SSL-based approaches by a large margin, which demonstrate the effectiveness of our proposed approach. We also find that InfoMax and AVSlowFast performs better than the others (MoCo, DPC and CPC). We believe this is because InfoMax draws more attention to the spatial local information and AVSlowFast is capable of learning more fine-grained temporal information, which are critical for the tasks of lip reading and deepfake detection. 
MoCo, which is successful for visual classification tasks, fails in both lip reading and deepfake detection. This supports our argument that naïvely using an SSL approach may not achieve good performance for a large variety of different tasks.\nThe roles of global and local information. To demonstrate the importance of jointly learning global-local representations during pretraining, we evaluate a baseline model that was pretrained without the local contrastive objective (Ours w/o local cont.). As we can see from Table 5, when compared with the model which was pretrained using our full objective (Ours), the performance significantly drops on all the benchmarks. Optimizing only for global representations during pretraining generalizes poorly to the tasks that require local information. Note that, for a fair comparison, we only use the global features (Global Feat.) for each downstream task. Furthermore, we test whether using the local, global, or local and global features after pretraining yields better performance. We can see that the best performance is achieved by utilizing both the global and local features, and this is true for all the benchmarks. For comparison, we report the results achieved by the other SOTA approaches when using both the global and local features.\nThe role of MIL-NCE in the local contrastive objective. When performing the local contrastive objective, we adopt MIL-NCE as our loss function. Miech et al. (2020) also employed MIL-NCE as the loss function to mitigate the misalignment in narrated videos. Different from their motivation, our goal here is to encourage fine-grained temporal alignment of the audio with video features. To validate its effectiveness, we evaluate an alternative without MIL-NCE as the local contrastive loss function. Specifically, we adopt average temporal pooling on each window of the audio features and use the vanilla contrastive loss over the synchronized audio and visual features. The results are shown in Table 6. As we can see, when we perform the local contrast without the MIL-NCE objective, the performance on lip reading and deepfake detection drops considerably. For activity classification, however, both loss functions achieve comparable results.\nThe role of attention. Our pipeline allows us to leverage the global contrastive objective to capture local spatial information and use it as an attention map to assist local representation learning. Intuitively, our attention maps measure the amount of audio-visual correlation; such attention maps can highlight discriminative face regions useful for lip reading. We demonstrate the quality of our attention maps by replacing lip bounding boxes typically used in lip reading with our attention map. Specifically, instead of extracting features from the cropped lip/face region, we extract features from the entire frame (no lip/face cropping) and use our attention map to pool the features spatially. Note that the purpose of this experiment is to evaluate the quality of attention maps; we use the audio signal only to obtain attention maps and discard it for word classification/deepfake detection. The results (“Ours Attention”) show that this variant achieves results comparable to our best setting. On LRW and DFDC, it even outperforms SOTA approaches without relying on lip/face region detectors. It indicates that using global and local information in a collaborative way can yield good performance. 
In the lip reading task, local spatial information makes the local-contrast pathway pay more attention to lip movement, and thus achieves effects comparable to using a lip region detector. We also evaluate our model finetuned directly on full frames. As we can see, when discarding the localized region achieved either by detectors or the attention mechanism, the performance significantly drops on all three benchmarks. It further validates the critical role of the attention mechanism in our approach.\nInterpretation of the learned representation. To investigate how well local spatial information is captured through the global audio-visual contrast, we visualize the attention maps induced by the pretrained audio and global video encoders. Such visualization can also be considered as performing sound source localization, i.e., locating objects that make sound. To achieve this goal, the network should capture the audio-visual correlation in a spatio-temporal grid. We thus use the attention map obtained by our pretrained model to visualize the sounding source in each frame. To investigate further, we use the Kinetics-sound (Carreira et al., 2019) dataset, as videos in the dataset generally have a high level of audio-visual correspondence. Specifically, we add another softmax layer on the obtained attention map, and then do bilinear interpolation of the 16 × 16 attention map back to the original image size, i.e. 192 × 192. Fig. 3 shows that our learned attention maps tend to localize sounding sources in videos accurately. In particular, when the visual content is highly related to the corresponding audio signal, our model performs especially well. For example, the first row (frames from “playing instruments” videos) shows that our model can precisely localize the sounding region. For the other activities, like “baby talking,” “playing basketball,” “running,” our model highlights regions with humans. We find that the attention map incorrectly highlights regions on samples that have an ambiguous audio-visual relation. We show failure cases in the last two frames of the third row. As we can see, there is no visual content that clearly relates to the audio signal, and thus the model fails to find sounding sources." }, { "heading": "5 CONCLUSION", "text": "We have presented a contrastive self-supervised approach to learning global and local audio-visual representations. Using audio and low-sampled and high-sampled video sequences as separate “views” of the data, we find that the learned representations can generalize well to tasks that involve global semantic understanding and fine-grained spatio-temporal understanding. We perform experiments on lip reading, deepfake detection, sound source localization, and action/sound classification tasks and in each case achieve strong results." } ]
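Editorial note on the record above: the visualization step it describes (softmax over the 16 × 16 audio-visual attention map, then bilinear upsampling back to 192 × 192) can be sketched as follows. This is our illustrative Python, not the authors' code; the function name, the dot-product correlation, and the argument shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def sound_source_attention(video_grid, audio_vec, out_size=192):
    """Visualize audio-visual attention as a sound-source localization map.

    video_grid: (C, 16, 16) spatial grid of global visual features for one frame.
    audio_vec:  (C,) pooled audio feature for the synchronized clip.
    Returns an (out_size, out_size) map: softmax over the 16x16 correlation
    map, followed by bilinear upsampling, as described in the record above.
    """
    C, H, W = video_grid.shape
    # audio-visual correlation at every spatial location of the grid
    corr = torch.einsum("chw,c->hw", video_grid, audio_vec)      # (16, 16)
    attn = F.softmax(corr.flatten(), dim=0).reshape(1, 1, H, W)  # sums to 1
    # bilinear interpolation back to the original image size
    attn = F.interpolate(attn, size=(out_size, out_size),
                         mode="bilinear", align_corners=False)
    return attn[0, 0]
```

Overlaying the returned map on the frame reproduces the kind of qualitative localization discussed for Fig. 3.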
2020
null
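A further hedged sketch for the same record: its local contrastive objective uses a MIL-NCE loss over non-overlapping temporal windows of size three (Sections 4 and 4.2 above). All names, the temperature value, and the exact windowing rule are our assumptions based on the text; positives for each visual timestep are the audio features falling in the same window.

```python
import torch

def mil_nce_local(video_feats, audio_feats, window=3, tau=0.07):
    """MIL-NCE over temporally windowed audio-visual pairs.

    video_feats, audio_feats: (T, D) L2-normalized local features of one clip.
    Audio features inside a visual timestep's window count as positives,
    all remaining timesteps as negatives (multiple-instance NCE).
    """
    sim = video_feats @ audio_feats.t() / tau               # (T, T) similarities
    t = torch.arange(sim.shape[0])
    pos = (t[:, None] // window) == (t[None, :] // window)  # same window mask
    pos_term = torch.logsumexp(sim.masked_fill(~pos, float("-inf")), dim=1)
    all_term = torch.logsumexp(sim, dim=1)                  # positives + negatives
    return (all_term - pos_term).mean()                     # -log(pos / all)
```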
SP:9cf8f7dba8b4e672d685bc89295f237f422937cf
[ "This paper proposed a layer-wise adversarial defense which added perturbations in each hidden layer considering the influence of hidden features in latent space from the ODE perspective. It is essential to enhance the adversarial model robustness by stabilizing both inputs and hidden layers. The proposed method leveraged two operator splitting theory w.r.t. the Lie-Trotter and the Strang-Marchuk splitting schemes to discretize the specially designed ODE formulation by integrating the continuous limit of back-propagated gradients into the forward process. The main contribution of this paper is to generate perturbations with the idea of ODE in each layer. Empirical studies were performed to show the effectiveness of the proposed method on two benchmarks with two attack methods." ]
Deep neural networks are observed to be fragile against adversarial attacks, which have dramatically limited their practical applicability. On improving model robustness, the adversarial training techniques have proven effective and gained increasing attention from research communities. Existing adversarial training approaches mainly focus on perturbations to inputs, while the effect of the perturbations in hidden layers remains underexplored. In this work, we propose layer-wise adversarial defense which improves adversarial training by a noticeable margin. The basic idea of our method is to strengthen all of the hidden layers with perturbations that are proportional to the back-propagated gradients. In order to study the layer-wise neural dynamics, we formulate our approach from the perspective of ordinary differential equations (ODEs) and build up its extended relationship with conventional adversarial training methods, which tightens the relationship between neural networks and ODEs. In the implementation, we propose two different training algorithms by discretizing the ODE model with the Lie-Trotter and the Strang-Marchuk splitting schemes from the operator-splitting theory. Experiments on CIFAR-10 and CIFAR-100 benchmarks show that our methods consistently improve adversarial model robustness on top of widely-used strong adversarial training techniques.
[]
[ { "authors": [ "Uri M Ascher", "Linda R Petzold" ], "title": "Computer methods for ordinary differential equations and differential-algebraic equations, volume 61", "venue": null, "year": 1998 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Anish Athalye", "Logan Engstrom", "Andrew Ilyas", "Kevin Kwok" ], "title": "Synthesizing robust adversarial examples", "venue": "In International conference on machine learning,", "year": 2018 }, { "authors": [ "Mislav Balunovic", "Martin Vechev" ], "title": "Adversarial training and provable defenses: Bridging the gap", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Rajendra Bhatia", "Chandler Davis" ], "title": "More matrix forms of the arithmetic-geometric mean inequality", "venue": "SIAM Journal on Matrix Analysis and Applications,", "year": 1993 }, { "authors": [ "Alexander V Bobylev", "Taku Ohwada" ], "title": "The error of the splitting scheme for solving evolutionary equations", "venue": "Applied Mathematics Letters,", "year": 2001 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In 2017 ieee symposium on security and privacy (sp),", "year": 2017 }, { "authors": [ "Alvin Chan", "Yi Tay", "Yew Soon Ong", "Jie Fu" ], "title": "Jacobian adversarially regularized networks for robustness", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Bo Chang", "Lili Meng", "Eldad Haber", "Lars Ruthotto", "David Begert", "Elliot Holtham" ], "title": "Reversible architectures for arbitrarily deep residual neural networks", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Bo Chang", "Minmin Chen", "Eldad Haber", "Ed H Chi" ], "title": "Antisymmetricrnn: A dynamical system view on recurrent neural networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Ricky T Q Chen", "Yulia Rubanova", "Jesse Bettencourt", "David Duvenaud" ], "title": "Neural Ordinary Differential Equations", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Jun Zhu", "Xiaolin Hu", "Jianguo Li" ], "title": "Boosting adversarial attacks with momentum", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Emilien Dupont", "Arnaud Doucet", "Yee Whye Teh" ], "title": "Augmented neural odes", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "E. 
Weinan" ], "title": "A proposal on machine learning via dynamical systems", "venue": "Communications in Mathematics and Statistics,", "year": 2017 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens van der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Eldad Haber", "Lars Ruthotto" ], "title": "Stable architectures for deep neural networks", "venue": "Inverse Problems,", "year": 2017 }, { "authors": [ "YAN Hanshu", "DU Jiawei", "TAN Vincent", "FENG Jiashi" ], "title": "On robustness of neural ordinary differential equations", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Andrew Ilyas", "Logan Engstrom", "Anish Athalye", "Jessy Lin" ], "title": "Black-box adversarial attacks with limited queries and information", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Mingjie Li", "Lingshen He", "Zhouchen Lin" ], "title": "Implicit euler skip connections: Enhancing adversarial robustness via numerical stability", "venue": "In Proceedings of ICML,", "year": 2020 }, { "authors": [ "Xuanqing Liu", "Tesi Xiao", "Si Si", "Qin Cao", "Sanjiv Kumar", "Cho-Jui Hsieh" ], "title": "How does noise help robustness? 
explanation and exploration under the neural sde framework", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Yiping Lu", "Aoxiao Zhong", "Quanzheng Li", "Bin Dong" ], "title": "Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations", "venue": null, "year": 2018 }, { "authors": [ "Yiping Lu", "Zhuohan Li", "Di He", "Zhiqing Sun", "Bin Dong", "Tao Qin", "Liwei Wang", "Tie-Yan Liu" ], "title": "Understanding and improving transformer from a multi-particle dynamic system point of view", "venue": null, "year": 1906 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Robert I McLachlan", "G Reinout W" ], "title": "Quispel. Splitting methods", "venue": "Acta Numerica,", "year": 2002 }, { "authors": [ "Tianyu Pang", "Chao Du", "Yinpeng Dong", "Jun Zhu" ], "title": "Towards robust detection of adversarial examples", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tianyu Pang", "Kun Xu", "Chao Du", "Ning Chen", "Jun Zhu" ], "title": "Improving adversarial robustness via promoting ensemble diversity", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Tianyu Pang", "Kun Xu", "Jun Zhu" ], "title": "Mixup inference: Better exploiting mixup to defend adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tianyu Pang", "Kun Xu", "Yinpeng Dong", "Chao Du", "Ning Chen", "Jun Zhu" ], "title": "Rethinking softmax crossentropy loss for adversarial robustness", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Lev Semenovich Pontryagin", "EF Mishchenko", "VG Boltyanskii", "RV Gamkrelidze" ], "title": "The mathematical theory of optimal processes", "venue": null, "year": 1962 }, { "authors": [ "Edward Raff", "Jared Sylvester", "Steven Forsyth", "Mark McLean" ], "title": "Barrage of random transforms for adversarially robust defense", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Viktor Reshniak", "Clayton Webster" ], "title": "Robust learning with implicit residual networks", "venue": null, "year": 2019 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V. 
Le" ], "title": "Sequence to sequence learning with neural networks", "venue": "In Proceedings of NIPS,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Bao Wang", "Binjie Yuan", "Zuoqiang Shi", "Stanley J Osher" ], "title": "Enresnet: Resnet ensemble via the feynman-kac formalism", "venue": "In Proceedings of NeurIPS,", "year": 2019 }, { "authors": [ "Eric Wong", "Leslie Rice", "J Zico Kolter" ], "title": "Fast is better than free: Revisiting adversarial training", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Zhou Ren", "Alan Yuille" ], "title": "Mitigating adversarial effects through randomization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Zonghan Yang", "Yang Liu", "Chenglong Bao", "Zuoqiang Shi" ], "title": "Interpolation between residual and non-residual networks", "venue": "In Proceedings of ICML,", "year": 2020 }, { "authors": [ "Dinghuai Zhang", "Tianyuan Zhang", "Yiping Lu", "Zhanxing Zhu", "Bin Dong" ], "title": "You only propagate once: Painless adversarial training using maximal principle", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric Xing", "Laurent El Ghaoui", "Michael Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "In Proceedings of ICML,", "year": 2019 }, { "authors": [ "Jingfeng Zhang", "Bo Han", "Laura Wynter", "Kian Hsiang Low", "Mohan Kankanhalli" ], "title": "Towards robust resnet: A small step but a giant leap", "venue": "In IJCAI,", "year": 2019 }, { "authors": [ "Jingfeng Zhang", "Xilie Xu", "Bo Han", "Gang Niu", "Lizhen Cui", "Masashi Sugiyama", "Mohan Kankanhalli" ], "title": "Attacks which do not kill training make adversarial learning stronger", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Jingfeng Zhang", "Xilie Xu", "Bo Han", "Gang Niu", "Lizhen Cui", "Masashi Sugiyama", "Mohan Kankanhalli" ], "title": "Attacks which do not kill training make adversarial learning stronger", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Stephan Zheng", "Yang Song", "Thomas Leung", "Ian Goodfellow" ], "title": "Improving the robustness of deep neural networks via stability training", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Mai Zhu", "Bo Chang", "Chong Fu" ], "title": "Convolutional Neural Networks combined with Runge-Kutta Methods", "venue": null, "year": 2018 }, { "authors": [ "He" ], "title": "2016), we pad 4 pixels on each side of the image and sample a 32 × 32 crop from it or its horizontal flip. We use pre-activated ResNet-56 as our backbone architecture and experiment with our LAD (Eq. (5)) and LAD-SM (Eq. (8)) methods. For all experiments, we use the SGD optimizer with the batch size of 128. 
We train for 160 (300) epochs for the CIFAR-10", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent years have witnessed the prosperity of deep learning in many tasks (Hinton & Salakhutdinov, 2006; Sutskever et al., 2014; He et al., 2016; LeCun et al., 2015; Huang et al., 2017; Vaswani et al., 2017). Stacked with multiple layers, neural networks provide an end-to-end solution to all the tasks and prove to be highly effective. However, the seminal study by Szegedy et al. (2013) has shown that deep neural networks (DNNs) can be fragile against attacks: minor perturbations on inputs lead to significant change in model predictions. Regarding the defense approaches, intensive studies on adversarial defense techniques have been proposed (Athalye et al., 2018a; Goodfellow et al., 2014; Zheng et al., 2016; Madry et al., 2018; Zhang et al., 2019b; Kurakin et al., 2017; Pang et al., 2019a; 2020; 2019b; Raff et al., 2019; Guo et al., 2018; Zhang et al., 2020a; Balunovic & Vechev, 2019; Wong et al., 2020; Chan et al., 2020; Zhang et al., 2020b). Among these techniques, adversarial training algorithms (Madry et al., 2018; Zhang et al., 2019b) incorporate the effect of perturbed inputs into the loss function, which are shown to be competent and boasts the dominant impact in the adversarial defense research field.\nWhile adversarial training techniques have gained increasing attention in the robust deep learning research community, most of current approaches concentrate on deriving perturbations on the inputs with gradients back-propagated from the loss function. However, as information flow in neural networks starts from inputs and passes through hidden layers, it is essential to robustify both the inputs and the hidden layers. While previous studies have made successful attempts on introducing damping terms (Yang et al., 2020) or stochastic noise (Liu et al., 2020; Wang et al., 2019) to each layer in neural architectures, they concentrate on improving general model robustness and are less focused on adversarial model robustness. We ask the following question: Can we take the hidden layers of neural networks into account to improve adversarial model robustness?\nIn this work, we propose layer-wise adversarial defense to improve adversarial training, which enhances adversarial model robustness by stabilizing both inputs and hidden layers. In our method,\nthe layer-wise perturbations are incorporated into the robust optimization framework of adversarial training. We propose to inject scaled back-propagated gradients into the architecture as layer-wise perturbations. Besides, we formulate our method from the perspective of ordinary differential equations and propose a novel ODE as its the continuous limit in order to study the neural dynamics. Inspired from the rich literature on numerical analysis, we use the Lie-Trotter and the Strang-Marchuk splitting schemes to solve the proposed ODE. We refer to the resulted discrete algorithms as Layerwise Adversarial Defense (LAD) and LAD-SM, respectively. Furthermore, we build up the extended relationship between our methods with current natural training and adversarial training techniques by analyzing the second order dynamics. 
Our analysis shows that our methods introduce additional perturbations in the first order initial value of the second order dynamics compared with current adversarial training algorithms. Experiments on the CIFAR-10 and CIFAR-100 benchmarks show that our methods improve adversarial model robustness on top of different widely-used strong adversarial training techniques.\nWe summarize our contributions as follows:\n\n• We propose layer-wise adversarial defense which generalizes conventional adversarial training approaches with layer-wise adversarial perturbations (Section 3.1);\n\n• We investigate the continuous limit of our layer-wise adversarial defense methods and propose an ODE that integrates the adjoint state into the forward dynamics (Section 3.2);\n\n• We build up the extended relationship between our methods and current adversarial training approaches by analyzing the second order neural dynamics in theory. Experiments have also shown the effectiveness of our methods in practice. (Section 3.3 and Section 4)." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 ADVERSARIAL MODEL ROBUSTNESS", "text": "In this section we review the literature on gradient-based attack and defense approaches in the field of adversarial model robustness. For adversarial attacks, widely-used approaches include the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015) and the Iterated Fast Gradient Sign Method (IFGSM) (Madry et al., 2018). For a given data point, FGSM induces the adversarial example by moving each component by the attack radius ε along the gradient ascent direction. Iterated FGSM performs FGSM with inner iteration updates with a smaller step size α. Prior studies have inspired multiple adversarial attack techniques (Athalye et al., 2018b; Carlini & Wagner, 2017; Ilyas et al., 2018; Dong et al., 2018; Pang et al., 2018). Adversarial defense techniques can be categorized by training phase (Athalye et al., 2018a; Goodfellow et al., 2014; Zheng et al., 2016; Madry et al., 2018; Zhang et al., 2019b; Kurakin et al., 2017; Pang et al., 2019a; 2020; Zhang et al., 2020a; Balunovic & Vechev, 2019; Wong et al., 2020; Chan et al., 2020; Zhang et al., 2020b) and inference phase (Pang et al., 2019b; Raff et al., 2019; Xie et al., 2018; Guo et al., 2018).\nThe widely-used approach in the training phase is Projected Gradient Descent (PGD) training (Madry et al., 2018), which integrates the effect of the perturbed inputs into its loss function. The current state-of-the-art defense approach in the training phase is TRADES (Zhang et al., 2019b), which additionally introduces the boundary error as a regularization term into its loss function. In our experiments, we select PGD training and TRADES as our baselines. While substantially enhancing adversarial model robustness, the gradient-based perturbations in adversarial training are currently only performed on inputs. As cascaded hidden layers comprise the passage for information flow in neural networks, it is essential to stabilize hidden layers as well. In our work, we introduce layer-wise gradient-based perturbations to neural architectures to improve adversarial model robustness." }, { "heading": "2.2 ODE-INSPIRED ARCHITECTURE DESIGNS", "text": "Research about the relationship between neural networks and ODEs starts with the continuous limit formulation of ResNet (E, 2017), which has inspired many novel neural architecture designs (Lu et al., 2018; Zhu et al., 2018; Chang et al., 2018; Haber & Ruthotto, 2017; Chen et al., 2018; Dupont et al., 2019). 
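Editorial aside: a minimal PyTorch-style sketch of the FGSM and iterated FGSM attacks summarized in Section 2.1 above. This is our illustration of the standard updates, not the paper's code; function and variable names are assumptions.

```python
import torch

def fgsm(model, loss_fn, x, y, eps):
    """One-step FGSM: move each input component by eps along the gradient sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def iterated_fgsm(model, loss_fn, x, y, eps, alpha, steps):
    """Iterated FGSM / PGD: repeat smaller steps of size alpha, projecting
    back onto the infinity-norm ball of radius eps around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
    return x_adv.detach()
```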
Regarding model robustness, most prior studies have focused on improving dynamic system stability by Lyapunov analysis, more stable numerical schemes, or imposing regularization.\nFrom the Lyapunov stability perspective, Yang et al. (2020) introduce damping terms to residual networks to stabilize dynamic systems. Appropriately adjusting the damping factor introduces a damping effect to the dynamic system and enhances model robustness. Similarly, Chang et al. (2019) improve the Lyapunov stability of RNNs by imposing antisymmetric constraints on the weight matrix. On more stable numerical schemes, prior studies include taking small step sizes in the forward Euler scheme (Zhang et al., 2019c) or leveraging implicit schemes to enhance stability (Reshniak & Webster, 2019; Li et al., 2020). For imposing regularization, stochasticity is introduced into ODEs for stabilization (Liu et al., 2020; Wang et al., 2019). Hanshu et al. (2019) regularize neural ODE models to be time-invariant and add a regularization term on the upper bound of the effect of input perturbations. Zhang et al. (2019a) propose a differential game formulation for adversarial training and accelerate the process from the optimal control theory. Our work differs from the prior studies by integrating gradient-based perturbations into the neural dynamics, which proves to be an extension of current approaches on improving adversarial model robustness in both theory and practice." }, { "heading": "3 LAYER-WISE ADVERSARIAL DEFENSE", "text": "" }, { "heading": "3.1 THE MODEL FORMULATION", "text": "The objective of conventional adversarial training approaches can be formulated as a min-max problem (Madry et al., 2018; Zhang et al., 2019a). We introduce layer-wise perturbations $\{\Delta x_n\}_{n=0}^N$ and rewrite the min-max problem as follows:\n\n$$\min_{\{\theta_n\}_{n=1}^N} \max_{\{\Delta x_n\}_{n=0}^N} L(\Theta, \Delta x) := L(x_N) \quad \text{subject to} \quad \tilde{x}_0 = x_0 + \Delta x_0, \quad x_{n+1} = \tilde{x}_n + f(\tilde{x}_n, \theta_n), \quad \tilde{x}_{n+1} = x_{n+1} + \Delta x_{n+1}, \quad n = 0, 1, 2, \cdots, N-1, \tag{1}$$\n\nwhere $N$ is the number of layers in a neural network, $\Theta = \{\theta_n\}_{n=1}^N$ represents its trainable parameters and $\{\Delta x_n\}_{n=0}^N$ represent the layer-wise perturbations. In our formulation, we ignore the boundedness assumptions on the perturbations for simplicity, since if there are additional bounded constraints on the perturbations, we can project the gradient onto the intervals. It is noted that when $\Delta x_n = 0$ for all $n = 1, \ldots, N$, the model (1) reduces to the conventional adversarial training formulation. More specifically, for adversarial training algorithms (Madry et al., 2018; Zhang et al., 2019b), letting $M$ be the maximum number of inner iterations for the perturbations, we have the following update rule (ignoring the bounded constraints):\n\n$$x_0^{(m+1)} = x_0^{(m)} + \eta \left.\frac{\partial L}{\partial x_0}\right|_{(m)}, \quad x_0^{(0)} = x_0, \tag{2}$$\n\nwhere the perturbation $\Delta x_0$ is defined as $x_0^{(M)} - x_0$.\n\nIn this work, we generalize the above idea to introducing perturbations to hidden layers, i.e., determining $\Delta x_{n+1}$ in Eq. (1) so that the objective in Eq. (1) is maximized. Similar to Eq. (2), we perturb $x_{n+1}$ with iterative gradient ascent:\n\n$$x_{n+1}^{(m+1)} = x_{n+1}^{(m)} + \eta \left.\frac{\partial L}{\partial x_{n+1}}\right|_{(m)}, \quad x_{n+1}^{(0)} = x_{n+1}, \tag{3}$$\n\nwhere $\eta$ is the step size as a scaling factor over the gradients $\partial L / \partial x_{n+1}$. We set $\Delta x_{n+1} = x_{n+1}^{(M)} - x_{n+1}$, where $M$ is the number of iteration steps. When $M = 1$, the layer-wise adversarial perturbations are given by\n\n$$\Delta x_{n+1} = x_{n+1}^{(1)} - x_{n+1} = \eta \frac{\partial L}{\partial x_{n+1}}, \quad n = 0, 1, 2, \cdots, N-1. \tag{4}$$\n\nReplacing the maximization of layer-wise perturbations $\{\Delta x_n\}_{n=0}^N$ by Eq. 
(4), we obtain a simplified problem as follows:\n\n$$\min_{\{\theta_n\}_{n=1}^N} L(\Theta) := L(x_N) \quad \text{subject to} \quad \tilde{x}_{n+1} = x_n + f(x_n, \theta_n), \quad x_{n+1} = \tilde{x}_{n+1} + \eta \frac{\partial L}{\partial \tilde{x}_{n+1}}, \quad n = 0, 1, 2, \cdots, N-1, \tag{5}$$\n\nwith the inputs $\tilde{x}_0 = x_0 + \Delta x_0$ determined by Eq. (4). Notice that directly applying Eq. (5) requires alternating computations of $\tilde{x}_{n+1}$ and $\partial L / \partial \tilde{x}_{n+1}$ with iterative forward and backward passes, which can be extremely time-consuming. In our implementation, we leverage a two-stage approach: first record the gradients with respect to each layer in a forward and backward pass, then add the recorded gradients to each layer in another forward pass as layer-wise adversarial perturbations. We refer to this algorithm as our Layer-wise Adversarial Defense (LAD) method." }, { "heading": "3.2 ODE-INSPIRED ALGORITHM DESIGN", "text": "In this section, we explore the continuous formulation of the constraints in Eq. (5) to study the layer-wise dynamics from the ODE perspective. Recall that the conventional ODE formulation (E, 2017) takes the following form:\n\n$$\frac{d\hat{x}(t)}{dt} = f(\hat{x}(t), t), \quad \hat{x}(0) = \hat{x}_0, \tag{6}$$\n\nwhich is the continuous limit of the discrete ResNet $\hat{x}_{n+1} = \hat{x}_n + f_n(\hat{x}_n)$ with the time step $\Delta t = 1$. In the conventional ODE, $L = L(\hat{x}(T))$ is defined as the loss function. In our work, we propose to integrate $dL/d\hat{x}(t)$ into the forward process:\n\n$$\frac{dx(t)}{dt} = f(x(t), t) + \eta \frac{dL}{d\hat{x}(t)}, \quad x(0) = x_0, \tag{7}$$\n\nwhere $x_0$ is the (perturbed) input and $\eta$ is a scaling factor. The introduced $dL/d\hat{x}(t)$ represents the continuous limit of back-propagated gradients. As we record the gradients from an earlier backward pass, the corresponding forward pass can be approximately treated as solving the original ODE. As shown in Eq. (7), there are two operators on the right-hand side. According to the rich literature on operator-splitting theory from numerical analysis, we have the following proposition. Proposition 3.1. The LAD method is the numerical solution of our proposed ODE with the Lie-Trotter splitting scheme with step size $\Delta t = 1$.\n\nIn the operator splitting theory, the Strang-Marchuk (SM) splitting scheme (Ascher & Petzold, 1998) is also a widely-used technique for solving ODEs with multiple terms. Compared with the Lie-Trotter splitting scheme, the Strang-Marchuk splitting scheme enjoys much lower local truncation errors (Bobylev & Ohwada, 2001). Lu et al. (2019) propose to leverage the SM splitting scheme to improve the Transformer architecture, which results in higher model accuracy. We also propose to use the SM splitting scheme to discretize our ODE in Eq. (7), but the direct application of the SM method is intractable. With proper approximation, we have the following modified version. Theorem 3.2. An approximated numerical scheme of Eq. (7) with the SM splitting is\n\n$$\tilde{x}_n = x_n + \frac{\eta}{2} \frac{\partial L}{\partial \hat{x}_n}, \quad \bar{x}_n = \tilde{x}_n + f_n(\tilde{x}_n), \quad x_{n+1} = \bar{x}_n + \frac{\eta}{4} \left( \frac{\partial L}{\partial \hat{x}_n} + \frac{\partial L}{\partial \hat{x}_{n+1}} \right). \tag{8}$$\n\nThe proofs and a self-contained introduction to the numerical background are provided in Appendix A. We refer to Eq. (8) as our LAD-SM method." }, { "heading": "3.3 ANALYSIS OF THE SECOND ORDER DYNAMICS", "text": "In this section, we provide an analysis of the second order dynamics and connect our ODE (Eq. (7)) with the original ODE (Eq. (6)). Defining $a(t) = dL/d\hat{x}(t)$, it is known that the dynamics of $a(t)$ (Pontryagin et al., 1962; Chen et al., 2018) satisfies\n\n$$\frac{da}{dt} = -a(t)^T \frac{\partial f(\hat{x}, t)}{\partial \hat{x}}. \tag{9}$$\n\nThe next theorem presents the second order dynamics of the proposed Eq. (7).\n\nTheorem 3.3. The second order dynamics of Eq. 
(7) is given by\n\n$$\frac{d^2x}{dt^2} = \frac{\partial f(x, t)}{\partial x} f(x, t) + \frac{\partial f(x, t)}{\partial t} + \eta a(t)^T \left( \frac{\partial f(x, t)}{\partial x} - \frac{\partial f(\hat{x}, t)}{\partial \hat{x}} \right) \tag{10}$$\n\nwith\n\n$$x(0) = x_0, \quad \left.\frac{dx}{dt}\right|_{t=0} = f(x(0), 0) + \eta \frac{dL}{dx(0)}. \tag{11}$$\n\nProof. By direct computation, the second order dynamics of Eq. (7) is\n\n$$\frac{d^2x}{dt^2} = \frac{\partial f(x, t)}{\partial x} \frac{dx}{dt} + \frac{\partial f(x, t)}{\partial t} - \eta a(t)^T \frac{\partial f(\hat{x}, t)}{\partial \hat{x}} \tag{12}$$\n\n$$= \frac{\partial f(x, t)}{\partial x} \left( f(x, t) + \eta a(t) \right) + \frac{\partial f(x, t)}{\partial t} - \eta a(t)^T \frac{\partial f(\hat{x}, t)}{\partial \hat{x}} \tag{13}$$\n\n$$= \frac{\partial f(x, t)}{\partial x} f(x, t) + \frac{\partial f(x, t)}{\partial t} + \eta a(t)^T \left( \frac{\partial f(x, t)}{\partial x} - \frac{\partial f(\hat{x}, t)}{\partial \hat{x}} \right) \tag{14}$$\n\nwhere the first equality is from Eq. (9) and the second equality is from Eq. (7).\n\nMoreover, the second order dynamics of Eq. (6) is given by\n\n$$\frac{d^2\hat{x}}{dt^2} = \frac{\partial f(\hat{x}, t)}{\partial \hat{x}} f(\hat{x}, t) + \frac{\partial f(\hat{x}, t)}{\partial t}, \quad \text{with} \quad \hat{x}(0) = x_0, \quad \left.\frac{d\hat{x}}{dt}\right|_{t=0} = f(x_0, 0). \tag{15}$$\n\nSince $x$ is a small perturbation of $\hat{x}$, the Jacobian difference in the last term of Eq. (10) is small and, multiplied by $\eta a(t)$, the term is negligible; the main difference between the second order dynamics of our ODE (Eq. (7)) and the original ODE (Eq. (6)) thus lies in the first order initial values. The extra momentum of the input in Eq. (11) leads to extra perturbations in all of the hidden layers during the propagation process. The implementation details of our methods can be found in Appendix B." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate our proposed methods on the CIFAR (Krizhevsky et al., 2009) benchmarks. For each experiment, we conduct 3 runs with different random seeds and report the averaged result to reduce the impact of random variations. We also select Yang et al. (2020) as our baseline methods, which introduce layer-wise damping terms to each layer in neural networks. The details of experimental settings can be found in Appendix B." }, { "heading": "4.1 LAD WITH NATURAL TRAINING", "text": "We first evaluate our methods in the setting of natural training (He et al., 2016). This setting is equivalent to setting $\Delta x_0 = 0$ in our layer-wise adversarial defense framework (Eq. (1)). Table 4.1 shows the accuracy and robustness results of our methods composed with natural training under different attack radii ε.\n\nResults show that our methods perform best under most attack settings with small radii. While all methods fail under attacks with larger radii because of the vulnerability of natural training, the results under attacks with small radii still show the enhanced robustness obtained by only introducing perturbations on hidden layers." }, { "heading": "4.2 LAD WITH PGD TRAINING", "text": "In this section, we report our experiments with PGD training (Madry et al., 2018). Table 2 shows the results of our methods and the baseline techniques with PGD training.\n\nAccording to the experimental results, our methods consistently outperform the baselines in robustness. Another interesting finding is that in our experiments, our approaches achieve more significant improvements over baselines on the CIFAR-100 benchmark. We provide a possible interpretation: from the discrete neural network perspective, the stacked layers represent different levels of representation learning. Since our methods perturb each layer, this can be interpreted as augmentation with adversarial “examples” at each level of feature learning. Given that each class in the CIFAR-100 training set contains only 500 images (which is far fewer than in the CIFAR-10 training set (5,000 images)), we infer that our methods have a potential positive effect in the data-scarce regime."
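A hedged sketch of the two-stage LAD procedure from Section 3.1 above (record per-layer gradients in one forward/backward pass, then rerun the forward pass adding the scaled gradients; the M = 1 case of Eqs. (4)-(5), matching the hook-based description in Appendix B.2). It assumes an nn.Sequential-style model whose top-level children are applied in order; all names are ours.

```python
import torch

def lad_step(model, loss_fn, x, y, eta):
    """Layer-wise adversarial defense (LAD), two-stage sketch.

    Stage 1: forward + backward pass, caching the gradient of the loss with
    respect to each block's output via backward hooks.
    Stage 2: forward pass again, adding eta * cached gradient to each block's
    output as the layer-wise perturbation (M = 1, Eq. (4)).
    """
    blocks = list(model.children())        # assumes sequential composition
    grads, handles = {}, []

    def make_hook(i):
        def hook(module, grad_in, grad_out):
            grads[i] = grad_out[0].detach()
        return hook

    for i, b in enumerate(blocks):
        handles.append(b.register_full_backward_hook(make_hook(i)))
    loss_fn(model(x), y).backward()        # stage 1: record the gradients
    for h in handles:
        h.remove()
    model.zero_grad()                      # discard stage-1 parameter grads

    out = x                                # stage 2: perturbed forward pass
    for i, b in enumerate(blocks):
        out = b(out) + eta * grads[i]
    return loss_fn(out, y)                 # backpropagate this for training
```

Here x would itself be the adversarially perturbed input from the surrounding PGD or TRADES loop, per the loss substitutions described in Appendix B.2.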
}, { "heading": "4.3 LAD WITH TRADES", "text": "In this section, we report our experimental results with the TRADES adversarial training approach (Zhang et al., 2019b). Table 3 shows the results of our methods composed with the TRADES method. According to the results shown in Table 2, we only select TRADES with the original ResNet architecture as our baseline methods. Results show that our methods have also surpassed the baseline methods on top of the state-of-the-art TRADES technique. We also provide robustness results of our methods as well as the baselines under stronger attacks in Appendix C." }, { "heading": "4.4 LAD WITH STOCHASTICITY", "text": "Prior studies have shown that model robustness is improved with layer-wise noise injection as regularization (Liu et al., 2020; Wang et al., 2019). We also experiment on introducing stochasticity to our methods in order to further boost the robustness performance. We augment our experiments with PGD training with additive Gaussian noise, which is proposed by Wang et al. (2019). During training, Gaussian noise n ∼ N(0, 0.01) is added to each layer in the model. During inference, we perform forward pass with noise sampled for 100 times and take the averaged output logits as the expectation outputs. Table 4 shows the results on the CIFAR-10 benchmark. As shown from the results, our approaches consistently surpass the baseline approaches under both the deterministic and the stochastic settings.\n4.5 THE EFFECT OF η\nIn this section, we introduce how we determine the scaling factor η. The hyperparameter η scales the back-propagated gradients, which are further added to the network layer-wisely. We start to get basic knowledge about the proper range of η by comparing the norms of back-propagated gradients with the norms of each layers. According to Eq. (11), our methods have essentially perturbed the first order initial value in the view of the second order dynamics. As a result, we let\nratio = ‖x(0) + f(x(0), 0)‖2∥∥∥ dLdx(0)∥∥∥\n2\n(16)\nand trace the ratio during training.\nAs Figure 2 shows, the ratio is ultimately large at the beginning of training (around 2×106) and becomes smaller during training. While it achieves its minimum at around the end of the training, the ratio is always larger than 1×105. As a result, the ratio serves as a quantitative comparison between the norm of the back-propagated gradient with the norm of the layer in a network. It suggests that the η should not be too small; otherwise the layer-wise perturbations can be too small to affect the training. Model robustness results under different η’s have also proven the finding. As Table 5 shows, the effect on adversarial model robustness from a small η is marginal compared with that from a larger η. Table 5 also shows that η should not be too large. A too large η may unstabilize the training process and deteriorate the model robustness." }, { "heading": "5 CONCLUSION", "text": "In this work, we propose layer-wise adversarial defense as an extension to current adversarial training approaches. The hidden layers are robustified by the introduced layer-wise perturbations, which are proportional to the back-propagated gradients. We build up the extended relationship of our methods with conventional adversarial training methods from the ODE perspective by providing the analysis of the second order dynamics. We use the Lie-Trotter and the Strang-Marchuk splitting schemes to discretize the proposed ODE model, resulting in two different training algorithms. 
Experiments on the CIFAR-10 and CIFAR-100 benchmarks show that our methods improve adversarial model robustness on top of different widely-used strong adversarial training techniques." }, { "heading": "A A BRIEF INTRODUCTION TO THE SPLITTING SCHEMES", "text": "A.1 NUMERICAL BACKGROUND\n\nConsider an ODE with two coupled terms on the right-hand side:\n\n$$\frac{dx}{dt} = F(x(t), t) + G(x(t), t), \quad x(0) = x_0. \tag{17}$$\n\nIt is difficult to solve ODEs with two coupled terms on the right-hand side. The splitting methods provide a natural way to decompose the coupled terms into individual calculations for different differential operators (McLachlan & Quispel, 2002). The simplest splitting scheme is the Lie-Trotter splitting scheme, which alternately calculates F(·) and G(·). The Lie-Trotter splitting scheme with the forward Euler method discretizes Eq. (17) as follows:\n\n$$\tilde{x}(t) = x(t) + \Delta t F(x(t), t), \quad x(t + \Delta t) = \tilde{x}(t) + \Delta t G(x(t), t). \tag{18}$$\n\nThe Lie-Trotter splitting scheme first solves the ODE with respect to F(·) to acquire the intermediate state $\tilde{x}(t)$. Starting from $\tilde{x}(t)$, it continues to solve the ODE with respect to G(·) to complete the discretization from time $t$ to time $t + \Delta t$. In the proposed ODE (Eq. (7)), we treat the f function as the operator F(·) and the dL/dx function as the operator G(·). From the formulation of the Lie-Trotter splitting scheme (Eq. (18)), we have the following discretization:\n\n$$\tilde{x}_{n+1} = x_n + f_n(x_n), \quad x_{n+1} = \tilde{x}_{n+1} + \eta \frac{\partial L}{\partial \hat{x}_{n+1}}, \tag{19}$$\n\nwhich is equivalent to our two-stage approximation approach (note that since we use the first order forward Euler method for each ODE, it is equivalent to use either $\partial L / \partial \hat{x}_{n+1}$ or $\partial L / \partial \hat{x}_n$ in solving G(·)). It is thus straightforward to see that Proposition 3.1 holds. The Strang-Marchuk splitting scheme extends the Lie-Trotter splitting scheme by dividing the one-step solver for G(·) into two half steps. Using the Strang-Marchuk splitting scheme to solve Eq. (17) yields\n\n$$\hat{x}(t) = x(t) + \frac{\Delta t}{2} G(x(t), t), \quad \tilde{x}(t) = \hat{x}(t) + \Delta t F(\hat{x}(t), t), \quad x(t + \Delta t) = \tilde{x}(t) + \frac{\Delta t}{2} G\left( \tilde{x}(t), t + \frac{\Delta t}{2} \right). \tag{20}$$\n\nThe Strang-Marchuk splitting scheme enjoys a lower local truncation error than the Lie-Trotter splitting scheme (Bobylev & Ohwada, 2001). As a result, the Strang-Marchuk splitting scheme may lead to a more accurate solution in terms of solving the proposed ODE (Eq. (7)). In the next section we provide the proof for Theorem 3.2.\n\nA.2 PROOF OF THEOREM 3.2\n\nApplying the Strang-Marchuk splitting scheme (Eq. (20)) with step size $\Delta t = 1$ to solve our ODE (7), we have the following algorithm:\n\n$$\tilde{x}_n = x_n + \frac{\eta}{2} \frac{\partial L}{\partial \hat{x}_n}, \quad \bar{x}_n = \tilde{x}_n + f_n(\tilde{x}_n), \quad x_{n+1} = \bar{x}_n + \frac{\eta}{2} \frac{\partial L}{\partial \hat{x}(n + 1/2)}. \tag{21}$$\n\nIn Equation (21), the essential part is to calculate $\partial L / \partial \hat{x}(n + 1/2)$. As shown in Eq. (9), our goal is to estimate $a(n + 1/2)$ with $a(n)$ and $a(n + 1)$. As the learned filters in the deep layers converge, we can approximately treat $\partial f(\hat{x}, t) / \partial \hat{x}$ in the RHS of Eq. (9) as a constant $C$. In this way, the adjoint dynamics is relaxed to a linear ODE, the solution of which reads as follows:\n\n$$a(t) = \exp(-Ct). \tag{22}$$\n\nThen we have the following relationship:\n\n$$a^2(n + 1/2) = a(n) \cdot a(n + 1). \tag{23}$$\n\nAs shown by Bhatia & Davis (1993), the matrix form of the arithmetic-geometric mean inequality\n\n$$2\|A^{1/2} B^{1/2}\| \le \|A + B\| \tag{24}$$\n\nholds for any positive definite matrices $A, B \in \mathbb{R}^{n \times n}$ and unitarily invariant norm $\|\cdot\|$. We thus bound $a(n + 1/2)$ from above by $(a(n) + a(n + 1))/2$ in Eq. (23). Substituting the upper bound for $a(n + 1/2)$ into Eq. (21) leads to Theorem 3.2. 
Notice that while our relaxation may negatively affect the accuracy of the splitting scheme itself, the formulation we provide differs from the LAD method and is easy to implement. Besides, slightly larger layer-wise perturbations may also contribute to adversarial model robustness. We leave more accurate algorithms implementing the Strang-Marchuk splitting scheme for future work." }, { "heading": "B EXPERIMENTAL SETTINGS", "text": "B.1 GENERAL SETTINGS\n\nFollowing He et al. (2016), we pad 4 pixels on each side of the image and sample a 32 × 32 crop from it or its horizontal flip. We use pre-activated ResNet-56 as our backbone architecture and experiment with our LAD (Eq. (5)) and LAD-SM (Eq. (8)) methods. For all experiments, we use the SGD optimizer with a batch size of 128. We train for 160 (300) epochs for the CIFAR-10 (-100) benchmark; the learning rate starts at 0.1 and is divided by 10 at 80 (150) and 120 (225) epochs. We apply weight decay of 1e-4 and momentum of 0.9. We determine the scaling factor η by cross-validation on the training set, with η = 50 for CIFAR-10 and η = 1000 for CIFAR-100. For natural training, we set α = 0.5/255 and M = 20 in the IFGSM attack. For PGD training, we set α = 2/255, with M = 10 iterations during PGD training and M = 20 for the IFGSM attacks. In the TRADES setting, we set λ = 1/6 and γ = 1/2. For other hyperparameters, we follow the settings in the PGD training experiments.\n\nB.2 IMPLEMENTATION DETAILS OF OUR METHODS\n\nIn implementation, we set hooks on each layer in the neural architectures and perform a forward pass with the perturbed inputs to calculate the loss. Gradients back-propagated from the loss function are caught by the hooks on each layer. In another forward pass, the recorded gradients are further scaled and added to each layer following Eq. (5) or Eq. (8). Following conventional adversarial training algorithms, we add infinity-norm bound constraints to the input perturbations. We propose to replace the (adversarial) loss term with the newly calculated loss function. For a given data point 〈x, y〉, denote the perturbed input by x′. The natural loss term and the adversarial loss term are L(f(x), y) and L(f(x′), y), respectively. Then for natural training, we replace the term L(f(x), y) with the resulting loss function $L_{\text{LAD/LAD-SM}}(f(x'), y)$; for PGD training, we replace the term L(f(x′), y) with $L_{\text{LAD/LAD-SM}}(f(x'), y)$; for TRADES training, we replace the term L(f(x), y) with $\gamma L(f(x), y) + (1 - \gamma) L_{\text{LAD/LAD-SM}}(f(x'), y)$, where γ is a hyperparameter for weighted averaging." }, { "heading": "C STRONGER ATTACKS", "text": "Table 6 shows the accuracy and robustness results of our methods as well as the baselines with adversarial training under stronger attacks on the CIFAR benchmarks. It can be seen that our methods consistently outperform the baseline methods under attacks of different strength levels." }, { "heading": "D RESULTS OF LAD WITH STOCHASTICITY ON CIFAR-100", "text": "In this section, we provide additional experimental results of LAD with stochasticity on CIFAR-100, shown in Table 7." } ]
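For intuition on Appendix A above, a small self-contained sketch comparing the Lie-Trotter (Eq. (18)) and Strang-Marchuk (Eq. (20)) steps on a toy linear ODE dx/dt = F(x) + G(x) with a known exact solution; the operators and step size here are illustrative only, not from the paper.

```python
import numpy as np

# toy split ODE: dx/dt = F(x) + G(x), exact solution x(t) = x0 * exp(-3t)
F = lambda x, t: -1.0 * x
G = lambda x, t: -2.0 * x

def lie_trotter_step(x, t, dt):
    """Eq. (18): a full forward-Euler step for F, then one for G."""
    x_tilde = x + dt * F(x, t)
    return x_tilde + dt * G(x, t)   # Eq. (18) evaluates G at the original x

def strang_marchuk_step(x, t, dt):
    """Eq. (20): half step of G, full step of F, half step of G."""
    x_hat = x + 0.5 * dt * G(x, t)
    x_tilde = x_hat + dt * F(x_hat, t)
    return x_tilde + 0.5 * dt * G(x_tilde, t + 0.5 * dt)

x_lt = x_sm = 1.0
dt, T = 0.01, 1.0
for k in range(int(T / dt)):
    x_lt = lie_trotter_step(x_lt, k * dt, dt)
    x_sm = strang_marchuk_step(x_sm, k * dt, dt)
exact = np.exp(-3.0 * T)
print(abs(x_lt - exact), abs(x_sm - exact))  # the SM error is smaller here
```

Running this shows the Strang-Marchuk error is a few times smaller than the Lie-Trotter error at the same step size, consistent with its lower local truncation error noted in Appendix A.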
2020
LAYER-WISE ADVERSARIAL DEFENSE: AN ODE PERSPECTIVE
SP:075f74ff0eec8a4d36e3d9d6c62276776dd465ba
[ "This paper asks a simple question: do extreme-activating synthetic images for a CNN unit help a human observer to predict that unit’s response to natural images, compared with maximally/minimally activating natural images. The authors present human observers with images synthesized to maximally or minimally activate a CNN unit, and then ask observers to make a binary choice as to which of two subsequently presented natural images will yield a larger unit response. They find that the synthetic images provide useful information for prediction, but that the benefit is smaller than that provided by simply presenting people with other natural images that maximally or minimally activate a unit. " ]
Feature visualizations such as synthetic maximally activating images are a widely used explanation method to better understand the information processing of convolutional neural networks (CNNs). At the same time, there are concerns that these visualizations might not accurately represent CNNs’ inner workings. Here, we measure how much extremely activating images help humans to predict CNN activations. Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images by Olah et al. (2017) with a simple baseline visualization, namely exemplary natural images that also strongly activate a specific feature map. Given either synthetic or natural reference images, human participants choose which of two query images leads to strong positive activation. The experiment is designed to maximize participants’ performance, and is the first to probe intermediate instead of final layer representations. We find that synthetic images indeed provide helpful information about feature map activations (82 ± 4% accuracy; chance would be 50%). However, natural images — originally intended to be a baseline — outperform these synthetic images by a wide margin (92 ± 2%). Additionally, participants are faster and more confident for natural images, whereas subjective impressions about the interpretability of the feature visualizations by Olah et al. (2017) are mixed. The higher informativeness of natural images holds across most layers, for both expert and lay participants as well as for hand- and randomly-picked feature visualizations. Even if only a single reference image is given, synthetic images provide less information than natural images (65 ± 5% vs. 73 ± 4%). In summary, synthetic images from a popular feature visualization method are significantly less informative for assessing CNN activations than natural images. We argue that visualization methods should improve over this simple baseline.
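Editorial note: the natural-image baseline described in the abstract above (exemplary dataset images that most and least strongly activate a feature map) can be sketched as follows. This is our illustration, not the study's code; the model/layer access pattern, the spatial-mean pooling, and the value of k are assumptions.

```python
import torch

@torch.no_grad()
def natural_references(model, layer, images, channel, k=9):
    """Pick the k most and k least activating natural images for one feature map.

    images:  (N, 3, H, W) batch of preprocessed natural images.
    layer:   module whose output feature map we probe; channel: feature map index.
    Returns (max_activating, min_activating) image tensors.
    """
    acts = []

    def hook(module, inp, out):
        # spatially average the selected feature map -> one scalar per image
        acts.append(out[:, channel].mean(dim=(1, 2)))

    handle = layer.register_forward_hook(hook)
    model(images)
    handle.remove()

    scores = torch.cat(acts)                      # (N,)
    order = scores.argsort()                      # ascending activation
    return images[order[-k:]], images[order[:k]]  # max- and min-activating
```

Such reference sets are what the abstract reports as outperforming the synthetic feature visualizations in the two-alternative prediction task.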
[ { "affiliations": [], "name": "Judy Borowski" }, { "affiliations": [], "name": "Roland S. Zimmermann" }, { "affiliations": [], "name": "Judith Schepers" }, { "affiliations": [], "name": "Robert Geirhos" }, { "affiliations": [], "name": "Thomas S. A. Wallis" }, { "affiliations": [], "name": "Matthias Bethge" }, { "affiliations": [], "name": "Wieland Brendel" } ]
[ { "authors": [ "Vincent Vanhoucke", "Vijay Vasudevan", "Fernanda Viégas", "Oriol Vinyals", "Pete Warden", "Martin Wattenberg", "Martin Wicke", "Yuan Yu", "Xiaoqiang Zheng" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "venue": null, "year": 2015 }, { "authors": [ "Sravanti Addepalli", "Dipesh Tamboli", "R Venkatesh Babu", "Biplab Banerjee" ], "title": "Saliency-driven class impressions for feature visualization of deep neural networks", "venue": "arXiv preprint arXiv:2007.15861,", "year": 2020 }, { "authors": [ "Ahmed Alqaraawi", "Martin Schuessler", "Philipp Weiß", "Enrico Costanza", "Nadia Berthouze" ], "title": "Evaluating saliency map explanations for convolutional neural networks: a user study", "venue": "In Proceedings of the 25th International Conference on Intelligent User Interfaces,", "year": 2020 }, { "authors": [ "Yasmeen Alufaisan", "Laura R Marusich", "Jonathan Z Bakdash", "Yan Zhou", "Murat Kantarcioglu" ], "title": "Does explainable artificial intelligence improve human decision-making", "venue": "arXiv preprint arXiv:2006.11194,", "year": 2020 }, { "authors": [ "Sebastian Bach", "Alexander Binder", "Grégoire Montavon", "Frederick Klauschen", "Klaus-Robert Müller", "Wojciech Samek" ], "title": "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation", "venue": "PloS one,", "year": 2015 }, { "authors": [ "David Bau", "Bolei Zhou", "Aditya Khosla", "Aude Oliva", "Antonio Torralba" ], "title": "Network dissection: Quantifying interpretability of deep visual representations", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "David Bau", "Jun-Yan Zhu", "Hendrik Strobelt", "Agata Lapedriza", "Bolei Zhou", "Antonio Torralba" ], "title": "Understanding the role of individual units in a deep neural network", "venue": "Proceedings of the National Academy of Sciences,", "year": 2020 }, { "authors": [ "Felix Biessmann", "Dionysius Irza Refiano" ], "title": "A psychophysics approach for quantitative comparison of interpretable computer vision models", "venue": "arXiv preprint arXiv:1912.05011,", "year": 2019 }, { "authors": [ "Santiago A Cadena", "Marissa A Weis", "Leon A Gatys", "Matthias Bethge", "Alexander S Ecker" ], "title": "Diverse feature visualizations reveal invariances in early layers of deep neural networks", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Carrie J Cai", "Emily Reif", "Narayan Hegde", "Jason Hipp", "Been Kim", "Daniel Smilkov", "Martin Wattenberg", "Fernanda Viegas", "Greg S Corrado", "Martin C Stumpe" ], "title": "Human-centered tools for coping with imperfect algorithms during medical decision-making", "venue": "In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems,", "year": 2019 }, { "authors": [ "Shan Carter", "Zan Armstrong", "Ludwig Schubert", "Ian Johnson", "Chris Olah" ], "title": "Activation atlas. Distill, 2019", "venue": "doi: 10.23915/distill.00015", "year": 2019 }, { "authors": [ "Diogo V Carvalho", "Eduardo M Pereira", "Jaime S Cardoso" ], "title": "Machine learning interpretability", "venue": "A survey on methods and metrics. 
Electronics,", "year": 2019 }, { "authors": [ "Arjun Chandrasekaran", "Deshraj Yadav", "Prithvijit Chattopadhyay", "Viraj Prabhu", "Devi Parikh" ], "title": "It takes two to tango: Towards theory of ai’s mind", "venue": "arXiv preprint arXiv:1704.00717,", "year": 2017 }, { "authors": [ "Eric Chu", "Deb Roy", "Jacob Andreas" ], "title": "Are visual explanations useful? a case study in model-inthe-loop prediction", "venue": "arXiv preprint arXiv:2007.12248,", "year": 2020 }, { "authors": [ "Dennis Collaris", "Jarke J van Wijk" ], "title": "Explainexplore: Visual exploration of machine learning explanations", "venue": "IEEE Pacific Visualization Symposium (PacificVis),", "year": 2020 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Jürgen Dieber", "Sabrina Kirrane" ], "title": "Why model why? assessing the strengths and limitations of lime", "venue": "arXiv preprint arXiv:2012.00093,", "year": 2020 }, { "authors": [ "Jonathan Dinu", "Jeffrey Bigham", "J Zico Kolter" ], "title": "Challenging common interpretability assumptions in feature attribution explanations", "venue": "arXiv preprint arXiv:2012.02748,", "year": 2020 }, { "authors": [ "Finale Doshi-Velez", "Been Kim" ], "title": "Towards a rigorous science of interpretable machine learning", "venue": "arXiv preprint arXiv:1702.08608,", "year": 2017 }, { "authors": [ "Imme Ebert-Uphoff", "Kyle Hilburn" ], "title": "Evaluation, tuning and interpretation of neural networks for working with images in meteorological applications", "venue": "Bulletin of the American Meteorological Society,", "year": 2020 }, { "authors": [ "Dumitru Erhan", "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Visualizing higher-layer features of a deep network", "venue": "University of Montreal,", "year": 2009 }, { "authors": [ "Thomas Fel", "David Vigouroux" ], "title": "Representativity and consistency measures for deep neural network explanations", "venue": "arXiv preprint arXiv:2009.04521,", "year": 2020 }, { "authors": [ "Ruth Fong", "Andrea Vedaldi" ], "title": "Net2vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Amirata Ghorbani", "James Wexler", "James Y Zou", "Been Kim" ], "title": "Towards automatic concept-based explanations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Leilani H Gilpin", "David Bau", "Ben Z Yuan", "Ayesha Bajwa", "Michael Specter", "Lalana Kagal" ], "title": "Explaining explanations: An overview of interpretability of machine learning", "venue": "IEEE 5th International Conference on data science and advanced analytics (DSAA),", "year": 2018 }, { "authors": [ "Bryce Goodman", "Seth Flaxman" ], "title": "European Union regulations on algorithmic decision-making and a “right to explanation", "venue": "AI magazine,", "year": 2017 }, { "authors": [ "Fabio M. 
Graetz" ], "title": "How to visualize convolutional features in 40 lines of code, Jan 2019", "venue": "URL https://towardsdatascience.com/how-to-visualize-convolutionalfeatures-in-40-lines-of-code-70b7d87b0030", "year": 2019 }, { "authors": [ "Umut Güçlü", "Marcel AJ van Gerven" ], "title": "Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream", "venue": "Journal of Neuroscience,", "year": 2015 }, { "authors": [ "Siavash Haghiri", "Patricia Rubisch", "Robert Geirhos", "Felix Wichmann", "Ulrike von Luxburg" ], "title": "Comparison-based framework for psychophysics: Lab versus crowdsourcing", "venue": null, "year": 1905 }, { "authors": [ "Peter Hase", "Mohit Bansal" ], "title": "Evaluating explainable ai: Which algorithmic explanations help users predict model behavior", "venue": "arXiv preprint arXiv:2005.01831,", "year": 2020 }, { "authors": [ "Fred Hohman", "Haekyu Park", "Caleb Robinson", "Duen Horng Polo Chau" ], "title": "Summit: Scaling deep learning interpretability by visualizing activation and attribution summarizations", "venue": "IEEE transactions on visualization and computer graphics,", "year": 2019 }, { "authors": [ "Sara Hooker", "Dumitru Erhan", "Pieter-Jan Kindermans", "Been Kim" ], "title": "A benchmark for interpretability methods in deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jeya Vikranth Jeyakumar", "Joseph Noor", "Yu-Hsi Cheng", "Luis Garcia", "Mani Srivastava" ], "title": "How can i explain this to you? an empirical study of deep neural network explanation methods", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Andrej Karpathy", "Justin Johnson", "Li Fei-Fei" ], "title": "Visualizing and understanding recurrent networks", "venue": "arXiv preprint arXiv:1506.02078,", "year": 2015 }, { "authors": [ "Been Kim", "Rajiv Khanna", "Oluwasanmi O Koyejo" ], "title": "Examples are not enough, learn to criticize! 
criticism for interpretability", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Been Kim", "Martin Wattenberg", "Justin Gilmer", "Carrie Cai", "James Wexler", "Fernanda Viegas" ], "title": "Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav)", "venue": "In International conference on machine learning,", "year": 2018 }, { "authors": [ "Nikolaus Kriegeskorte" ], "title": "Deep neural networks: a new framework for modeling biological vision and brain information processing", "venue": "Annual review of vision science,", "year": 2015 }, { "authors": [ "Jean-Philippe Kröll", "Simon B Eickhoff", "Felix Hoffstaedter", "Kaustubh R Patil" ], "title": "Evolving complex yet interpretable representations: application to alzheimer’s diagnosis and prognosis", "venue": "IEEE Congress on Evolutionary Computation (CEC),", "year": 2020 }, { "authors": [ "Nesaretnam Barr Kumarakulasinghe", "Tobias Blomberg", "Jintai Liu", "Alexandra Saraiva Leao", "Panagiotis Papapetrou" ], "title": "Evaluating local interpretable model-agnostic explanations on clinical machine learning classification models", "venue": "IEEE 33rd International Symposium on ComputerBased Medical Systems (CBMS),", "year": 2020 }, { "authors": [ "Sebastian Lapuschkin", "Stephan Wäldchen", "Alexander Binder", "Grégoire Montavon", "Wojciech Samek", "Klaus-Robert Müller" ], "title": "Unmasking clever hans predictors and assessing what machines really learn", "venue": "Nature communications,", "year": 2019 }, { "authors": [ "Matthew L Leavitt", "Ari Morcos" ], "title": "Towards falsifiable interpretability research", "venue": "arXiv preprint arXiv:2010.12016,", "year": 2020 }, { "authors": [ "Yi-Shan Lin", "Wen-Chuan Lee", "Z Berkay Celik" ], "title": "What do you see? evaluation of explainable artificial intelligence (xai) interpretability through neural backdoors", "venue": "arXiv preprint arXiv:2009.10639,", "year": 2020 }, { "authors": [ "Zachary C Lipton" ], "title": "The mythos of model", "venue": "interpretability. 
Queue,", "year": 2018 }, { "authors": [ "Aravindh Mahendran", "Andrea Vedaldi" ], "title": "Understanding deep image representations by inverting them", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Tim Miller" ], "title": "Explanation in artificial intelligence: Insights from the social sciences", "venue": "Artificial Intelligence,", "year": 2019 }, { "authors": [ "Grégoire Montavon", "Wojciech Samek", "Klaus-Robert Müller" ], "title": "Methods for interpreting and understanding deep neural networks", "venue": "Digital Signal Processing,", "year": 2018 }, { "authors": [ "Ari S Morcos", "David GT Barrett", "Neil C Rabinowitz", "Matthew Botvinick" ], "title": "On the importance of single directions for generalization", "venue": "arXiv preprint arXiv:1803.06959,", "year": 2018 }, { "authors": [ "W James Murdoch", "Chandan Singh", "Karl Kumbier", "Reza Abbasi-Asl", "Bin Yu" ], "title": "Interpretable machine learning: definitions, methods, and applications", "venue": null, "year": 1901 }, { "authors": [ "Karthikeyan Natesan Ramamurthy", "Bhanukiran Vinzamuri", "Yunfeng Zhang", "Amit Dhurandhar" ], "title": "Model agnostic multilevel explanations", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Anh Nguyen", "Jason Yosinski", "Jeff Clune" ], "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Anh Nguyen", "Alexey Dosovitskiy", "Jason Yosinski", "Thomas Brox", "Jeff Clune" ], "title": "Synthesizing the preferred inputs for neurons in neural networks via deep generator networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Anh Nguyen", "Jason Yosinski", "Jeff Clune" ], "title": "Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks", "venue": "arXiv preprint arXiv:1602.03616,", "year": 2016 }, { "authors": [ "Anh Nguyen", "Jeff Clune", "Yoshua Bengio", "Alexey Dosovitskiy", "Jason Yosinski" ], "title": "Plug & play generative networks: Conditional iterative generation of images in latent space", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Fabian Offert" ], "title": " i know it when i see it”. visualization and intuitive interpretability", "venue": "arXiv preprint arXiv:1711.08042,", "year": 2017 }, { "authors": [ "Fabian Offert", "Peter Bell" ], "title": "Perceptual bias and technical metapictures: critical machine vision as a humanities challenge", "venue": "AI & SOCIETY,", "year": 2020 }, { "authors": [ "Chris Olah", "Nick Cammarata", "Ludwig Schubert", "Gabriel Goh", "Michael Petrov", "Shan Carter" ], "title": "An overview of early vision in inceptionv1", "venue": "Distill, 2020a. doi: 10.23915/distill.00024.002", "year": 2020 }, { "authors": [ "Chris Olah", "Nick Cammarata", "Ludwig Schubert", "Gabriel Goh", "Michael Petrov", "Shan Carter" ], "title": "Zoom in: An introduction to circuits", "venue": "Distill, 2020b. 
doi: 10.23915/distill.00024.001", "year": 2020 }, { "authors": [ "Jonathan Peirce", "Jeremy R Gray", "Sol Simpson", "Michael MacAskill", "Richard Höchenberger", "Hiroyuki Sogo", "Erik Kastman", "Jonas Kristoffer Lindeløv" ], "title": "Psychopy2: Experiments in behavior made easy", "venue": "Behavior research methods,", "year": 2019 }, { "authors": [ "Alec Radford", "Rafal Jozefowicz", "Ilya Sutskever" ], "title": "Learning to generate reviews and discovering sentiment", "venue": "arXiv preprint arXiv:1704.01444,", "year": 2017 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "why should i trust you?” explaining the predictions of any classifier", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "Anchors: High-precision model-agnostic explanations", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Wojciech Samek", "Grégoire Montavon", "Sebastian Lapuschkin", "Christopher J Anders", "KlausRobert Müller" ], "title": "Toward interpretable machine learning: Transparent deep neural networks and beyond", "venue": "arXiv preprint arXiv:2003.07631,", "year": 2020 }, { "authors": [ "Philipp Schmidt", "Felix Biessmann" ], "title": "Quantifying interpretability and trust in machine learning systems", "venue": "arXiv preprint arXiv:1901.08558,", "year": 2019 }, { "authors": [ "Ramprasaath R Selvaraju", "Michael Cogswell", "Abhishek Das", "Ramakrishna Vedantam", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Hua Shen", "Ting-Hao’Kenneth’ Huang" ], "title": "How useful are the machine-generated interpretations to general users? 
a human evaluation on guessing the incorrectly predicted labels", "venue": "arXiv preprint arXiv:2008.11721,", "year": 2020 }, { "authors": [ "Rui Shi", "Tianxing Li", "Yasushi Yamaguchi" ], "title": "Group visualization of class-discriminative features", "venue": "Neural Networks,", "year": 2020 }, { "authors": [ "Daniel Smilkov", "Nikhil Thorat", "Been Kim", "Fernanda Viégas", "Martin Wattenberg" ], "title": "Smoothgrad: removing noise by adding noise", "venue": "arXiv preprint arXiv:1706.03825,", "year": 2017 }, { "authors": [ "Jost Tobias Springenberg", "Alexey Dosovitskiy", "Thomas Brox", "Martin Riedmiller" ], "title": "Striving for simplicity: The all convolutional net", "venue": "arXiv preprint arXiv:1412.6806,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Erico Tjoa", "Cuntai Guan" ], "title": "Quantifying explainability of saliency methods in deep neural networks", "venue": "arXiv preprint arXiv:2009.02899,", "year": 2020 }, { "authors": [ "Julian Tritscher", "Markus Ring", "Daniel Schlr", "Lena Hettinger", "Andreas Hotho" ], "title": "Evaluation of posthoc xai approaches through synthetic tabular data", "venue": "In International Symposium on Methodologies for Intelligent Systems,", "year": 2020 }, { "authors": [ "Zijie J Wang", "Robert Turko", "Omar Shaikh", "Haekyu Park", "Nilaksh Das", "Fred Hohman", "Minsuk Kahng", "Duen Horng Chau" ], "title": "Cnn explainer: Learning convolutional neural networks with interactive visualization", "venue": null, "year": 2004 }, { "authors": [ "Jason Yosinski", "Jeff Clune", "Anh Nguyen", "Thomas Fuchs", "Hod Lipson" ], "title": "Understanding neural networks through deep visualization", "venue": "arXiv preprint arXiv:1506.06579,", "year": 2015 }, { "authors": [ "Matthew D Zeiler", "Rob Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Quan-shi Zhang", "Song-Chun Zhu" ], "title": "Visual interpretability for deep learning: a survey", "venue": "Frontiers of Information Technology & Electronic Engineering,", "year": 2018 }, { "authors": [ "Bolei Zhou", "Aditya Khosla", "Agata Lapedriza", "Aude Oliva", "Antonio Torralba" ], "title": "Object detectors emerge in deep scene cnns", "venue": "arXiv preprint arXiv:1412.6856,", "year": 2014 }, { "authors": [ "Luisa M Zintgraf", "Taco S Cohen", "Tameem Adel", "Max Welling" ], "title": "Visualizing deep neural network decisions: Prediction difference analysis", "venue": "arXiv preprint arXiv:1702.04595,", "year": 2017 }, { "authors": [ "Russakovsky" ], "title": "Note that the Inception V1 network used in previously mentioned work", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "As Deep Learning methods are being deployed across society, academia and industry, the need to understand their decisions becomes ever more pressing. Under certain conditions, a “right to explanation” is even required by law in the European Union (GDPR, 2016; Goodman & Flaxman, 2017). Fortunately, the field of interpretability or explainable artificial intelligence (XAI) is also growing: Not only are discussions on goals and definitions of interpretability advancing (DoshiVelez & Kim, 2017; Lipton, 2018; Gilpin et al., 2018; Murdoch et al., 2019; Miller, 2019; Samek et al., 2020) but the number of explanation methods is rising, their maturity is evolving (Zeiler & Fergus, 2014; Ribeiro et al., 2016; Selvaraju et al., 2017; Kim et al., 2018) and they are tested and\n∗Joint first and corresponding authors: firstname.lastname@uni-tuebingen.de †Current affiliation: Institute of Psychology and Center for Cognitive Science, Technische Universität\nDarmstadt ‡Joint senior authors\n...\nor\nM ax\nim al ly Ac tiv at in g\n...\nSynthetic Natural\n...\nM in\nim al ly Ac tiv at in g\n... Choose or\nQueries\n...which image is strongly activating?A Given these reference images... B\nSynthetic Natural Chance\n0.6\n0.7\n0.8\n0.9\n1.0\nPr op\nor tio\nn Co\nrr ec\nt\nSynthetic images are helpful Natural even more\nor\nFigure 1: How useful are synthetic compared to natural images for interpreting neural network activations? A: Human experiment. Given extremely activating reference images (either synthetic or natural), a human participant chooses which out of two query images is also a strongly activating image. Synthetic images were generated via feature visualization (Olah et al., 2017). B: Core result. Participants are well above chance for synthetic images — but even better when seeing natural reference images.\nused in real-world scenarios like medicine (Cai et al., 2019; Kröll et al., 2020) and meteorology (Ebert-Uphoff & Hilburn, 2020).\nWe here focus on the popular post-hoc explanation method (or interpretability method) of feature visualizations via activation maximization1. First introduced by Erhan et al. (2009) and subsequently improved by many others (Mahendran & Vedaldi, 2015; Nguyen et al., 2015; Mordvintsev et al., 2015; Nguyen et al., 2016a; 2017), these synthetic, maximally activating images seek to visualize features that a specific network unit, feature map or a combination thereof is selective for. However, feature visualizations are surrounded by a great controversy: How accurately do they represent a CNN’s inner workings—or in short, how useful are they? This is the guiding question of our study.\nOn the one hand, many researchers are convinced that feature visualizations are interpretable (Graetz, 2019) and that “features can be rigorously studied and understood” (Olah et al., 2020b). Also other applications from Computer Vision and Natural Language Processing support the view that features are meaningful (Mikolov et al., 2013; Karpathy et al., 2015; Radford et al., 2017; Zhou et al., 2014; Bau et al., 2017; 2020) and might be formed in a hierarchical fashion (LeCun et al., 2015; Güçlü & van Gerven, 2015; Goodfellow et al., 2016). 
Over the past few years, extensive investigations to better understand CNNs have been based on feature visualizations (Olah et al., 2020b;a; Cammarata et al., 2020; Cadena et al., 2018), and the technique is being combined with other explanation methods (Olah et al., 2018; Carter et al., 2019; Addepalli et al., 2020; Hohman et al., 2019).\n\nOn the other hand, feature visualizations can be equal parts art and engineering as they are science: vanilla methods look noisy, thus human-defined regularization mechanisms are introduced. But do the resulting beautiful visualizations accurately show what a CNN is selective for? How representative are the seemingly well-interpretable, “hand-picked” (Olah et al., 2017) synthetic images in publications for the entirety of all units in a network, a concern raised by e.g. Kriegeskorte (2015)? What if the features that a CNN is truly sensitive to are imperceptible instead, as might be suggested by the existence of adversarial examples (Szegedy et al., 2013; Ilyas et al., 2019)? Morcos et al. (2018) even suggest that units with easily understandable features play a less important role in a network. Another criticism of synthetic maximally activating images is that they only visualize extreme features, while potentially leaving other features undetected that only elicit e.g. 70% of the maximal activation. Also, polysemantic units (Olah et al., 2020b), i.e. units that are highly activated by different semantic concepts, as well as the importance of combinations of units (Olah et al., 2017; 2018; Fong & Vedaldi, 2018) already hint at the complexity of how concepts are encoded in CNNs.\n\nOne way to advance this debate is to measure the utility of feature visualizations in terms of their helpfulness for humans. In this study, we therefore design well-controlled psychophysical experiments that aim to quantify the informativeness of the popular visualization method by Olah et al. (2017). Specifically, participants choose which of two natural images would elicit a higher activation in a CNN given a set of reference images that visualize the network selectivities. We use natural query images because real-world applications of XAI require understanding model decisions on natural inputs. To the best of our knowledge, our study is the first to probe how well humans can predict intermediate CNN activations. Our data shows that:\n\n• Synthetic images provide humans with helpful information about feature map activations.\n\n• Exemplary natural images are even more helpful.\n\n• The superiority of natural images mostly holds across the network and various conditions.\n\n• Subjective impressions of the interpretability of the synthetic visualizations vary greatly between participants." }, { "heading": "2 RELATED WORK", "text": "Significant progress has been made in recent years towards understanding CNNs for image data. Here, we mention a few selected methods as examples of the plethora of approaches for understanding CNN decision-making: Saliency maps show the importance of each pixel to the classification decision (Springenberg et al., 2014; Bach et al., 2015; Smilkov et al., 2017; Zintgraf et al., 2017), concept activation vectors show a model’s sensitivity to human-defined concepts (Kim et al., 2018), and other methods - amongst them feature visualizations - focus on explaining individual units (Bau et al., 2020).
Some tools integrate interactive, software-like aspects (Hohman et al., 2019; Wang et al., 2020; Carter et al., 2019; Collaris & van Wijk, 2020; OpenAI, 2020), combine more than one explanation method (Shi et al., 2020; Addepalli et al., 2020) or make progress towards automated explanation methods (Lapuschkin et al., 2019; Ghorbani et al., 2019). As overviews, we recommend Gilpin et al. (2018); Zhang & Zhu (2018); Montavon et al. (2018) and Carvalho et al. (2019).\nDespite their great insights, challenges for explanation methods remain. Oftentimes, these techniques are criticized as being over-engineered; regarding feature visualizations, this concerns the loss function and techniques to make the synthetic images look interpretable (Nguyen et al., 2017). Another critique is that interpretability research is not sufficiently tested against falsifiable hypotheses and rather relies too much on intuition (Leavitt & Morcos, 2020).\nIn order to further advance XAI, scientists advocate different directions. Besides the focus on developing additional methods, some researchers (e.g. Olah et al. (2020b)) promote the “natural science” approach, i.e. studying a neural network extensively and making empirical claims until falsification. Yet another direction is to quantitatively evaluate explanation methods. So far, only decision-level explanation methods have been studied in this regard. Quantitative evaluations can either be realized with humans directly or with mathematically-grounded models as an approximation for human perception. Many of the latter approaches show great insights (e.g. Hooker et al. (2019); Nguyen & Martı́nez (2020); Fel & Vigouroux (2020); Lin et al. (2020); Tritscher et al. (2020); Tjoa & Guan (2020)). However, a recent study demonstrates that metrics of the explanation quality computed without human judgment are inconclusive and do not correspond to the human rankings (Biessmann & Refiano, 2019). Additionally, Miller (2019) emphasizes that XAI should build on existing research in philosophy, cognitive science and social psychology.\nThe body of literature on human evaluations of explanation methods is growing: Various combinations of data types (tabular, text, static images), task set-ups and participant pools (experts vs. laypeople, on-site vs. crowd-sourcing) are being explored. However, these studies all aim to investigate final model decisions and do not probe intermediate activations like our experiments do. For a detailed table of related studies, see Appendix Sec. A.3. A commonly employed task paradigm is the “forward simulation / prediction” task, first introduced by Doshi-Velez & Kim (2017): Participants guess the model’s computation based on an input and an explanation. As there is no absolute metric for the goodness of explanation methods (yet), comparisons are always performed within studies, typically against baselines. The same holds for additional data collected for confidence or trust ratings. According to the current literature, studies reporting positive effects of explanations (e.g. Kumarakulasinghe et al. (2020)) slightly outweigh those reporting inconclusive (e.g. Alufaisan et al. (2020); Chu et al. (2020)) or even negative effects (e.g. Shen & Huang (2020)).\nTo our knowledge, no study has yet evaluated the popular explanation method of feature visualizations and how it improves human understanding of intermediate network activations. 
This study therefore closes an important gap: By presenting data for a forward prediction task of a CNN, we provide a quantitative estimate of the informativeness of maximally activating images generated with the method of Olah et al. (2017). Furthermore, our experiments are unique as they probe for the first time how well humans can predict intermediate model activations." }, { "heading": "3 METHODS", "text": "We perform two human psychophysical studies (code and data are available at https://bethgelab.github.io/testing_visualizations/) with different foci (Experiment I (N = 10) and Experiment II (N = 23)). In both studies, the task is to choose the one image out of two natural query images (two-alternative forced choice paradigm) that the participant considers to also elicit a strong activation given some reference images (see Fig. 2). Apart from the image choice, we record the participant’s confidence level and reaction time. Specifically, responses are given by clicking on the confidence levels belonging to either query image. In order to gain insights into how intuitive participants find feature visualizations, their subjective judgments are collected in a separate task and a dynamic conversation after the experiment (for details, see Appendix Sec. A.1.1 and Appendix Sec. A.2.6).\n\nAll design choices are made with two main goals: (1) allowing participants to achieve the best performance possible to approximate an upper bound on the helpfulness of the explanation method, and (2) gaining a general impression of the helpfulness of the examined method. As an example, we choose the natural query images from among those of lowest and highest activations (→ best possible performance) and test many different feature maps across the network (→ generality). For more details on the human experiment besides the ones below, see Appendix Sec. A.1.\n\nIn Experiment I, we focus on comparing the performance of synthetic images to two baseline conditions: natural reference images and no reference images. In Experiment II, we compare lay vs. expert participants as well as different presentation schemes of reference images. Expert participants qualify by being familiar or having practical experience with feature visualization techniques or at least CNNs. Regarding presentation schemes, we vary whether only maximally or both maximally and minimally activating images are shown, as well as how many example images of each of these are presented (1 or 9).\n\nFollowing the existing work on feature visualization (Olah et al., 2017; 2018; 2020b;a), we use an Inception V1 network (also known as GoogLeNet; Szegedy et al., 2015) trained on ImageNet (Deng et al., 2009; Russakovsky et al., 2015). The synthetic images throughout this study are the optimization results of the feature visualization method by Olah et al. (2017) with the spatial average of a whole feature map (“channel objective”). The natural stimuli are selected from the validation set of the ImageNet ILSVRC 2012 dataset (Russakovsky et al., 2015) according to their activations for the feature map of interest. Specifically, the images with the most extreme activations are sampled, while ensuring that each lay or expert participant sees different query and reference images. A more detailed description of the specific sampling process for natural stimuli and the generation process of synthetic stimuli is given in Sec. A.1.2.
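For concreteness, a minimal sketch of how such a channel-objective visualization can be produced with the lucid library (the tooling referenced in Sec. A.1.2) is shown below. The layer name and channel index are placeholder examples, not the exact feature maps of our experiments, and the snippet assumes lucid 0.3.x with TensorFlow 1.x.

```python
# Sketch: channel-objective feature visualization with lucid.
# Assumes lucid 0.3.x / TensorFlow 1.x; "mixed4a_pre_relu" and channel 476
# are placeholder choices, not the specific feature maps from the study.
import lucid.modelzoo.vision_models as models
import lucid.optvis.objectives as objectives
import lucid.optvis.render as render

model = models.InceptionV1()
model.load_graphdef()

# Maximize the spatial mean of one feature map ("channel objective").
obj = objectives.channel("mixed4a_pre_relu", 476)
max_images = render.render_vis(model, obj)

# A minimally activating image should be obtainable by flipping the
# sign of the objective (lucid objectives support scalar arithmetic).
min_images = render.render_vis(model, -1 * objectives.channel("mixed4a_pre_relu", 476))
```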
[Figure 3: Participants are better, more confident and faster at judging which of two query images causes higher feature map activation with natural than with synthetic reference images. A: Performance (proportion correct per reference image type; pairwise p-values 0.005, 0.003, 0.003). Given synthetic reference images, participants are well above chance (proportion correct: 82± 4%), but even better for natural reference images (92± 2%). Without reference images (baseline comparison “None”), participants are close to chance. B: Confidence (proportion of correct trials per confidence rating 1–3). Participants are much more confident (higher rating = more confident) for natural than for synthetic images on correctly answered trials (χ2, p < .001). C: Reaction time in seconds (pairwise p-values 0.002, <0.001, 0.003). For correctly answered trials, participants are on average faster when presented with natural than with synthetic reference images. We show additional plots on confidence and reaction time for incorrectly answered trials and all trials in the Appendix (Fig. 16; for Experiment II, see Fig. 17). The p-values in A and C correspond to Wilcoxon signed-rank tests.]" }, { "heading": "4 RESULTS", "text": "In this section, all figures show data from Experiment I except for Fig. 5A+C, which show data from Experiment II. All figures for Experiment II, which replicate the findings of Experiment I, as well as additional figures for Experiment I (such as a by-feature-map analysis), can be found in the Appendix Sec. A.2. Note that (unless explicitly noted otherwise) error bars denote two standard errors of the mean of the participant average metric." }, { "heading": "4.1 PARTICIPANTS ARE BETTER, MORE CONFIDENT AND FASTER WITH NATURAL IMAGES", "text": "Synthetic images can be helpful: Given synthetic reference images generated via feature visualization (Olah et al., 2017), participants are able to predict whether a certain network feature map prefers one over the other query image with an accuracy of 82±4%, which is well above chance level (50%) (see Fig. 3A). However, performance is even higher in what we intended to be the baseline condition: natural reference images (92±2%). Additionally, for correct answers, participants much more frequently report being highly certain on natural relative to synthetic trials (see Fig. 3B), and their average reaction time is approximately 3.7 seconds faster when seeing natural than synthetic reference images (see Fig. 3C). Taken together, these findings indicate that in our setup, participants are not just better overall, but also more confident and substantially faster on natural images." }, { "heading": "4.2 NATURAL IMAGES ARE MORE HELPFUL ACROSS A BROAD RANGE OF LAYERS", "text": "Next, we take a more fine-grained look at performance across different layers and branches of the Inception modules (see Fig. 4). Generally, feature map visualizations from lower layers show low-level features such as striped patterns, color or texture, whereas feature map visualizations from higher layers tend to show more high-level concepts like (parts of) objects (LeCun et al., 2015; Güçlü & van Gerven, 2015; Goodfellow et al., 2016).
We find performance to be reasonably high across most layers and branches: participants are able to match both low-level and high-level patterns (despite not being explicitly instructed what layer a feature map belonged to). Again, natural images are mostly more helpful than synthetic images." }, { "heading": "4.3 FOR EXPERT AND LAY PARTICIPANTS ALIKE: NATURAL IMAGES ARE MORE HELPFUL", "text": "Explanation methods seek to explain aspects of algorithmic decision-making. Importantly, an explanation should not just be amenable to experts but to anyone affected by an algorithm’s decision. We here test whether the explanation method of feature visualization is equally applicable to expert and lay participants (see Fig. 5A). Contrary to our prior expectation, we find no significant differences in expert vs. lay performance (RM ANOVA, p = .44, for details see Appendix Sec. A.2.2). Hence, extensive experience with CNNs is not necessary to perform well in this forward simulation task. In line with the previous main finding, both experts and lay participants are better in the natural than in the synthetic condition." }, { "heading": "4.4 EVEN FOR HAND-PICKED FEATURE VISUALIZATIONS, PERFORMANCE IS HIGHER ON NATURAL IMAGES", "text": "Often, explanation methods are presented using carefully selected network units, raising the question whether author-chosen units are representative for the interpretability method as a whole. Olah et al. (2017) identify a number of particularly interpretable feature maps in Inception V1 in their appendix overview. When presenting either these hand-picked visualizations (all taken from the pooling branch of the Inception module; as the appendix overview in Olah et al. (2017) does not contain one feature map for each layer, we select interpretable feature maps for the missing layers mixed5a and mixed5b ourselves) or randomly selected ones, performance for hand-picked feature maps improves slightly (Fig. 5B); however, this performance difference is small and not significant for both natural (Wilcoxon test, p = .59) and synthetic (Wilcoxon test, p = .18) reference images (see Appendix Sec. A.2.4 for further analysis). Consistent with the findings reported above, performance is higher for natural than for synthetic reference images even on carefully selected hand-picked feature maps." }, { "heading": "4.5 ADDITIONAL INFORMATION BOOSTS PERFORMANCE, ESPECIALLY FOR NATURAL IMAGES", "text": "Publications on feature visualizations vary in terms of how optimized images are presented: Often, a single maximally activating image is shown (e.g. Erhan et al. (2009); Carter et al. (2019); Olah et al. (2018)); sometimes a few images are shown simultaneously (e.g. Yosinski et al. (2015); Nguyen et al. (2016b)), and on occasion both maximally and minimally activating images are shown in unison (Olah et al. (2017)). Naturally, the question arises as to what influence (if any) these choices have, and whether there is an optimal way of presenting extremely activating images. For this reason, we systematically compare approaches along two dimensions: the number of reference images (1 vs. 9) and the availability of minimally activating images (only Max vs. Min+Max). The results can be found in Fig. 5C. When just a single maximally activating image is presented (condition Max 1), natural images already outperform synthetic images (73 ± 4% vs. 64 ± 5%).
With additional information along either dimension, performance improves both for natural as well as for synthetic images. The stronger boost in performance, however, is observed for natural reference images. In fact, performance is higher for natural than for synthetic reference images in all four conditions. In the Min+Max 9 condition, a replication of the result from Experiment I shown in Fig. 3A, natural images now outperform synthetic images by an even larger margin (91± 3 vs. 72± 4%)." }, { "heading": "4.6 SUBJECTIVELY, INTERPRETABILITY OF FEATURE VISUALIZATIONS VARIES GREATLY", "text": "While our data suggests that feature visualizations are indeed helpful for humans to predict CNN activations, we want to emphasize again that our design choices aim at an upper bound on their informativeness. Another important aspect of evaluating an explanation method is the subjective impression. Besides recording confidence ratings and reaction times, we collect judgments on intuitiveness trials (see Appendix Fig. 14) and oral impressions after the experiments. The former ask for ratings of how intuitive feature visualizations appear for natural images. As Fig. 6A+B show, participants perceive the intuitiveness of synthetic feature visualizations for strongly activating natural dataset images very differently. Further, the comparison of intuitiveness judgments before and after the main experiments reveals only a small significant average improvement for one out of three feature maps (see Fig. 6B+C, Wilcoxon test, p < .001 for mixed4b). The interactive conversations paint a similar picture: Some synthetic feature visualizations are perceived as intuitive while others do not correspond to understandable concepts. Nonetheless, four participants report that their first “gut feeling” for interpreting these reference images (as one participant phrased it) is more reliable. Further, a few participants point out that the synthetic visualizations are exhausting to understand. Finally, three participants additionally emphasize that the minimally activating reference images played an important role in their decision-making.\n\nIn a by-feature-map analysis (see Appendix A.2.7 for details and images, as well as Supplementary Material 1 for more images), we compare differences and commonalities for feature maps of different performance levels. According to our observations, easy feature maps seem to contain clear object parts or shapes. In contrast, difficult feature maps seem to have diverse reference images, features that do not correspond to human concepts, or contain conflicting information as to which commonalities between query and reference images matter more. Bluntly speaking, we are also often surprised that participants identified the correct image — the reasons for this are unclear to us." }, { "heading": "5 DISCUSSION & CONCLUSION", "text": "Feature visualizations such as synthetic maximally activating images are a widely used explanation method, but it is unclear whether they indeed help humans to understand CNNs. Using well-controlled psychophysical experiments with both expert and lay participants, we here conduct the very first investigation of intermediate synthetic feature visualizations by Olah et al. (2017): Can participants predict which of two query images leads to a strong activation in a feature map, given extremely activating visualizations? Specifically, we shed light on the following questions:\n\n(1.) How informative are synthetic feature visualizations — and how do they compare to a natural image baseline? We find above-chance performance given synthetic feature visualizations, but to our own surprise, synthetic feature visualizations are systematically less informative than the simple baseline of strongly activating natural images. Interestingly, many synthetic feature visualizations contain regularization mechanisms to introduce more “natural structure” (Olah et al., 2017), sometimes even called a “natural image prior” (Mahendran & Vedaldi, 2015; Offert & Bell, 2020). This raises the question: Are natural images maybe all you need? One might posit that extremely activating natural (reference) images would have an unfair advantage because we also test on extremely activating natural (query) images. However, our task design ultimately reflects that XAI is mainly concerned with explaining how units behave on natural inputs. Furthermore, the fact that feature visualizations are not bound to the natural image manifold is often claimed as an advantage because it supposedly allows them to capture more precisely which features a unit is sensitive to (Olah et al., 2017). Our results, though, demonstrate that this is not the case if we want to understand the behavior of units on natural inputs.\n\n(2.) Do you need to be a CNN expert in order to understand feature visualizations? To the best of our knowledge, our study is the first to compare the performances of expert and lay people when evaluating explanation methods. Previously, publications either focused on only expert groups (Hase & Bansal, 2020; Kumarakulasinghe et al., 2020) or only laypeople (Schmidt & Biessmann, 2019; Alufaisan et al., 2020). Our experiment shows no significant difference between expert and lay participants in our task — both perform similarly well, and even better on natural images: a replication of our main finding. While a few caveats remain when moving an experiment from the well-controlled lab to a crowdsourcing platform (Haghiri et al., 2019), this suggests that future studies may not have to rely on selected expert participants, but may leverage larger lay participant pools.\n\n(3.) Are hand-picked synthetic feature visualizations representative? An open question was whether the visualizations shown in publications represent the general interpretability of feature visualizations (a concern voiced by e.g. Kriegeskorte, 2015), even though they are hand-picked (Olah et al., 2017). Our finding that there is no large difference in performance between hand- and randomly-picked feature visualizations suggests that this aspect is minor.\n\n(4.) What is the best way of presenting images? Existing work suggests that more than one example (Offert, 2017) and particularly negative examples (Kim et al., 2016) enhance human understanding of data distributions. Our systematic exploration of presentation schemes provides evidence that increasing the number of reference images as well as presenting both minimally and maximally activating reference images (as opposed to only maximally activating ones) improve human performance. This finding might be of interest to future studies aiming at peak performance or for developing software for understanding CNNs.\n\n(5.) How do humans subjectively perceive feature visualizations? Apart from the high informativeness of explanations, another relevant question is how much trust humans have in them.
In our experiment, we find that subjective impressions of how reasonable synthetic feature visualizations are for explaining responses to natural images vary greatly. This finding is in line with Hase & Bansal (2020), who evaluated explanation methods on text and tabular data.\n\nCaveats. Despite our best intentions, a few caveats remain: The forward simulation paradigm is only one specific way to measure the informativeness of explanation methods, but does not allow us to make judgments about their helpfulness in other applications such as comparing different CNNs. Further, we emphasize that all experimental design choices were made with the goal to measure the best possible performance. As a consequence, our finding that synthetic reference images help humans predict a network’s strongly activating image may not necessarily be representative of a less optimal experimental set-up with e.g. query images corresponding to less extreme feature map activations. Knobs to further de- or increase participant performance remain (e.g. hyper-parameter choices could be tuned to layers). Finally, while we explored one particular method in depth (Olah et al., 2017), it remains an open question whether the results can be replicated for other feature visualization methods.\n\nFuture directions. We see many promising future directions. For one, the current study uses query images from extreme opposite ends of a feature map’s activation spectrum. For a more fine-grained measure of informativeness, we will study query images that elicit more similar activations. Additionally, future participants could be provided with even more information—such as, for example, where a feature map is located in the network. Furthermore, it has been suggested that the combination of synthetic and natural reference images might provide synergistic information to participants (Olah et al., 2017), which could again be studied in our experimental paradigm. Finally, further studies could explore single neuron-centered feature visualizations, combinations of units as well as different network architectures.\n\nTaken together, our results highlight the need for thorough human quantitative evaluations of feature visualizations and suggest that example natural images provide a surprisingly challenging baseline for understanding CNN activations." }, { "heading": "AUTHOR CONTRIBUTIONS", "text": "The initiative of investigating human predictability of CNN activations came from WB. JB, WB, MB and TSAW jointly combined it with the idea of investigating human interpretability of feature visualizations. JB led the project. JB, RSZ and JS jointly designed and implemented the experiments (with advice and feedback from TSAW, RG, MB and WB). The data analysis was performed by JB and RSZ (with advice and feedback from RG, TSAW, MB and WB). JB designed, and JB and JS implemented the pilot study. JB conducted the experiments (with help from JS). RSZ performed the statistical significance tests (with advice from TSAW and feedback from JB and RG). MB helped shape the bigger picture and initiated intuitiveness trials. WB provided day-to-day supervision. JB, RSZ and RG wrote the initial version of the manuscript. All authors contributed to the final version of the manuscript." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Felix A. Wichmann and Isabel Valera for helpful discussions. We further thank Alexander Böttcher and Stefan Sietzen for support as well as helpful discussions on technical details.
Additionally, we thank Chris Olah for clarifications via slack.distill.pub. Moreover, we thank Leon Sixt for valuable feedback on the introduction and related work. From our lab, we thank Matthias Kümmerer, Matthias Tangemann, Evgenia Rusak and Ori Press for helping in piloting our experiments, as well as feedback from Evgenia Rusak, Claudio Michaelis, Dylan Paiton and Matthias Kümmerer. And finally, we thank all our participants for taking part in our experiments.\nWe thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting JB, RZ and RG. We acknowledge support from the German Federal Ministry of Education and Research (BMBF) through the Competence Center for Machine Learning (TUE.AI, FKZ 01IS18039A) and the Bernstein Computational Neuroscience Program Tübingen (FKZ: 01GQ1002), the Cluster of Excellence Machine Learning: New Perspectives for Sciences (EXC2064/1), and the German Research Foundation (DFG; SFB 1233, Robust Vision: Inference Principles and Neural Mechanisms, TP3, project number 276693517)." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DETAILS ON METHODS", "text": "" }, { "heading": "A.1.1 HUMAN EXPERIMENTS", "text": "In our two human psychophysical studies, we ask humans to predict a feature map’s strongly activating image (“forward simulation task”, Doshi-Velez & Kim 2017). Answers to the two-alternative forced choice paradigm are recorded together with the participants’ confidence level (1: not confident, 2: somewhat confident, 3: very confident, see Fig. 7). Time per trial is unlimited and we record reaction time. After each trial, feedback is given (see Fig. 7). A progress bar at the bottom of the screen indicates how many trials of a block are already completed. As reference images, either synthetic, natural or no reference images are given. The synthetic images are the feature visualizations from the method of Olah et al. (2017). Trials of different reference images are arranged in blocks. Synthetic and natural reference images are alternated, and, in the case of Experiment I, framed by trials without reference images (see Fig. 8A, B). The order of the reference image types is counter-balanced across subjects.\nThe main trials in the experiments are complemented by practice, catch and intuitiveness trials. To avoid learning effects, we use different feature maps for each trial type per participant. Specifically, practice trials give participants the opportunity to familiarize themselves with the task. In order to monitor the attention of participants, catch trials appear randomly throughout blocks of main trials. Here, the query images are a copy of one of the reference images, i.e., there is an obvious correct answer (see Fig. 15). This control mechanism allows us to decide whether trial blocks should be excluded from the analysis due to e.g. fatigue. To obtain the participant’s subjective impression of the helpfulness of maximally activating images, the experiments are preceded (and also succeeded in the case of Experiment II) by three intuitiveness trials (see Fig. 14). Here, participants judge in a slightly different task design how intuitive they consider the synthetic stimuli for the natural stimuli. For more details on the intuitiveness trials, see below.\nAt the end of the experiment, all expert participants in Experiment I and all lay (but not expert) participants in Experiment II are asked about their strategy and whether it changed over time. 
The information gained through the first group allows us to understand the variety of cues used and paves the way to identify interesting directions for follow-up experiments. The information gained through the second group allowed comparisons to experts’ impressions reported in Experiment I.\n\nExperiment I The first experiment focuses on comparing performance of synthetic images to two baselines: natural reference images and no reference images (see Fig. 8A). Screenshots of trials are shown in Fig. 12. In total, 45 feature maps are tested: 36 of these are uniformly sampled from the feature maps of each of the four branches for each of the nine Inception modules. The other nine feature maps are uniformly hand-picked for interpretability from the Inception modules’ pooling branch based on the appendix overview selection provided by Olah et al. (2017) or based on our own choices. In the spirit of a general statement about the explainability method, different participants see different natural reference and query images, and each participant sees different natural query images for the same feature maps in different reference conditions. To check the consistency of participants’ responses, we repeat six randomly chosen main trials for each of the three tested reference image types at the end of the experiment.\n\nExperiment II The second experiment (see Fig. 8B) is about testing expert vs. lay participants as well as comparing different presentation schemes (Max 1, Min+Max 1, Max 9 and Min+Max 9, see Fig. 8E; in pilot experiments, we learned that participants preferred 9 over 4 reference images, hence the “default” choice of 9 in Experiment I). Screenshots of trials are shown in Fig. 13. In total, 80 feature maps are tested: They are uniformly sampled from every second layer with an Inception module of the network (hence a total of 5 instead of 9 layers), and from all four branches of the Inception modules. Given the focus on four different presentation schemes in this experiment, we repeat the sampling method four times without overlap. In terms of reference image types, only synthetic and natural images are tested. Like in Experiment I, different participants see different natural reference and query images. However, expert and lay participants see the same images. For details on the counter-balancing of all conditions, please refer to Tab. 1.\n\nIntuitiveness Trials In order to obtain the participants’ subjective impression of the helpfulness of maximally activating images, we add trials at the beginning of the experiments, and also at the end of Experiment II. The task set-up is slightly different (see Fig. 14): Only maximally activating (i.e. no minimally activating) images are shown. We ask participants to rate how intuitive they find the explanation of the entirety of the synthetic images for the entirety of the natural images. Again, all images presented in one trial are specific to one feature map. By moving a slider to the right (left), participants judge the explanation method as intuitive (not intuitive). The ratings are recorded on a continuous scale from −100 (not intuitive) to +100 (intuitive). All participants see the same three trials in a randomized order. The trials are again taken from the hand-picked (i.e. interpretable) feature maps of the appendix overview in Olah et al. (2017). In theory, this again allows for the highest intuitiveness ratings possible.
The specific feature maps are from a low, intermediate and high layer: feature map 43 of mixed3a, feature map 504 of mixed4b and feature map 17 of mixed5b.\n\nParticipants Our two experiments are within-subject studies, meaning that every participant answers trials for all conditions. This design choice allows us to test fewer participants. In Experiment I, 10 expert participants take part (7 male, 3 female, age: 27.2 years, SD = 1.75). In Experiment II, 23 participants take part (of which 10 are experts; 14 male, 9 female, age: 28.1 years, SD = 6.76). Expert participants qualify by being familiar or having worked with convolutional neural networks and most of them even with feature visualization techniques. All participants are naive with respect to the aim of the study. Expert (lay) participants are paid 15€ (10€) per hour for participation. Before the experiment, all participants give written informed consent for participating. All participants have normal or corrected to normal vision. All procedures conform to Standard 8 of the American Psychological Association’s “Ethical Principles of Psychologists and Code of Conduct” (2016). Before the experiment, the first author explains the task to each participant and ensures complete understanding. For lay participants, the explanation is simplified: Maximally (minimally) activating images are called “favorite images” (“non-favorite images”) of a “computer program” and the question is explained as which of the two query images would also be a “favorite” image to the computer program.\n\nApparatus Stimuli are displayed on a VIEWPixx 3D LCD (VPIXX Technologies; spatial resolution 1920 × 1080 px, temporal resolution 120 Hz). Outside the stimulus image, the monitor is set to mean gray. Participants view the display from 60 cm (maintained via a chinrest) in a darkened chamber. At this distance, pixels subtend approximately 0.024° of visual angle on average (41 px per degree of visual angle). Stimulus presentation and data collection is controlled via a desktop computer (Intel Core i5-4460 CPU, AMD Radeon R9 380 GPU) running Ubuntu Linux (16.04 LTS), using PsychoPy (Peirce et al., 2019, version 3.0) under Python 3.6." }, { "heading": "A.1.2 STIMULI SELECTION", "text": "Model Following the existing work on feature visualization by Olah et al. (2017; 2018; 2020b;a), we use an Inception V1 network (Szegedy et al., 2015) trained on ImageNet (Deng et al., 2009; Russakovsky et al., 2015). This network is considered very interpretable (Olah et al., 2018), yet other work also finds deeper networks more interpretable (Bau et al., 2017); more recent work, again, suggests that “analogous features [...] form across models [...],” i.e. that interpretable feature visualizations appear “universally” for different CNNs (Olah et al., 2020b; OpenAI, 2020). Note that the Inception V1 network used in previously mentioned work slightly deviates from the original network architecture: The 3 × 3 branch of Inception module mixed4a only holds 204 instead of 208 feature maps. To stay as close as possible to the aforementioned work, we also use their implementation and trained weights of the network (github.com/tensorflow/lucid/tree/v0.3.8/lucid). We investigate feature visualizations for all branches (i.e. kernel sizes) of the Inception modules and sample from layers mixed3a to mixed5b before the ReLU non-linearity.\n\nSynthetic Images from Feature Visualization The synthetic images throughout this study are the optimization results of the feature visualization method from Olah et al. (2017). We use the channel objective to find synthetic stimuli that maximally (minimally) activate the spatial mean of a given feature map of the network. We perform the optimization using lucid 0.3.8 and TensorFlow 1.15.0 (Abadi et al., 2015) and use the hyperparameters as specified in Olah et al. (2017). For the experimental conditions with more than one minimally/maximally activating reference image, we add a diversity regularization across the samples (a sketch is given below). In hindsight, we realized that we generated 10 synthetic images in Experiment I, even though we only needed and used 9 per feature map.
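A minimal sketch of such a diversity-regularized batch with lucid might look as follows; the layer name, channel index, batch size and regularization weight are placeholder assumptions (the weight and sign follow lucid's diversity example), not the exact settings of our runs.

```python
# Sketch: batch of diverse feature visualizations with lucid
# (lucid 0.3.x / TensorFlow 1.x assumed; all concrete values are
# placeholders, not the study's exact hyperparameters).
import lucid.modelzoo.vision_models as models
import lucid.optvis.objectives as objectives
import lucid.optvis.param as param
import lucid.optvis.render as render

model = models.InceptionV1()
model.load_graphdef()

layer, channel = "mixed4a_pre_relu", 476  # placeholder feature map
batch = 9  # nine reference images per feature map

# Channel objective plus a diversity term that pushes the batch
# elements towards visually distinct optima.
obj = objectives.channel(layer, channel) - 1e2 * objectives.diversity(layer)
images = render.render_vis(model, obj, param_f=lambda: param.image(128, batch=batch))
```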
Selection of Natural Images The natural stimuli are selected from the validation set of the ImageNet ILSVRC 2012 (Russakovsky et al., 2015) dataset. To choose the maximally (minimally) activating natural stimuli for a given feature map, we perform three steps, which are illustrated in Fig. 9 and explained in the following: First, we calculate the activation of said feature map for all pre-processed images (resizing to 256 × 256 pixels, cropping centrally to 224 × 224 pixels and normalizing) and take the spatial average to get a scalar representing the excitability of the given feature map caused by the image. Second, we order the images according to the collected activation values and select the (Nstimuli + 1) · Nbatches maximally (respectively minimally) activating images. Here, Nstimuli corresponds to the number of reference images used (either 1 or 9, see Fig. 8E), the +1 comes from the query image, and Nbatches = 20 determines the maximum number of participants we can test with our setup. Third, we distribute the selected images into Nstimuli + 1 blocks. Within each block, we randomly shuffle the order of the images. Lastly, we create Nbatches batches of data by selecting one image from each of the blocks for every batch. (After having performed Experiment I and II, we realized a minor bug in our code: Instead of moving every 20th image into the same batch for one participant, we moved every 10th image into the same batch for one participant. This means that we only use a total of 110 different images, instead of 200. The minimal query image is still always selected from the 20 least activating images; the maximal query image is selected from the 91st to 110th maximally activating images, and we do not use the 111th to 200th maximally activating images.)\n\nThe reasons for creating several batches of extremely activating natural images are two-fold: (1) We want to get a general impression of the interpretability method and would like to reduce the dependence on single images, and (2) in Experiment I, a participant has to see different query images in the three different reference conditions. A downside of this design choice is an increase in variability. The precise allocation was done as follows: In Experiment I, the natural query images of the none condition were always allocated the batch with batch nr = subject id, the query and reference images of the natural condition were allocated the batch with batch nr = subject id + 1, and the natural query images of the synthetic condition were allocated the batch with batch nr = subject id + 2. The allocation scheme in Experiment II can be found in Table 1.\n\nSelection of Feature Maps The selection of feature maps used in Experiment I is shown in Table 2; the selection of feature maps used in Experiment II is shown in Table 3."
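The three-step selection procedure above can be summarized in a short sketch. Here, `spatial_mean_activations` is a hypothetical precomputed array (one scalar per pre-processed validation image), and the block/batch layout is one plausible reading of the description, not the study's exact code.

```python
# Sketch of the natural-stimuli selection (Sec. A.1.2); a plausible
# reading of the three-step procedure, with hypothetical inputs.
import numpy as np

def select_batches(spatial_mean_activations, n_stimuli=9, n_batches=20, seed=0):
    rng = np.random.default_rng(seed)
    order = np.argsort(spatial_mean_activations)  # ascending activation
    n_select = (n_stimuli + 1) * n_batches        # +1 for the query image
    minimal = order[:n_select]                    # least activating images
    maximal = order[-n_select:]                   # most activating images

    def to_batches(indices):
        # Distribute into (n_stimuli + 1) blocks, shuffle within each
        # block, then draw one image per block for every batch.
        blocks = indices.reshape(n_stimuli + 1, n_batches)
        blocks = np.stack([rng.permutation(b) for b in blocks])
        return blocks.T  # shape: (n_batches, n_stimuli + 1)

    return to_batches(minimal), to_batches(maximal)
```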
}, { "heading": "A.1.3 DIFFERENT ACTIVATION MAGNITUDES", "text": "We note that the elicited activations of synthetic images are almost always about one magnitude larger than the activations of natural images (see Fig. 10a). This constitutes an inherent difference in the synthetic and natural reference image condition. A simple approach to make the two conditions more comparable is to limit the optimization process such that the resulting feature visualizations elicit activations similar to that of natural images. This can be achieved by halting the optimization process once the activations approximately match. By following that procedure one finds limited synthetic images which are indistinguishable from natural images in terms of their activations (see Fig. 10b). Importantly though, these images are visually not more similar to natural images, have a much lower color contrast than normal feature visualizations, and above all hardly resemble meaningful features (see Fig. 11)." }, { "heading": "A.1.4 DATA ANALYSIS", "text": "Significance Tests All significance tests are performed with JASP (JASP Team, 2020, version 0.13.1). For the analysis of the distribution of confidence ratings (see Fig. 3B), we use contingency tables with χ2-tests. For testing pairwise effects in accuracy, confidence, reaction time and intuitiveness data, we report Wilcoxon signed-rank tests with uncorrected p-values (Bonferroni-corrected critical alpha values with family-wise alpha level of 0.05 reported in all figures where relevant). These non-parametric tests are preferred for these data because they do not make distributional assumptions like normally-distributed errors, as in e.g. paired t-tests. For testing marginal effects (main effects of one factor marginalizing over another) we report results from repeated measures ANOVA (RM ANOVA), which does assume normality." }, { "heading": "A.2 DETAILS ON RESULTS", "text": "" }, { "heading": "A.2.1 COMPLEMENTING FIGURES FOR MAIN RESULTS", "text": "Figures 16 - 21 complement the results and figures presented in Section 4. Here, all experimental conditions are shown." }, { "heading": "A.2.2 DETAILS ON PERFORMANCE OF EXPERT AND LAY PARTICIPANTS", "text": "As reported in the main body of the paper, a mixed-effects ANOVA revealed no significant main effect of expert level (F (1, 21) = 0.6, p = 0.44, between-subjects effect). Further, there is no significant interaction with the reference image type (F (1, 21) = 0.4, p = 0.53), and both expert and lay participants show a significant main effect of the reference image type (F (1, 21) = 230.2, p < 0.001)." }, { "heading": "A.2.3 DETAILS ON PERFORMANCE OF EXPERTS SPLIT BY DIFFERENT LEVELS OF EXPERTISE", "text": "Even though Experiment II does not show a significant performance difference for lay and expert participants, it is an open question whether the level of expertise or the background of experts matters. For the data from experts, we hence further divide participants into subgroups according to their expertise (see Fig. 20a-f) and background level (see Fig. 20g-h). Expertise level 1 means that participants are familiar with CNNs, but not feature visualizations; expertise level 2 means that participants have heard of or read about feature visualizations; and expertise level 3 means that participants have used feature visualizations themselves. We note that we also accepted feature visualizations methods other than the one by Olah et al. (2017), e.g. DeepDream (Mordvintsev et al., 2015) for level 2 and 3. 
Regarding background, we distinguished computational neuroscientists from researchers working on computer vision and/or machine learning. We note that some subgroups only hold one participant and hence may not be representative.\nOur data shows varying trends for the three expert levels (see Fig. 20a-f): For synthetic images, performance decreases with increasing expertise in Experiment I, but increases for Experiment II. For natural images, performance first increases for participants of expertise level 2, and then slightly decreases for participants with expertise level 3 - a trend that holds for both Experiment I and II. In the none condition of Experiment I, performance is highest for the participant of expertise level 1, but decreases for participants of expertise level 2, and again slightly increases for expertise level 3.\nRegarding experts' different backgrounds, our hypothesis is that many of the computational neuroscientists are very familiar with maximally exciting images for monkeys or rodents, and hence might perform better than pure computer vision / machine learning experts. Fig. 20g-h suggest that this is not the case: The bars for all three reference image types are very similar.\nThe absence of clear trends across different expertise levels or expert backgrounds is not surprising, as there is no significant difference even between participants whose professional backgrounds are much further apart: lay people vs. people familiar with CNNs." }, { "heading": "A.2.4 DETAILS ON PERFORMANCE OF HAND- AND RANDOMLY-PICKED FEATURE MAPS", "text": "As described in the main body of the paper, pairwise Wilcoxon signed-rank tests reveal no significant differences between hand-picked and randomly-selected feature maps within each reference image type (Z(9) = 27.5, p = 0.59 for natural reference images and Z(9) = 41, p = 0.18 for synthetic references). However, marginalizing over reference image type using a repeated measures ANOVA reveals a significant main effect of the feature map selection mode: F(1, 9) = 6.14, p = 0.035. Therefore, while hand-picking feature maps may have an effect, our data indicates that this effect, if present, is small." }, { "heading": "A.2.5 REPEATED TRIALS", "text": "To check the consistency of participants' responses, we repeat six main trials for each of the three tested reference image types at the end of the experiment. Specifically, the six trials correspond to the three highest and three lowest absolute confidence ratings. Results are shown in Fig. 21. We observe consistency to be high for both the synthetic and natural reference image types, and moderate for no reference images (see Fig. 21A). In absolute terms, the largest increase in performance occurred for the none condition; for natural reference images there was also a small increase; for synthetic reference images, there was a slight decrease (see Fig. 21B and C). In the question session after the experiments, many participants reported remembering the repeated trials from the first time." }, { "heading": "A.2.6 QUALITATIVE FINDINGS", "text": "In a qualitative interview conducted after completion of the experiment, participants reported using a large variety of strategies. Colors, edges, repeated patterns, orientations, small local structures and (small) objects were commonly mentioned. Most but not all participants reported having adapted their decision strategy throughout the experiment.
Especially lay participants from Experiment II emphasized that the trial-by-trial feedback was helpful and that it helped them learn new strategies. As already described in the main text, participants reported that the task difficulty varied greatly; while some trials were simple, others were challenging. A few participants highlighted that the comparison between minimally and maximally activating images was a crucial clue and allowed employing the exclusion criterion: If the minimally activating query image was easily identifiable, the choice of the maximally activating query image was trivial. This aspect motivated us to conduct an additional experiment in which the presentation scheme was varied (Experiment II)." }, { "heading": "A.2.7 BY-FEATURE-MAP ANALYSIS", "text": "For Experiment I, we look at each feature map separately and analyze which feature maps participants find easy and which they find difficult. Further, we investigate commonalities and differences between feature maps. We note that the data for this analysis relies on only 10 responses per feature map and hence may be noisy.\nIn Fig. 22, we show the number of correct answers split up by reference image type. The patterns look similar to the trend in Fig. 4: Across most layers, there is no clearly identifiable trend that feature maps of a certain network depth would be easier or more difficult; only the lowest (3a) and the highest layer (5b) seem slightly more difficult for both the synthetic and the natural reference images.\nEasy Feature Maps When feature maps are easy (synthetic: 10/10, natural: 10/10 correct responses), their features seem to correspond to clear object parts (e.g. dogs vs. humans, food vs. cats) or shapes (e.g. round vs. edgy; see Supplementary Material Fig. 2-5). In Fig. 23, we show the query as well as the natural and synthetic reference images for one such easy feature map for one participant. For the images shown to two more participants, see Supplementary Material Fig. 1. Other relatively easy feature maps (where eight to ten participants chose the correct query image for both reference image types) additionally contained other low-level cues such as color or texture (see Supplementary Material Fig. 4-5).\nDifficult Feature Maps The most difficult feature maps for synthetic and natural reference images are displayed in Fig. 24. Only four participants predicted the correct query image. Interestingly, the other reference image type was much more easily predictable for both feature maps: Nine out of ten participants correctly simulated the network's decision. Our impression is that the reason for these feature maps being so difficult in one reference condition is the diversity in the images. In the case of synthetic reference images, we also find it difficult to identify a concept and are consequently unsure what to compare.\nFrom studying several feature maps, our impression is that one or more of the following aspects make feature maps difficult to interpret:\n• Reference images are diverse (see Fig. 24a for synthetic reference images and d for natural reference images)\n• The common feature(s) seem to not correspond to common human concepts (see Fig. 24a and c)\n• Conflicting information, i.e. commonalities can be found between one query image and both the minimal and maximal reference images (see Fig. 25a: eyes and extremity-like structure in synthetic min reference images vs.
eyes and earth-colors in synthetic max reference images - both could be considered similar to the max query image of a frog)\n• Very small object parts such as eyes or round, earth-colored shapes seem to be the decisive features (see Fig. 25a and b)\n• Low-level cues such as the orientation of lines appear random in the synthetic reference images9 (see Fig. 26a)\nFinally, speaking bluntly, we are often surprised that participants identified the correct image; the reasons for this are unclear to us (see for example Supplementary Material Fig. 6-7).\n9We expected lower layers to be easier than higher layers for synthetic reference images, but our data showed that this was not the case (see Fig. 22). We can imagine that the diversity term as well as the non-custom hyper-parameters contribute to these sub-optimal images." }, { "heading": "A.2.8 HIGH QUALITY DATA AS SHOWN BY HIGH PERFORMANCE ON CATCH TRIALS", "text": "We integrate a mechanism to probe the quality of our data: In catch trials, the correct answer is trivial and hence incorrect answers might suggest the exclusion of specific trial blocks (for details, see Sec. A.1.1). Fortunately, very few trials are missed: In Experiment I, only two (out of ten) participants miss one trial each (i.e. a total of 2 out of 180 catch trials were missed); in Experiment II, five participants miss one trial and four participants miss two trials (i.e. a total of 13 out of 736 catch trials were missed). As this indicates that our data is of high quality, we do not perform the analysis with excluded trials as we expect to find the same results." }, { "heading": "A.3 DETAILS ON RELATED WORK", "text": "Paper | Analyzes Intermediate Features? | Explanation Methods Analyzed | Explanation helpful? | Results on Confidence/Trust\nOurs | yes | Feature Visualization; natural images8; no explanation8 | yes | high variance in confidence ratings; natural images are more helpful\nBiessmann & Refiano (2019) | no | LRP; Guided Backprop; simple gradient8 | yes | highest confidence for guided backprop9\nChu et al. (2020) | no | prediction + gradients; prediction8; no information8 | no | faulty explanations do not decrease trust\nShen & Huan (2020) | no | Extremal Perturb; GradCAM; SmoothGrad; no explanation8 | no | -\nJeyakumar et al. (2020) | no | LIME; Anchor; SHAP; Saliency Maps; Grad-CAM++; Ex-Matchina | unclear11 | -\nAlqaraawi et al. (2020) | no | LRP; classification scores; no explanation8 | yes | confidence similar across conditions\nChandrasekaran et al. (2017) | no | prediction confidence; attention maps; Grad-CAM; no explanation8 | no | -\nSchmidt & Biessmann (2019) | no | LIME; custom method; random/no explanation8 | yes | humans trust their own judgement regardless of explanations, except in one condition\nHase & Bansal (2020) | no | LIME; Prototype; Anchor; Decision Boundary; combination of all 4 | partly | high variance in helpfulness; helpfulness cannot predict user performance\nKumarakulasinghe et al. (2020) | no | LIME | yes | fairly high trust and reliance\nRibeiro et al. (2018) | no | LIME; Anchor; no explanation8 | yes | high confidence for Anchor; low for LIME & no explanation\nAlufaisan et al. (2020) | no | prediction + Anchor; prediction8; no information8 | partly | explanations do not increase confidence\nRamamurthy et al. (2020) | no | MAME; SP-LIME; Two Step | unclear11 | users can adjust MAME, which increased trust\nDieber & Kirrane (2020) | no | LIME | partly | -\nDinu et al. (2020) | no | SHAP; ridge; lasso; random explanation8 | partly | no statement on confidence ratings\n8Baseline condition. 9Metrics of explanation quality computed without human judgment are inconclusive and do not correspond to human rankings. 10Task has an additional “I don’t know”-option for confidence rating. 11Comparison is only performed between methods but no absolute measure of interpretability for a method is obtained." } ]
2021
null
SP:400ec44ff0b658f1acbd74ab8c710f88bea6f7dd
[ "This paper proposes a conditional generation framework (cGAN) that bridges the gap between discrete and continuous variable used in the generation. They do so by proposing a new network architecture that implements higher order multi variate polynomials (MVP). They show that MVP generalizes well to different types of conditional variables and has good expressivity even in the absence of activation functions.\t" ]
Conditional Generative Adversarial Nets (cGANs) have been widely adopted for image generation. cGANs take i) a noise vector and ii) a conditional variable as input. The conditional variable can be discrete (e.g., a class label) or continuous (e.g., an input image), resulting in class-conditional (image) generation and image-to-image translation models, respectively. However, depending on whether the conditional variable is discrete or continuous, various cGANs employ substantially different deep architectures and loss functions for their training. In this paper, we propose a novel framework, called MVP, for conditional data generation. MVP resorts to multivariate polynomials of higher order and treats in a unified way both discrete and continuous conditional variables. MVP is highly expressive, capturing higher-order auto- and cross-correlations of the input variables (noise vector and conditional variable). Tailored sharing schemes are designed between the polynomial's parameter tensors, which result in simple recursive formulas. MVP can synthesize realistic images in both class-conditional and image-to-image translation tasks, even in the absence of activation functions between the layers.
[]
[ { "authors": [ "Jorge Agnese", "Jonathan Herrera", "Haicheng Tao", "Xingquan Zhu" ], "title": "A survey and taxonomy of adversarial neural networks for text-to-image synthesis", "venue": null, "year": 1910 }, { "authors": [ "Amjad Almahairi", "Sai Rajeswar", "Alessandro Sordoni", "Philip Bachman", "Aaron Courville" ], "title": "Augmented cyclegan: Learning many-to-many mappings from unpaired data", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Grigory Antipov", "Moez Baccouche", "Jean-Luc Dugelay" ], "title": "Face aging with conditional generative adversarial networks", "venue": "In International Conference on Image Processing (ICIP),", "year": 2017 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Ting Chen", "Mario Lucic", "Neil Houlsby", "Sylvain Gelly" ], "title": "On self modulation for generative adversarial networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in neural information processing systems (NeurIPS)", "venue": null, "year": 2016 }, { "authors": [ "Xinyuan Chen", "Chang Xu", "Xiaokang Yang", "Dacheng Tao" ], "title": "Attention-gan for object transfiguration in wild images", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Yunjey Choi", "Youngjung Uh", "Jaejun Yoo", "Jung-Woo Ha" ], "title": "Stargan v2: Diverse image synthesis for multiple domains", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Grigorios Chrysos", "Stylianos Moschoglou", "Yannis Panagakis", "Stefanos Zafeiriou" ], "title": "Polygan: High-order polynomial generators", "venue": "arXiv preprint arXiv:1908.06571,", "year": 2019 }, { "authors": [ "Grigorios Chrysos", "Stylianos Moschoglou", "Giorgos Bouritsas", "Yannis Panagakis", "Jiankang Deng", "Stefanos Zafeiriou" ], "title": "π ́nets: Deep polynomial neural networks", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Antonia Creswell", "Tom White", "Vincent Dumoulin", "Kai Arulkumaran", "Biswa Sengupta", "Anil A Bharath" ], "title": "Generative adversarial networks: An overview", "venue": "IEEE Signal Processing Magazine,", "year": 2018 }, { "authors": [ "Harm De Vries", "Florian Strub", "Jérémie Mary", "Hugo Larochelle", "Olivier Pietquin", "Aaron C Courville" ], "title": "Modulating early visual processing by language. 
In Advances in neural information processing systems (NeurIPS)", "venue": null, "year": 2017 }, { "authors": [ "O Debals", "L De Lathauwer" ], "title": "The concept of tensorization", "venue": "Technical report, Technical Report 17–99,", "year": 2017 }, { "authors": [ "Vincent Dumoulin", "Jonathon Shlens", "Manjunath Kudlur" ], "title": "A learned representation for artistic style", "venue": null, "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets. In Advances in neural information processing systems (NeurIPS)", "venue": null, "year": 2014 }, { "authors": [ "Klemen Grm", "Walter J Scheirer", "Vitomir Štruc" ], "title": "Face hallucination using cascaded super-resolution and identity priors", "venue": "IEEE Transactions in Image Processing (TIP),", "year": 2019 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in neural information processing systems (NeurIPS)", "venue": null, "year": 2017 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Rui Huang", "Shu Zhang", "Tianyu Li", "Ran He" ], "title": "Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Xun Huang", "Ming-Yu Liu", "Serge Belongie", "Jan Kautz" ], "title": "Multimodal unsupervised image-to-image translation", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Satoshi Iizuka", "Edgar Simo-Serra", "Hiroshi Ishikawa" ], "title": "Globally and locally consistent image completion", "venue": "ACM Transactions on Graphics (TOG),", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "In Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Siddhant M. Jayakumar", "Wojciech M. Czarnecki", "Jacob Menick", "Jonathan Schwarz", "Jack Rae", "Simon Osindero", "Yee Whye Teh", "Tim Harley", "Razvan Pascanu" ], "title": "Multiplicative interactions and where to find them", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Y Jin", "J Zhang", "M Li", "Y Tian", "H Zhu" ], "title": "Towards the high-quality anime characters generation with generative adversarial networks. 
In Proceedings of the Machine Learning for Creativity and Design Workshop at NeurIPS, 2017", "venue": null, "year": 2017 }, { "authors": [ "Takuhiro Kaneko", "Yoshitaka Ushiku", "Tatsuya Harada" ], "title": "Label-noise robust generative adversarial networks", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "In Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Tamara G Kolda", "Brett W Bader" ], "title": "Tensor decompositions and applications", "venue": "SIAM review,", "year": 2009 }, { "authors": [ "Jonathan Krause", "Michael Stark", "Jia Deng", "Li Fei-Fei" ], "title": "3d object representations for fine-grained categorization", "venue": "In Conference on Computer Vision and Pattern Recognition Workshops (CVPR’W),", "year": 2013 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "The cifar-10 dataset", "venue": "online: http://www. cs. toronto. edu/kriz/cifar. html,", "year": 2014 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Christian Ledig", "Lucas Theis", "Ferenc Huszár", "Jose Caballero", "Andrew Cunningham", "Alejandro Acosta", "Andrew Aitken", "Alykhan Tejani", "Johannes Totz", "Zehan Wang" ], "title": "Photo-realistic single image super-resolution using a generative adversarial network", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Hsin-Ying Lee", "Hung-Yu Tseng", "Qi Mao", "Jia-Bin Huang", "Yu-Ding Lu", "Maneesh Singh", "MingHsuan Yang" ], "title": "Drit++: Diverse image-to-image translation via disentangled representations", "venue": "International Journal of Computer Vision (IJCV),", "year": 2020 }, { "authors": [ "Soochan Lee", "Junsoo Ha", "Gunhee Kim" ], "title": "Harmonizing maximum likelihood with gans for multimodal conditional generation", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Bowen Li", "Xiaojuan Qi", "Thomas Lukasiewicz", "Philip Torr" ], "title": "Controllable text-to-image generation", "venue": "In Advances in neural information processing systems (NeurIPS),", "year": 2019 }, { "authors": [ "Muyang Li", "Ji Lin", "Yaoyao Ding", "Zhijian Liu", "Jun-Yan Zhu", "Song Han" ], "title": "Gan compression: Efficient architectures for interactive conditional gans", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Dong Liang", "Rui Wang", "Xiaowei Tian", "Cong Zou" ], "title": "Pcgan: Partition-controlled human image generation", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Steven Liu", "Tongzhou Wang", "David Bau", "Jun-Yan Zhu", "Antonio Torralba" ], "title": "Diverse image generation via self-conditioned gans", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Yongyi Lu", "Yu-Wing Tai", "Chi-Keung Tang" ], "title": "Attribute-guided face 
generation using conditional cyclegan", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Mario Lucic", "Karol Kurach", "Marcin Michalski", "Sylvain Gelly", "Olivier Bousquet" ], "title": "Are gans created equal? a large-scale study", "venue": "In Advances in neural information processing systems (NeurIPS),", "year": 2018 }, { "authors": [ "Liqian Ma", "Xu Jia", "Qianru Sun", "Bernt Schiele", "Tinne Tuytelaars", "Luc Van Gool" ], "title": "Pose guided person image generation", "venue": "In Advances in neural information processing systems (NeurIPS),", "year": 2017 }, { "authors": [ "Qi Mao", "Hsin-Ying Lee", "Hung-Yu Tseng", "Siwei Ma", "Ming-Hsuan Yang" ], "title": "Mode seeking generative adversarial networks for diverse image synthesis", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Maxim Maximov", "Ismail Elezi", "Laura Leal-Taixé" ], "title": "Ciagan: Conditional identity anonymization generative adversarial networks", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Takeru Miyato", "Masanori Koyama" ], "title": "cgans with projection discriminator", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": null, "year": 2011 }, { "authors": [ "S.M. Nikol’skii" ], "title": "Analysis III: Spaces of Differentiable Functions. 
Encyclopaedia of Mathematical Sciences", "venue": null, "year": 2013 }, { "authors": [ "Augustus Odena", "Christopher Olah", "Jonathon Shlens" ], "title": "Conditional image synthesis with auxiliary classifier gans", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Taesung Park", "Ming-Yu Liu", "Ting-Chun Wang", "Jun-Yan Zhu" ], "title": "Semantic image synthesis with spatially-adaptive normalization", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Tingting Qiao", "Jing Zhang", "Duanqing Xu", "Dacheng Tao" ], "title": "Mirrorgan: Learning text-to-image generation by redescription", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems (NeurIPS),", "year": 2016 }, { "authors": [ "Yoan Shin", "Joydeep Ghosh" ], "title": "The pi-sigma network: An efficient higher-order neural network for pattern classification and function approximation", "venue": "In International Joint Conference on Neural Networks,", "year": 1991 }, { "authors": [ "Aliaksandr Siarohin", "Enver Sangineto", "Stephane Lathuiliere", "Nicu Sebe" ], "title": "Deformable gans for pose-based human image generation", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Rewa Sood", "Binit Topiwala", "Karthik Choutagunta", "Rohit Sood", "Mirabela Rusu" ], "title": "An application of generative adversarial networks for super resolution medical imaging", "venue": "In International Conference on Machine Learning and Applications (ICMLA),", "year": 2018 }, { "authors": [ "Marshall H Stone" ], "title": "The generalized weierstrass approximation theorem", "venue": "Mathematics Magazine,", "year": 1948 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Lucas Theis", "Aäron van den Oord", "Matthias Bethge" ], "title": "A note on the evaluation of generative models", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Luan Tran", "Xi Yin", "Xiaoming Liu" ], "title": "Disentangled representation learning gan for pose-invariant face recognition", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Ting-Chun Wang", "Ming-Yu Liu", "Jun-Yan Zhu", "Guilin Liu", "Andrew Tao", "Jan Kautz", "Bryan Catanzaro" ], "title": "Video-to-video synthesis. 
In Advances in neural information processing systems (NeurIPS), 2018a", "venue": null, "year": 2018 }, { "authors": [ "Ting-Chun Wang", "Ming-Yu Liu", "Jun-Yan Zhu", "Andrew Tao", "Jan Kautz", "Bryan Catanzaro" ], "title": "Highresolution image synthesis and semantic manipulation with conditional gans", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Bingzhe Wu", "Haodong Duan", "Zhichao Liu", "Guangyu Sun" ], "title": "Srpgan: Perceptual generative adversarial network for single image super resolution", "venue": "arXiv preprint arXiv:1712.05927,", "year": 2017 }, { "authors": [ "Huikai Wu", "Shuai Zheng", "Junge Zhang", "Kaiqi Huang" ], "title": "Gp-gan: Towards realistic high-resolution image blending", "venue": "In Proceedings of the 27th ACM International Conference on Multimedia,", "year": 2019 }, { "authors": [ "Xian Wu", "Kun Xu", "Peter Hall" ], "title": "A survey of image synthesis and editing with generative adversarial networks", "venue": "Tsinghua Science and Technology,", "year": 2017 }, { "authors": [ "Saining Xie", "Zhuowen Tu" ], "title": "Holistically-nested edge detection", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "You Xie", "Erik Franz", "Mengyu Chu", "Nils" ], "title": "Thuerey. tempogan: A temporally coherent, volumetric gan for super-resolution fluid flow", "venue": "ACM Transactions on Graphics (TOG),", "year": 2018 }, { "authors": [ "Tao Xu", "Pengchuan Zhang", "Qiuyuan Huang", "Han Zhang", "Zhe Gan", "Xiaolei Huang", "Xiaodong He" ], "title": "Attngan: Fine-grained text to image generation with attentional generative adversarial networks", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Xiangyu Xu", "Deqing Sun", "Jinshan Pan", "Yujin Zhang", "Hanspeter Pfister", "Ming-Hsuan Yang" ], "title": "Learning to super-resolve blurry face and text images", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Qiaojing Yan", "Wei Wang" ], "title": "Dcgans for image super-resolution, denoising and debluring", "venue": "Advances in neural information processing systems (NeurIPS),", "year": 2017 }, { "authors": [ "Dingdong Yang", "Seunghoon Hong", "Yunseok Jang", "Tianchen Zhao", "Honglak Lee" ], "title": "Diversitysensitive conditional generative adversarial networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Chenyu You", "Guang Li", "Yi Zhang", "Xiaoliu Zhang", "Hongming Shan", "Mengzhou Li", "Shenghong Ju", "Zhen Zhao", "Zhuiyang Zhang", "Wenxiang Cong" ], "title": "Ct super-resolution gan constrained by the identical, residual, and cycle learning ensemble (gan-circle)", "venue": "IEEE Transactions on Medical Imaging,", "year": 2019 }, { "authors": [ "A. Yu", "K. 
Grauman" ], "title": "Fine-Grained Visual Comparisons with Local Learning", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2014 }, { "authors": [ "Jiahui Yu", "Zhe Lin", "Jimei Yang", "Xiaohui Shen", "Xin Lu", "Thomas S Huang" ], "title": "Generative image inpainting with contextual attention", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Xin Yu", "Basura Fernando", "Richard Hartley", "Fatih Porikli" ], "title": "Super-resolving very low-resolution face images with supplementary attributes", "venue": "In Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Fangneng Zhan", "Hongyuan Zhu", "Shijian Lu" ], "title": "Spatial fusion gan for image synthesis", "venue": null, "year": 2021 }, { "authors": [ "Chrysos" ], "title": "Π-Net as a polynomial expansion of a single input variable. Their goal is to model functions x “ Gpzq as high-order polynomial expansions of z. Their focus is towards using a single-input variable z, which can be noise in case of image generation or an image in discriminative experiments. The authors express the StyleGAN architecture", "venue": "(Karras et al.,", "year": 2019 }, { "authors": [ "Karras" ], "title": "Adaptive instance normalization (AdaIN) method for unsupervised image generation. An AdaIN layer expresses a second-order interaction3: h “ pΛwq ̊ npcphinqq, where n is a normalization, c the convolution operator and w is the transformed noisew “MLP pzIq (mapping network)", "venue": null, "year": 2019 }, { "authors": [ "Chen" ], "title": "AdaIN. Stacking AdaIN layers results in a polynomial expansion with a single variable", "venue": null, "year": 2019 }, { "authors": [ "Park" ], "title": "If cast as a polynomial expansion, a network with sBN layers expresses a single polynomial expansion4", "venue": null, "year": 2019 }, { "authors": [ "Park" ], "title": "2019) exhibit impressive generation results with large-scale computing (i.e., they report results using NVIDIA DGX with 8 V100 GPUs). Our goal is not to compete in computationally heavy, large-scale experiments, but rather to illustrate the benefits of the generic formulation of MVP", "venue": null, "year": 2019 }, { "authors": [ "Theis" ], "title": "2016), thus we use the IS and FID. Following the standard practice of the literature, the IS is computed by synthesizing 5, 000 samples, while the FID is computed using 10, 000 samples. The IS is used exclusively for images of natural scenes as a metric. The reasoning behind that is that the Inception network", "venue": null, "year": 2016 }, { "authors": [ "Chrysos" ], "title": "All the images of CelebA, Cars196, Shoes and Handbags are resized to 64ˆ 64 resolution. Architectures: The discriminator structure is left the same for each experiment, we focus only on the generator architecture. All the architectures are based on two different generator schemes, i.e., the SNGAN (Miyato", "venue": null, "year": 2018 }, { "authors": [ "Zhu" ], "title": "One challenge that often arises in conditional data generation is that one of the variables gets ignored by the generator (Isola et al., 2017)", "venue": null, "year": 2016 }, { "authors": [ "• Almahairi" ], "title": "2018) augment the deterministic mapping of CycleGAN", "venue": null, "year": 2018 }, { "authors": [ "• Choi" ], "title": "2020) introduce a method that supports multiple target domains. 
The method", "venue": null, "year": 2020 }, { "authors": [ "Liu" ], "title": "An interesting technique for diverse, class-conditional generation is the self-conditional GAN", "venue": null, "year": 2020 }, { "authors": [ "Yang" ], "title": "Using regularization terms in the loss function has been an alternative way to achieve diverse generation", "venue": null, "year": 2019 }, { "authors": [ "Lee" ], "title": "2019) propose two variants of a regularization term, with the ‘more stable variant", "venue": null, "year": 2021 } ]
[ { "heading": null, "text": "Conditional Generative Adversarial Nets (cGANs) have been widely adopted for image generation. cGANs take i) a noise vector and ii) a conditional variable as input. The conditional variable can be discrete (e.g., a class label) or continuous (e.g., an input image) resulting into class-conditional (image) generation and imageto-image translation models, respectively. However, depending on whether the conditional variable is discrete or continuous, various cGANs employ substantially different deep architectures and loss functions for their training. In this paper, we propose a novel framework, called MVP, for conditional data generation. MVP resorts to multivariate polynomials of higher-order and treats in a unified way both discrete and continuous conditional variables. MVP is highly expressive, capturing higher-order auto- and cross-correlations of input variables (noise vector and conditional variable). Tailored sharing schemes are designed between the polynomial’s parameter tensors, which result in simple recursive formulas. MVP can synthesize realistic images in both class-conditional and image-to-image translation tasks even in the absence of activation functions between the layers." }, { "heading": "1 INTRODUCTION", "text": "Modelling high-dimensional distributions and generating samples from complex distributions are fundamental tasks in machine learning. Generative adversarial networks (GANs) (Goodfellow et al., 2014) have demonstrated spectacular results in the two tasks using both unsupervised (Miyato et al., 2018) and supervised (Brock et al., 2019) learning. In the unsupervised setting, (the generator of) a GAN accepts as input a noise vector zI and maps the noise vector to a high-dimensional output. The supervised models, called conditional Generative Adversarial Nets (cGANs) (Mirza & Osindero, 2014), accept both a noise vector zI and an additional conditional variable zII that facilitates the generation. The conditional variable can be discrete (e.g., a class or an attribute label) or continuous (e.g., a low-resolution image). The impressive results obtained with both discrete conditional input (Brock et al., 2019) and continuous conditional input (Park et al., 2019; Ledig et al., 2017) have led to a plethora of applications that range from text-to-image synthesis (Qiao et al., 2019) to deblurring (Yan & Wang, 2017) and medical analysis (You et al., 2019).\nDespite the similarity in the formulation for discrete and continuous conditional input (i.e., learning the function GpzI, zIIq), the literature has focused on substantially different architectures and losses. Frequently, techniques are simultaneously developed, e.g., the self-attention in the class-conditional Self-Attention GAN (Zhang et al., 2019) and in the Attention-GAN (Chen et al., 2018) with continuous conditional input. This delays the progress since practitioners develop twice as many architectures and losses for every case. A couple of straightforward ideas can be employed to unify the behavior of the two conditional variable types. One idea is to use an encoder network to obtain representations that are independent of the conditional variable. This has two drawbacks: i) the network ignores the noise and a deterministic one-variable mapping is learned (Isola et al., 2017), ii) such encoder has not been successful so far for discrete conditional input. An alternative idea is to directly concatenate the labels in the latent space instead of finding an embedding. 
In AC-GAN (Odena et al., 2017) the class labels are concatenated with the noise; however, the model does not scale well beyond 10 classes. We argue that concatenation of the input only captures additive correlations and not higher-order interactions between the inputs. A detailed discussion is conducted in sec. D (in the Appendix).\nA polynomial expansion with respect to the input variables can capture such higher-order correlations. Π-Net (Chrysos et al., 2020) casts the function approximation into a polynomial expansion of a single input variable. By concatenating the input variables, we can express the function approximation as a polynomial of the fused variable. However, the concatenation reduces the flexibility of the model significantly, e.g., it enforces the same order of expansion with respect to the different variables and it only allows the same parameter sharing scheme for all variables.\nWe introduce a multivariate framework, called MVP, for conditional data generation. MVP resorts to multivariate polynomials with two input variables, i.e., z_I for the noise vector and z_II for the conditional variable. MVP captures higher-order auto- and cross-correlations between the variables. By imposing a tailored structure in the higher-order interactions, we obtain an intuitive, recursive formulation for MVP. The formulation is flexible and enables different constraints to be applied to each variable and its associated parameters. The formulation can be trivially extended to M input variables. In summary, our contributions are the following:\n• We introduce a framework, called MVP, that expresses a high-order, multivariate polynomial for conditional data generation. Importantly, MVP treats both discrete and continuous conditional variables in a unified way.\n• We establish an in-depth relationship with state-of-the-art works, such as SPADE (Park et al., 2019), that can be interpreted as polynomial expansions. We believe this perspective better explains the success of such architectures and offers a new direction for their extension.\n• MVP is trained on eight different datasets for both class-conditional generation and image-to-image translation tasks. The trained models rely on both input variables, i.e., they do not ignore the noise vector.\n• To illustrate the expressivity of the model, we also experiment with generators that do not use activation functions between the layers. We verify that MVP can synthesize realistic images even in the absence of activation functions between the layers.\nThe source code of MVP will be published upon the acceptance of the paper." }, { "heading": "2 RELATED WORK", "text": "The literature on conditional data generation is vast; dedicated surveys per task (Agnese et al., 2019; Wu et al., 2017b) can be found for the interested reader. Below, we review representative works in conditional generation and then summarize the recent progress in multiplicative interactions." }, { "heading": "2.1 CONDITIONAL GENERATIVE MODELS", "text": "The challenging nature of image/video generation has led to a proliferation of conditional models. Although cGAN (Mirza & Osindero, 2014) is a general framework, since then the methods developed for conditional generation differ substantially depending on the type of conditional data.
We present below representative works of the two categories, i.e., discrete and continuous conditional data, and their combination.\nDiscrete conditional variable: This is most frequently used for class-conditional generation (Miyato et al., 2018; Brock et al., 2019; Kaneko et al., 2019). Conditional normalization (Dumoulin et al., 2017; De Vries et al., 2017) techniques have been popular in the case of discrete conditional input, e.g., in the generation of natural scene images (Miyato et al., 2018; Brock et al., 2019). Conditional normalization cannot trivially generalize to a continuous conditional variable. In AC-GAN (Odena et al., 2017), they concatenate the class labels with the noise; however, their model does not scale well (i.e., they train one model per 10 classes). The aforementioned methods cannot be trivially used or modified for continuous conditional input. Text-to-image generation models (Qiao et al., 2019; Li et al., 2019; Zhang et al., 2018; Xu et al., 2018) use a specialized branch to embed the text labels.\nContinuous conditional variable: The influential work of pix2pix (Isola et al., 2017) has become the reference point for continuous conditional input. The conditional input is embedded in a low-dimensional space (with an encoder), and then mapped to a high-dimensional output (through a decoder). The framework has been widely used for inverse tasks (Ledig et al., 2017; Pathak et al., 2016; Wu et al., 2017a; Iizuka et al., 2017; Huang et al., 2017; Yu et al., 2018a; Grm et al., 2019; Xie et al., 2018; Yan & Wang, 2017), conditional pose generation (Ma et al., 2017; Siarohin et al., 2018; Liang et al., 2019), representation learning (Tran et al., 2017), conditional video generation (Wang et al., 2018a), generation from semantic labels (Wang et al., 2018b), and image blending (Wu et al., 2019; Zhan et al., 2019). We recognize two major drawbacks in the aforementioned methods: a) they cannot be easily adapted for discrete conditional input, b) they learn a deterministic mapping, i.e., the noise is typically ignored. However, in many real applications, such as inverse tasks, the mapping is not one-to-one; there are multiple plausible outputs for every conditional input. The auxiliary losses used in such works, e.g., the ℓ1 loss (Isola et al., 2017) or the perceptual loss (Ledig et al., 2017), are an additional drawback. Those losses both add hyper-parameters that require tuning and are domain-specific, thus it is challenging to transfer them to different domains or even different datasets. On the contrary, in our experiments, we do not use any additional loss.\nDiscrete and continuous conditional variables: Few works combine both discrete and continuous conditional inputs (Yu et al., 2018b; Xu et al., 2017; Lu et al., 2018). However, these methods include significant engineering (e.g., multiple discriminators (Xu et al., 2017), auxiliary losses), while often the generator learns to ignore the noise (similarly to the continuous conditional input). Antipov et al. (2017) design a generator for face aging. The generator combines continuous with discrete variables (age classes); however, no Gaussian noise is utilized, i.e., a deterministic transformation is learned for each input face. InfoGAN (Chen et al., 2016) includes both discrete and continuous conditional variables. However, the authors explicitly mention that additional losses are required, otherwise the generator is 'free to ignore' the additional variables.\nThe idea of Li et al. (2020) is most closely related to our work.
They introduce a unifying framework for paired (Isola et al., 2017) and unpaired (Zhu et al., 2017a) learning. However, their framework assumes a continuous conditional input, while ours can handle discrete conditional input (e.g., class labels). In addition, their method requires a pre-trained teacher generator, while ours consists of a single generator trained end-to-end.\nDiverse data generation: Conditional image generation often suffers from deterministic mappings, i.e., the noise variable often has a negligible or negative impact on the generator (Zhu et al., 2017b; Isola et al., 2017). This has been tackled in the literature with additional loss terms and/or auxiliary network modules. A discussion of representative methods that tackle diverse generation is deferred to sec. I in the Appendix. In Table 1 the differences between the core techniques are summarized. Even though diverse generation is a significant task, we advocate that learning a generator that does not ignore the input variables can be achieved without such additional loss terms. We highlight that diverse generation is a byproduct of MVP and not our main goal. Particularly, we believe that diverse images can be synthesized because the higher-order correlations of the input variables are captured effectively by the proposed method." }, { "heading": "2.2 MULTIPLICATIVE INTERACTIONS", "text": "Multiplicative connections have long been adopted in computer vision and machine learning (Shin & Ghosh, 1991; Hochreiter & Schmidhuber, 1997; Bahdanau et al., 2015). The idea is to combine the inputs through elementwise products or other diagonal forms. Even though multiplicative connections
The crucial technical details, including the stability of the polynomial, are developed in sec. 3.2. We emphasize that a multivariate polynomial can approximate any function (Stone, 1948; Nikol’skii, 2013), i.e., a multivariate polynomial is a universal approximator.\nNotation:Tensors/matrices/vectors are symbolized by calligraphic/uppercase/lowercase boldface letters e.g., W ,W ,w. The mode-m vector product of W (of order M ) with a vector u P RIm is W ˆm u and results in a tensor of order M ´ 1. We assume that śb i“a xi “ 1 when a ą b. The core symbols are summarized in Table 3, while a detailed tensor notation is deferred to the Appendix (sec. B.1)." }, { "heading": "3.1 TWO INPUT VARIABLES", "text": "Given two input variables 1 zI, zII P Kd where K Ď R or K Ď N, the goal is to learn a function G : Kdˆd Ñ Ro that captures the higher-degree interactions between the elements of the two inputs. We can learn such higher-degree interactions as polynomials of two input variables. A polynomial of expansion order N P N with output x P Ro has the form:\nx “ GpzI, zIIq “ N ÿ\nn“1\nn`1 ÿ\nρ“1\nˆ W rn,ρs ρ ź\nj“2 ˆjzI\nn`1 ź\nτ“ρ`1 ˆτzII\n˙\n` β (1)\nwhere β P Ro and W rn,ρs P Roˆ śn m“1 ˆmd for n P r1, N s, ρ P r1, n ` 1s are the learnable parameters. The expansion depends on two (independent) variables, hence we use the n and ρ as auxiliary variables. The two products of (1) do not overlap, i.e., the first multiplies the modes r2, ρs (of W rn,ρs) with zI and the other multiplies the modes rρ` 1, n` 1s with zII.\nRecursive relationship: The aforementioned derivation can be generalized to an arbitrary expansion order. The recursive formula for an arbitrary order N P N is the following:\nxn “ xn´1 ` ´ UTrn,IszI `U T rn,IIszII ¯ ˚ xn´1 (2)\nfor n “ 2, . . . , N with x1 “ UTr1,IszI ` U T r1,IIszII and x “ CxN ` β. The parameters C P Roˆk,Urn,φs P Rdˆk for n “ 1, . . . , N and φ “ tI, IIu are learnable. The intuition behind this model is the following: An embedding is initially found for each of the two input variables, then the two embeddings are added together and they are multiplied elementwise with the previous approximation. The different embeddings for each of the input variables allows us to implement Urn,Is and Urn,IIs with different constraints, e.g., Urn,Is to be a dense layer and Urn,IIs to be a convolution." }, { "heading": "3.2 MODEL EXTENSIONS AND TECHNICAL DETAILS", "text": "There are three limitations in (2). Those are the following: a) (2) describes a polynomial expansion of a two-variable input, b) each expansion order requires additional layers, c) high-order polynomials might suffer from unbounded values. Those limitations are addressed below.\nOur model can be readily extended beyond two-variable input; an extension with three-variable input is developed in sec. C. The pattern (for each order) is similar to the two-variable input: a) a different embedding is found for each input variable, b) the embeddings are added together, c) the result is multiplied elementwise with the representation of the previous order.\nThe polynomial expansion of (2) requires ΘpNq layers for an N th order expansion. That is, each new order n of expansion requires new parameters Urn,Is and Urn,IIs. However, the order of expansion\n1To avoid cluttering the notation we use same dimensionality for the two inputs. However, the derivations apply for different dimensionalities, only the dimensionality of the tensors change slightly.\ncan be increased without increasing the parameters substantially. 
To increase the order without adding new layers, we can capitalize on the product of polynomials. Specifically, let $N_1$ be the order of expansion of the first polynomial. The output of the first polynomial is fed into a second polynomial, which has expansion order $N_2$. Then, the output of the second polynomial will have an expansion order of $N_1 \cdot N_2$. The product of polynomials can be used with an arbitrary number of polynomials; it suffices that the output of the $\tau$-th polynomial is the input to the $(\tau+1)$-th polynomial. For instance, if we assume a product of $\Phi \in \mathbb{N}$ polynomials, where each polynomial has an expansion order of two, then the polynomial expansion is of order $2^\Phi$. In other words, we need $\Theta(\log_2(N))$ layers to achieve an $N$-th order expansion.\nIn algebra, higher-order polynomials are unbounded and can thus suffer from instability for large values. To avoid such instability, we take the following three steps: a) MVP samples the noise vector from the uniform distribution, i.e., from the bounded interval $[-1, 1]$, b) a hyperbolic tangent is used in the output of the generator as a normalization, i.e., it constrains the outputs to the bounded interval $[-1, 1]$, c) batch normalization (Ioffe & Szegedy, 2015) is used to convert the representations to zero-mean. We emphasize that in GANs the hyperbolic tangent is the default activation function in the output of the generator, hence it is not an additional requirement of our method. Additionally, in our preliminary experiments, the uniform distribution can be exchanged for a Gaussian distribution without any instability. A theoretical analysis of the bounds of such multivariate polynomials would be an interesting subject for future work." }, { "heading": "4 EXPERIMENTS", "text": "The proposed MVP is empirically evaluated in three settings: a) class-conditional generation, i.e., with discrete conditional input, b) image-to-image translation, i.e., with continuous conditional input, c) a mixed conditional setting with two conditional variables. The goal is to showcase how MVP can be used with both discrete and continuous conditional inputs. Even though architectures specialized for a single task (e.g., Ledig et al. (2017)) perform well in that task, their well-selected inductive biases (e.g., perceptual or ℓ1 loss) do not generalize well to other domains or different conditional inputs. Hence, our goal is not to demonstrate state-of-the-art results in specific tasks, but rather to propose one generic formulation. Further experiments (e.g., class-conditional generation with SVHN or MNIST-to-SVHN translation; sec. H), the details on the datasets and the evaluation metrics (sec. G) are deferred to the Appendix. Throughout the experimental section, we reserve the symbol z_II for the conditional input (e.g., a class label).\nOur framework, e.g., (2), does not include any activation functions. To verify the expressivity of our framework, we maintain this setting for the majority of the experiments below. Particularly, the generator does not have activation functions between the layers; there is only a hyperbolic tangent in the output space for normalization. Training a generator without activation functions between the layers also emerged in Π-Net (Chrysos et al., 2020), where the authors demonstrate the challenges of such a framework. However, we conduct one experiment using a strong baseline with activation functions. That is, a comparison with SNGAN (Miyato & Koyama, 2018) in class-conditional generation is performed (sec. 4.1).
Baselines: 'Π-Net-SICONC' implements a polynomial expansion of a single variable, i.e., by concatenating all the input variables. 'SPADE' implements a polynomial expansion with respect to the conditional variable. Also, 'GAN-CONC' and 'GAN-ADD' are added as baselines, where we replace the Hadamard products with concatenation and addition respectively. An abstract schematic of the differences between the compared polynomial methods is depicted in Fig. 6, while a detailed description of all methods is deferred to sec. G. Each experiment is conducted five times, and the mean and the standard deviation are reported." }, { "heading": "4.1 CLASS-CONDITIONAL GENERATION", "text": "The first experiment is on class-conditional generation, where the conditional input is a class label in the form of a one-hot vector. Two types of networks are utilized: a) a resnet-based generator (SNGAN), b) a polynomial generator (Π-Net) based on Chrysos et al. (2020). The former network has exhibited strong performance over the last few years, while the latter bears resemblance to the formulation we propose in this work.

Resnet-based generator: The experiment is conducted by augmenting the resnet-based generator of SNGAN. The quantitative results are in Table 4 and synthesized samples are illustrated in Fig. 2(a). SNGAN-MVP improves upon all the baselines in both the Inception Score (IS) (Salimans et al., 2016) and the FID (Heusel et al., 2017). The proposed formulation enables inter-class interpolations. That is, the noise $z_I$ is fixed, while the class $z_{II}$ is interpolated. In Fig. 2(b) and Fig. 2(c), intra-class and inter-class linear interpolations are illustrated respectively. Both the quantitative and the qualitative results exhibit the effectiveness of our framework.

Π-Net-based generator: A product of polynomials, based on Π-Net, is selected as the baseline architecture for the generator. Π-Net has conditional batch normalization (CBN) in the generator, while in the rest of the compared methods CBN is replaced by batch normalization. The results in CIFAR10 are summarized in Table 5 (left), where MVP outperforms all the baselines by a large margin. An additional experiment is performed in Cars196, which has 196 classes. The results in Table 5 (right) depict a substantial improvement over all the baselines (a 53.9% reduction over the best-performing baseline). We should note that the baseline was not built for conditional generation; however, we have made our best effort to optimize the respective hyper-parameters. We hypothesize that the improvement arises because of the correlations between the classes. That is, the 196 classes might be correlated (e.g., SUV cars of different carmakers share several patterns). Such correlations are captured by our framework, while they might be missed when learning different normalization statistics per class. Overall, MVP synthesizes plausible images (Fig. 11) even in the absence of activation functions." }, { "heading": "4.2 CONTINUOUS CONDITIONAL INPUT", "text": "The performance of MVP is scrutinized in tasks with continuous conditional input, e.g., super-resolution. The conditional input $z_{II}$ is an input image, e.g., a low-resolution sample or a corrupted sample. Even though the core architecture remains the same, a single change is made in the structure of the discriminator: motivated by Miyato & Koyama (2018), we include an elementwise product of $z_{II}$ with the real/fake image in the discriminator. This stabilizes the training and improves the results.
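The discriminator change just described can be sketched as follows. The convolutional trunk, the 1×1 projection and all names here are illustrative assumptions on our part, not the exact discriminator of the paper; only the elementwise fusion of $z_{II}$ with the real/fake image reflects the description above.

```python
import torch
import torch.nn as nn

class CondDiscriminator(nn.Module):
    """Sketch: the conditional image z_II is fused with the real/fake image
    through an elementwise product before a standard convolutional trunk."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.trunk = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1))

    def forward(self, img, z_II):
        # z_II is assumed to be resized to the resolution of img beforehand.
        return self.trunk(img + img * self.proj(z_II))
```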
A wealth of literature is available on such continuous conditional inputs (sec. 2.1); however, we select the challenging setting of using a generator without activation functions between the layers.

The experiments are performed on (a) super-resolution and (b) block-inpainting. Super-resolution assumes a low-resolution image is available, while in block-inpainting a (rectangular) part of the image is missing. The two tasks belong to the broader category of 'inverse tasks', and they are significant both for academic and for commercial reasons (Sood et al., 2018; You et al., 2019). Such inverse tasks are underdetermined; each input image corresponds to several plausible output images.

The FID scores in Cars196 for the task of super-resolution are reported in Table 6. In super-resolution 16×, $z_{II}$ has 48 dimensions, while in super-resolution 8×, $z_{II}$ has 192 dimensions. Notice that the performance of Π-Net-SICONC deteriorates substantially when the dimensionality of the conditional variable increases. That validates our intuition about the concatenation in the input of the generator (sec. E). We also report SPADE-MVP, which captures higher-order correlations with respect to the first variable as well (further details in sec. G). The proposed SPADE-MVP outperforms the original SPADE; however, it cannot outperform the full two-variable model, i.e., MVP. MVP outperforms all the baselines by a large margin.

The qualitative results on (a) super-resolution 8× on CelebA, (b) super-resolution 8× on Cars196, and (c) super-resolution 16× on Cars196 are illustrated in Fig. 3. Similarly, the qualitative results on block-inpainting are visualized in Fig. 11. For each conditional image, different noise vectors $z_I$ are sampled. Notice that the corresponding synthesized images differ in the fine details. For instance, changes in the mouth region, the car type or position, and even background changes are observed. Thus, MVP results in high-resolution images that i) correspond to the conditional input and ii) vary in fine details. Similar variation has emerged even when the source and the target domains differ substantially, e.g., in the translation of MNIST digits to SVHN digits (sec. H.3). We should mention that regularization techniques have been proposed specifically for image-to-image translation, e.g., Yang et al. (2019); Lee et al. (2019). However, such works utilize additional losses and even require additional networks for training, which makes the training more computationally heavy and more sensitive to design choices." }, { "heading": "5 CONCLUSION", "text": "The topic of conditional data generation is the focus of this work. A multivariate polynomial model, called MVP, is introduced. MVP approximates a function $G(z_I, z_{II})$ with inputs $z_I$ (e.g., a sample from a Gaussian distribution) and $z_{II}$ (e.g., a class or a low-resolution image). MVP resorts to multivariate polynomials with arbitrary conditional inputs, which capture high-order correlations of the inputs. The empirical evaluation confirms that our framework can synthesize realistic images in class-conditional generation (trained on CIFAR10, Cars196 and SVHN), attribute-guided generation and image-to-image translation (i.e., super-resolution, block-inpainting, edges-to-shoes, edges-to-handbags, MNIST-to-SVHN). We also showcase that it can be extended to three-variable input with class-conditional super-resolution.
In addition to conditional data generation, the proposed framework can be used in tasks requiring the fusion of different types of variables." }, { "heading": "A SUMMARY OF SECTIONS IN THE APPENDIX", "text": "In the following sections, further details and derivations are provided to elaborate on the details of the MVP. Specifically, in sec. B the decomposition and related details on the method are developed. The extension of our method beyond two input variables is studied in sec. C. A method frequently used in the literature for fusing information is concatenation; we analyze how concatenation captures only additive and not more complex correlations (e.g., multiplicative) in sec. D. The differences from Π-Net (Chrysos et al., 2020) are explored in sec. E. In sec. F, some recent (conditional) data generation methods are cast into the polynomial neural network framework and their differences from the proposed framework are analyzed. The experimental details, including the evaluation metrics and the details on the baselines, are developed in sec. G. In sec. H, additional experimental results are included. Lastly, the differences from works that perform diverse generation are explored in sec. I." }, { "heading": "B METHOD DERIVATIONS", "text": "In this section, we expand on the method details, including the scalar output case and the notation. Specifically, a more detailed notation is determined in sec. B.1; the scalar output case is analyzed in sec. B.2. In sec. B.3 a second-order expansion is assumed to illustrate the connection between the polynomial expansion and the recursive formula. Sequentially, we derive an alternative model with different factor sharing. This model, called Nested-MVP, has a nested factor-sharing format (sec. B.4)." }, { "heading": "B.1 NOTATION", "text": "Our derivations rely on tensors (i.e., the multidimensional equivalent of matrices) and (tensor) products. We relay below the core notation used in our work; the interested reader can find further information in the tensor-related literature (Kolda & Bader, 2009; Debals & De Lathauwer, 2017).

Symbols of variables: Tensors/matrices/vectors are symbolized by calligraphic/uppercase/lowercase boldface letters, e.g., $\mathcal{W}, W, w$.

Matrix products: The Hadamard product of $A, B \in \mathbb{R}^{I \times N}$ is defined as $A * B$ and is equal to $a_{(i,j)} b_{(i,j)}$ for the $(i,j)$ element. The Khatri-Rao product of matrices $A \in \mathbb{R}^{I \times N}$ and $B \in \mathbb{R}^{J \times N}$ is denoted by $A \odot B$ and yields a matrix of dimensions $(IJ) \times N$. The Khatri-Rao product for a set of matrices $\{A_{[m]} \in \mathbb{R}^{I_m \times N}\}_{m=1}^{M}$ is abbreviated by $A_{[1]} \odot A_{[2]} \odot \cdots \odot A_{[M]} := \bigodot_{m=1}^{M} A_{[m]}$.

Tensors: Each element of an $M$th-order tensor $\mathcal{W}$ is addressed by $M$ indices, i.e., $(\mathcal{W})_{i_1, i_2, \ldots, i_M} := w_{i_1, i_2, \ldots, i_M}$. An $M$th-order tensor $\mathcal{W}$ is defined over the tensor space $\mathbb{R}^{I_1 \times I_2 \times \cdots \times I_M}$, where $I_m \in \mathbb{Z}$ for $m = 1, 2, \ldots, M$. The mode-$m$ unfolding of a tensor $\mathcal{W} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_M}$ maps $\mathcal{W}$ to a matrix $W_{(m)} \in \mathbb{R}^{I_m \times \bar{I}_m}$ with $\bar{I}_m = \prod_{k=1, k \neq m}^{M} I_k$, such that the tensor element $w_{i_1, i_2, \ldots, i_M}$ is mapped to the matrix element $w_{i_m, j}$ where $j = 1 + \sum_{k=1, k \neq m}^{M} (i_k - 1) J_k$ with $J_k = \prod_{n=1, n \neq m}^{k-1} I_n$. The mode-$m$ vector product of $\mathcal{W}$ with a vector $u \in \mathbb{R}^{I_m}$, denoted by $\mathcal{W} \times_m u \in \mathbb{R}^{I_1 \times \cdots \times I_{m-1} \times I_{m+1} \times \cdots \times I_M}$, results in a tensor of order $M - 1$:

$$(\mathcal{W} \times_m u)_{i_1, \ldots, i_{m-1}, i_{m+1}, \ldots, i_M} = \sum_{i_m = 1}^{I_m} w_{i_1, i_2, \ldots, i_M} u_{i_m}. \qquad (3)$$

We denote $\mathcal{W} \times_1 u^{(1)} \times_2 u^{(2)} \times_3 \cdots \times_M u^{(M)} := \mathcal{W} \prod_{m=1}^{M} \times_m u^{(m)}$.

The CP decomposition (Kolda & Bader, 2009) factorizes a tensor into a sum of component rank-one tensors. The rank-$R$ CP decomposition of an $M$th-order tensor $\mathcal{W}$ is written as:
$$\mathcal{W} := [\![U_{[1]}, U_{[2]}, \ldots, U_{[M]}]\!] = \sum_{r=1}^{R} u_r^{(1)} \circ u_r^{(2)} \circ \cdots \circ u_r^{(M)}, \qquad (4)$$

where $\circ$ is the vector outer product. The factor matrices $U_{[m]} = [u_1^{(m)}, u_2^{(m)}, \cdots, u_R^{(m)}] \in \mathbb{R}^{I_m \times R}$ for $m = 1, \ldots, M$ collect the vectors from the rank-one components. By considering the mode-1 unfolding of $\mathcal{W}$, the CP decomposition can be written in matrix form as:

$$W_{(1)} := U_{[1]} \Big( \bigodot_{m=M}^{2} U_{[m]} \Big)^T \qquad (5)$$

The following lemma is useful in our method:

Lemma 1. For a set of $N$ matrices $\{A_{[\nu]} \in \mathbb{R}^{I_\nu \times K}\}_{\nu=1}^{N}$ and $\{B_{[\nu]} \in \mathbb{R}^{I_\nu \times L}\}_{\nu=1}^{N}$, the following equality holds:

$$\Big( \bigodot_{\nu=1}^{N} A_{[\nu]} \Big)^T \cdot \Big( \bigodot_{\nu=1}^{N} B_{[\nu]} \Big) = \big( A_{[1]}^T \cdot B_{[1]} \big) * \ldots * \big( A_{[N]}^T \cdot B_{[N]} \big) \qquad (6)$$

An indicative proof can be found in the Appendix of Chrysos et al. (2019)." }, { "heading": "B.2 SCALAR OUTPUT", "text": "The proposed formulation expresses higher-order interactions of the input variables. To elaborate on that, we develop the single-output case below. That is, we focus on an element $\tau$ of the output vector, e.g., a single pixel. In the next few paragraphs, we consider the case of a scalar output $x_\tau$, with $\tau \in [1, o]$, when the input variables are $z_I, z_{II} \in \mathbb{K}^d$. To avoid cluttering the notation we only refer to the scalar output with $x_\tau$ in the next few paragraphs.

As a reminder, the polynomial of expansion order $N \in \mathbb{N}$ with output $x \in \mathbb{R}^o$ has the form:

$$x = G(z_I, z_{II}) = \sum_{n=1}^{N} \sum_{\rho=1}^{n+1} \Big( \mathcal{W}^{[n,\rho]} \prod_{j=2}^{\rho} \times_j z_I \prod_{\tau=\rho+1}^{n+1} \times_\tau z_{II} \Big) + \beta \qquad (7)$$

We assume a second-order expansion ($N = 2$) and let $\tau$ denote an arbitrary scalar output of $x$. The first-order correlations can be expressed through the sums $\sum_{\lambda=1}^{d} w^{[1,1]}_{\tau,\lambda} z_{II,\lambda}$ and $\sum_{\lambda=1}^{d} w^{[1,2]}_{\tau,\lambda} z_{I,\lambda}$. The second-order correlations include both auto- and cross-correlations. The tensors $\mathcal{W}^{[2,1]}$ and $\mathcal{W}^{[2,3]}$ capture the auto-correlations, while the tensor $\mathcal{W}^{[2,2]}$ captures the cross-correlations.

A pictorial representation of the correlations is given in Fig. 4. Collecting all the terms in one equation, each output is expressed as:

$$x_\tau = \beta_\tau + \sum_{\lambda=1}^{d} \Big[ w^{[1,1]}_{\tau,\lambda} z_{II,\lambda} + w^{[1,2]}_{\tau,\lambda} z_{I,\lambda} + \sum_{\mu=1}^{d} w^{[2,1]}_{\tau,\lambda,\mu} z_{II,\lambda} z_{II,\mu} + \sum_{\mu=1}^{d} w^{[2,3]}_{\tau,\lambda,\mu} z_{I,\lambda} z_{I,\mu} + \sum_{\mu=1}^{d} w^{[2,2]}_{\tau,\lambda,\mu} z_{I,\lambda} z_{II,\mu} \Big] \qquad (8)$$

where $\beta_\tau \in \mathbb{R}$. Notice that all the correlations of up to second order are captured in equation 8." }, { "heading": "B.3 SECOND ORDER DERIVATION FOR TWO-VARIABLE INPUT", "text": "In all our derivations, the variables associated with the first input $z_I$ have an $I$ notation, e.g., $U_{[1,I]}$. Respectively, for the second input $z_{II}$, the notation $II$ is used.

Even though equation 7 enables any order of expansion, the learnable parameters increase exponentially; therefore we can use a coupled factorization to reduce the parameters. Next, we derive the factorization for a second-order expansion (i.e., $N = 2$) and then provide the recursive relationship that generalizes it to an arbitrary order.

Second order derivation: For a second-order expansion (i.e., $N = 2$ in equation 1), we factorize each parameter tensor $\mathcal{W}^{[n,\rho]}$. We assume a coupled CP decomposition for each parameter as follows:

• Let $W^{[1,1]}_{(1)} = C U_{[1,II]}^T$ and $W^{[1,2]}_{(1)} = C U_{[1,I]}^T$ be the parameters for $n = 1$.

• Let $W^{[2,1]}_{(1)} = C (U_{[2,II]} \odot U_{[1,II]})^T$ and $W^{[2,3]}_{(1)} = C (U_{[2,I]} \odot U_{[1,I]})^T$ capture the second-order correlations of a single variable ($z_{II}$ and $z_I$ respectively).

• The cross-terms are expressed in $\mathcal{W}^{[2,2]} \times_2 z_I \times_3 z_{II}$. The output of the $\tau$ element² is $\sum_{\lambda,\mu=1}^{d} w^{[2,2]}_{\tau,\lambda,\mu} z_{I,\lambda} z_{II,\mu}$. The product $\hat{\mathcal{W}}^{[2,2]} \times_2 z_{II} \times_3 z_I$ also results in the same elementwise expression. Hence, to allow for a symmetric expression, we factorize the term $W^{[2,2]}_{(1)}$ as the sum of the two terms $C (U_{[2,II]} \odot U_{[1,I]})^T$ and $C (U_{[2,I]} \odot U_{[1,II]})^T$. For each of the two terms, we assume that the vector-valued inputs are accordingly multiplied.
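Since the derivations in sec. B.3 (and the proof of Claim 1 below) lean on Lemma 1, a small NumPy sanity check of the identity may be helpful. The `khatri_rao` helper is our own minimal implementation.

```python
import numpy as np

# Numerical check of Lemma 1: the Khatri-Rao (columnwise Kronecker) product
# interacts with matrix products through the Hadamard product (*).
rng = np.random.default_rng(0)

def khatri_rao(A, B):
    # Columnwise Kronecker product: (I*J) x N for A: I x N, B: J x N.
    I, N = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, N)

A1, A2 = rng.standard_normal((4, 3)), rng.standard_normal((5, 3))
B1, B2 = rng.standard_normal((4, 2)), rng.standard_normal((5, 2))

lhs = khatri_rao(A1, A2).T @ khatri_rao(B1, B2)
rhs = (A1.T @ B1) * (A2.T @ B2)
assert np.allclose(lhs, rhs)  # (A1 ⊙ A2)^T (B1 ⊙ B2) = (A1^T B1) * (A2^T B2)
```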
The parameters $C \in \mathbb{R}^{o \times k}$, $U_{[m,\phi]} \in \mathbb{R}^{d \times k}$ ($m = 1, 2$ and $\phi = \{I, II\}$) are learnable. The aforementioned factorization results in the following equation:

$$x = C U_{[1,II]}^T z_{II} + C U_{[1,I]}^T z_I + C \big( U_{[2,II]} \odot U_{[1,II]} \big)^T \big( z_{II} \odot z_{II} \big) + C \big( U_{[2,I]} \odot U_{[1,I]} \big)^T \big( z_I \odot z_I \big) + C \big( U_{[2,I]} \odot U_{[1,II]} \big)^T \big( z_I \odot z_{II} \big) + C \big( U_{[2,II]} \odot U_{[1,I]} \big)^T \big( z_{II} \odot z_I \big) + \beta \qquad (9)$$

²An elementwise analysis (with a scalar output) is provided in the Appendix (sec. B.2).

This expansion captures the correlations (up to second order) of the two input variables $z_I, z_{II}$.

To make the proof more complete, we remind the reader that the recursive relationship (i.e., (2) in the main paper) is:

$$x_n = x_{n-1} + \big( U_{[n,I]}^T z_I + U_{[n,II]}^T z_{II} \big) * x_{n-1} \qquad (10)$$

for $n = 2, \ldots, N$, with $x_1 = U_{[1,I]}^T z_I + U_{[1,II]}^T z_{II}$ and $x = C x_N + \beta$.

Claim 1. Equation (9) is a special format of a polynomial that is visualized as in Fig. 1 of the main paper. Equivalently, prove that (9) follows the recursive relationship of (10).

Proof. We observe that the first two terms of equation 9 are equal to $C x_1$ (from equation 10). By applying Lemma 1 to the terms that have a Khatri-Rao product, we obtain:

$$x = \beta + C x_1 + C \Big\{ \big( U_{[2,II]}^T z_{II} \big) * \big( U_{[1,II]}^T z_{II} \big) + \big( U_{[2,I]}^T z_I \big) * \big( U_{[1,I]}^T z_I \big) + \big( U_{[2,I]}^T z_I \big) * \big( U_{[1,II]}^T z_{II} \big) + \big( U_{[2,II]}^T z_{II} \big) * \big( U_{[1,I]}^T z_I \big) \Big\} = \beta + C x_1 + C \Big\{ \Big[ \big( U_{[2,I]}^T z_I \big) + \big( U_{[2,II]}^T z_{II} \big) \Big] * x_1 \Big\} = C x_2 + \beta \qquad (11)$$

The last equation is precisely the one that arises from the recursive relationship of equation 10.

To prove the recursive formula for the $N$th order expansion, a similar pattern as in sec. C of PolyGAN (Chrysos et al., 2019) can be followed. Specifically, the difference here is that, because of the two input variables, the auto- and cross-correlation variables should be included. Other than that, the same factor sharing is followed."
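As a quick numerical confirmation of Claim 1, the following NumPy sketch checks that the explicit second-order form (9) and the recursion (10) produce identical outputs for random parameters. All variable names are ours.

```python
import numpy as np

# Sanity check: the coupled CP factorization in (9) coincides with the
# recursion in (10) for a second-order expansion (N = 2).
rng = np.random.default_rng(1)
d, k, o = 4, 6, 3
U1I, U1II, U2I, U2II = (rng.standard_normal((d, k)) for _ in range(4))
C, beta = rng.standard_normal((o, k)), rng.standard_normal(o)
zI, zII = rng.standard_normal(d), rng.standard_normal(d)

# Recursive form (10): x1, one multiplicative update, then the output layer.
x1 = U1I.T @ zI + U1II.T @ zII
x2 = x1 + (U2I.T @ zI + U2II.T @ zII) * x1
recursive = C @ x2 + beta

# Explicit form (9): first-order terms plus all auto- and cross-correlations,
# with each Khatri-Rao term rewritten via Lemma 1 as a Hadamard product.
explicit = (C @ (U1II.T @ zII) + C @ (U1I.T @ zI)
            + C @ ((U2II.T @ zII) * (U1II.T @ zII))
            + C @ ((U2I.T @ zI) * (U1I.T @ zI))
            + C @ ((U2I.T @ zI) * (U1II.T @ zII))
            + C @ ((U2II.T @ zII) * (U1I.T @ zI)) + beta)
assert np.allclose(recursive, explicit)
```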
}, { "heading": "B.4 NESTED-MVP MODEL FOR TWO-VARIABLE INPUT", "text": "The model proposed above (i.e., equation 10) relies on a single coupled CP decomposition; however, a more flexible model can factorize each level with a CP decomposition. To effectively do that, we utilize learnable hyper-parameters $b_{[n]} \in \mathbb{R}^\omega$ for $n \in [1, N]$, which act as scaling factors for each parameter tensor. Then, a polynomial of expansion order $N \in \mathbb{N}$ with output $x \in \mathbb{R}^o$ has the form:

$$x = G(z_I, z_{II}) = \sum_{n=1}^{N} \sum_{\rho=2}^{n+2} \Big( \mathcal{W}^{[n,\rho-1]} \times_2 b_{[N+1-n]} \prod_{j=3}^{\rho} \times_j z_I \prod_{\tau=\rho+1}^{n+2} \times_\tau z_{II} \Big) + \beta \qquad (12)$$

To demonstrate the factorization without cluttering the notation, we assume a second-order expansion in equation 12.

Second order derivation: The second-order expansion, i.e., $N = 2$, is derived below. We jointly factorize all parameters of equation 12 with a nested decomposition as follows:

• First-order parameters: $W^{[1,1]}_{(1)} = C (A_{[2,II]} \odot B_{[2]})^T$ and $W^{[1,2]}_{(1)} = C (A_{[2,I]} \odot B_{[2]})^T$.

• Let $W^{[2,1]}_{(1)} = C \big\{ A_{[2,II]} \odot \big[ (A_{[1,II]} \odot B_{[1]}) V_{[2]} \big] \big\}^T$ and $W^{[2,3]}_{(1)} = C \big\{ A_{[2,I]} \odot \big[ (A_{[1,I]} \odot B_{[1]}) V_{[2]} \big] \big\}^T$ capture the second-order correlations of a single variable ($z_{II}$ and $z_I$ respectively).

• The cross-terms are included in $\mathcal{W}^{[2,2]} \times_2 b_{[1]} \times_3 z_I \times_4 z_{II}$. The output of the $\tau$ element is expressed as $\sum_{\nu=1}^{\omega} \sum_{\lambda,\mu=1}^{d} w^{[2,2]}_{\tau,\nu,\lambda,\mu}\, b_{[1],\nu}\, z_{I,\lambda}\, z_{II,\mu}$. Similarly, the product $\hat{\mathcal{W}}^{[2,2]} \times_2 b_{[1]} \times_3 z_{II} \times_4 z_I$ has output $\sum_{\nu=1}^{\omega} \sum_{\lambda,\mu=1}^{d} w^{[2,2]}_{\tau,\nu,\mu,\lambda}\, b_{[1],\nu}\, z_{I,\lambda}\, z_{II,\mu}$ for the $\tau$ element. Notice that the only change in the two expressions is the permutation of the third and fourth modes of the tensor; the rest of the expression remains the same. Therefore, to account for this symmetry, we factorize the term $\mathcal{W}^{[2,2]}$ as the sum of two terms and assume that each term is multiplied by the respective inputs. Let $W^{[2,2]}_{(1)} = C \big\{ A_{[2,I]} \odot \big[ (A_{[1,II]} \odot B_{[1]}) V_{[2]} \big] + A_{[2,II]} \odot \big[ (A_{[1,I]} \odot B_{[1]}) V_{[2]} \big] \big\}^T$.

The parameters $C \in \mathbb{R}^{o \times k}$, $A_{[n,\phi]} \in \mathbb{R}^{d \times k}$, $V_{[n]} \in \mathbb{R}^{k \times k}$, $B_{[n]} \in \mathbb{R}^{\omega \times k}$ for $n = 1, 2$ and $\phi = \{I, II\}$ are learnable. Collecting all the terms above and extracting $C$ as a common factor (we omit $C$ below to avoid cluttering the notation):

$$(A_{[2,II]} \odot B_{[2]})^T (z_{II} \odot b_{[2]}) + (A_{[2,I]} \odot B_{[2]})^T (z_I \odot b_{[2]}) + \big\{ A_{[2,II]} \odot \big[ (A_{[1,II]} \odot B_{[1]}) V_{[2]} \big] \big\}^T (z_{II} \odot z_{II} \odot b_{[1]}) + \big\{ A_{[2,I]} \odot \big[ (A_{[1,I]} \odot B_{[1]}) V_{[2]} \big] \big\}^T (z_I \odot z_I \odot b_{[1]}) + \big\{ A_{[2,I]} \odot \big[ (A_{[1,II]} \odot B_{[1]}) V_{[2]} \big] \big\}^T (z_I \odot z_{II} \odot b_{[1]}) + \big\{ A_{[2,II]} \odot \big[ (A_{[1,I]} \odot B_{[1]}) V_{[2]} \big] \big\}^T (z_{II} \odot z_I \odot b_{[1]}) = \big( A_{[2,II]}^T z_{II} + A_{[2,I]}^T z_I \big) * \big( B_{[2]}^T b_{[2]} \big) + \big( A_{[2,II]}^T z_{II} + A_{[2,I]}^T z_I \big) * \Big\{ V_{[2]}^T \Big[ \big( A_{[1,II]}^T z_{II} + A_{[1,I]}^T z_I \big) * \big( B_{[1]}^T b_{[1]} \big) \Big] \Big\} \qquad (13)$$

The last equation is precisely a recursive equation that can be expressed as in Fig. 5 or, equivalently, with the generalized recursive relationship below.

Recursive relationship: The recursive formula for the Nested-MVP model with arbitrary expansion order $N \in \mathbb{N}$ is the following:

$$x_n = \big( A_{[n,I]}^T z_I + A_{[n,II]}^T z_{II} \big) * \big( V_{[n]}^T x_{n-1} + B_{[n]}^T b_{[n]} \big) \qquad (14)$$

where $n \in [2, N]$ and $x_1 = \big( A_{[1,I]}^T z_I + A_{[1,II]}^T z_{II} \big) * \big( B_{[1]}^T b_{[1]} \big)$. The parameters $C \in \mathbb{R}^{o \times k}$, $A_{[n,\phi]} \in \mathbb{R}^{d \times k}$, $V_{[n]} \in \mathbb{R}^{k \times k}$, $B_{[n]} \in \mathbb{R}^{\omega \times k}$ for $\phi = \{I, II\}$ are learnable. Then, the output is $x = C x_N + \beta$. The Nested-MVP model manifests an alternative network that relies on slightly modified assumptions on the decomposition. Thus, changing the underlying assumptions of the decomposition can modify the resulting network. This can be an important tool for domain-specific applications, e.g., when domain knowledge should be inserted in the last layers." }, { "heading": "C BEYOND TWO VARIABLES", "text": "Frequently, more than one conditional input is required (Yu et al., 2018b; Xu et al., 2017; Maximov et al., 2020). In such tasks, the aforementioned framework can be generalized to more than two input variables. We demonstrate how this is possible with three variables; it can then be trivially extended to an arbitrary number of input variables.

Let $z_I, z_{II}, z_{III} \in \mathbb{K}^d$ denote the three input variables. We aim to learn a function that captures the higher-order interactions of the input variables. The polynomial of expansion order $N \in \mathbb{N}$ with output $x \in \mathbb{R}^o$ has the form:

$$x = G(z_I, z_{II}, z_{III}) = \sum_{n=1}^{N} \sum_{\rho=1}^{n+1} \sum_{\delta=\rho}^{n+1} \Big( \mathcal{W}^{[n,\rho,\delta]} \prod_{j=2}^{\rho} \times_j z_I \prod_{\tau=\rho+1}^{\delta} \times_\tau z_{II} \prod_{\zeta=\delta+1}^{n+1} \times_\zeta z_{III} \Big) + \beta \qquad (15)$$

where $\beta \in \mathbb{R}^o$ and $\mathcal{W}^{[n,\rho,\delta]} \in \mathbb{R}^{o \times \prod_{m=1}^{n} \times_m d}$ (for $n \in [1, N]$ and $\rho, \delta \in [1, n+1]$) are the learnable parameters. As with the two-variable input, the unknown parameters increase exponentially. To that end, we utilize a joint factorization with factor sharing. The recursive relationship of such a factorization is:

$$x_n = x_{n-1} + \big( U_{[n,I]}^T z_I + U_{[n,II]}^T z_{II} + U_{[n,III]}^T z_{III} \big) * x_{n-1} \qquad (16)$$

for $n = 2, \ldots, N$, with $x_1 = U_{[1,I]}^T z_I + U_{[1,II]}^T z_{II} + U_{[1,III]}^T z_{III}$ and $x = C x_N + \beta$.

Notice that the pattern (for each order) is similar to the two-variable input: a) a different embedding is found for each input variable, b) the embeddings are added together, c) the result is multiplied elementwise with the representation of the previous order."
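A minimal PyTorch sketch of the three-variable recursion (16) follows; the class name and sizes are our illustrative choices. The same pattern extends to more variables by adding one embedding per extra variable inside the sum.

```python
import torch
import torch.nn as nn

class MVP3Var(nn.Module):
    """Sketch of (16): embed each of the three variables, add the embeddings,
    multiply elementwise with the previous order's representation."""
    def __init__(self, d, k, o, N):
        super().__init__()
        self.embs = nn.ModuleList(
            nn.ModuleList(nn.Linear(d, k, bias=False) for _ in range(3))
            for _ in range(N))
        self.out = nn.Linear(k, o)

    def forward(self, zI, zII, zIII):
        zs = (zI, zII, zIII)
        x = sum(e(z) for e, z in zip(self.embs[0], zs))
        for layer in self.embs[1:]:
            x = x + sum(e(z) for e, z in zip(layer, zs)) * x
        return self.out(x)
```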
}, { "heading": "D CONCATENATION OF INPUTS", "text": "A popular method used for conditional generation is to concatenate the conditional input with the noise labels. However, as we showcase below, concatenation has two significant drawbacks when compared to our framework. To explain those, we will define a concatenation model.\nLet zI P Kd11 , zII P K d2 2 where K1,K2 can be a subset of real or natural numbers. The output of a concatenation layer is x “ P T ”\nzI; zII\nıT\nwhere the symbol ‘;’ denotes the concatenation\nand P P Rpd1d2qˆo is an affine transformation on the concatenated vector. The jth output is xj “ řd1 τ“1 pτ,jzI,τ ` řd2 τ“1 pτ`d1,jzII,τ .\nTherefore, the two differences from the concatenation case are:\n• If the input variables are concatenated together we obtain an additive format, not a multiplicative that can capture cross-term correlations. That is, the multiplicative format does allow achieving higher-order auto- and cross- term correlations. • The concatenation changes the dimensionality of the embedding space. Specifically, the\ninput space has dimensionality d1 ¨ d2. That has a significant toll on the size of the filters (i.e., it increases the learnable parameters), while still having an additive impact. On the contrary, our framework does not change the dimensionality of the embedding spaces.\nE IN-DEPTH DIFFERENCES FROM Π-NET\nIn the next few paragraphs, we conduct an in-depth analysis of the differences between Π-Net and MVP. The analysis assumes knowledge of the proposed model, i.e., (2).\nChrysos et al. (2020) introduce Π-Net as a polynomial expansion of a single input variable. Their goal is to model functions x “ Gpzq as high-order polynomial expansions of z. Their focus is towards using a single-input variable z, which can be noise in case of image generation or an image in discriminative experiments. The authors express the StyleGAN architecture (Karras et al., 2019) as a polynomial expansion, while they advocate that the impressive results can be attributed to the polynomial expansion.\nTo facilitate the in-depth analysis, the recursive relationship that corresponds to (2) is provided below. An N th order expansion in Π-Net is expressed as:\nxn “ ´ ΛTrnsz ¯ ˚ xn´1 ` xn´1 (17)\nfor n “ 2, . . . , N with x1 “ ΛTr1sz and x “ ΓxN ` β. The parameters Λ,Γ are learnable.\nIn this work, we focus on conditional data generation, i.e., there are multiple input variables available as auxiliary information. The trivial application of Π-Net would be to concatenate all the M input variables zI, zII, zIII, . . .. The input variable z becomes z “ ” zI; zII; zIII; . . . ı\n, where the symbol ‘;’ denotes the concatenation. Then, the polynomial expansion of Π-Net can be learned on the concatenated z. However, there are four significant reasons that we believe that this is not as flexible as the proposed MVP.\nWhen we refer to Π-Net below, we refer to the model with concatenated input. In addition, let zI P Kd11 , zII P K d2 2 denote the input variables where K1,K2 can be a subset of real or natural numbers.\nParameter sharing: MVP allows additional flexibility in the structure of the architecture, since MVP utilizes a different projection layer for each input variable. We utilize this flexibility to share the parameters of the conditional input variable; as we detail in (19), we set Urn,IIs “ Ur1,IIs on (2). If we want to perform a similar sharing in Π-Net, the formulation equivalent to (17) would be pλrnsqi “ pλr1sqi for i “ d1, . . . , d1 ` d2. 
Inductive bias: The inductive bias is crucial in machine learning (Zhao et al., 2018); however, concatenating the variables restricts the flexibility of the model (i.e., Π-Net). To illustrate that, let us use the super-resolution experiments as an example. The input variable $z_I$ is the noise vector and $z_{II}$ is the (vectorized) low-resolution image. If we concatenate the two variables, then we should use a fully-connected (dense) layer, which does not model the spatial correlations well. Instead, with MVP, we use a fully-connected layer for the noise vector and a convolution for $z_{II}$ (the low-resolution image). The convolution reduces the number of parameters and captures the spatial correlations in the image. Thus, by concatenating the variables, we reduce the flexibility of the model.

Dimensionality of the inputs: The dimensionality of the inputs might vary by orders of magnitude, which might create an imbalance during learning. For instance, in class-conditional generation, concatenating the one-hot labels in the input does not scale well when there are hundreds of classes (Odena et al., 2017). We observe a similar phenomenon in class-conditional generation: in Cars196 (with 196 classes) the performance of Π-Net deteriorates considerably when compared to its (relative) performance in CIFAR10 (with 10 classes). On the contrary, MVP does not fuse the elements of the input variables directly, but projects them into a subspace appropriate for adding them.

Order of expansion with respect to each variable: Frequently, the two inputs do not require the same order of expansion. Without loss of generality, assume that we need correlations up to order $N_I$ and $N_{II}$ (with $N_I < N_{II}$) from $z_I$ and $z_{II}$ respectively. MVP includes a different transformation for each variable, i.e., $U_{[n,I]}$ for $z_I$ and $U_{[n,II]}$ for $z_{II}$. Then, we can set $U_{[n,I]} = 0$ for $n > N_I$. On the contrary, the concatenation of inputs (in Π-Net) constrains the expansion to have the same order with respect to each variable.

All in all, we can use concatenation to fuse variables and use Π-Net; however, an inherently multivariate model is more flexible and can better encode the types of inductive bias required for conditional data generation." }, { "heading": "F DIFFERENCES FROM OTHER NETWORKS CAST AS POLYNOMIAL NEURAL NETWORKS", "text": "A number of networks with impressive results have emerged in (conditional) data generation over the last few years. Three such networks that are particularly interesting in our context are Karras et al. (2019); Park et al. (2019); Chen et al. (2019). We analyze below each method and how it relates to polynomial expansions:

• Karras et al. (2019) propose an Adaptive Instance Normalization (AdaIN) method for unsupervised image generation. An AdaIN layer expresses a second-order interaction³: $h = (\Lambda^T w) * n(c(h_{in}))$, where $n$ is a normalization, $c$ the convolution operator and $w$ is the transformed noise $w = MLP(z_I)$ (mapping network). The parameters $\Lambda$ are learnable, while $h_{in}$ is the input to the AdaIN. Stacking AdaIN layers results in a polynomial expansion with a single variable.
• Chen et al. (2019) propose a normalization method, called sBN, to stabilize the GAN training. The method performs a 'self-modulation' with respect to the noise variable and, optionally, the conditional variable in the class-conditional generation setting. Henceforth, we focus on the class-conditional setting, which is closer to our work. sBN injects the network layers with a multiplicative interaction of the input variables. Specifically, sBN projects the conditional variable into the space of the variable $z_I$ through an embedding function. Then, the interaction of the two vector-like variables is passed through a fully-connected layer (and a ReLU activation function); the result is injected into the network through the batch normalization parameters. If cast as a polynomial expansion, a network with sBN layers expresses a single polynomial expansion⁴.

• Park et al. (2019) introduce a spatially-adaptive normalization, i.e., SPADE, to improve semantic image synthesis. Their model, referred to as SPADE in the remainder of this work, assumes a semantic layout as a conditional input that facilitates the image generation. We analyze in sec. F.1 how to obtain the formulation of their spatially-adaptive normalization. If cast as a polynomial expansion, SPADE expresses a polynomial expansion with respect to the conditional variable.

The aforementioned works propose or modify the batch normalization layer to improve the performance or stabilize the training, while in our work we propose the multivariate polynomial as a general function approximation technique for conditional data generation. Nevertheless, given the interpretation of the previous works from the perspective of polynomials, we can still express them as special cases of MVP. Methodologically, there are two significant limitations that none of the aforementioned works tackle:

• The aforementioned architectures focus on zero or one conditional variables. Extending the frameworks to multiple conditional variables might not be trivial, while MVP naturally extends to arbitrarily many conditional variables.

• Even though the aforementioned three architectures use (implicitly) a polynomial expansion, a significant factor is the order of the expansion. In our work, the product of polynomials enables capturing higher-order correlations without increasing the amount of layers substantially (sec. 3.2).

³The formulation is derived from the public implementation of the authors. ⁴In MVP, we do not learn a single embedding function for the conditional variable. In addition, we do not project the (transformed) conditional variable to the space of the noise variable. Both of these can be achieved by making simplifying assumptions on the factor matrices of MVP.

In addition to the aforementioned methodological differences, our work is the only polynomial expansion that conducts experiments on a variety of conditional data generation tasks. Thus, we both demonstrate methodologically and verify experimentally that MVP can be used for a wide range of conditional data generation tasks.
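As an illustration of the casting discussed in the first bullet above, the following is a schematic PyTorch reading of an AdaIN-style layer as the second-order interaction $h = (\Lambda^T w) * n(c(h_{in}))$. This is our interpretation of the published formula, not the official StyleGAN implementation, and all names are ours.

```python
import torch
import torch.nn as nn

class AdaINAsPoly(nn.Module):
    """Schematic second-order interaction behind an AdaIN-style layer: a style
    vector w multiplicatively modulates a normalized convolution of the
    previous representation h_in."""
    def __init__(self, w_dim, channels):
        super().__init__()
        self.scale = nn.Linear(w_dim, channels)          # Lambda^T w
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)  # c(.)
        self.norm = nn.InstanceNorm2d(channels)          # n(.)

    def forward(self, h_in, w):
        s = self.scale(w)[:, :, None, None]
        return s * self.norm(self.conv(h_in))
```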
F.1 IN-DEPTH DIFFERENCES FROM SPADE

In the next few paragraphs, we conduct an in-depth analysis of the differences between SPADE and MVP.

Park et al. (2019) introduce a spatially-adaptive normalization, i.e., SPADE, to improve semantic image synthesis. Their model, referred to as SPADE in the remainder of this work, assumes a semantic layout as a conditional input that facilitates the image generation.

The $n$th model block applies a normalization on the representation $x_{n-1}$ of the previous layer and then performs an elementwise multiplication with a transformed semantic layout. The transformed semantic layout can be denoted as $A_{[n,II]}^T z_{II}$, where $z_{II}$ denotes the conditional input to the generator. The output of this elementwise multiplication is then propagated to the next model block, which performs the same operations. Stacking $N$ such blocks results in an $N$th order polynomial expansion, which is expressed as:

$$x_n = \big( A_{[n,II]}^T z_{II} \big) * \big( V_{[n]}^T x_{n-1} + B_{[n]}^T b_{[n]} \big) \qquad (18)$$

where $n \in [2, N]$ and $x_1 = A_{[1,I]}^T z_I$. The parameters $C \in \mathbb{R}^{o \times k}$, $A_{[n,\phi]} \in \mathbb{R}^{d \times k}$, $V_{[n]} \in \mathbb{R}^{k \times k}$, $B_{[n]} \in \mathbb{R}^{\omega \times k}$ for $\phi = \{I, II\}$ are learnable. Then, the output is $x = C x_N + \beta$. SPADE as expressed in (18) resembles one of the proposed models of MVP (specifically (14)). In particular, it expresses a polynomial with respect to the conditional variable. The parameters $A_{[n,I]}$ are set to zero, which means that there are no higher-order correlations with respect to the input variable $z_I$. Therefore, our work bears the following differences from Park et al. (2019):

• SPADE proposes a normalization scheme that is only applied to semantic image generation. On the contrary, our proposed MVP can be applied to any conditional data generation task, e.g., class-conditional generation or image-to-image translation.

• SPADE is a special case of MVP. In particular, by setting i) $A_{[1,II]}$ equal to zero and ii) $A_{[n,I]}$ in (14) equal to zero, we obtain SPADE. In addition, MVP allows different assumptions on the decompositions, which lead to an alternative structure, such as (2).

• SPADE proposes a polynomial expansion with respect to a single variable. On the other hand, our model can extend to an arbitrary number of input variables to account for auxiliary labels, e.g., (16).

• Even though SPADE models higher-order correlations of the conditional variable, it still does not leverage the higher-order correlations of the representations (e.g., as in the product of polynomials) and hence, without activation functions, it might not work as well as the two-variable expansion.

Park et al. (2019) exhibit impressive generation results with large-scale computing (i.e., they report results using an NVIDIA DGX with 8 V100 GPUs). Our goal is not to compete in computationally heavy, large-scale experiments, but rather to illustrate the benefits of the generic formulation of MVP.

SPADE is an important baseline for our work. In particular, we augment SPADE in two ways: a) by extending it to accept both continuous and discrete variables in $z_{II}$, and b) by adding polynomial terms with respect to the input variable $z_I$. The latter model is referred to as SPADE-MVP (details in the next section)."
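A small sketch of how a SPADE-style update arises as a special case of the Nested-MVP step (14) by dropping the $z_I$ branch. The `spade_mode` flag and all names are our own; the learnable vector stands in for the product $B_{[n]}^T b_{[n]}$.

```python
import torch
import torch.nn as nn

class NestedStep(nn.Module):
    """One Nested-MVP step, (14): x_n = (A_I^T z_I + A_II^T z_II) *
    (V^T x_{n-1} + B^T b). With spade_mode=True the z_I branch is dropped,
    recovering the SPADE-style update (18), i.e., a polynomial in the
    conditional variable only."""
    def __init__(self, d, k, spade_mode=False):
        super().__init__()
        self.A_I = nn.Linear(d, k, bias=False)
        self.A_II = nn.Linear(d, k, bias=False)
        self.V = nn.Linear(k, k, bias=False)
        self.b = nn.Parameter(torch.ones(k))   # stands in for B^T b
        self.spade_mode = spade_mode

    def forward(self, x_prev, z_I, z_II):
        gate = self.A_II(z_II) if self.spade_mode \
            else self.A_I(z_I) + self.A_II(z_II)
        return gate * (self.V(x_prev) + self.b)
```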
}, { "heading": "G EXPERIMENTAL DETAILS", "text": "Metrics: The two most popular metrics (Lucic et al., 2018; Creswell et al., 2018) for the evaluation of synthesized images are the Inception Score (IS) (Salimans et al., 2016) and the Fréchet Inception Distance (FID) (Heusel et al., 2017). Both metrics utilize the pretrained Inception network (Szegedy et al., 2015) to extract representations of the synthesized images. FID assumes that the extracted representations follow a Gaussian distribution and matches the statistics (i.e., mean and variance) of the representations between real and synthesized samples. Alternative evaluation metrics have been reported as inaccurate, e.g., in Theis et al. (2016); thus we use the IS and the FID. Following the standard practice of the literature, the IS is computed by synthesizing 5,000 samples, while the FID is computed using 10,000 samples.

The IS is used as a metric exclusively for images of natural scenes. The reasoning behind that is that the Inception network has been trained on images of natural scenes. On the contrary, the FID metric relies on the first- and second-order moments of the representations, which are considered more robust to different types of images. Hence, we only report the IS for the CIFAR10-related experiments, while for the rest the FID is reported.

Dataset details: There are eight main datasets used in this work:

• Large-scale CelebFaces Attributes (or CelebA for short) (Liu et al., 2015) is a large-scale face attributes dataset with 202,000 celebrity images. We use 160,000 images for training our method.

• Cars196 (Krause et al., 2013) is a dataset that includes different models of cars in different positions and backgrounds. Cars196 has 16,000 images, and the images have substantially more variation than CelebA faces.

• CIFAR10 (Krizhevsky et al., 2014) contains 60,000 images of natural scenes. Each image is of resolution 32×32×3 and is classified into one of 10 classes. CIFAR10 is frequently used as a benchmark for image generation.

• The Street View House Numbers dataset (or SVHN for short) (Netzer et al., 2011) has 100,000 images of digits (73,257 of which are for training). SVHN includes color house-number images which are classified into 10 classes; each class corresponds to a digit from 0 to 9. SVHN images are diverse (e.g., with respect to background and scale).

• MNIST (LeCun et al., 1998) consists of images of handwritten digits. Each image depicts a single digit (annotated from 0 to 9) at a 28×28 resolution. The dataset includes 60,000 images for training.

• Shoes (Yu & Grauman, 2014; Xie & Tu, 2015) consists of 50,000 images of shoes, where the edges of each shoe are extracted (Isola et al., 2017).

• Handbags (Zhu et al., 2016; Xie & Tu, 2015) consists of more than 130,000 images of handbag items. The edges have been computed for each image and are used as the conditional input to the generator (Isola et al., 2017).

• The Anime characters dataset (Jin et al., 2017) consists of anime characters that are generated based on specific attributes, e.g., hair color. The public version used⁵ contains annotations on the hair color and the eye color. We consider 7 classes for the hair color and 6 classes for the eye color, with a total of 14,000 training images.

⁵The version is downloaded following the instructions of https://github.com/bchao1/Anime-Generation.

All the images of CelebA, Cars196, Shoes and Handbags are resized to 64×64 resolution.
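For reference, a minimal NumPy/SciPy sketch of the FID computation described in the Metrics paragraph above (Gaussian matching of Inception features). The function assumes the Inception features have already been extracted; the implementation details (e.g., the real-part correction) follow common practice rather than a specific codebase.

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    """Frechet Inception Distance between two (n x d) arrays of Inception
    features: squared mean difference plus a covariance-matching trace term."""
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):     # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2 * covmean))
```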
Architectures: The discriminator structure is left the same for each experiment; we focus only on the generator architecture. All the architectures are based on two different generator schemes, i.e., SNGAN (Miyato & Koyama, 2018) and the polynomial expansion of Chrysos et al. (2020), which does not include activation functions in the generator.

The variants of the generator of SNGAN are described below:

• SNGAN (Miyato & Koyama, 2018): The generator consists of a convolution, followed by three residual blocks. The discriminator is also based on successive residual blocks. The public implementation of SNGAN with conditional batch normalization (CBN) is used as the baseline.

• SNGAN-MVP [proposed]: We convert the resnet-based generator of SNGAN into an MVP model. To obtain MVP, the SNGAN is modified in two ways: a) the conditional batch normalization (CBN) is converted into batch normalization (Ioffe & Szegedy, 2015), b) the injections of the two embeddings (from the inputs) are added after each residual block, i.e., the formula of (2). In other words, the generator is converted into a product of two-variable polynomials.

• SNGAN-CONC: Based on SNGAN-MVP, we replace each Hadamard product with a concatenation. This implements the variant mentioned in sec. D.

• SNGAN-SPADE (Park et al., 2019): As described in sec. F.1, SPADE is a polynomial with respect to the conditional variable $z_{II}$. The generator of SNGAN-MVP is modified to perform the Hadamard product with respect to the conditional variable every time.

The variants of the generator of Π-Net are described below:

• Π-Net (Chrysos et al., 2020): The generator is based on a product of polynomials. The first polynomials use fully-connected connections, while the next few polynomials use cross-correlations. The discriminator is based on the residual blocks of SNGAN. We stress that the generator does not include any activation functions apart from a hyperbolic tangent in the output space for normalization. The authors advocate that this exhibits the expressivity of the designed model.

• Π-Net-SICONC: The generator structure is based on Π-Net with two modifications: a) the conditional batch normalization is converted into batch normalization (Ioffe & Szegedy, 2015), b) the second input is concatenated with the first (i.e., the noise) in the input of the generator. Thus, this is a single-variable polynomial, i.e., a Π-Net, where the second input is vectorized and concatenated with the first. This baseline implements the Π-Net described in sec. E.

• MVP [proposed]: The generator of Π-Net is converted into an MVP model with two modifications: a) the conditional batch normalization is converted into batch normalization (Ioffe & Szegedy, 2015), b) instead of having a Hadamard product with a single variable as in Π-Net, the formula with the two-variable input (e.g., (2)) is followed.

• GAN-CONC: Based on MVP, each Hadamard product is replaced by a concatenation. This implements the variant mentioned in sec. D.

• GAN-ADD: Based on MVP, each Hadamard product is replaced by an addition. This modifies (14) to $x_n = \big( A_{[n,I]}^T z_I + A_{[n,II]}^T z_{II} \big) + \big( V_{[n]}^T x_{n-1} + B_{[n]}^T b_{[n]} \big)$.

• SPADE (Park et al., 2019): As described in sec. F.1, SPADE defines a polynomial with respect to the conditional variable $z_{II}$. The generator of Π-Net is modified to perform the Hadamard product with respect to the conditional variable every time.

• SPADE-MVP [proposed]: This is a variant we develop to bridge the gap between SPADE and the proposed MVP. Specifically, we augment the aforementioned SPADE twofold: a) the dense layers in the input space are converted into a polynomial with respect to the variable $z_I$, and b) we also convert the polynomial in the output (i.e., the rightmost polynomial in the Fig. 6 schematics) into a polynomial with respect to the variable $z_I$. This model captures higher-order correlations of the variable $z_I$ that SPADE did not originally include. This model still includes single-variable polynomials; however, the input to each polynomial varies and is not only the conditional variable.
The two baselines GAN-CONC and GAN-ADD capture only additive correlations, hence they cannot effectively model complex distributions without activation functions. Nevertheless, they are added as a reference point to emphasize the benefits of higher-order polynomial expansions.

An abstract schematic of the generators that are in the form of products of polynomials is depicted in Fig. 6. Notice that the compared methods from the literature use polynomials of a single variable, while we propose a polynomial with an arbitrary number of inputs (e.g., the two-input case shown in the schematic).

Implementation details of MVP: Throughout this work, we reserve the symbol $z_{II}$ for the conditional input (e.g., a class label). In each polynomial, we further reduce the parameters by using the same embedding for the conditional variables. That is expressed as:

$$U_{[n,II]} = U_{[1,II]} \qquad (19)$$

for $n = 2, \ldots, N$. Equivalently, that would be $A_{[n,II]} = A_{[1,II]}$ in (14). Additionally, Nested-MVP performed better in our preliminary experiments; thus we use (14) to design each polynomial. Given the aforementioned sharing, the $N$th order expansion is described by:

$$x_n = \big( A_{[n,I]}^T z_I + A_{[1,II]}^T z_{II} \big) * \big( V_{[n]}^T x_{n-1} + B_{[n]}^T b_{[n]} \big) \qquad (20)$$

for $n = 2, \ldots, N$. Lastly, the factor $A_{[1,II]}$ is a convolutional layer in the case of continuous conditional input, while it is a fully-connected layer in the case of discrete conditional input."
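A sketch of the shared conditional embedding of (19) applied to the Nested-MVP recursion (20). The learnable vectors stand in for the products $B_{[n]}^T b_{[n]}$, and all names and sizes are our illustrative choices.

```python
import torch
import torch.nn as nn

class SharedCondMVP(nn.Module):
    """Sketch of (19)-(20): one conditional embedding A_{[1,II]} is reused at
    every order, while each order keeps its own A_{[n,I]} and V_{[n]}."""
    def __init__(self, d, k, o, N):
        super().__init__()
        self.A_II = nn.Linear(d, k, bias=False)  # shared across all orders
        self.A_I = nn.ModuleList(nn.Linear(d, k, bias=False) for _ in range(N))
        self.V = nn.ModuleList(nn.Linear(k, k, bias=False) for _ in range(N - 1))
        self.b = nn.ParameterList(nn.Parameter(torch.ones(k)) for _ in range(N))
        self.out = nn.Linear(k, o)

    def forward(self, z_I, z_II):
        c = self.A_II(z_II)  # computed once, reused at every order
        x = (self.A_I[0](z_I) + c) * self.b[0]
        for n in range(1, len(self.A_I)):
            x = (self.A_I[n](z_I) + c) * (self.V[n - 1](x) + self.b[n])
        return self.out(x)
```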
}, { "heading": "H ADDITIONAL EXPERIMENTS", "text": "Additional experiments and visualizations are provided in this section. Additional visualizations for class-conditional generation are provided in sec. H.1. An additional experiment with class-conditional generation on SVHN digits is performed in sec. H.2. An experiment that learns the translation of MNIST to SVHN digits is conducted in sec. H.3. To explore image-to-image translation further, two additional experiments are conducted in sec. H.4. An attribute-guided generation is performed in sec. H.5 to illustrate the benefit of our framework with respect to multiple, discrete conditional inputs. This is further extended in sec. H.6, where an experiment with mixed conditional input is conducted. Finally, an additional diversity-inducing regularization term is used to assess whether it can further boost the diversity of the synthesized images in sec. H.7." }, { "heading": "H.1 ADDITIONAL VISUALIZATIONS IN CLASS-CONDITIONAL GENERATION", "text": "In Fig. 7 the qualitative results of the compared methods in class-conditional generation on CIFAR10 are shown. Both the generator of SNGAN and ours have activation functions in this experiment.

In Fig. 8 samples from the baseline Π-Net (Chrysos et al., 2020) and our method are depicted for class-conditional generation on CIFAR10. The images have a substantial difference. Similarly, in Fig. 9 a visual comparison between Π-Net and MVP is exhibited on the Cars196 dataset. To our knowledge, no framework in the past has demonstrated such expressivity; MVP synthesizes images that approximate the quality of synthesized images from networks with activation functions.

In Fig. 10, inter-class interpolations of the various compared methods on CIFAR10 are visualized. The illustrations of the intermediate images in SNGAN-CONC and SNGAN-ADD are either blurry or not realistic. On the contrary, in SPADE and MVP the higher-order polynomial expansion results in more realistic intermediate images. Nevertheless, MVP results in sharper shapes and images, even in the intermediate results, when compared to SPADE." }, { "heading": "H.2 CLASS-CONDITIONAL GENERATION ON HOUSE DIGITS", "text": "An experiment on class-conditional generation with SVHN is conducted below. SVHN images include (substantial) blur or other distortions, which insert noise into the distribution to be learned. In addition, some images contain a central digit (i.e., based on which the class is assigned) along with partially visible neighboring digits. Therefore, the generation of SVHN digits is challenging for a generator without activation functions between the layers.

Our framework, e.g., equation 14, does not include any activation functions. To verify the expressivity of our framework, we maintain the same setting for this experiment. Particularly, the generator does not have activation functions between the layers; there is only a hyperbolic tangent in the output space for normalization. The generator receives a noise sample and a class as input, i.e., it is a class-conditional polynomial generator.

The results in Fig. 12(b) illustrate that, despite the noise, MVP learns the distribution. As mentioned in the main paper, our formulation naturally enables both inter-class and intra-class interpolations. In the inter-class interpolation the noise $z_I$ is fixed, while the class $z_{II}$ is interpolated. In Fig. 12(d) several inter-class interpolations are visualized. The visualization exhibits that our framework is able to synthesize realistic images even with inter-class interpolations."
Our formulation can be extended beyond two input variables (sec. C); we experimentally verify this case. The task selected is attribute-guided generation trained on images of Anime characters. Each image is annotated with respect to the color of the eyes (6 combinations) and the color of the hair (7 combinations).\nSince SPADE only accepts a single conditional variable, we should concatenate the two attributes in a single variable. We tried simply concatenating the attributes directly, but this did not work well. Instead, we can use the total number of combinations, which is the product of the individual attribute combinations, i.e., in our case the total number of combinations is 42. Obviously, this causes ‘few’ images to belong in each unique combination, i.e., there are 340 images on average that belong to each combination. On the contrary, there are 2380 images on average for each eye color.\nSPADE and Π-Net are trained by using the two attributes in a single combination, while in our case, we consider the multiple conditional variable setting. In each case, only the generator differs depending on the compared method. In Fig. 15 few indicative images are visualized for each method; each row depicts a single combination of attributes, i.e., hair and eye color. Notice that SPADE results in a single image per combination, while in Π-Net-SINCONC there is considerable repetition in each case. The single image in SPADE can be explained by the lack of higher-order correlations with respect to the noise variable zI.\nIn addition to the diversity of the images per combination, an image from every combination is visualized in Fig. 16. MVP synthesizes more realistic images than the compared methods of Π-NetSINCONC and SPADE." }, { "heading": "H.6 MULTIPLE CONDITIONAL INPUTS WITH MIXED CONDITIONAL VARIABLES", "text": "We extend the previous experiment with multiple conditional variables to the case of mixed conditional variables, i.e., there is one discrete and one continuous conditional variable. The discrete conditional variable captures the class label, while the continuous conditional variable captures the low-resolution image. Thus, the task is class-conditional super-resolution.\nWe use the experimental details of sec. 4.2 in super-resolution 8ˆ. In Fig. 17, we visualize how for each low-resolution image the results differ depending on the randomly sampled class label. The FID in this case is 53.63, which is similar to the previous two cases. Class-conditional super-resolution (or similar tasks with multiple conditional inputs) can be of interest to the community and MVP results in high-dimensional images with large variance.\nH.7 IMPROVE DIVERSITY WITH REGULARIZATION\nAs emphasized in sec. I, various methods have been utilized for synthesizing more diverse images in conditional image generation tasks. A reasonable question is whether our method can be used in conjunction with such methods, since it already synthesizes diverse results. Our hypothesis is that when MVP is used in conjunction with any diversity-inducing technique, it will further improve the diversity of the synthesized images. To assess the hypothesis, we conduct an experiment on edges to images that is a popular benchmark in such diverse generation tasks (Zhu et al., 2017b; Yang et al., 2019).\nThe plug-n-play regularization term of Yang et al. (2019) is selected and added to the GAN loss during the training. 
The objective of the regularization term Lreg is to maximize the following term:\nLreg “ minp ||GpzI, 1, zIIq ´GpzI, 2, zIIq| |1\n||zI, 1 ´ zI, 2| |1 , τq (21)\nwhere τ is a predefined constant, zI, 1, zI, 2 are different noise samples. The motivation behind this term lies in encouraging the generator to produce outputs that differ when the input noise samples differ. In our experiments, we follow the implementation of the original paper with τ “ 10. The regularization loss of equation 21 is added to the GAN loss; the architecture of the generator remains similar to sec. H.4. The translation task is edges-to-handbags (on Handbags dataset) and edges-to-shoes (on Shoes dataset). In Fig. 18 the synthesized images are depicted. The regularization loss causes more diverse images to be synthesized (i.e., when compared to the visualization of Fig. 14 that was trained using only the adversarial loss). For instance, in both the shoes and the handbags, new shades of blue are now synthesized, while yellow handbags can now be synthesized.\nThe empirical results validate the hypothesis that our model can be used in conjunction with diversity regularization losses in order to improve the results. Nevertheless, the experiment in sec. H.4 indicates that a regularization term is not necessary to synthesize images that do not ignore the noise as feed-forward generators had previously." }, { "heading": "I DIFFERENCE OF MVP FROM OTHER DIVERSE GENERATION TECHNIQUES", "text": "One challenge that often arises in conditional data generation is that one of the variables gets ignored by the generator (Isola et al., 2017). This has been widely acknowledged in the literature, e.g., Zhu et al. (2017b) advocates that it is hard to utilize a simple architecture, like Isola et al. (2017), with noise. A similar conclusion is drawn in InfoGAN (Chen et al., 2016) where the authors explicitly mention that additional losses are required, otherwise the generator is ‘free to ignore’ the additional variables. To mitigate this, a variety of methods have been developed. We summarize the most prominent methods from the literature, starting from image-to-image translation methods:\n• BicycleGAN (Zhu et al., 2017b) proposes a framework that can synthesize diverse images in image-to-image translation. The framework contains 2 encoders, 1 decoder and 2 discriminators. This results in multiple loss terms (e.g., eq.9 of the paper). Interestingly, the authors utilize a separate training scheme for the encoder-decoder and the second encoder\nas training together ’hides the information of the latent code without learning meaningful modes’.\n• Almahairi et al. (2018) augment the deterministic mapping of CycleGAN (Zhu et al., 2017a) with a marginal matching loss. The framework learns diverse mappings utilizing the additional encoders. The framework includes 4 encoders, 2 decoders and 2 discriminators.\n• MUNIT (Huang et al., 2018) focuses on diverse generation in unpaired image-to-image translation. MUNIT demonstrates impressive translation results, while the inverse translation is also learnt simultaneously. That is, in case of edges-to-shoes, the translation shoes-toedges is also learnt during the training. The mapping learnt comes at the cost of multiple network modules. Particularly, MUNIT includes 2 encoders, 2 decoders, 2 discriminators for learning. 
This also results in multiple loss terms (e.g., eq.5 of the paper) along with additional hyper-parameters and network parameters.\n• Drit++ (Lee et al., 2020) extends unpaired image-to-image translation with disentangled representation learning, while they allow multi-domain image-to-image translations. Drit++ uses 4 encoders, 2 decoders, 2 discriminators for learning. Similarly to the previous methods, this results in multiple loss terms (e.g., eq.6-7 of the paper) and additional hyper-parameters.\n• Choi et al. (2020) introduce a method that supports multiple target domains. The method includes four modules: a generator, a mapping network, a style encoder and a discriminator. All modules (apart from the generator) include domain-specific sub-networks in case of multiple target domains. To ensure diverse generation, Choi et al. (2020) utilize a regularization loss (i.e., eq. 3 of the paper), while their final objective consists of multiple loss terms.\nThe aforementioned frameworks contain additional network modules for training, which also results in additional hyper-parameters in the loss-function and the network architecture. Furthermore, the frameworks focus exclusively on image-to-image translation and not all conditional generation cases, e.g., they do not tackle class-conditional or attribute-based generation.\nAn interesting technique for diverse, class-conditional generation is the self-conditional GAN of Liu et al. (2020). The method conditions the generator with pseudo-labels that are automatically derived from clustering on the feature space of the discriminator. This enables the generator to synthesize more diverse samples. This method is orthogonal to our, i.e., the generator of Liu et al. (2020) can be replaced with MVP.\nUsing regularization terms in the loss function has been an alternative way to achieve diverse generation. Mao et al. (2019); Yang et al. (2019) propose simple regularization terms that can be\nplugged into any architecture to encourage diverse generation. Lee et al. (2019) propose two variants of a regularization term, with the ‘more stable variant’ requiring additional network modules.\nWe emphasize that our method can be used in conjunction with many of the aforementioned techniques to obtain more diverse examples. We demonstrate that this is possible in an experiment in sec. H.7." } ]
2,020
null
SP:66df8bc94a4e5e99341cd1ad491018cca6207ad9
[ "This paper aims to incorporate the attention mechanism into recurrent neural networks by using fixed point equations. In particular, the authors define a bidirectional RNN with attention by a fixed point equation and then transform it to a variant of the Transformer block. The proposed model StarSaber is shown to be more parameter efficient than the Transformer model and achieve competitive performance on three CLUE datasets." ]
Transformer has achieved state of the art performance in multiple Natural Language Processing tasks recently. Yet the Feed Forward Network(FFN) in a Transformer block is computationally expensive. In this paper, we present a framework to transform Recurrent Neural Networks(RNNs) and their variants into selfattention-style models, with an approximation of Banach Fixed-point Theorem. Within this framework, we propose a new model, StarSaber, by solving a set of equations obtained from RNN with Fixed-point Theorem and further approximate it with a Multi-layer Perceptron. It provides a view of stacking layers. StarSaber achieves better performance than both the vanilla Transformer and an improved version called ReZero on three datasets and is more computationally efficient, due to the reduction of Transformer’s FFN layer. It has two major parts. One is a way to encode position information with two different matrices. For every position in a sequence, we have a matrix operating on positions before it and another matrix operating on positions after it. The other is the introduction of direct paths from the input layer to the rest of layers. Ablation studies show the effectiveness of these two parts. We additionally show that other RNN variants such as RNNs with gates can also be transformed in the same way, outperforming the two kinds of Transformers as well.
[]
[ { "authors": [ "Thomas Bachlechner", "Bodhisattwa Prasad Majumder", "Huanru Henry Mao", "Garrison W. Cottrell", "Julian McAuley" ], "title": "ReZero is All You Need: Fast Convergence at Large Depth", "venue": "arXiv e-prints, art", "year": 2020 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural Machine Translation by Jointly Learning to Align and Translate", "venue": "arXiv e-prints, art", "year": 2014 }, { "authors": [ "Shaojie Bai", "J. Zico Kolter", "Vladlen Koltun" ], "title": "Deep equilibrium models", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Qian Chen", "Xiaodan Zhu", "Zhen-Hua Ling", "Si Wei", "Hui Jiang", "Diana Inkpen" ], "title": "Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2017 }, { "authors": [ "Jason P.C. Chiu", "Eric Nichols" ], "title": "Named entity recognition with bidirectional lstm-cnns", "venue": "Trans. Assoc. Comput. Linguistics,", "year": 2016 }, { "authors": [ "Kyunghyun Cho", "Bart van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using RNN encoder– decoder for statistical machine translation", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime Carbonell", "Quoc Le", "Ruslan Salakhutdinov" ], "title": "Transformer-XL: Attentive language models beyond a fixed-length context", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Mostafa Dehghani", "Stephan Gouws", "Oriol Vinyals", "Jakob Uszkoreit", "Lukasz Kaiser" ], "title": "Universal transformers", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Qipeng Guo", "Xipeng Qiu", "Pengfei Liu", "Yunfan Shao", "Xiangyang Xue", "Zheng Zhang" ], "title": "Startransformer. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)", "venue": null, "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": "arXiv e-prints, art", "year": 2015 }, { "authors": [ "Karl Moritz Hermann", "Tomas Kocisky", "Edward Grefenstette", "Lasse Espeholt", "Will Kay", "Mustafa Suleyman", "Phil Blunsom" ], "title": "Teaching machines to read and comprehend", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Comput.,", "year": 1997 }, { "authors": [ "Rudolf Kadlec", "Martin Schmid", "Ondrej Bajgar", "Jan Kleindienst" ], "title": "Text understanding with the attention sum reader network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 
908–918, Berlin, Germany, August 2016", "venue": "Association for Computational Linguistics. doi: 10.18653/v1/P16-1086. URL https: //www.aclweb.org/anthology/P16-1086", "year": 2016 }, { "authors": [ "Nal Kalchbrenner", "Phil Blunsom" ], "title": "Recurrent continuous translation models", "venue": "In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing,", "year": 2013 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A Method for Stochastic Optimization", "venue": "arXiv e-prints, art", "year": 2014 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "Albert: A lite bert for self-supervised learning of language representations", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Shen Li", "Zhe Zhao", "Renfen Hu", "Wensi Li", "Tao Liu", "Xiaoyong Du" ], "title": "Analogical reasoning on chinese morphological and semantic relations", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "year": 2018 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Thang Luong", "Hieu Pham", "Christopher D. Manning" ], "title": "Effective approaches to attention-based neural machine translation", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Efficient Estimation of Word Representations in Vector Space", "venue": "arXiv e-prints, art", "year": 2013 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Distributed Representations of Words and Phrases and their Compositionality", "venue": "arXiv e-prints, art", "year": 2013 }, { "authors": [ "Matthew Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V Le" ], "title": "Sequence to sequence learning with neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Zhiguo Wang", "Wael Hamza", "Radu Florian" ], "title": "Bilateral multi-perspective matching for natural language sentences", "venue": "In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Ruslan Salakhutdinov", "Quoc V. 
Le" ], "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Yue Zhang", "Jie Yang" ], "title": "Chinese NER using lattice LSTM. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": "Association for Computational Linguistics. doi: 10. 18653/v1/P18-1144. URL https://www.aclweb.org/anthology/P18-1144", "year": 2018 }, { "authors": [ "Peng Zhou", "Zhenyu Qi", "Suncong Zheng", "Jiaming Xu", "Hongyun Bao", "Bo Xu" ], "title": "Text classification improved by integrating bidirectional LSTM with two-dimensional max pooling", "venue": "In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recurrent Neural Network, known as RNN, has been widely applied to various tasks in the last decade, such as Neural Machine Translation (Kalchbrenner & Blunsom, 2013; Sutskever et al., 2014), Text Classification (Zhou et al., 2016), Name Entity Recognition (Zhang & Yang, 2018; Chiu & Nichols, 2016), Machine Reading Comprehension (Hermann et al., 2015; Kadlec et al., 2016) and Natural Language Inference (Chen et al., 2017; Wang et al., 2017). Models applied to these tasks are not the vanilla RNNs but two of their famous variants, Gated Recurrent Unit (Cho et al., 2014), known as GRU, and Long Short Term Memory (Hochreiter & Schmidhuber, 1997), known as LSTM, in which gates play an important role. RNNs are hard to be computed parallelly. They are not bidirectional either, meaning that a word cannot utilize the information of words coming after it. A general way to alleviate this problem is to reverse the input sequence and combine results given by two different RNN encoders with operations like concatenation and addition.\nHowever, Transformer (Vaswani et al., 2017) has provided a better solution. It is based on purely attention mechanism, which has been widely used in Neural Machine Translation since Bahdanau et al. (2014). Models based on self-attention mechanism are mostly Transformer and its variants, such as Transformer-XL (Dai et al., 2019), Universal Transformer (Dehghani et al., 2019) and Star-Transformer (Guo et al., 2019). Compared with recurrent units such as GRU and LSTM, self-attention-style models can be computed parallelly, which means they suit better large-scale training. But each of these Transformers has an FFN layer with a very high vector dimension, which still is the bottleneck to improve the computation efficency.\nIn this paper, we present a new framework based on Banach Fixed-point Theorem to transform the vanilla RNN and its variants with self-attention mechanism. StarSaber, one of such transformed models, outperforms both the vanilla Transformer and ReZero (Bachlechner et al., 2020) in our experiments with less parameters and thus less computational power. To start with,\nwe need a different view of attention. Attention is a way to build a relation graph between words, and the vanilla RNN is nothing but a model with a relation graph as a chain. This graph is in fact represented with an adjacent matrix, which is computed by mapping each pair of positions to a positive real number and normalizing the numbers related to each position, which are just those in the same row of the adjacent matrix, so that they sum up to one.\nThe vanilla RNN updates hidden states through a chain, that is, the hidden state for each position only depends on that in the previous position. However, if we have this relation graph, the hidden state for each position depends on hidden states for all other positions in a sequence. This is where we obtain equations. In our opinion, a bidirectional RNN is defined by some equations and Banach Fixed-point Theorem inspires us to iterate according to them. When we fix the number of iterations and specify distinct weights for each of them, a self-attention-style model is then constructed.\nIn Transformer, Position Embedding(PE) as a way to capture word order information in language by adding a matrix to the input, is indispensable. But in StarSaber, position encoding is done in the aggregation step after the construction of a relation graph. 
For each position, we sum up linear transformations of hidden states in all positions with the corresponding weights in the relation matrix in order to get an attention vector. In the calculation of such a vector, we specify different linear transformation weights for the ”future” and the ”past”. Then the hidden vector for a position is computed with the corresponding attention vector and an input vector, which turns into a direct path from the input layer to each hidden layer. And we directly drop the FFN layer in Transformer achieving still competive and even better results with much less parameters on three datasets provided by CLUE (Xu et al., 2020): the AFQMC dataset of Sentence Similarity, the TNEWS dataset of Text Classification and the CMNLI dataset of Natural Language Inference. More importantly, our derivation of StarSaber shows a universal way to transform different RNNs, such as LSTM and GRU discussed in the following content, providing possibilities other than Transformers for self-attention models." }, { "heading": "2 RELATED WORK", "text": "Gates were first introduced into recurrent networks in LSTM and were rediscovered and simplified in GRU. Gate mechanism is an operation that multiplies an output by a single sigmoid layer of the input and is often seen as an approach to address the gradient vanishing issue. But if only so, other approaches which addresses this problem should achieve similar results to LSTM and GRU. In this paper, we show by experiments that in StarSaber which doesn’t have such a problem, gates can also help improve the performance.\nAttention in sequence modeling is a weighted sum of the output in each position of a sequence, which simulates the way a man distributes his attention to all its parts. Weights in this sum are given by a certain function of some inputs. And self-attention is an approach computing both the weighted sum and weights on the same sequence without any other inputs. There are different types of attention like multi-head attention and scaled dot product attention in Transformer, attention based on addition in Bahdanau et al. (2014), and bilinear attention in Luong et al. (2015). Our model applies the bilinear attention in the construction of a word relation graph.\nResidual Connection was proposed by He et al. (2015). It alleviates the problem of training deep neural networks. In Natural Language Processing, Residual Connection alleviates both the gradient vanishing problem and the degration problem of deep networks. Our model uses a weighted residual connection (Bachlechner et al., 2020) which further alleviates the degration problem. Another similar idea is the highway connection (Srivastava et al., 2015). In this paper, we inspect the gate mechanism in our self-attention-style model. Note that the highway connection can also fit into our framework, which is a fixed-point generalization of GRU.\nPretraining has proved to be extremely useful since Embeddings from Language Models(ELMO) (Peters et al., 2018). Many works that follow such as BERT (Devlin et al., 2018), ALBERT (Lan et al., 2020), XLNET (Yang et al., 2019) have outperformed humans. Pretraining is a training pattern which trains a language model, usually extremely large, on an enormous dataset with one\nor more unsupervised tasks and fine-tunes it on other datasets and tasks. There are two types of language models in general, known as auto-regressive models(e.g., XLNET) and auto-encoder ones(e.g., BERT). However, pretraining on a large dataset requires resources. 
We show in this paper that only pretraining on a dadtaset formed by collecting the training, development and test inputs together can as well improve the performance, revealing the significance of pretraining tasks.\nMLM is the unsupervised task utilized by BERT to pretrain. It randomly masks some pieces of a sentence and train the model to predict what has been masked. In this way, knowledge is gained and the model is initialized for downstream tasks. However, experiments show that even not pretrained on a large dataset, using MLM to pretrain on a dataset formed by collecting the training, development and test inputs can still improve the performance, inspiring us that a more flexible and task-related pretraining method is beneficial." }, { "heading": "3 MODEL ARCHITECTURE", "text": "" }, { "heading": "3.1 RECURRENT NEURAL NETWORKS AND BIDIRECTIONAL EQUATIONS WITH SELF-ATTENTION", "text": "This section follows the intuition we have discussed before. The vanilla RNN formulas are listed below:\nhn = tanh(Uhn−1 +Wxn) (1)\nIn self-attention, we don’t just utilize the hidden state from the previous position but hidden states from all positions, to compute a hidden vector for position n. To encode information of relative order, we specify distinct linear transformation weights. For simplicity, we ignore all bias terms in the following derivation. Following the bilinear self-attention mechanism, we have:\nhn = tanh(An + V xn) An = ∑ i<n GniU lefthi + ∑ i≥n GniU righthi 1\nGni = softmax(g,−1) = gni∑ j gnj\ngni = exp( hTnWhi√\nd )2\n(2)\nWhat we have done here is replacing hn−1 with an attention vector An. Notice a fact that in the first equality, h appears on both the left-hand side and the right-hand side(we use h to compute An), turning it into an equation. This means a bidirectional model is defined by a set of equations, because the word relation graph constructed by attention is not free of loops. Moreover, introducing equations can be seen as a constraint to obtain stable representation of a sentence. Intuitively, if we view the non-linear function on the right hand side as an updating operation and the hidden vector we obtain in each position as a semantic representation, it simply means that when the model ”reads” the whole sentence again based on the current understanding, it should produce the same representation, meaning that it has ”fully understands” the whole sentence and makes no changes on the hidden vectors." }, { "heading": "3.2 GENERALIZE EQUATIONS WITH FIXED-POINT", "text": "Now we have an equation to solve, which is extremely complex and difficult. But Banach Fixedpoint theorem shows us a way.\nTheorem 3.1 (Banach Fixed-point Theorem) For any real-valued function f(x), if | dfdx | < 1, then iteration process xn+1 = f(xn) converges and lim\nn→+∞ xn = x\n∗, where x∗ = f(x∗).\n1If not stated, a sum without a limit is to sum over all possible values. 2All these U, V, Ws are matrices that satisfy rules of the matrix-vector product. The hyperparameter d here\nis the hidden size. The attention here is scaled for faster convergence.\nThe equation above is an equation of an iterative pattern, and this Theorem just tells us that as long as we keep iterating, we will obtain a root of the equation if its jacobian matrix satisfies some conditions. The iterative pattern is given as follows:\nhl+1n = tanh(A l n + V xn) Aln = ∑ i<n GlniU lefthli + ∑ i≥n GlniU righthli\nGlni = softmax(g l,−1) = g l ni∑\nj\nglnj\nglni = exp( (hln) TWhli√ d )\n(3)\nWe can then iterate till it converges. 
Similar ideas are in Bai et al. (2019), where the authors solve the fixed-point directly with very high computational cost. Sometimes it cannot even converge to a fixedpoint, since the convergence condition is quite strict. A sufficient condition for convergence is that all parameter matrices are strictly orthogonal, making the optimization problem hard. Therefore, if we want to obtain a faster and more stable model, we can approximate it with a Multi-layer Perceptron(MLP) and relax the condition of convergence. In addition, we allow our model to assign different weights for different layers. The reason why we don’t reuse parameters in each layer is that iterating with the same set of parameters without a constraint of orthogonality often diverges. Even if we fix the number of iterations, it is still hardly possible to converge to the correct fixed-point. In this case, specifying different weights for each layer allows our model to learn a better fit for the whole iteration process. Therefore, we have\nhl+1n = tanh(A l n + V lxn) Aln = ∑ i<n GlniU lhli + ∑ i≥n GlniQ lhli\nGlni = softmax(g l,−1) = g l ni∑\nj\nglnj\nglni = exp( (hln) TW lhli√ d )\n(4)\nHere we also need an initial point to start the iteration. In our model, we choose the input sequence itself to be the initial value, that is to set h0i = xi. In more general cases, the initial value may be a linear transformation of the input or just some fixed vector like a zero one." }, { "heading": "3.3 RESIDUAL CONNECTIONS", "text": "Since we decide to approximate the iteration process with an MLP, Residual Connection is then indispensable in for it helps to alleviate the problem of degration. However, its magnitude, which is the fixed scaling number, needs to be tuned mannually. If we allow it to be automatically tuned by our model, the whole model can be written as follows:\nhl+1n = h l n + α ltanh(Aln + V lxn) (5)\nThe rest of formulas are the same as above. The αl here is a crucial weight initialized to be one or zero in every layer. In Bachlechner et al. (2020) it is initialized to be zero in order to stabilize Transformer. But in our experiments we don’t train extremely deep networks with a thousand or more layers. Thus we initialize it to be one since we find that it speeds up convergence." }, { "heading": "3.4 MODEL SUMMARY", "text": "The derivation above has demonstrated how to transform the vanilla RNN into a self-attention-style model. To summarize, the structure of StarSaber can be described by Figure 1. The Attention Graph here is the relation construction process returning a matrix G. It is exactly what happens in the last two formulas shown above. Masked Attention here is how we implement Position Encoding. And α is the weight for Residual Connection." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 A SPECIAL TYPE OF PRETRAINING: EFFECTIVENESS OF MASKED LANGUAGE", "text": "Although we don’t pretrain on an enormous dataset, we can still improve the performance by utilizing the common pretraining task. In practice, we pretrain our model on a collection of all the inputs without their labels from the training, development and test set. We use dynamic mask (Liu et al., 2019), which is to mask different positions of a sample at every epoch. The reconstruction loss is computed on all positions of a sequence instead of only those masked ones. Masking probability in every position is set to be 0.3. Different from BERT, we don’t have a MASK symbol. 
Instead every masked position is replaced with a word uniformally selected from the whole vocabulary." }, { "heading": "4.2 SETTINGS", "text": "Experiments are conducted on three datasets from Xu et al. (2020), namely the AFQMC(Sentence Similarity) dataset, the TNEWS(Text Classification) dataset and the CMNLI(Natural Language Inference) dataset. For the two text matching tasks, we concatenate the two input sentences with a seperation. The Adam optimizer (Kingma & Ba, 2014) is used. The learning rate is set to 1e-3 in pretraining and 1e-4 in fine-tuning for ReZero and StarSaber. A learning rate of 1e-4 in both pretraining and fine-tuning is applied to the vanilla Transformer. Random seed is set to 10. We compare StarSaber with BiLSTM and two Transformers. For StarSaber and the two Transformers, we have two versions each for every dataset, namely pretrained and not pretrained ones. For LSTM, we only have a not-pretrained version. In such versions, we utilize word embeddings trained from Word2Vec (Mikolov et al., 2013b;a) with data collected from Wikipedia provided by Li et al. (2018) and finetune the embeddings on the pretraining dataset for each task. In training process, we freeze these embeddings. Our BiLSTM concatenates features encoded by two LSTM encoders of opposite directions. In ReZero Layer Normalization (Lei Ba et al., 2016) and the warmup procedure are dropped. Both Transformer and ReZero have 8 heads and take 4 * Hidden-size as the size for the FFN layer. For models using Word2Vec, the hidden size in every layer and the input size are all set to 300. For models pretrained, they are set to 512. Early Stopping is used and the loss function is Cross-Entropy in pretraining but Hinge Loss in training. More details on model configurations are shown in Table 1.3 All results are submitted online to www.cluebenchmark.com and test labels are not available. Due to the submission limit of 10 times in a month, we cannot try more configuration settings." }, { "heading": "4.3 RESULTS AND ANALYSIS", "text": "From Table 2 we can see that both AFQMC and TNEWS are datasets of medium size and CMNLI is a larger dataset compared to the other two. AFQMC is a dataset of sentence similarity, represented in a binary classification task. TNEWS is a dataset of text classification which has 15 classes in total. We don’t use the keywords provided in order to make comparision with results in Xu et al. (2020). And CMNLI is a Natural Language Inference dataset containing 3 classes for each sample. Our results are shown in Table 3. Results achieved by different large pretrained models are shown in Table 4. It can be seen that none of those large-scale models can achieve astonishing performance. This is due to the construction approach used by CLUE. They use a specific pretrained baseline to select all samples misclassified. Details can be found in https://github.com/CLUEbenchmark/CLUE. Given the results from BiLSTM and StarSaber, it shows that even pretraining on a small dataset with less time and computational power\n3If not stated, parameter numbers are computed with the size of embedding matrices.\ncan help improve the performance. Pretraining also allows us to use deeper and larger models. The reason why models with Word2Vec are small is that larger models without pretraining can achieve much worse performance, for they indicate greater search space and are harder to optimize.\nFor AFQMC, all models with Word2Vec in fact output zero for every sample(class labels are zero and one). 
This may be due to distribution imbalance in such a dataset. Only those pretrained ones can classify a small fraction of samples into the positive class. Thus an improvement of 0.41% is uneasy to achieve. Another insteresting phenomenon appearing in CMNLI is that the gap between the development set and the test set is surprisingly large. For models with Word2Vec, the gap reaches up to 9.61%. For TNEWS, the input sentence is only the title of a passage. In this dataset, StarSaber-1 outperforms ReZero by 0.72% while StarSaber-2 differs from ReZero by only 0.14%.\nIt can also be seen from the results of AFQMC and TNEWS that stacking more layers in fact helps improve performance for StarSaber. On the dataset of CMNLI, the fact that StarSaber-1 doesn’t outperform StarSaber-2 is probably because 12 layers are enough or even redundant for StarSaber. The same logic can be applied to the fact that StarSaber-1 doesn’t outperform ReZero. With enough data and enough model complexity, ReZero and Transformer can perform fairly well. Compared to them, StarSaber is more efficient. In all three datasets, it achieves almost the same results as ReZero with the same number of layers, revealing a simple fact that many parts such as Multi-head Attention and the FFN layer are not necessary within our framework. We can drop all these computationally expensive parts.\nReZero is an improved version of Transformer in Vaswani et al. (2017). It adds a trainable weight in front of the Residual Connection and leads to faster convergence. But the better performance of ReZero here doesn’t mean it always outperforms Transformer with Layer Normalization, since samples in all these datasets are selected using Transformer-based models. Such a conclusion drawn from these selected data may not hold generally." }, { "heading": "5 ABLATION STUDIES", "text": "" }, { "heading": "5.1 EFFECTIVENESS OF GATES", "text": "In an RNN, gates allow gradients to flow back to the more distant past. But in our model, there has been a weighted residual connection to solve such a problem, which means that gates’ function of adjusting gradient flows is no more important. Here, we incorporate gates in a different way. Formulas can be found in the appendix. Model configurations and results are in Table 5. Numbers of layers, hidden sizes and input sizes are the same as StarSaber-2.\nWe can compare the results here with the results in Table 3. With the number of layers, the hidden size and the input size fixed, gates certainly help improve the performance. But when parameters are\nequally many, that is our implementation of StarSaber-1 which has twice the number of layers, gates don’t show any superiority. For simplicity and compactness, we drop all gates. We may also want to drop the gates in LSTM and replace them with a weighted residual connection instead, which is simpler and more efficient. And this weight itself can also be parametrized by a simple non-linear function of hidden vectors." }, { "heading": "5.2 COMPARISION OF TWO WAYS FOR POSITION ENCODING", "text": "We conduct experiments on our proposed methods for position encoding. We at first replace the two matrices representing distinct directions with one and add a position embedding matrix made of cosines and sines to the input. The number of parameters is increased by doubling the number of layers. 
From Table 6, we can observe that after replacing our implementation of position encoding with the PE matrix in Transformer, performance is even worse than StarSaber-2 with less parameters, especially in CMNLI. It means that the PE in Transformer is not consistent with StarSaber. We may give an intuitive explanation: Because of Transformer’s FFN layers, the PE matrix added to the input can in fact be recognized in hidden layers. But StarSaber doesn’t have a Feed Forward Network, therefore cannot seperate such mixed information." }, { "heading": "5.3 EFFECTIVENESS OF DIRECT PATHS", "text": "Direct paths seem unecessary in StarSaber since we already have a residual connection. However, from the perspective of fixed point, if we drop these direct paths in each layer, the model will finally converge to the same fixed point for whatever inputs. In order to check whether these direct paths are practical or not, we remove all of them and again increase the number of layers to equalize the number of parameters. From the results of CMNLI in Table 7, we can clearly see the benefits they bring." }, { "heading": "6 CONCLUSION", "text": "This paper proposes a framework to transform RNN-based models to attention-based ones. With the perspective to view attention as a way to construct a word relation graph, we transform the vanilla RNN to StarSaber, by defining a set of equations. Other variants of RNN can also be transformed in the same way, such as LSTM and GRU discussed above. In this way, we reduce the number of parameters in Transformer by dropping the FFN layer. Experiments on three datasets and the ablation study show the effectiveness of our model and framework." }, { "heading": "A FORMULAS TO INCORPORATE GATES", "text": "The formulas to incorporate gates mentioned in the ablation study are listed below:\nhl+1n = h l n + α ltanh(rln ◦Aln + iln ◦ (V lxn))4 Aln = ∑ i<n GlniU lhli + ∑ i≥n GlniQ lhli\nrln = σ(W rlAln + V rlxn)\niln = σ(W ilAln + V ilxn)\nGlni = softmax(g l,−1) = g l ni∑\nj\nglnj\nglni = exp( (hln) TW lhli√ d )\n(6)\nNote that this is also a demonstration of how to transform a recurrence-based model into an attention based model in our framework.\n4◦ denotes the element-wise product." } ]
2,020
null
SP:741bcead336d8cc7288ce82bca8028516280fff0
[ "The paper deals with the problem of community detection on graphs, examining the impact of graph measures. To do so, the paper proposes an experimental framework where clustering is achieved using the kernel k-means algorithm, and the performance of graph measures is examined on various instances of artificially generated graphs using the LFR benchmark. The overall approach is empirical, supported mainly by the experimental results. The main observations concern the consistent behavior of particular graph measures across multiple settings of the dataset." ]
Graph measures can be used for graph node clustering using metric clustering algorithms. There are multiple measures applicable to this task, and which one performs better is an open question. We study the performance of 25 graph measures on generated graphs with different parameters. While usually measure comparisons are limited to general measure ranking on a particular dataset, we aim to explore the performance of various measures depending on graph features. Using an LFR graph generator, we create a dataset of ∼7500 graphs covering the whole LFR parameter space. For each graph, we assess the quality of clustering with k-means algorithm for every considered measure. We determine the best measure for every area of the parameter space. We find that the parameter space consists of distinct zones where one particular measure is the best. We analyze the geometry of the resulting zones and describe it with simple criteria. Given particular graph parameters, this allows us to choose the best measure to use for clustering.
[]
[ { "authors": [ "Lada A. Adamic", "Natalie Glance" ], "title": "The political blogosphere and the 2004 us election: divided they blog", "venue": "In Proceedings of the 3rd International Workshop on Link Discovery,", "year": 2005 }, { "authors": [ "David Arthur", "Sergei Vassilvitskii" ], "title": "k-means++: The advantages of careful seeding", "venue": "Technical report, Stanford University,", "year": 2006 }, { "authors": [ "Konstantin Avrachenkov", "Pavel Chebotarev", "Dmytro Rubanov" ], "title": "Kernels on graphs as proximity measures", "venue": "In International Workshop on Algorithms and Models for the Web-Graph,", "year": 2017 }, { "authors": [ "Rinat Aynulin" ], "title": "Efficiency of transformations of proximity measures for graph clustering", "venue": "In International Workshop on Algorithms and Models for the Web-Graph,", "year": 2019 }, { "authors": [ "Rinat Aynulin" ], "title": "Impact of network topology on efficiency of proximity measures for community detection", "venue": "In International Conference on Complex Networks and Their Applications,", "year": 2019 }, { "authors": [ "Michael J. Barber", "John W. Clark" ], "title": "Detecting network communities by propagating labels under constraints", "venue": "Physical Review E,", "year": 2009 }, { "authors": [ "Vincent D. Blondel", "Jean-Loup Guillaume", "Renaud Lambiotte", "Etienne Lefebvre" ], "title": "Fast unfolding of communities in large networks", "venue": "Journal of Statistical Mechanics: Theory and Experiment,", "year": 2008 }, { "authors": [ "Pavel Chebotarev" ], "title": "Studying new classes of graph metrics", "venue": "In International Conference on Geometric Science of Information,", "year": 2013 }, { "authors": [ "Pavel Chebotarev", "Elena Shamis" ], "title": "On the proximity measure for graph vertices provided by the inverse Laplacian characteristic matrix", "venue": "In Abstracts of the Conference “Linear Algebra and its Applications”,", "year": 1995 }, { "authors": [ "Pavel Chebotarev", "Elena Shamis" ], "title": "On a duality between metrics and Σ-proximities", "venue": "Automation and Remote Control,", "year": 1998 }, { "authors": [ "Pavel Chebotarev", "Elena Shamis" ], "title": "On proximity measures for graph vertices", "venue": "Automation and Remote Control,", "year": 1998 }, { "authors": [ "Fan Chung" ], "title": "The heat kernel as the pagerank of a graph", "venue": "Proceedings of the National Academy of Sciences,", "year": 2007 }, { "authors": [ "Fan Chung", "Shing-Tung Yau" ], "title": "Coverings, heat kernels and spanning trees", "venue": "Journal of Combinatorics,", "year": 1998 }, { "authors": [ "Fan R.K. Chung" ], "title": "Spectral Graph Theory, volume 92", "venue": "American Mathematical Soc.,", "year": 1997 }, { "authors": [ "Sylvain Courtain", "Pierre Leleux", "Ilkka Kivimäki", "Guillaume Guex", "Marco Saerens" ], "title": "Randomized shortest paths with net flows and capacity constraints", "venue": "Information Sciences,", "year": 2020 }, { "authors": [ "Anton J. Enright", "Stijn Van Dongen", "Christos A. 
Ouzounis" ], "title": "An efficient algorithm for large-scale detection of protein families", "venue": "Nucleic Acids Research,", "year": 2002 }, { "authors": [ "Ernesto Estrada", "Naomichi Hatano" ], "title": "Statistical-mechanical approach to subgraph centrality in complex networks", "venue": "Chemical Physics Letters,", "year": 2007 }, { "authors": [ "Ernesto Estrada", "Naomichi Hatano" ], "title": "Communicability in complex networks", "venue": "Physical Review E,", "year": 2008 }, { "authors": [ "Ernesto Estrada", "Grant Silver" ], "title": "Accounting for the role of long walks on networks via a new matrix function", "venue": "Journal of Mathematical Analysis and Applications,", "year": 2017 }, { "authors": [ "Santo Fortunato", "Marc Barthelemy" ], "title": "Resolution limit in community detection", "venue": "Proceedings of the National Academy of Sciences,", "year": 2007 }, { "authors": [ "Babak Fotouhi", "Naghmeh Momeni", "Benjamin Allen", "Martin A Nowak" ], "title": "Evolution of cooperation on large networks with community structure", "venue": "Journal of the Royal Society Interface,", "year": 2018 }, { "authors": [ "Francois Fouss", "Luh Yen", "Alain Pirotte", "Marco Saerens" ], "title": "An experimental investigation of graph kernels on a collaborative recommendation task", "venue": "In Sixth International Conference on Data Mining (ICDM’06),", "year": 2006 }, { "authors": [ "François Fouss", "Kevin Francoisse", "Luh Yen", "Alain Pirotte", "Marco Saerens" ], "title": "An experimental investigation of kernels on graphs for collaborative recommendation and semisupervised classification", "venue": "Neural Networks,", "year": 2012 }, { "authors": [ "François Fouss", "Marco Saerens", "Masashi Shimbo" ], "title": "Algorithms and Models for Network Data and Link Analysis", "venue": null, "year": 2016 }, { "authors": [ "F. Göbel", "A.A. Jagers" ], "title": "Random walks on graphs", "venue": "Stochastic Processes and Their Applications,", "year": 1974 }, { "authors": [ "Martijn Gösgens", "Liudmila Prokhorenkova", "Alexey Tikhonov" ], "title": "Systematic analysis of cluster similarity indices: Towards bias-free cluster validation", "venue": "arXiv preprint arXiv:1911.04773,", "year": 2019 }, { "authors": [ "Guillaume Guex", "Ilkka Kivimäki", "Marco Saerens" ], "title": "Randomized optimal transport on a graph: framework and new distance measures", "venue": "arXiv preprint arXiv:1806.03232,", "year": 2018 }, { "authors": [ "Guillaume Guex", "Sylvain Courtain", "Marco Saerens" ], "title": "Covariance and correlation kernels on a graph in the generalized bag-of-paths formalism", "venue": null, "year": 1902 }, { "authors": [ "Paul W. Holland", "Kathryn Blackmond Laskey", "Samuel Leinhardt" ], "title": "Stochastic blockmodels: First steps", "venue": "Social Networks,", "year": 1983 }, { "authors": [ "Lawrence Hubert", "Phipps Arabie" ], "title": "Comparing partitions", "venue": "Journal of Classification,", "year": 1985 }, { "authors": [ "Vladimir Ivashkin", "Pavel Chebotarev" ], "title": "Do logarithmic proximity measures outperform plain ones in graph clustering", "venue": "In International Conference on Network Analysis,", "year": 2016 }, { "authors": [ "Karly A. Jacobsen", "Joseph H. Tien" ], "title": "A generalized inverse for graphs with absorption", "venue": "Linear Algebra and its Applications,", "year": 2018 }, { "authors": [ "Jaz Kandola", "Nello Cristianini", "John S. 
Shawe-Taylor" ], "title": "Learning semantic similarity", "venue": "In Advances in Neural Information Processing Systems,", "year": 2003 }, { "authors": [ "Leo Katz" ], "title": "A new status index derived from sociometric analysis", "venue": "Psychometrika, 18(1):39–43,", "year": 1953 }, { "authors": [ "Stephen J. Kirkland", "Michael Neumann" ], "title": "Group Inverses of M-matrices and Their Applications", "venue": null, "year": 2012 }, { "authors": [ "Ilkka Kivimäki", "Masashi Shimbo", "Marco Saerens" ], "title": "Developments in the theory of randomized shortest paths with a comparison of graph node distances", "venue": "Physica A: Statistical Mechanics and its Applications,", "year": 2014 }, { "authors": [ "Andrea Lancichinetti", "Santo Fortunato", "Filippo Radicchi" ], "title": "Benchmark graphs for testing community detection algorithms", "venue": "Physical Review E,", "year": 2008 }, { "authors": [ "Pierre Leleux", "Sylvain Courtain", "Guillaume Guex", "Marco Saerens" ], "title": "Sparse randomized shortest paths routing with tsallis divergence regularization", "venue": "arXiv preprint arXiv:2007.00419,", "year": 2020 }, { "authors": [ "Jure Leskovec", "Jon Kleinberg", "Christos Faloutsos" ], "title": "Graph evolution: Densification and shrinking diameters", "venue": "ACM Transactions on Knowledge Discovery from Data (TKDD),", "year": 2007 }, { "authors": [ "Stuart Lloyd" ], "title": "Least squares quantization in pcm", "venue": "IEEE Transactions on Information Theory,", "year": 1982 }, { "authors": [ "David Lusseau", "Karsten Schneider", "Oliver J. Boisseau", "Patti Haase", "Elisabeth Slooten", "Steve M. Dawson" ], "title": "The bottlenose dolphin community of doubtful sound features a large proportion of long-lasting associations", "venue": "Behavioral Ecology and Sociobiology,", "year": 2003 }, { "authors": [ "Ulrike V. Luxburg", "Agnes Radl", "Matthias Hein" ], "title": "Getting lost in space: Large sample analysis of the resistance distance", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "James MacQueen" ], "title": "Some methods for classification and analysis of multivariate observations", "venue": "In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability,", "year": 1967 }, { "authors": [ "Andrew Kachites McCallum", "Kamal Nigam", "Jason Rennie", "Kristie Seymore" ], "title": "Automating the construction of internet portals with machine learning", "venue": "Information Retrieval,", "year": 2000 }, { "authors": [ "Sebastian Mika", "Gunnar Ratsch", "Jason Weston", "Bernhard Scholkopf", "Klaus-Robert" ], "title": "Mullers. Fisher discriminant analysis with kernels", "venue": "In Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop,", "year": 1999 }, { "authors": [ "Mark E.J. Newman" ], "title": "Modularity and community structure in networks", "venue": "Proceedings of the National Academy of Sciences,", "year": 2006 }, { "authors": [ "Mark E.J. 
Newman", "Michelle Girvan" ], "title": "Finding and evaluating community structure in networks", "venue": "Physical Review E,", "year": 2004 }, { "authors": [ "Lawrence Page", "Sergey Brin", "Rajeev Motwani", "Terry Winograd" ], "title": "The PageRank citation ranking: Bringing order to the web", "venue": "Technical report, Stanford InfoLab,", "year": 1999 }, { "authors": [ "Muhammad Qasim Pasta", "Faraz Zaidi" ], "title": "Topology of complex networks and performance limitations of community detection algorithms", "venue": "IEEE Access,", "year": 2017 }, { "authors": [ "Liudmila Prokhorenkova" ], "title": "Using synthetic networks for parameter tuning in community detection", "venue": "In International Workshop on Algorithms and Models for the Web-Graph,", "year": 2019 }, { "authors": [ "Usha Nandini Raghavan", "Réka Albert", "Soundar Kumara" ], "title": "Near linear time algorithm to detect community structures in large-scale networks", "venue": "Physical Review E,", "year": 2007 }, { "authors": [ "John Shawe-Taylor", "Nello Cristianini" ], "title": "Kernel Methods for Pattern Analysis", "venue": null, "year": 2004 }, { "authors": [ "Felix Sommer", "François Fouss", "Marco Saerens" ], "title": "Comparison of graph node distances on clustering tasks", "venue": "In International Conference on Artificial Neural Networks,", "year": 2016 }, { "authors": [ "Felix Sommer", "François Fouss", "Marco Saerens" ], "title": "Modularity-driven kernel k-means for community detection", "venue": "In International Conference on Artificial Neural Networks,", "year": 2017 }, { "authors": [ "Juliette Stehlé", "Nicolas Voirin", "Alain Barrat", "Ciro Cattuto", "Lorenzo Isella", "Jean-François Pinton", "Marco Quaggiotto", "Wouter Van den Broeck", "Corinne Régis", "Bruno Lina" ], "title": "High-resolution measurements of face-to-face contact patterns in a primary school", "venue": "PloS One,", "year": 2011 }, { "authors": [ "Stijn Marinus Van Dongen" ], "title": "Graph Clustering by Flow Smulation", "venue": "PhD thesis, Utrecht University,", "year": 2000 }, { "authors": [ "Ulrike Von Luxburg" ], "title": "A tutorial on spectral clustering", "venue": "Statistics and Computing,", "year": 2007 }, { "authors": [ "Luh Yen", "Francois Fouss", "Christine Decaestecker", "Pascal Francq", "Marco Saerens" ], "title": "Graph nodes clustering based on the commute-time kernel", "venue": "In Pacific-Asia Conference on Knowledge Discovery and Data Mining,", "year": 2007 }, { "authors": [ "Luh Yen", "Marco Saerens", "Amin Mantrach", "Masashi Shimbo" ], "title": "A family of dissimilarity measures between nodes generalizing both the shortest-path and the commute-time distances", "venue": "In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2008 }, { "authors": [ "Luh Yen", "Francois Fouss", "Christine Decaestecker", "Pascal Francq", "Marco Saerens" ], "title": "Graph nodes clustering with the sigmoid commute-time kernel: A comparative study", "venue": "Data & Knowledge Engineering,", "year": 2009 }, { "authors": [ "Wayne W. Zachary" ], "title": "An information flow model for conflict and fission in small groups", "venue": "Journal of Anthropological Research,", "year": 1977 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph node clustering is one of the central tasks in graph structure analysis. It provides a partition of nodes into disjoint clusters, which are groups of nodes that are characterized by strong mutual connections. It can be of practical use for graphs representing real-life systems, such as social networks or industrial processes. Clustering allows to infer some information about the system: the nodes of the same cluster are highly similar, while the nodes of different clusters are dissimilar. The technique can be applied without any labeled data to extract important insights about a network.\nThere are different approaches to clustering, including ones based on modularity optimization (Newman & Girvan, 2004; Blondel et al., 2008), label propagation algorithm (Raghavan et al., 2007; Barber & Clark, 2009), Markov cluster process (Van Dongen, 2000; Enright et al., 2002), and spectral clustering (Von Luxburg, 2007). In this work, we use a different approach based on choosing a closeness measure on a graph, which allows one to use any metric clustering algorithm (e.g., Yen et al., 2009).\nThe choice of the measure significantly affects the quality of clustering. Classical measures are the Shortest Path (Buckley & Harary, 1990) and the Commute Time (Göbel & Jagers, 1974) distances. The former is the minimum number of edges in a path between a given pair of nodes. The latter is the expected number of steps from one node to the other and back in a random walk on the graph. There is a number of other measures, including recent ones (e.g., Estrada & Silver, 2017; Jacobsen & Tien, 2018), many of them are parametric. Despite the fact that graph measures are compatible with any metric algorithm, in this paper we restrict ourselves to the kernel k-means algorithm (e.g., Fouss et al., 2016).\nWe base our research on a generated set of graphs. There are various algorithms to generate graphs with community structures. The well-known ones are the Stochastic Block Model (Holland et al., 1983) and Lancichinetti–Fortunato–Radicchi benchmark (Lancichinetti et al., 2008) (hereafter, LFR). The first one is an extension of the Erdős–Rényi model with different intra- and intercluster probabilities of edge creation. The second one involves power law distributions of node degrees and community sizes. There are other generation models, e.g., Naive Scale-free Clustering (Pasta & Zaidi, 2017). We choose the LFR model: although it misses some key properties of real graphs, like diameter or the clustering coefficient, this model has been proven to be effective in meta-learning (Prokhorenkova, 2019).\nThere are a lot of measure benchmarking studies considering node classification and clustering for both generated graphs and real-world datasets (Fouss et al., 2012; Sommer et al., 2016; 2017; Avrachenkov et al., 2017; Ivashkin & Chebotarev, 2016; Guex et al., 2018; 2019; Aynulin, 2019a;b; Courtain et al., 2020; Leleux et al., 2020), etc. Despite a large number of experimental results, theoretical results are still a matter of the future. One of the most interesting theoretical results on graph measures is the work by Luxburg et al. (2010), where some unattractive features of the Commute Time distance on large graphs were explained theoretically, and a reasonable amendment was proposed to fix the problem. Beyond the complexity of such proofs, there is still very little empirical understanding of what effects need to be proven. 
Our empirical work has two main differences from the previous ones. First, we consider a large number of graph measures, which for the first time gives a fairly complete picture. Second, unlike these studies concluding with a global leaderboard, we are looking for the leading measures for each set of the LFR parameters.\nWe aim to explore the performance of of the 25 most popular measures in the graph clustering problem on a set of generated graphs with various parameters. We assess the quality of clustering with every considered measure and determine the best measure for every region of the graph parameter space.\nOur contributions are as follows:\n• We generate a dataset of ∼7500 graphs covering all parameter space of LFR generator; • We consider a broad set of measures and rank measures by clustering performance on this\ndataset;\n• We find the regions of certain measure leadership in the graph parameter space; • We determine the graph features that are responsible for measure leadership; • We check the applicability of the results on real-world graphs.\nOur framework for clustering with graph measures as well as a collected dataset are available on link_is_not_available_during_blind_review.\n2 DEFINITIONS\n2.1 KERNEL k-MEANS\nThe original k-means algorithm (Lloyd, 1982; MacQueen et al., 1967) clusters objects in Euclidean space. It requires coordinates of the objects to determine the distances between them and centroids. The algorithm can be generalized to use the degree of closeness between the objects without defining a particular space. This technique is called the kernel trick, usually it is used to bring non-linearity to linear algorithms. The algorithm that uses the kernel trick is called kernel k-means (see, e.g., Fouss et al., 2016). For graph node clustering scenario, we can use graph measures as kernels for the kernel k-means.\nInitially, the number of clusters is known and we need to set initial state of centroids. The results of the clustering with k-means are very sensitive to it. Usually, the algorithm runs several times with different initial states (trials) and chooses the best trial. There are different approaches to the initialization; we consider three of them: random data points, k-means++ (Arthur & Vassilvitskii, 2006), and random partition. We combine all these strategies to reduce the impact of the initialization strategy on the result." }, { "heading": "2.2 CLOSENESS MEASURES", "text": "For a given graph G, V (G) is the set of its vertices and A is its adjacency matrix. A measure on G is a function κ : V (G) × V (G) → R, which gets two nodes and returns closeness (bigger means closer) or distance (bigger means farther).\nA kernel on a graph is a graph nodes’ closeness measure that has an inner product representation. Any symmetric positive semidefinite matrix is an inner product matrix (also called Gram matrix). A kernel matrix K is a square matrix that contains similarities for all pairs of nodes in a graph.\nTo use kernel k-means, we need kernels. Despite that not all closeness measures we consider are Gram matrices, we treat them as kernels. The applicability of this approach was confirmed in Fouss et al. (2016). For the list of measures bellow, we use the word “kernel” only for the measures that satisfy the strict definition of kernel.\nClassical measures are Shortest Path distance (Buckley & Harary, 1990) (SP) and Commute Time distance (Göbel & Jagers, 1974) (CT). SP is the minimum number of edges in a path between a given pair of nodes. 
CT is the expected lengths of random walks between two nodes. SP and CT are defined as distances, so we need to transform them into similarities to use as kernels. We apply the following distance to closeness transformation (Chebotarev & Shamis, 1998a; Borg & Groenen, 2005): K = −HDH; H = I − E/n, (1) where D is a distance matrix, E is the matrix of ones, I is the identity matrix, and n is the number of nodes.\nIn this paper, we examine 25 graph measures (or, more exactly, 25 parametric families of measures). We present these measures grouped by type similarly to (Avrachenkov et al., 2017):\n• Adjacency Matrix A based kernels and measures. – Katz kernel: KKatzα = (I − αA)−1, 0 < α < ρ−1, where ρ is the spectral radius of A. (Katz, 1953) (also known as Walk proximity (Chebotarev & Shamis, 1998b) or von Neumann diffusion kernel (Kandola et al., 2003; Shawe-Taylor & Cristianini et al., 2004)).\n– Communicability kernel KCommt = expm(tA), t > 0, where expm means matrix exponential (Fouss et al., 2006; Estrada & Hatano, 2007; 2008).\n– Double Factorial closeness: KDFt = ∑inf k=0 tk k!!A k, t > 0 (Estrada & Silver, 2017).\n• Laplacian Matrix L = D − A based kernels and measures, where D = Diag(A · 1) is the degree matrix of G, Diag(x) is the diagonal matrix with vector x on the main diagonal.\n– Forest kernel: KFort = (I + tL)−1, t > 0 (also known as Regularized Laplacian kernel) (Chebotarev & Shamis, 1995). – Heat kernel: KHeatt = expm(−tL), t > 0 (Chung & Yau, 1998). – Normalized Heat kernel: KNHeatt = expm(−tL), L = D− 1 2LD− 1 2 , t > 0 (Chung,\n1997). – Absorption kernel: KAbst = (tA+ L)−1, t > 0 (Jacobsen & Tien, 2018).\n• Markov Matrix P = D−1A based kernels and measures. – Personalized PageRank closeness: KPPRα = (I − αP )−1, 0 < α < 1 (Page et al.,\n1999). – Modified Personalized PageRank: KMPPRα = (I − αP )−1D−1 = (D − αA)−1,\n0 < α < 1 (Kirkland & Neumann, 2012). – PageRank heat closeness: KHPRt = expm(−t(I − P )), t > 0 (Chung, 2007). – Randomized Shortest Path distance. Using P and the matrix of the SP distances C\nfirst get Z (Yen et al., 2008):\nW = P ◦ exp(−βC); Z = (I −W )−1. (2)\nThen S = (Z(C ◦W )Z)÷Z; C̄ = S−e diag(S)T , and finally,DRSP = (C̄+C̄T )/2. Here ◦ and ÷ are element-wise multiplication and division. Kernel version KRSP(t) can be obtained with equation 1. – Free Energy distance. Using Z from equation 2: Zh = Z Diag(Z)−1; Φ = −1/β logZh; DFE = (Φ + ΦT )/2 (Kivimäki et al., 2014). Kernel version KFE(t) can be obtained with equation 1.\n• Sigmoid Commute Time kernels. – Sigmoid Commute Time kernel:\nKSCTt = σ(−tKCT/std(KCT)), t > 0, (3)\nwhere σ is an element-wise sigmoid function σ(x) = 1/(1 + e−x) (Yen et al., 2007).\nOccasionally, element-wise logarithm is applied to the resulting kernel matrix (Chebotarev, 2013; Ivashkin & Chebotarev, 2016). We apply it to almost all investigated measures and consider the resulting measures separately from their plain versions (see Table 1). For some measures, like Forest kernel, this is well-known practice (Chebotarev, 2013), while for others, like Double Factorial closeness, this transformation, to the best of our knowledge, is applied for the first time. The considered measures and their short names are summarized in Table 1." }, { "heading": "3 DATASET", "text": "We collected a paired dataset of graphs and the corresponding results of clustering with each measure mentioned in Table 1. 
{ "heading": "3 DATASET", "text": "We collected a paired dataset of graphs and the corresponding results of clustering with each measure mentioned in Table 1. In this section, we describe the graph generator, the sampling strategy, the calculated graph features, and the pipeline for the measure score calculation.\nWe use the Lancichinetti–Fortunato–Radicchi (LFR) graph generator. It generates unweighted graphs with ground-truth non-overlapping communities. The model has mandatory parameters: the number of nodes n (n > 0), the power-law exponent of the degree distribution τ1 (τ1 > 1), the power-law exponent of the community size distribution τ2 (τ2 > 1), the fraction of intra-community edges incident to each node µ (0 ≤ µ ≤ 1), and either the minimum degree (min degree) or the average degree (avg degree). There are also extra parameters: the maximum degree (max degree), the minimum community size (min community), and the maximum community size (max community). Not the whole LFR parameter space corresponds to common real-world graphs; most such graphs are described by τ1 ∈ [1, 4] and µ < 0.5 (e.g., Fotouhi et al., 2019). However, there is also an interesting case of bipartite/multipartite-like graphs with µ > 0.5. Moreover, many of the datasets studied in Section 5 have τ1 > 4. Our choice is to consider the entire parameter space to cover all theoretical and practical cases.\nFor the generation, we consider 10 < n < 1500. It is impossible to generate a dataset with a uniform distribution of all LFR parameters, because the τ1 and τ2 parameters are located on rays. We transform τ1 and τ2 to τ̃_i = 1 − 1/√τ_i, i = 1, 2, to bring their range to the [0, 1] interval. In this case, “realistic” settings with τ1 ∈ [1, 4] take up 50% of the variable range. Also, as the avg degree feature is limited by the n of a particular graph, we decided to replace it with density (avg degree/(n−1)), which does not depend on n and belongs to [0, 1]. Using all these considerations, we collected our dataset by uniformly sampling parameters for the LFR generator from the set [n, τ̃1, τ̃2, µ, density] and generating graphs with these parameters. Additionally, we filter out all disconnected graphs.\nIn total, we generated 7396 graphs. It is worth noting that the generator fails for some sets of parameters, so the resulting dataset is not uniform (see Fig. 1). In our study, non-uniformity is not a very important issue, because we are interested in local effects, not global leadership. Moreover, true uniformity for the LFR parameter space is impossible, due to the unlimited scope of the parameters.
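A minimal sketch of this sampling loop is given below. It uses networkx's LFR_benchmark_graph as a stand-in for the authors' generator (an assumption; the paper does not name its implementation), and the sampled ranges mirror the text.

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def sample_lfr_graph():
    # sample uniformly in the transformed space [n, tau1~, tau2~, mu, density]
    n = int(rng.integers(11, 1500))
    tau1 = 1.0 / (1.0 - rng.uniform(1e-3, 1.0)) ** 2   # inverse of tau~ = 1 - 1/sqrt(tau)
    tau2 = 1.0 / (1.0 - rng.uniform(1e-3, 1.0)) ** 2
    mu = rng.uniform(0.0, 1.0)
    avg_degree = rng.uniform(0.0, 1.0) * (n - 1)       # density mapped back to avg degree
    try:
        G = nx.LFR_benchmark_graph(n, tau1, tau2, mu,
                                   average_degree=avg_degree, seed=0)
    except (nx.ExceededMaxIterations, nx.NetworkXError):
        return None                                    # the generator fails for some parameter sets
    return G if nx.is_connected(G) else None           # filter out disconnected graphs

Repeating sample_lfr_graph until ~7500 non-None graphs are collected reproduces the spirit of the dataset construction, including the non-uniformity caused by generator failures.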
For our research, we choose a minimal set of features that describe particular properties of graphs and are not interchangeable.\nThe LFR parameters can be divided into three groups by the graph properties they reflect:\n• The size of the graph and the communities: n, τ2, min community, max community;\n• The density and uniformity of the node degree distribution: τ1, min degree, avg degree, max degree. As avg degree depends on n, it is distributed exponentially, so we use log(avg degree) instead;\n• The cluster separability: µ. As the µ parameter considers only the ratio between the number of inter-cluster edges and the number of nodes but ignores the overall density, we use modularity (Newman & Girvan, 2004) as a more appropriate measure of cluster separability.\nThus, the defined set of features [n, τ1, τ2, avg degree, modularity] is enough to cover all graph properties mentioned above. Although modularity is a widely used measure, it suffers from resolution limit problems (Fortunato & Barthelemy, 2007). We acknowledge that this may cause some limitations in our approach, which should be the topic of further research.\nFor every generated graph, we calculate the top ARI score for every measure (Hubert & Arabie, 1985). We choose ARI as a clustering score that is both popular and unbiased (Gösgens et al., 2019). Since every measure has a parameter, we perform clustering for a range of parameter values (we transform the parameter so that it falls in the [0, 1] interval and then choose 16 values linearly spaced from 0 to 1). For each value, we run 6 + 6 + 6 trials of k-means (6 trials for each of the three initialization methods).\nFig. 2 shows the pipeline we use to calculate the ARI score for a given LFR parameter set, a measure, and a measure parameter value. Measure parameters are not the subject of our experiments, so for every measure we just take the result with the parameter value that gives the best ARI score.\nBecause of the need to iterate over graphs, measures, parameter values, and initializations, the task is computationally demanding. The total computation time was 20 days on 18 CPU cores and 6 GPUs." }, { "heading": "4 RESULTS", "text": "" }, { "heading": "4.1 GLOBAL LEADERSHIP IN LFR SPACE", "text": "We rank the measures by their ARI score on every graph of the dataset. The rank of a measure is defined as its position in this per-graph ranking, averaged over the dataset (see Table 2). It is important to note that the global leadership does not give comprehensive advice on which measure is better to use, because for a particular graph, the global leader can perform worse than the others. Here we consider the entire LFR space, not just the zone corresponding to common real-world graphs, so the ranking may differ from those obtained for restricted settings.\nAs SCCT is the winner by both ranking and percentage of wins, we can say for sure that it is the global winner for the LFR space graphs. Other measures still can be leaders in some zones of the feature space." }, { "heading": "4.2 FEATURE IMPORTANCE STUDY", "text": "First of all, we find out which graph features are important for the choice of the best measure and which are not. To do that, we use Linear Discriminant Analysis (Mika et al., 1999) (LDA). This method finds a new basis in the feature space to classify a dataset in the best way. It also shows how many basis components are required to fit the majority of the data.\nFig. 3a shows that the first two components account for about 90% of the explained variance. Fig. 3b shows that these components include only τ1, avg degree, and modularity. The fact that n is not used means that the size of the graph, as well as the density, is not of primary importance for choosing the best measure. Neither is τ2, which measures the diversity of cluster sizes.\nFig. 4 shows the point cloud projected on the space of the two main LDA components. We see a confirmation that the measures are indeed zoned, but the areas are quite noisy. To detect the zones of measure leadership, we need to know the leadership on average in every area of the space, rather than the wins at particular points. To define the local measure leadership in the whole space, we introduce a filtering algorithm that, for every point of the space, returns the measure leadership depending on the closest data points. As the choice of measure actually depends on only three features, we can limit our feature space to [τ1, avg degree, modularity]."
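A minimal sketch of the LDA analysis just described, assuming the dataset has been assembled into a feature matrix X with columns [n, τ1, τ2, log(avg degree), modularity] and a label vector y of winning-measure indices (the file names are hypothetical placeholders):

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X = np.load("graph_features.npy")     # shape (n_graphs, 5); hypothetical file
y = np.load("winning_measure.npy")    # shape (n_graphs,);   hypothetical file

lda = LinearDiscriminantAnalysis()    # default 'svd' solver exposes both attributes below
lda.fit(X, y)

# fraction of explained variance per discriminant component (cf. Fig. 3a)
print(lda.explained_variance_ratio_)
# feature loadings of the components (cf. Fig. 3b): rows = components, columns = features
print(lda.scalings_.T)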
}, { "heading": "4.3 GAUSSIAN FILTER IN FEATURE SPACE", "text": "Using a filter in the feature space, we can suppress the noise and find actual zones of leadership for the measures. We use the Gaussian filter with a scale parameter σ. For every given point of the space, it takes the data points that are closer than 3σ and averages ARIs of the chosen points with a weight e−dist\n2/2σ2 . This allows to give larger weights to closer points. If there are less than three data points inside the sphere with a 3σ radius, the filter returns nothing, allowing to ignore the points with insufficient data.\nBefore applying the filter, we prepare the dataset. First, we only take the points with only one winning measure, because multiple winners can confuse the filter. Then we normalize the standard\ndeviation of every feature distribution to one. Finally, we cut off the long tail of distant data points. The resulting number of graphs is 5201.\nTo choose σ, we apply the filter with different σ and look at the number of connected components in the feature space. The needed σ should be large enough to suppress the noise, however, it should not suppress small zones. Guided by this heuristic, we choose σ = 0.5.\nAfter filtering with σ = 0.5, the leaderboard of measure wins is changed (see Table 3). Only six measures keep their positions: SCCT, NHeat, logComm, Comm, logDF, and RSP. This means that these measures do have zones of leadership, otherwise they would be filtered out. We can plot the entire feature space colored by the leadership zones of the measures (Fig. 5). As the resulting space is 3D, we show slices of it by each of the three coordinates." }, { "heading": "5 REAL-WORLD DATASETS", "text": "Even though LFR lacks some characteristics of real-world graphs, there is evidence that the optimal parameter of the Louvain clustering for a real graph is close to the parameter for LFR graphs gen-\nerated from the features of a real one (Prokhorenkova, 2019). So, there is a chance that the learned space might be helpful for choosing measures in the wild.\nFor evaluation, we use 29 graphs of standard datasets: Dolphins (Lusseau et al., 2003), Football (Newman & Girvan, 2004), Karate club (Zachary, 1977), Newsgroups (9 subsets, weights are binarized with threshold 0.1) (Yen et al., 2007), Political blogs (Adamic & Glance, 2005), Political books (Newman, 2006), SocioPatterns Primary school day (2 graphs) (Stehlé et al., 2011), Cora (11 subsets) (McCallum et al., 2000), Eu-core (Leskovec et al., 2007), EuroSIS (WebAtlas, 2009). The parameters of these graphs are marked in Fig. 5. For each graph, we found the best ARI for every measure (iterating over the measure parameter value). Now we can check the quality of measure choice, based on the found LFR data. The result of LFR recommendation is the measure that is chosen for the set of parameters corresponding to the dataset in hand.\nThe best measures on the considered datasets are SCCT (by the mean ARI) and SCT (by the rank). This is pretty similar to the results obtained for LFR. Moreover, Spearman correlation between the ranks of measures for the datasets and for the corresponding LFR recommendations is 0.90.\nLet us use “always take SCCT” as our baseline strategy. In Table 4 we compare it with strategies based on the LFR space. We obtain LFR recommendation using knn as a well-proven method for meta-learning. 
 }, { "heading": "5 REAL-WORLD DATASETS", "text": "Even though LFR lacks some characteristics of real-world graphs, there is evidence that the optimal parameter of Louvain clustering for a real graph is close to the parameter for LFR graphs generated from the features of the real one (Prokhorenkova, 2019). So, there is a chance that the learned space might be helpful for choosing measures in the wild.\nFor evaluation, we use 29 graphs from standard datasets: Dolphins (Lusseau et al., 2003), Football (Newman & Girvan, 2004), Karate club (Zachary, 1977), Newsgroups (9 subsets, weights binarized with threshold 0.1) (Yen et al., 2007), Political blogs (Adamic & Glance, 2005), Political books (Newman, 2006), SocioPatterns Primary school day (2 graphs) (Stehlé et al., 2011), Cora (11 subsets) (McCallum et al., 2000), Eu-core (Leskovec et al., 2007), and EuroSIS (WebAtlas, 2009). The parameters of these graphs are marked in Fig. 5. For each graph, we found the best ARI for every measure (iterating over the measure parameter value). Now we can check the quality of the measure choice based on the collected LFR data. The result of an LFR recommendation is the measure that is chosen for the set of parameters corresponding to the dataset at hand.\nThe best measures on the considered datasets are SCCT (by the mean ARI) and SCT (by the rank). This is quite similar to the results obtained for LFR. Moreover, the Spearman correlation between the ranks of measures for the datasets and for the corresponding LFR recommendations is 0.90.\nLet us use “always take SCCT” as our baseline strategy. In Table 4 we compare it with strategies based on the LFR space. We obtain the LFR recommendation using knn as a well-proven method for meta-learning. Since each graph is unique, the result of 1nn can be very noisy; thus we use 5nn.\nTable 4 shows that the recommendation approach slightly beats the baseline. Reducing the number of measures from 25 to 6 does not drop the quality. However, this quality increase is not enough to draw confident conclusions about the advantages of the method. Using this fact and the fact that the ranks on the datasets and the recommendations are highly correlated, we conclude that the meta-learning procedure is adequate to give a robust recommendation, but not precise enough to beat the baseline confidently. This may be due to the fact that the nodes of real graphs were not labeled systematically, since they were created in the wild. A larger dataset could help separate the signal from the noise and pinpoint where the limits of the method are. At least, the good news is that the conclusions made on the LFR basis do not contradict the results obtained on the datasets." }, { "heading": "6 CONCLUSIONS", "text": "In this work, we have shown that the global leadership of measures does not provide comprehensive knowledge about graph measures. We demonstrated that among 25 measures, SCCT is the best measure for LFR graphs, both by winning rate and by ranking. However, there are also smaller confident zones of leadership for NHeat, Comm, logComm, logDF, and RSP.\nOur results do not contradict those of other experimental works and rather expand them by providing new findings. LogComm was first introduced in Ivashkin & Chebotarev (2016) and won in the competitions on graphs generated with a fixed set of SBM parameters. This study confirms its leadership, but only for a certain type of graphs. Another interesting finding is logDF, which unexpectedly shows good performance for graphs with low modularity and low average degree.\nThis study is based on the LFR benchmark data. An attempt to apply the results to real data gives quality slightly above the baseline. However, there is a strong correlation between the ranking of measures for the datasets and the ranking of LFR recommendations, which indicates that the leading measures are the same, while the recommendations are not precise enough.\nIt can be noted that our study is insensitive to the non-uniformity of the generated dataset. While manipulations with this dataset may affect the global leaderboard, they cannot change the local leadership studied in this work." } ]
2020
null
SP:cd671e0b2ae21fbca75c90741ccd008fefdd76ec
[ "In this paper, the authors develop DynaTune which achieves faster convergence speed to optimize a DNN model when compared to the state-of-the-art DL compiler, AutoTVM. The key idea is a time-slot-based scheduling method based on UCB-type multi-armed bandit policy. At each time, the scheduler chooses an action to maximize the latency reduction. In practice, A Bayesian belief model via MCMC is used to capture current knowledge of the optimization results to predict future performance, which helps make better decisions and expedites the convergence speed. The idea of using MAB in DL compiler is very interesting. The numerical experiments also demonstrate clear advantage of the proposed DynaTune. My concerns are as follows. " ]
Recently, the DL compiler, together with Learning to Compile has proven to be a powerful technique for optimizing deep learning models. However, existing methods focus on accelerating the convergence speed of the individual tensor operator rather than the convergence speed of the entire model, which results in long optimization time to obtain a desired latency. In this paper, we present a new method called DynaTune, which provides significantly faster convergence speed to optimize a DNN model. In particular, we consider a Multi-Armed Bandit (MAB) model for the tensor program optimization problem. We use UCB to handle the decision-making of time-slot-based optimization, and we devise a Bayesian belief model that allows predicting the potential performance gain of each operator with uncertainty quantification, which guides the optimization process. We evaluate and compare DynaTune with the state-of-the-art DL compiler. The experiment results show that DynaTune is 1.2–2.4 times faster to achieve the same optimization quality for a range of models across different hardware architectures.
[ { "affiliations": [], "name": "Minjia Zhang" }, { "affiliations": [], "name": "Menghao Li" }, { "affiliations": [], "name": "Chi Wang" }, { "affiliations": [], "name": "Mingqin Li" } ]
[ { "authors": [ "Andrew Adams", "Karima Ma", "Luke Anderson", "Riyadh Baghdadi", "Tzu-Mao Li", "Michaël Gharbi", "Benoit Steiner", "Steven Johnson", "Kayvon Fatahalian", "Frédo Durand", "Jonathan Ragan-Kelley" ], "title": "Learning to optimize halide with tree search and random programs", "venue": "ACM Trans. Graph.,", "year": 2019 }, { "authors": [ "Byung Hoon Ahn", "Prannoy Pilligundla", "Amir Yazdanbakhsh", "Hadi Esmaeilzadeh" ], "title": "Chameleon: Adaptive code optimization for expedited deep neural network compilation", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jason Ansel", "Shoaib Kamil", "Kalyan Veeramachaneni", "Jonathan Ragan-Kelley", "Jeffrey Bosboom", "Una-May O’Reilly", "Saman P. Amarasinghe" ], "title": "Opentuner: an extensible framework for program autotuning", "venue": "International Conference on Parallel Architectures and Compilation, PACT ’14,", "year": 2014 }, { "authors": [ "Peter Auer", "Nicolò Cesa-Bianchi", "Paul Fischer" ], "title": "Finite-time analysis of the multiarmed bandit problem", "venue": "Mach. Learn.,", "year": 2002 }, { "authors": [ "Baruch Awerbuch", "Robert D Kleinberg" ], "title": "Adaptive routing with end-to-end feedback: Distributed learning and geometric approaches", "venue": "In Proceedings of the thirty-sixth annual ACM symposium on Theory of computing,", "year": 2004 }, { "authors": [ "Dirk Bergemann", "Ulrich Hege" ], "title": "The financing of innovation: Learning and stopping", "venue": "RAND Journal of Economics,", "year": 2005 }, { "authors": [ "Dirk Bergemann", "Juuso Välimäki" ], "title": "Learning and strategic pricing", "venue": "Econometrica: Journal of the Econometric Society,", "year": 1996 }, { "authors": [ "George EP Box", "George C Tiao" ], "title": "Bayesian inference in statistical analysis, volume 40", "venue": null, "year": 2011 }, { "authors": [ "Eric Brochu", "Vlad M. Cora", "Nando de Freitas" ], "title": "A tutorial on bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning", "venue": "CoRR, abs/1012.2599,", "year": 2010 }, { "authors": [ "Felipe Caro", "Jérémie Gallien" ], "title": "Dynamic assortment with demand learning for seasonal consumer goods", "venue": "Management Science,", "year": 2007 }, { "authors": [ "Tianqi Chen", "Mu Li", "Yutian Li", "Min Lin", "Naiyan Wang", "Minjie Wang", "Tianjun Xiao", "Bing Xu", "Chiyuan Zhang", "Zheng Zhang" ], "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems", "venue": "arXiv preprint arXiv:1512.01274,", "year": 2015 }, { "authors": [ "Tianqi Chen", "Thierry Moreau", "Ziheng Jiang", "Lianmin Zheng", "Eddie Q. Yan", "Haichen Shen", "Meghan Cowan", "Leyuan Wang", "Yuwei Hu", "Luis Ceze", "Carlos Guestrin", "Arvind Krishnamurthy" ], "title": "TVM: an automated end-to-end optimizing compiler for deep learning", "venue": "In 13th USENIX Symposium on Operating Systems Design and Implementation,", "year": 2018 }, { "authors": [ "Tianqi Chen", "Lianmin Zheng", "Eddie Q. 
Yan", "Ziheng Jiang", "Thierry Moreau", "Luis Ceze", "Carlos Guestrin", "Arvind Krishnamurthy" ], "title": "Learning to optimize tensor programs", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Sharan Chetlur", "Cliff Woolley", "Philippe Vandermersch", "Jonathan Cohen", "John Tran", "Bryan Catanzaro", "Evan Shelhamer" ], "title": "cuDNN: Efficient Primitives for Deep Learning", "venue": "arXiv preprint arXiv:1410.0759,", "year": 2014 }, { "authors": [ "Corinna Cortes", "Giulia DeSalvo", "Vitaly Kuznetsov", "Mehryar Mohri", "Scott Yang" ], "title": "Discrepancybased algorithms for non-stationary rested bandits", "venue": "arXiv preprint arXiv:1710.10657,", "year": 2017 }, { "authors": [ "Tobias Domhan", "Jost Tobias Springenberg", "Frank Hutter" ], "title": "Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves", "venue": "Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Daniel Foreman-Mackey", "David W Hogg", "Dustin Lang", "Jonathan Goodman" ], "title": "emcee: the mcmc hammer", "venue": "Publications of the Astronomical Society of the Pacific,", "year": 2013 }, { "authors": [ "John Gittins", "Kevin Glazebrook", "Richard Weber" ], "title": "Multi-armed bandit allocation indices", "venue": null, "year": 2011 }, { "authors": [ "Jonathan Goodman", "Jonathan Weare" ], "title": "Ensemble samplers with affine invariance", "venue": "Communications in applied mathematics and computational science,", "year": 2010 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross B. Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: training imagenet in 1 hour", "venue": "CoRR, abs/1706.02677,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Wassily Hoeffding" ], "title": "Probability inequalities for sums of bounded random variables", "venue": "In The Collected Works of Wassily Hoeffding,", "year": 1994 }, { "authors": [ "Forrest N. Iandola", "Matthew W. Moskewicz", "Khalid Ashraf", "Song Han", "William J. Dally", "Kurt Keutzer" ], "title": "SqueezeNet: AlexNet-level Accuracy with 50x Fewer Parameters and <1MB Model Size", "venue": "arXiv preprint arXiv:1602.07360,", "year": 2016 }, { "authors": [ "Robert Kleinberg", "Tom Leighton" ], "title": "The value of knowing a demand curve: Bounds on regret for online posted-price auctions", "venue": "In 44th Annual IEEE Symposium on Foundations of Computer Science,", "year": 2003 }, { "authors": [ "Chris Lattner", "Jacques Pienaar", "Mehdi Amini", "Uday Bondhugula", "River Riddle", "Albert Cohen", "Tatiana Shpeisman", "Andy Davis", "Nicolas Vasilache", "Oleksandr" ], "title": "Zinenko. 
Mlir: A compiler infrastructure for the end of moore’s law", "venue": "arXiv preprint arXiv:2002.11054,", "year": 2020 }, { "authors": [ "Chris Leary", "Todd Wang" ], "title": "Xla: Tensorflow, compiled", "venue": "TensorFlow Dev Summit,", "year": 2017 }, { "authors": [ "Menghao Li", "Minjia Zhang", "Chi Wang", "Mingqin Li" ], "title": "Adatune: Adaptive tensor program compilation made efficient", "venue": "Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems", "year": 2020 }, { "authors": [ "Ji Lin", "Chuang Gan", "Song Han" ], "title": "Training kinetics in 15 minutes: Large-scale distributed training on videos", "venue": "arXiv preprint arXiv:1910.00932,", "year": 2019 }, { "authors": [ "Changxi Liu", "Hailong Yang", "Rujun Sun", "Zhongzhi Luan", "Lin Gan", "Guangwen Yang", "Depei Qian" ], "title": "Swtvm: exploring the automated compilation for deep learning on sunway architecture", "venue": null, "year": 1904 }, { "authors": [ "Zhiyun Lu", "Liyu Chen", "Chao-Kai Chiang", "Fei Sha" ], "title": "Hyper-parameter tuning under a budget constraint", "venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Pavlo Molchanov", "Stephen Tyree", "Tero Karras", "Timo Aila", "Jan Kautz" ], "title": "Pruning Convolutional Neural Networks for Resource Efficient Inference", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Diego Novillo" ], "title": "Samplepgo: the power of profile guided optimizations without the usability burden", "venue": "Proceedings of the 2014 LLVM Compiler Infrastructure in HPC,", "year": 2014 }, { "authors": [ "Sandeep Pandey", "Deepak Agarwal", "Deepayan Chakrabarti", "Vanja Josifovski" ], "title": "Bandits for taxonomies: A model-based approach", "venue": "In Proceedings of the 2007 SIAM International Conference on Data Mining,", "year": 2007 }, { "authors": [ "Herbert Robbins" ], "title": "Some aspects of the sequential design of experiments", "venue": "Bulletin of the American Mathematical Society,", "year": 1952 }, { "authors": [ "Nadav Rotem", "Jordan Fix", "Saleem Abdulrasool", "Summer Deng", "Roman Dzhabarov", "James Hegeman", "Roman Levenstein", "Bert Maher", "Nadathur Satish", "Jakob Olesen", "Jongsoo Park", "Artem Rakhov", "Misha Smelyanskiy" ], "title": "Glow: Graph lowering compiler techniques for neural networks", "venue": "CoRR, abs/1805.00907,", "year": 2018 }, { "authors": [ "Mohammad Shoeybi", "Mostofa Patwary", "Raul Puri", "Patrick LeGresley", "Jared Casper", "Bryan Catanzaro" ], "title": "Megatron-lm: Training multi-billion parameter language models using model parallelism", "venue": null, "year": 1909 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Nicolas Vasilache", "Oleksandr Zinenko", "Theodoros Theodoridis", "Priya Goyal", "Zachary DeVito", "William S. Moses", "Sven Verdoolaege", "Andrew Adams", "Albert Cohen" ], "title": "Tensor comprehensions: Framework-agnostic high-performance machine learning", "venue": "abstractions. 
CoRR,", "year": 2018 }, { "authors": [ "Carole-Jean Wu", "David Brooks", "Kevin Chen", "Douglas Chen", "Sy Choudhury", "Marat Dukhan", "Kim Hazelwood", "Eldad Isaac", "Yangqing Jia", "Bill Jia" ], "title": "Machine learning at facebook: Understanding inference at the edge", "venue": "IEEE International Symposium on High Performance Computer Architecture (HPCA),", "year": 2019 }, { "authors": [ "Carole-Jean Wu", "David Brooks", "Kevin Chen", "Douglas Chen", "Sy Choudhury", "Marat Dukhan", "Kim Hazelwood", "Eldad Isaac", "Yangqing Jia", "Bill Jia" ], "title": "Machine learning at facebook: Understanding inference at the edge", "venue": "IEEE International Symposium on High Performance Computer Architecture (HPCA),", "year": 2019 }, { "authors": [ "Igor Gitman", "Boris Ginsburg" ], "title": "Large batch training of convolutional networks", "venue": null, "year": 1903 }, { "authors": [ "Zheng" ], "title": "DynaTune achieves a faster convergence to reach the lowest latency than Ansor on ResNet-18, VGG, and Transformer", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "The enormous computational intensity of Deep Neural Network (DNN) models has attracted great interest in optimizing their performance. Popular deep learning (DL) frameworks such as PyTorch (Paszke et al., 2019) and TensorFlow (Abadi et al., 2016) adopt custom optimized kernels such as Intel MKL-DNN or Nvidia cuDNN (Chetlur et al., 2014) as back-end. However, given the increasing complexity of tensor operations in DNNs and the volatility of DL algorithms, it calls for developing fast and automated compilation frameworks to handle the unprecedented amount of innovations. To imitate or even exceed the success of hand-optimized libraries, recent research has developed neural network compilers, such as XLA (Leary & Wang, 2017), Glow (Rotem et al., 2018), Tensor Comprehension (Vasilache et al., 2018), and TVM (Chen et al., 2018a). Among them, TVM has shown superior performance improvements using a technique called Learning to Compile (AutoTVM) (Chen et al., 2018b). AutoTVM optimizes the code by generating many versions of a tensor operator and chooses the best through a learned cost model and search over a large space of code transformation choices.\nWhile the Learning to Compile approach produces highly optimized code of DNN models, it suffers from excessively long optimization time. As an example, although AutoTVM is able to demonstrate close to 2× performance improvement over TensorFlow on ResNet-18, the optimization time can take several hours or even tens of hours (Chen et al., 2018b). The long optimization time hinders the turnaround time and even puts the practical utility of the current compiler-based solutions into question. Recent works strive to reduce the optimization time by improving the search strategy for the code transformation plan and lowering the hardware measurement cost (Ahn et al., 2020; Adams et al., 2019). However, these approaches mostly focus on accelerating the convergence speed of optimization at the individual tensor operator level (e.g., Conv2D, batched GEMM), which do not necessarily solve the issue of slow convergence and long optimization time of the entire model, often containing tens of tensor operators.\nDifferent from existing methods, we introduce DynaTune, a DL code optimization algorithm that minimizes the sum of the execution time of all operators in a model as much as possible and as\n∗Both authors contributed equally. Order of appearance is random.\nquickly as possible. Specifically, the contributions of our paper consist of (1) a preliminary analysis that reveals the challenges and opportunities from existing DL code optimization strategies, (2) a time-slot-based optimization scheme, which simultaneously explores different operators and learns in an online manner that allows to dynamically switch to optimizing more promising tensors operators. (3) a Bayesian belief model that predicts future performance gains of operators, which helps make better decisions and expedites the convergence speed. (4) a detailed evaluation of the proposed algorithm with modern DNNs (ResNet-18, VGG, SqueezeNet, Transformer) on both CPU and GPU. Compared with the leading framework, AutoTVM, DynaTune is 1.2–2.4× times faster to obtain the same levels of optimization." }, { "heading": "2 BACKGROUND", "text": "DL compilation pipeline. 
A typical DL compiler contains multiple passes to optimize a model trained in popular DL frameworks such as TensorFlow (Abadi et al., 2016), PyTorch (Paszke et al., 2019), or MXNet (Chen et al., 2015), as shown in Fig. 1. In the first pass (the box with the dotted line), the compiler frontend applies target-independent and white-box target-dependent optimizations that do not include a measure of actual execution time. The target-independent passes perform optimizations such as operator fusion and data layout transformation, and the white-box target-dependent optimizations apply heuristic rules for code transformation based on domain knowledge. Recent work such as AutoTVM (Chen et al., 2018b) extends the pipeline with another pass, a black-box target-dependent pass, which uses learning machinery to perform optimizations.\nBlack-box target-dependent pass. In this pass, the compiler encodes code transformation decisions as code templates. A template contains knobs that control various aspects of the optimization (e.g., memory tiling, loop transformations, vectorization) and determine whether the code (1) fully utilizes the internal parallelism within processors, (2) uses the shared memory wisely, and (3) maximizes data locality. Due to the large transformation space, the compiler makes use of an auto-tuner (with an optimization algorithm) and real hardware measurements to find the best transformation on the target hardware (e.g., CPU, GPU, ARM, or IoT devices) (Chen et al., 2018b)." }, { "heading": "3 CHALLENGES AND MOTIVATIONS", "text": "This section presents several studies that reveal the challenges of existing DL compilation and that guided our design in Section 4.\nChallenge 1. Existing DL compilation focuses on accelerating the convergence speed of individual tensor operators instead of the entire model, resulting in slow convergence and long optimization time. Prior work (Chen et al., 2018a;b; Vasilache et al., 2018; Ahn et al., 2020) optimizes one tensor operator at a time in a predefined order (e.g., declaration order). However, such an optimization strategy is not always appropriate in practice. For example, there is often an extreme performance difference (e.g., an order of magnitude) between optimized and unoptimized operators. If we optimize operators sequentially, the overall model inference time stays high as long as there are still unoptimized operators. As a result, practitioners may need to wait until all tensor operators have finished optimization to get the desired latency, which results in a long optimization time. With active research pushing model sizes to millions or even billions of parameters while training times drop to a few hours or less (Yamazaki et al., 2019; Goyal et al., 2017; You et al., 2017; Lin et al., 2019; Shoeybi et al., 2019; You et al., 2019), reducing the inference optimization cost of the current solutions becomes even more important. Furthermore, since major players in the industry have adopted many of these DL compilers (Wu et al., 2019a;b; Lattner et al., 2020; Liu et al., 2019), fast convergence is desirable for the many users of these pipelines who want better control of the optimization cost together with good performance. For example, deployment engineers may want to obtain an optimized model sooner or quickly get a latency upper-bound estimate of a model in development.\n
Challenge 2. Static scheduling has only a limited view of the tensor program and has difficulty taking advantage of the actual optimization behavior. We note that, from an execution point of view, the optimization of tensor operators is independent of each other, so we may optimize them in any order and even non-consecutively. As a result, dynamic optimization has a big advantage for iterative DL compilation: we can intelligently order the optimization sequence of operators (i.e., scheduling) to significantly accelerate the convergence of the optimization. For example, it would be better to switch to optimizing another operator if we can convincingly identify that the other operator has a higher potential. That being said, is it realistic to assume that all the information concerning optimizing the operators is available before the optimization even starts, so that we can decide the schedule from the very beginning? Our preliminary analysis indicates that the amount of computation of an operator (known a priori) has a very disproportionate impact on the optimization time and latency reduction. Fig. 2 shows that although operator 17 of VGG (Simonyan & Zisserman, 2015) takes the longest time to optimize, it yields the least amount of latency reduction.¹ Our further investigation shows that the underlying code transformation space is non-linear, as shown in Fig. 3.² As a result, the optimization behavior tends to change over time, which is hard to recognize and predict with static knowledge only.\nChallenge 3. Even with dynamic information, it is not clear how to best extrapolate estimated performance. Given the optimization results, there is an incentive to adopt a ”predict-then-optimize” paradigm that builds a model to learn the correlation between the optimization cost and the observed optimization performance; the model can then be used to make predictions of potential performance gains. To identify the characteristics of the optimization behavior, we plot 16 optimization curves of best-found GFLOPS (Giga Floating Point Operations Per Second) in Fig. 4 to find patterns that can be used for designing a prediction model. We find that most curves (1) roughly follow an increasing curve with diminishing returns, (2) saturate towards an unknown final value, and (3) occasionally exhibit sudden jumps. The curves saturate to an unknown value because the performance cannot exceed the hardware peak GFLOPS, which is 9.7 TFLOPS in our case. The curves have sudden jumps because the code transformation space has change points, as shown in Fig. 3. By taking the curve information into account, we believe there is more opportunity to dynamically optimize the operators that likely lead to greater performance improvements.\n¹The orange bar shows the amount of computation of each operator measured as the floating-point operations (FLOPs), which can be calculated statically before the optimization starts, as described in Molchanov et al. (2017). The “optimization gain” is calculated as the reduction of wall-clock time from each operator after optimization, and the “optimization cost” is calculated as the wall-clock time spent to obtain the optimized latency, both of which are normalized by the total latency reduction and the total optimization time.\n²The figure shows the code transformation space of a Conv2D operator in ResNet-18. In this case, the performance of this operator varies based on the tiling size along the input channel and output channel while keeping the other knobs fixed. The knobs control various aspects of the optimization and its performance. A summary of the knobs can be found in Ahn et al. (2020)."
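To make the diminishing-returns pattern of Challenge 3 concrete, the sketch below fits a simple saturating curve to a synthetic best-found GFLOPS trace. The pow-style form f(t) = a − b·t^{−c} is one illustrative member of the power-law family mentioned in Section 4.3, not necessarily the paper's exact log-power parameterization, and the trace is synthetic.

import numpy as np
from scipy.optimize import curve_fit

def sat_curve(t, a, b, c):
    # increasing, saturating toward `a` as t grows (diminishing returns)
    return a - b * np.power(t, -c)

t = np.arange(1, 201)                                  # optimization iterations
# synthetic best-found GFLOPS trace with the qualitative shape of Fig. 4
noise = 0.05 * np.random.default_rng(0).standard_normal(t.size)
gflops = np.maximum.accumulate(5.0 - 4.0 * t ** -0.5 + noise)   # best-found is non-decreasing

params, _ = curve_fit(sat_curve, t, gflops, p0=(5.0, 4.0, 0.5), maxfev=10000)
print("extrapolated GFLOPS at t=400:", sat_curve(400, *params))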
}, { "heading": "4 METHOD", "text": "In this section, we present our design for DynaTune. We illustrate the difference between the existing DL optimization (Fig. 5) and the high-level design of DynaTune (Fig. 6)." }, { "heading": "4.1 DYNAMIC MULTI-TENSOR-OPERATOR OPTIMIZATION PROBLEM", "text": "In this paper, we take a view of accelerating the convergence speed of multi-tensor-operator optimization by first considering a Multi-Armed Bandits model for the problem. In particular, we partition time into some fixed-length time slots {1, 2, ..., T}. Similar to MAB, we define an operator scheduler that operates in discrete time steps. At the beginning of any time slot t ∈ {1, 2, ..., T}, the scheduler needs to choose an operator kt ∈ [K] for tuning in that slot. The scheduler then obtains a list of observations (i.e., best-found performance measured in GFLOPS) Perfk(t ·Lk : (t+1) ·Lk) of k, where Lk is the number of iterations an operator has been optimized. The remaining unchosen operators stay the same unless it is selected (i.e., rested arms).\nThe latency reduction for k at time slot t is rt(kt) = op(k) Perfk[t·Lk] − op(k) Perfk[(t+1)·Lk] , where op(k) represents the number of floating point operations of k. We further define an optimal schedule,i.e., π∗ = {k∗1 , ..., k∗t , ...k∗T }, where k∗t is the operator at step t one could have taken in hindsight (after seeing all performance realizations) that would yield the maximum latency reduction. The cumulative regret is then defined as: R(T) = ∑T t=1(rt(k ∗ t ) − rt(kt)), where k∗t is the operator at step t one could have taken in hindsight (after seeing all performance realizations) that would yield the maximum latency reduction. The goal is therefore to design the scheduler, such that the cumulative regret is minimized. We call this a dynamic multi-tensor-operator optimization problem." }, { "heading": "4.2 TIME-SLOT-BASED SCHEDULING", "text": "To select which operator to optimize in a time slot, we consider deterministic selection policies that belong to the class of index-based MAB policies (Gittins et al., 2011). Among many options, we choose upper confidence bound (UCB) as our action function to represent the exploration and exploitation tradeoff (Auer et al., 2002). In particular, assume ct[k] be the number of times that an operator k ∈ [K] has been selected for optimization up to slot t. If ct−1[k] > 0, we denote yk(t) to be the estimated performance of operator k at the end of t. The UCB for operator k at t is defined as uk(t) = rk(t) + √ C × log tct−1[k] = ( op(k) Perfk[(t−1)·Lk] − op(k) yk(t) ) + √ C × log tct−1[k] for ct−1[k] > 0 and uk(t) = γ (constant) for ct−1[k] = 0. The scheduler then computes UCB for each operator and selects the next one that maximizes UCB.\nOur definition of UCB measures the potential latency reduction from an operator k compared to other operators’ expected performances. The first term is a point estimate of the future latency reduction 3, converted from GFLOPS improvement. The second term is related to the size (according to Chernoff-Hoeffding bounds (Hoeffding, 1994)) of the one-sided confidence interval, which allows the true expected performance falls within with overwhelming probability. In the evaluation section, we evaluate several other action functions, including -greedy and softmax sampling. We\n3In practice, an operator may appear multiple times in a network. Depending on the implementation, the compiler may reuse the same transformation plan for operators that have the same shape. 
In that situation, we multiply the point estimate of the future latency reduction in the reward function with a weight factor that represents the times of the corresponding operator that appears in the network.\nobserve that these methods, in general, offer very similar performance. We choose UCB since some recent theoretical studies show that the growth rate of the regret from UCB is sublinear and long-run average optimal in the non-stationary bandit setting (Cortes et al., 2017)." }, { "heading": "4.3 BAYESIAN BELIEF MODELS FOR EXPECTED PERFORMANCE IMPROVEMENTS", "text": "To obtain an optimal solution, k∗t requires perfect information, hence infeasible to achieve in practice. Since we identify some patterns (in Sec. 3) that indicate that there is a functional relationship between the expected performance and the already observed performance, we propose a Bayesian belief model fk(t), enabled by Markov Chain Monte Carlo (MCMC), to capture our current knowledge of the optimization results to predict future performance and get updated as new observations become available. In particular, we choose parametric curve models whose shape coincides with our knowledge about the form of optimization curves: increasing, saturating functions such as those from the power-law (pow2, pow4, log power) or the logarithmic family (loglinear, logloglinear). Among these functions, we choose log power function as it works well in our case.\nTo handle breakpoints as mentioned in Sec. 3, we employ a piece-wise parameterization to improve the approximation to the shape of the underlying relationship: since the abrupt jumps often happen just a few times, we cut observation x into segments if a jump leads to more than a relative ∆% (i.e., 20% higher GFLOPS) improvement and model segments with different curve parameters.\nAlgorithm 1 DynaTune: Dynamic Multi-Tensor-Operator Optimization 1: Input: A model Φ with a list of [K] = {1,...,K} operators 2: Output: An optimized model Φ∗ 3: Init: c = (0, 0,...,0) 4: for t = 1,...,T do 5: for k = 1,...,K do 6: Observe history performance of operator k and predict a future performance using the\nbelief model in Section 4.3 7: Update the UCB value uk(t) using the equation in Section 4.2 8: kt ← arg maxuk(t) 9: Allocate time slot L to kt for actual optimization\n10: c[k]← c[k] + 1 11: if k.finished then 12: Exclude k from [K] 13: Collect new data of observed GFLOPS from optimizing k and update the belief model\nData normalization. Before feeding the observations to the model, since the observed GFLOPS of different operators has different ranges, we normalize the GFLOPS of all operators to a common scale. Given that the maximum GFLOPS of any operator is bounded by the theoretical peak performance on the target deployment hardware (e.g., 9.3-TeraFLOPS on a Nvidia P100 GPU), we apply the following formula to normalize observation x as follows: Normalized x = x/Peak(target hardware) which transfers the observation’s values to a new range 0 to 1. We convert the normalized x back to GFLOPS when calculating UCB.\nModeling uncertainty. Since our goal aims at allocating time to optimize tensor operators that are highly likely to bring performance improvement, we need to model uncertainty as truthfully as possible. To model uncertainty, we perform MCMC sampling from the posterior p(θk|Perfk[1 : t · Lk]) ∝ p(Perfk[1 : t · Lk]|θk)p(θk) of model parameters θk of the curve function fk(t) given the observed performance Perfk[1 : t · Lk]. 
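A minimal sketch of the UCB-based selection rule above; the array names and the untried-operator constant are assumptions.

import numpy as np

def select_operator(flops, perf_now, perf_pred, counts, t, C=2.0, gamma=1e9):
    """Pick the operator maximizing the UCB of Section 4.2.
    flops[k]: op(k); perf_now[k]: last best-found throughput (FLOPS);
    perf_pred[k]: belief-model prediction y_k(t); counts[k]: times k was chosen."""
    ucb = np.full(len(flops), float(gamma))        # unexplored operators get the constant
    tried = counts > 0
    gain = flops[tried] / perf_now[tried] - flops[tried] / perf_pred[tried]
    ucb[tried] = gain + np.sqrt(C * np.log(t) / counts[tried])
    return int(np.argmax(ucb))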
 }, { "heading": "4.3 BAYESIAN BELIEF MODELS FOR EXPECTED PERFORMANCE IMPROVEMENTS", "text": "Obtaining the optimal k∗_t requires perfect information and is hence infeasible in practice. Since we identify some patterns (in Sec. 3) indicating that there is a functional relationship between the expected performance and the already observed performance, we propose a Bayesian belief model f_k(t), enabled by Markov Chain Monte Carlo (MCMC), to capture our current knowledge of the optimization results, predict future performance, and get updated as new observations become available. In particular, we choose parametric curve models whose shape coincides with our knowledge about the form of the optimization curves: increasing, saturating functions such as those from the power-law (pow2, pow4, log power) or the logarithmic family (loglinear, logloglinear). Among these functions, we choose the log power function as it works well in our case.\nTo handle the breakpoints mentioned in Sec. 3, we employ a piece-wise parameterization to improve the approximation to the shape of the underlying relationship: since the abrupt jumps happen only a few times, we cut the observations x into segments if a jump leads to more than a relative ∆% (e.g., 20% higher GFLOPS) improvement and model the segments with different curve parameters.\nAlgorithm 1 DynaTune: Dynamic Multi-Tensor-Operator Optimization\n1: Input: A model Φ with a list of [K] = {1,...,K} operators\n2: Output: An optimized model Φ∗\n3: Init: c = (0, 0,...,0)\n4: for t = 1,...,T do\n5: for k = 1,...,K do\n6: Observe the performance history of operator k and predict its future performance using the belief model in Section 4.3\n7: Update the UCB value u_k(t) using the equation in Section 4.2\n8: k_t ← arg max_k u_k(t)\n9: Allocate time slot L to k_t for actual optimization\n10: c[k_t] ← c[k_t] + 1\n11: if k_t.finished then\n12: Exclude k_t from [K]\n13: Collect the newly observed GFLOPS data from optimizing k_t and update the belief model\nData normalization. Before feeding the observations to the model, since the observed GFLOPS of different operators have different ranges, we normalize the GFLOPS of all operators to a common scale. Given that the maximum GFLOPS of any operator is bounded by the theoretical peak performance of the target deployment hardware (e.g., 9.3 TeraFLOPS on an Nvidia P100 GPU), we normalize an observation x as Normalized x = x / Peak(target hardware), which transfers the observation's values to the range 0 to 1. We convert the normalized x back to GFLOPS when calculating the UCB.\nModeling uncertainty. Since our goal is to allocate time to optimizing tensor operators that are highly likely to bring performance improvements, we need to model uncertainty as truthfully as possible. To model uncertainty, we perform MCMC sampling from the posterior p(θ_k | Perf_k[1 : t·L_k]) ∝ p(Perf_k[1 : t·L_k] | θ_k) p(θ_k) of the model parameters θ_k of the curve function f_k(t), given the observed performance Perf_k[1 : t·L_k]. A sample approximation for Perf_k[t′] with t′ > t can then be formed as\nE[Perf_k[t′] | Perf_k[1 : t·L_k]] ≈ f_k(t′ | θ_k). (1)\nAmong many options for MCMC sampling, we choose Goodman & Weare's affine-invariant MCMC ensemble sampler (Goodman & Weare, 2010), which significantly outperforms standard M-H methods and produces independent samples (taking O(N) likelihood evaluations) with a much shorter autocorrelation time. We initialize the ensemble samplers in a tight N-dimensional Gaussian ball in parameter space around the maximum likelihood result, which is obtained through a non-linear least-squares fit. We set a uniform prior for θ and make sure the prediction is non-decreasing, i.e., the predicted performance is not worse than the most recently observed best-found performance, by explicitly encoding this knowledge into the prior. We obtain the predictive mean µ and standard deviation σ of the posterior parameter distribution using 200 MCMC samples. We then compute the expected positive improvement EI (Brochu et al., 2010) over the best known measured performance, taking into account the possible uncertainty in that prediction."
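The MCMC step above can be sketched with emcee as follows. The Gaussian log-likelihood, the prior bounds, the fixed initial guess (standing in for the maximum-likelihood fit), and the saturating-curve parameterization are all assumptions; the walker/step counts follow the paper's setup (10 walkers, 500 steps). The paper's non-decreasing-prediction constraint is omitted for brevity.

import numpy as np
import emcee

def model(theta, t):
    a, b, c = theta
    return a - b * np.power(t, -c)     # assumed saturating-curve form

def log_prob(theta, t, y, sigma=0.05):
    a, b, c = theta
    if not (0 < a <= 1 and b > 0 and c > 0):   # uniform prior on normalized GFLOPS
        return -np.inf
    resid = y - model(theta, t)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

t_obs = np.arange(1.0, 101.0)          # normalized best-found performance history (synthetic)
y_obs = 0.8 - 0.6 * t_obs ** -0.5 + 0.01 * np.random.default_rng(1).standard_normal(100)

ndim, nwalkers = 3, 10
p0 = np.array([0.5, 0.4, 0.5]) + 1e-4 * np.random.randn(nwalkers, ndim)   # tight ball
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(t_obs, y_obs))
sampler.run_mcmc(p0, 500)
samples = sampler.get_chain(discard=100, flat=True)

pred = np.array([model(s, 200.0) for s in samples])   # posterior predictive at t' = 200
mu, std = pred.mean(), pred.std()                     # feeds the EI computation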
 }, { "heading": "5 EVALUATION", "text": "In this section, we evaluate DynaTune experimentally, seeking to answer how DynaTune helps accelerate the convergence of optimizing all operators in a model. We integrate DynaTune with AutoTVM (Chen et al., 2018b) and use AutoTVM as our baseline for comparison. We implement DynaTune in Python, and we leverage emcee (Foreman-Mackey et al., 2013) to implement the MCMC sampling. We do a warmup to find the first non-zero GFLOPS as a starting point for all operators. We use the default hyperparameters provided by AutoTVM for the underlying code optimization. To obtain the parameter posterior, we run the ensemble MCMC with 10 walkers and 500 sampling steps. A convergence diagnostic of the MCMC is presented in Appendix B. All the free parameters in the curve model are taken care of by the MCMC sampling. For UCB, we choose the default value C = 2 suggested by the theory in Auer et al. (2002), which we find to be robust to different ranges of latencies. When the initial latency is <1 ms, we empirically find that C = 0.2 leads to increased performance, which is the setting we report. We perform 5 independent runs of each configuration with different random seeds and report the median together with a 95% confidence interval." }, { "heading": "5.1 COMPARISON OF AUTOTVM AND DYNATUNE FOR OPTIMIZING MODELS WITH MULTIPLE TENSOR OPERATORS", "text": "In the previous approach (Chen et al., 2018b), the authors optimize one operator at a time until all operators in a model have been optimized. We compare the performance of AutoTVM and DynaTune on how much optimization speedup we obtain as a function of the wall-clock time. Due to space limitations, we include four tasks, covering both CPU and GPU hardware: ResNet-18 (He et al., 2016) and SqueezeNet (Iandola et al., 2016) on CPU (Intel Xeon E5-2690 v3 @ 2.60 GHz), and VGG (Simonyan & Zisserman, 2015) and a Transformer Encoder on GPU (Nvidia Tesla P100); the models have K = 12, 18, 18, and 6 tunable operators, respectively.\nFig. 7 visualizes the results. The x-axis denotes the wall-clock time of optimizing the model. The y-axis denotes the lowest model latency obtained as time moves on. Overall, we observe that DynaTune's dynamic optimization converges significantly faster than the baseline most of the time and is 1.2–2.4 times faster than the baseline in achieving the lowest latency. The baseline has a much slower convergence, because it tries to find the optimal transformation plan for one operator before starting to optimize another one. Since there can be an order-of-magnitude difference between optimized and unoptimized code, the model latency remains high until the last operator has been optimized. In contrast, DynaTune is able to expedite the optimization significantly by reordering the optimization sequence of the operators and dynamically picking promising operators to optimize. As a result, DynaTune obtains the same optimization quality as the baseline but at a much faster speed. Furthermore, we also plot the optimization assuming access to the oracle information. For ResNet-18 and VGG, DynaTune gets optimization results close to the oracle. For SqueezeNet and Transformer, DynaTune converges slower than the oracle in the beginning but quickly catches up at around one third of the optimization time, presumably because it is more difficult for the Bayesian model to predict performance in the beginning than later, indicating room for improvement. These results confirm that a dynamic approach like DynaTune is capable of performing model-level code optimization in a much more efficient way than the existing approach." }, { "heading": "5.2 COMPARISON OF DYNATUNE WITH STATIC SCHEDULES", "text": "In this section, we compare the effectiveness of DynaTune with static allocation schemes. In particular, we compare with three mechanisms: (1) Random, which randomly assigns time slots to operators; (2) Round-robin, which assigns time slots in circular order; and (3) Linear, which allocates time linearly with respect to the number of floating-point operations each operator has. Fig. 8 shows that DynaTune consistently outperforms the other static schemes and achieves a 1.1–2.4 times speedup in obtaining the lowest latency. As mentioned in Sec. 3, static knowledge alone is insufficient for making well-suited scheduling decisions. In contrast, the improvement in DynaTune comes from constantly making decisions and replanning the allocation strategy based on new observations." }, { "heading": "5.3 COMPARISON OF DYNATUNE WITH ALTERNATIVE DYNAMIC SCHEMES", "text": "We also evaluate the effectiveness of our approach by comparing the following dynamic schemes: (1) Dynamic allocation (DA) + Random selection (Rand): randomly assigns time slots to operators and reallocates a time slot if the chosen operator has been early-terminated. (2) Dynamic allocation (DA) + Round-robin selection (RR): assigns time slots in circular order and keeps the other settings the same as the “DA + Rand” configuration. For all schemes, we use the same early-termination condition, e.g., an operator is considered finished if it has not seen any GFLOPS improvement for a given number of iterations (e.g., 200). Fig. 9 shows that, when not equipped with the Bayesian model (e.g., Rand and RR), the optimization converges relatively slower in reaching a lower latency. In contrast, DynaTune achieves a 1–1.4 times speedup in obtaining the lowest latency, presumably because the prediction helps to identify operators that potentially bring high latency reduction. We also evaluated DynaTune with two alternative action functions: ε-greedy and softmax sampling.
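For reference, minimal sketches of the two alternative action rules just mentioned; the softmax temperature tau is an assumed knob, since the paper does not report one.

import numpy as np

rng = np.random.default_rng(0)

def eps_greedy(expected_gain, eps=0.05):
    # exploit the highest expected latency reduction w.p. 1 - eps, explore otherwise
    if rng.random() < eps:
        return int(rng.integers(len(expected_gain)))
    return int(np.argmax(expected_gain))

def softmax_sample(expected_gain, tau=1.0):
    # stochastically sample an operator with probability proportional to exp(gain / tau)
    z = np.asarray(expected_gain) / tau
    p = np.exp(z - z.max())
    return int(rng.choice(len(expected_gain), p=p / p.sum()))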
As Fig. 14 shows, while UCB offers marginal gains in some cases, the performance of all three action functions is very similar (more results can be found in Appendix A), indicating that potentially all three action functions can be used for operator selection." }, { "heading": "5.4 COMPARISON WITH CHAMELEON", "text": "Chameleon (Ahn et al., 2020) is a recent work that uses reinforcement learning and adaptive sampling to efficiently explore the code transformation space of a tensor operator. In this part, we compare DynaTune with the open-sourced version of Chameleon.⁴ We follow the provided instructions and evaluate Chameleon on the same set of models: ResNet-18 and SqueezeNet on CPU, VGG and Transformer on GPU. Figures 10c and 10d show that although Chameleon is faster than the baseline AutoTVM on VGG and Transformer on GPU, it is slower than DynaTune in converging to the lowest latency. This is because, although the optimization speed of each operator has been improved by Chameleon, it still sequentially optimizes one operator at a time. Therefore, the overall convergence speed is still bounded by the least optimized tensor operators. In contrast, DynaTune focuses on improving the convergence speed of multi-tensor-operator optimization and dynamically allocates time to improve the overall convergence speed.\nWe also evaluate Chameleon on CPU, given that model inference is not uncommon on CPU (Wu et al., 2019a; Zhang et al., 2019; 2018). When running on CPU, somewhat surprisingly, we observe that Chameleon is slower than the baseline AutoTVM on ResNet-18 (Figure 10a) and SqueezeNet (Figure 10b). We analyze the performance and find that the RL optimizer in Chameleon adds a non-trivial amount of overhead compared with the default optimizer (SA + XGBoost) used in AutoTVM on CPU. As a result, although Chameleon reduces the hardware measurement cost and the number of iterations needed to find the best configuration, its optimization time on CPU is longer than that of the baseline AutoTVM because of this extra overhead, and it is therefore also slower than DynaTune on CPU. Overall, DynaTune is 1.4–4.7 times faster than Chameleon in reaching the same latency. Although DynaTune is faster than Chameleon, we want to point out that DynaTune can be combined with Chameleon to achieve better performance, at least on GPU.\n⁴https://github.com/anony-sub/chameleon" }, { "heading": "5.5 MORE ANALYSIS RESULTS", "text": "[Figures 11–14: inference time comparison; cost breakdown; DynaTune curve fitting; comparison with ε-greedy and softmax sampling on VGG.]\nInference time comparison. Fig. 11 compares the final inference times optimized by TVM, AutoTVM, and DynaTune, respectively. Overall, DynaTune achieves up to 26.7% faster inference speed than TVM and similar code performance to AutoTVM. AutoTVM and DynaTune achieve a much higher speedup on ResNet-18 and Transformer, presumably because the heuristic-based optimizations in TVM are sub-optimal. While achieving comparable optimization quality to AutoTVM, DynaTune significantly reduces the lengthy optimization time.\nSegment curve fitting. Fig. 13 gives an example of the curve fitting. It is easy to see that the data between iterations 1 and 80 can be approximated well by one segment, and the data between 80 and 260 can be approximated by another segment.
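The segmentation rule from Section 4.3 (cut wherever a jump exceeds a relative ∆ = 20% GFLOPS improvement) can be sketched as follows.

import numpy as np

def split_segments(perf, delta=0.20):
    """Cut the observation sequence wherever one step jumps by more than a
    relative `delta`, so each segment gets its own curve parameters."""
    cuts = [0]
    for i in range(1, len(perf)):
        if perf[i] > (1.0 + delta) * perf[i - 1]:
            cuts.append(i)
    cuts.append(len(perf))
    return [np.asarray(perf[a:b]) for a, b in zip(cuts, cuts[1:])]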
Cost breakdown. Fig. 12 shows the breakdown of the average time required to schedule a time slot (averaged across the four models), which involves the optimization time (e.g., 100 s) and the scheduling cost. Overall, the baseline (sequential), random, and round-robin schemes incur almost no scheduling overhead. Our Bayesian belief model takes time to do fitting (MLE and MCMC) and inference. However, it adds only 1.44 s, i.e., 1.4% overhead, which can be easily compensated by the time saved from picking a promising operator to optimize." }, { "heading": "6 RELATED WORK", "text": "DynaTune uniquely offers a solution that enables (i) Multi-Armed-Bandit-based dynamic multi-tensor-operator optimization and (ii) extrapolation of optimization performance by Bayesian inference in the context of (iii) optimizing DL compilers. As such, we discuss the related work from each of the three independent research directions.\nOptimizing compilers. TVM (Chen et al., 2018a) and TensorComprehensions (Vasilache et al., 2018) use simulated annealing and genetic algorithms to search for the best code transformation plan for neural networks. Subsequently, AutoTVM (Chen et al., 2018b) incorporates boosted decision trees as a surrogate model to reduce the number of real hardware measurements. CHAMELEON (Ahn et al., 2020) proposes to use reinforcement learning for efficient search space exploration. AdaTune (Li et al., 2020) cuts the cost of hardware measurement through an adaptive evaluator and allows optimizations to better adapt to hardware and model heterogeneity. However, while existing approaches study how to accelerate the convergence of individual operators, DynaTune focuses on improving the convergence speed of the entire model using Multi-Armed Bandit learning for dynamic optimization across all tensor operators. DynaTune can be combined with other approaches that speed up the convergence of individual operators to maximize gains. Program autotuning has also been studied in the context of generic programming, where black-box optimization (Ansel et al., 2014) and runtime information (Novillo, 2014) are utilized to generate optimized code. Rather than advancing the generic problem of program auto-tuning, our technique is designed and validated specifically for the DL model compilation problem.\nMulti-Armed Bandits. The Multi-Armed Bandit problem, first introduced by Robbins (Robbins, 1952), has, with various modifications, been used to model a plethora of dynamic optimization problems under uncertainty (Zelen, 1969; Bergemann & Välimäki, 1996; Kleinberg & Leighton, 2003; Awerbuch & Kleinberg, 2004; Bergemann & Hege, 2005; Caro & Gallien, 2007; Pandey et al., 2007). To the best of our knowledge, our work explores a different problem: optimizing DL compilers using MAB. Furthermore, our problem can be viewed as an instance of a rested, non-stationary MAB, which is complicated to solve since the evolution of the stochastic process depends on the choices made by the algorithm. We design new UCB-style algorithms that improve DL compilation in this specific non-stationary rested bandit scenario.\nBayesian Inference. Bayesian inference is a broad field and has been widely used in statistical analysis (Box & Tiao, 2011). For example, Bayesian inference has been used in hyperparameter tuning (Domhan et al., 2015; Lu et al., 2019). DynaTune shares similarities with these methods in building a Bayesian model for prediction, but it differs in its context and has its unique challenges. The Bayesian belief model we design predicts the operators that may lead to greater latency reduction while the optimization is running, in order to accelerate the process."
}, { "heading": "7 CONCLUSION", "text": "Although highly optimized code can be achieved through existing DL compilers, an obvious drawback is that they optimize operators one at a time, leading to slow convergence of optimization speed when the model has multiple operators. In this paper we have introduced a method, called DynaTune, which treats the optimization of multiple operators in a model as a whole and dynamically optimizes all operators to expedite convergence. Combined with a Bayesian belief model, the dynamic optimization prioritizes operators that have larger latency reduction. As a result, DynaTune achieves much faster convergence speed in getting optimized models, outperforming the state-of-the-art approaches." }, { "heading": "ACKNOWLEDGEMENT", "text": "The authors appreciate the anonymous ICLR reviewers for providing constructive feedback for improving the quality of this paper. All authors are not funded by any other agency." }, { "heading": "A COMPARISON WITH -GREEDY AND SOFTMAX SAMPLING", "text": "We compare DynaTune with two alternative operator selection methods: (1) -greedy: Like our approach, but UCB is replaced with -greedy, which select the operator with the highest expected latency reduction based on EI with probability 1− (e.g., =0.05) and randomly select an operator with to ensure exploration. (2) Softmax: Like our approach but UCB is replaced with softmax sampling, which stochastically samples operators to optimize based on expected latency reduction at each step. Fig. 15–Fig. 18 show the comparison results. With the Bayesian model enabled in cases, we observe that softmax sampling, -greedy, and UCB work very similarly across multiple models. We choose UCB in our design because it has been theoretically studied that UCB has a sublinear regret growth rate." }, { "heading": "B MCMC ANALYSIS", "text": "Fig. 19 plots the time series (i.e., trace) of the parameters in the Markov chains to assess the behavior of our MCMC sampling. It shows the positions of each walker (10) as a function of the number of steps in the chain. The walkers start in small distributions around the maximum likelihood values and then quickly wander and start exploring the full posterior distribution. In fact, after about 100 steps, the MCMC appear to have converged to stationarity (i.e., self-replicating). We also examine\nthe mean acceptance fraction of the ensemble, which is 0.54 by average, and it indicates that the MCMC sampler has generated a sufficient number of parameter samples that have also gained good information about the underlying distribution." }, { "heading": "C COMPARISON WITH ANSOR SCHEDULER", "text": "In this section, we compare our approach with a concurrent work called Ansor (Zheng et al., 2020). We use the default hyperparameters α = 0.2, β = 2, backward window size=3 in the open-sourced implementation 5 for the evaluation. We use the same evaluation methodology as ours by performing five independent runs of each configuration with different random seeds and reporting the median together with a 95% confidence interval. Figure 20a–20d show that both DynaTune and the Ansor scheduler outperform the baseline AutoTVM by a large margin. This is expected, because both approaches accelerate the convergence of model optimization through dynamic optimization. Figure 21a–21d show a more detailed comparison between DynaTune and Ansor. Overall, Ansor seems to reduce the latency faster in the beginning, but DynaTune always catches up and is often quicker to reach the lowest latency. 
}, { "heading": "B MCMC ANALYSIS", "text": "Fig. 19 plots the time series (i.e., trace) of the parameters in the Markov chains to assess the behavior of our MCMC sampling. It shows the positions of each of the 10 walkers as a function of the number of steps in the chain. The walkers start in small distributions around the maximum likelihood values, then quickly wander and start exploring the full posterior distribution. In fact, after about 100 steps, the chains appear to have converged to stationarity (i.e., they are self-replicating). We also examine the mean acceptance fraction of the ensemble, which is 0.54 on average; this indicates that the MCMC sampler has generated a sufficient number of parameter samples and has gained good information about the underlying distribution." }, { "heading": "C COMPARISON WITH ANSOR SCHEDULER", "text": "In this section, we compare our approach with a concurrent work called Ansor (Zheng et al., 2020). We use the default hyperparameters α = 0.2, β = 2, and backward window size = 3 in the open-sourced implementation 5 for the evaluation. We use the same evaluation methodology as for our approach, performing five independent runs of each configuration with different random seeds and reporting the median together with a 95% confidence interval. Figure 20a–20d show that both DynaTune and the Ansor scheduler outperform the baseline AutoTVM by a large margin. This is expected, because both approaches accelerate the convergence of model optimization through dynamic optimization. Figure 21a–21d show a more detailed comparison between DynaTune and Ansor. Overall, Ansor seems to reduce the latency faster in the beginning, but DynaTune always catches up and is often quicker to reach the lowest latency. For example, DynaTune converges to the lowest latency faster than Ansor on ResNet-18, VGG, and Transformer.\nAlthough Zheng et al. (2020) and our approach share a similar high-level idea of dynamically allocating time resources to different tasks, the exact mechanisms for allocating resources are very different. The task scheduler from Zheng et al. (2020) decides which task to optimize based on a heuristic score, a weighted sum between the latency reduction rate of a task in a recent small time window and an expected future latency reduction based on task similarity information. Estimating this score not only requires defining and heuristically adjusting similarity groups, but also requires two hyperparameters α and β that control which estimates to trust more. However, it is not immediately clear how such weights should be set and how to adapt them to different models. For example, the default hyperparameter settings may cause the scheduling to be overly greedy (i.e., getting stuck at a local optimum), which may explain why Ansor converges faster in the beginning but is slower to reach the best latency towards the end. In contrast, we use a Bayesian belief model to predict the expected latency reduction of each task in the next time slot, and all the free parameters in our belief model are taken care of by MCMC sampling. Furthermore, our approach takes uncertainty quantification into account, which presumably helps the search escape from local optima.\n5https://github.com/apache/incubator-tvm/blob/main/python/tvm/auto_scheduler/task_scheduler.py" } ]
2021
DYNATUNE: DYNAMIC TENSOR PROGRAM OPTIMIZATION IN DEEP NEURAL NETWORK COMPILATION
SP:fff5b8e98a9909fb289cd1455d381df4b75f01fe
[ "The objective of this paper is to present a benchmark of code understanding tasks in the spirit of GLUE benchmarks in NLP. Towards this, it designs 5 Java language tasks: NPath complexity, operator prediction, method naming, completion of method calls, and null dereference prediction. An evaluation on some common neural architectures is performed." ]
A multitude of machine learning models for source code have been proposed in the recent years capturing various aspects of the inherent rich structure and semantics of code. However, these models are commonly designed to perform well on a single task, failing to capture code’s multifaceted nature. To address this, we present GLUECode, Global and Local Understanding Evaluation of Code, a benchmark of diverse tasks to evaluate machine learning models of source code. Crucially, GLUECode accounts for the distinct characteristics of source code: (1) source code is highly structured and (2) source code is often composed of multiple interacting entities. Existing tasks incentivize researchers to create models and code representations that perform well on a single task commonly focusing on local reasoning. GLUECode aims to allow researchers to experiment with multiple local and global source code representations, and evaluate these models on their ability to capture the diverse characteristics of source code, thus driving the community towards building robust source code models incorporating global reasoning. We present results for several baselines. The GLUECode tasks are challenging for the evaluated baselines; no model achieves convincing performance across all tasks. This indicates that there is ample room for progress on GLUECode.
[ { "affiliations": [], "name": "A BENCHMARK" } ]
[ { "authors": [ "Miltiadis Allamanis" ], "title": "The adverse effects of code duplication in machine learning models of code", "venue": "In Proceedings of the 2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software,", "year": 2019 }, { "authors": [ "Miltiadis Allamanis", "Earl T Barr", "Christian Bird", "Charles Sutton" ], "title": "Suggesting accurate method and class names", "venue": "In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering,", "year": 2015 }, { "authors": [ "Miltiadis Allamanis", "Hao Peng", "Charles A. Sutton" ], "title": "A convolutional attention network for extreme summarization of source", "venue": "code. CoRR,", "year": 2016 }, { "authors": [ "Miltiadis Allamanis", "Marc Brockschmidt", "Mahmoud Khademi" ], "title": "Learning to represent programs with graphs", "venue": "CoRR, abs/1711.00740,", "year": 2017 }, { "authors": [ "Miltiadis Allamanis", "Earl T Barr", "Premkumar Devanbu", "Charles Sutton" ], "title": "A survey of machine learning for big code and naturalness", "venue": "ACM Computing Surveys (CSUR),", "year": 2018 }, { "authors": [ "Miltiadis Allamanis", "Earl T Barr", "Soline Ducousso", "Zheng Gao" ], "title": "Typilus: neural type hints", "venue": "arXiv preprint arXiv:2004.10657,", "year": 2020 }, { "authors": [ "Miltos Allamanis", "Daniel Tarlow", "Andrew Gordon", "Yi Wei" ], "title": "Bimodal modelling of source code and natural language", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Uri Alon", "Omer Levy", "Eran Yahav" ], "title": "code2seq: Generating sequences from structured representations of code. CoRR, abs/1808.01400, 2018a", "venue": "URL http://arxiv.org/abs/1808", "year": 2018 }, { "authors": [ "Uri Alon", "Meital Zilberstein", "Omer Levy", "Eran Yahav" ], "title": "A general path-based representation for predicting program properties", "venue": "CoRR, abs/1803.09544,", "year": 2018 }, { "authors": [ "Uri Alon", "Meital Zilberstein", "Omer Levy", "Eran Yahav" ], "title": "code2vec: Learning distributed representations of code. CoRR, abs/1803.09473, 2018c", "venue": "URL http://arxiv.org/abs/1803", "year": 2018 }, { "authors": [ "Uri Alon", "Roy Sadaka", "Omer Levy", "Eran Yahav" ], "title": "Structural language models of code, 2020", "venue": null, "year": 2020 }, { "authors": [ "Steven Arzt", "Siegfried Rasthofer", "Christian Fritz", "Eric Bodden", "Alexandre Bartel", "Jacques Klein", "Yves Le Traon", "Damien Octeau", "Patrick McDaniel" ], "title": "Flowdroid: Precise context, flow, field, object-sensitive and lifecycle-aware taint analysis for android apps", "venue": "Acm Sigplan Notices,", "year": 2014 }, { "authors": [ "Nathaniel Ayewah", "William Pugh", "David Hovemeyer", "J David Morgenthaler", "John Penix" ], "title": "Using static analysis to find bugs", "venue": "IEEE software,", "year": 2008 }, { "authors": [ "Pavol Bielik", "Veselin Raychev", "Martin Vechev" ], "title": "PHOG: probabilistic model for code", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Stephen M. Blackburn", "Robin Garner", "Chris Hoffmann", "Asjad M. Khang", "Kathryn S. McKinley", "Rotem Bentzur", "Amer Diwan", "Daniel Feinberg", "Daniel Frampton", "Samuel Z. 
Guyer" ], "title": "The dacapo benchmarks: Java benchmarking development and analysis", "venue": "SIGPLAN Not.,", "year": 2006 }, { "authors": [ "Marc Brockschmidt", "Miltiadis Allamanis", "Alexander L Gaunt", "Oleksandr Polozov" ], "title": "Generative code modeling with graphs", "venue": "arXiv preprint arXiv:1805.08490,", "year": 2018 }, { "authors": [ "Lutz Büch", "Artur Andrzejak" ], "title": "Learning-based recursive aggregation of abstract syntax trees for code clone detection", "venue": "IEEE 26th International Conference on Software Analysis, Evolution and Reengineering (SANER),", "year": 2019 }, { "authors": [ "Boxing Chen", "Colin Cherry" ], "title": "A systematic comparison of smoothing techniques for sentencelevel BLEU", "venue": "In Proceedings of the Ninth Workshop on Statistical Machine Translation,", "year": 2014 }, { "authors": [ "Daniel DeFreez", "Aditya V. Thakur", "Cindy Rubio-González" ], "title": "Path-based function embedding and its application to specification mining", "venue": "CoRR, abs/1802.07779,", "year": 2018 }, { "authors": [ "Etienne Denoual", "Yves Lepage" ], "title": "Bleu in characters: towards automatic mt evaluation in languages without word delimiters", "venue": "In Companion Volume to the Proceedings of Conference including Posters/Demos and tutorial abstracts,", "year": 2005 }, { "authors": [ "Zhangyin Feng", "Daya Guo", "Duyu Tang", "Nan Duan", "Xiaocheng Feng", "Ming Gong", "Linjun Shou", "Bing Qin", "Ting Liu", "Daxin Jiang", "Ming Zhou" ], "title": "Codebert: A pre-trained model for programming and natural languages, 2020", "venue": null, "year": 2020 }, { "authors": [ "Patrick Fernandes", "Miltiadis Allamanis", "Marc Brockschmidt" ], "title": "Structured neural summarization", "venue": "arXiv preprint arXiv:1811.01824,", "year": 2018 }, { "authors": [ "Robert Geirhos", "Jörn-Henrik Jacobsen", "Claudio Michaelis", "Richard Zemel", "Wieland Brendel", "Matthias Bethge", "Felix A. 
Wichmann" ], "title": "Shortcut learning in deep neural networks, 2020", "venue": null, "year": 2020 }, { "authors": [ "Vincent J Hellendoorn", "Premkumar Devanbu" ], "title": "Are deep neural networks the best choice for modeling source code", "venue": "In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering,", "year": 2017 }, { "authors": [ "Vincent J Hellendoorn", "Sebastian Proksch", "Harald C Gall", "Alberto Bacchelli" ], "title": "When code completion fails: A case study on real-world completions", "venue": "IEEE/ACM 41st International Conference on Software Engineering (ICSE),", "year": 2019 }, { "authors": [ "Vincent J Hellendoorn", "Charles Sutton", "Rishabh Singh", "Petros Maniatis", "David Bieber" ], "title": "Global relational models of source code", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Thong Hoang", "Julia Lawall", "Yuan Tian", "Richard J Oentaryo", "David Lo" ], "title": "Patchnet: Hierarchical deep learning-based stable patch identification for the linux kernel", "venue": "IEEE Transactions on Software Engineering,", "year": 2019 }, { "authors": [ "Hamel Husain", "Ho-Hsiang Wu", "Tiferet Gazit", "Miltiadis Allamanis", "Marc Brockschmidt" ], "title": "Codesearchnet challenge: Evaluating the state of semantic code search, 2020", "venue": null, "year": 2020 }, { "authors": [ "Aditya Kanade", "Petros Maniatis", "Gogul Balakrishnan", "Kensen Shi" ], "title": "Learning and evaluating contextual embedding of source code, 2020", "venue": null, "year": 2020 }, { "authors": [ "Rafael-Michael Karampatsis", "Hlib Babii", "Romain Robbes", "Charles Sutton", "Andrea Janes" ], "title": "Big code!= big vocabulary: Open-vocabulary models for source code", "venue": "arXiv preprint arXiv:2003.07914,", "year": 2020 }, { "authors": [ "Seohyun Kim", "Jinman Zhao", "Yuchi Tian", "Satish Chandra" ], "title": "Code prediction by feeding trees to transformers, 2020", "venue": null, "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "A. LeClair", "S. Jiang", "C. McMillan" ], "title": "A neural model for generating natural language summaries of program subroutines", "venue": "IEEE/ACM 41st International Conference on Software Engineering (ICSE),", "year": 2019 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach, 2019", "venue": null, "year": 2019 }, { "authors": [ "Chris J. Maddison", "Daniel Tarlow" ], "title": "Structured generative models of natural source", "venue": "code. CoRR,", "year": 2014 }, { "authors": [ "Pedro Martins", "Rohan Achar", "Cristina V. 
Lopes" ], "title": "50k-c: A dataset of compilable, and compiled, java projects", "venue": "In Proceedings of the 15th International Conference on Mining Software Repositories,", "year": 2018 }, { "authors": [ "Bryan McCann", "Nitish Shirish Keskar", "Caiming Xiong", "Richard Socher" ], "title": "The natural language decathlon: Multitask learning as question answering", "venue": "CoRR, abs/1806.08730,", "year": 2018 }, { "authors": [ "Lili Mou", "Ge Li", "Zhi Jin", "Lu Zhang", "Tao Wang" ], "title": "TBCNN: A tree-based convolutional neural network for programming language processing", "venue": "CoRR, abs/1409.5718,", "year": 2014 }, { "authors": [ "Lili Mou", "Ge Li", "Lu Zhang", "Tao Wang", "Zhi Jin" ], "title": "Convolutional neural networks over tree structures for programming language processing", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Brian A Nejmeh" ], "title": "Npath: a measure of execution path complexity and its applications", "venue": "Communications of the ACM,", "year": 1988 }, { "authors": [ "Michael Pradel", "Koushik Sen" ], "title": "Deep learning to find bugs", "venue": "TU Darmstadt, Department of Computer Science,", "year": 2017 }, { "authors": [ "Michael Pradel", "Koushik Sen" ], "title": "Deepbugs: A learning approach to name-based bug detection, 2018", "venue": null, "year": 2018 }, { "authors": [ "Veselin Raychev", "Martin Vechev", "Andreas Krause" ], "title": "Predicting program properties from ”big code", "venue": "In Proceedings of the 42Nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages,", "year": 2015 }, { "authors": [ "Romain Robbes", "Michele Lanza" ], "title": "Improving code completion with program history", "venue": "Automated Software Engineering,", "year": 2010 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael S. Bernstein", "Alexander C. Berg", "FeiFei Li" ], "title": "Imagenet large scale visual recognition challenge", "venue": "CoRR, abs/1409.0575,", "year": 2014 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units", "venue": "arXiv preprint arXiv:1508.07909,", "year": 2015 }, { "authors": [ "Susan Sim", "Steve Easterbrook", "R.C. Holt" ], "title": "Using benchmarking to advance research: A challenge to software engineering", "venue": "pp. 74– 83,", "year": 2003 }, { "authors": [ "Jeffrey Svajlenko", "Chanchal K Roy" ], "title": "Evaluating clone detection tools with bigclonebench", "venue": "IEEE International Conference on Software Maintenance and Evolution (ICSME),", "year": 2015 }, { "authors": [ "Yaza Wainakh", "Moiz Rauf", "Michael Pradel" ], "title": "Evaluating semantic representations of source code, 2019", "venue": null, "year": 2019 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R. Bowman" ], "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "venue": "CoRR, abs/1804.07461,", "year": 2018 }, { "authors": [ "Alex Wang", "Yada Pruksachatkun", "Nikita Nangia", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R. 
Bowman" ], "title": "Superglue: A stickier benchmark for general-purpose language understanding systems", "venue": "CoRR, abs/1905.00537,", "year": 2019 }, { "authors": [ "Ke Wang" ], "title": "Learning scalable and precise representation of program", "venue": "semantics. ArXiv,", "year": 2019 }, { "authors": [ "Ke Wang", "Mihai Christodorescu" ], "title": "COSET: A benchmark for evaluating neural program embeddings", "venue": "CoRR, abs/1905.11445,", "year": 2019 }, { "authors": [ "Yanlin Wang", "Lun Du", "Ensheng Shi", "Yuxuan Hu", "Shi Han", "Dongmei Zhang" ], "title": "Cocogum: Contextual code summarization with multi-relational gnn on umls, 2020", "venue": null, "year": 2020 }, { "authors": [ "Huihui Wei", "Ming Li" ], "title": "Supervised deep features for software functional clone detection by exploiting lexical and syntactical information in source code", "venue": "In IJCAI,", "year": 2017 }, { "authors": [ "Jiayi Wei", "Maruth Goyal", "Greg Durrett", "Isil Dillig" ], "title": "Lambdanet: Probabilistic type inference using graph neural networks", "venue": "arXiv preprint arXiv:2005.02161,", "year": 2020 }, { "authors": [ "Jason Weston", "Antoine Bordes", "Sumit Chopra", "Alexander M Rush", "Bart van Merriënboer", "Armand Joulin", "Tomas Mikolov" ], "title": "Towards ai-complete question answering: A set of prerequisite toy tasks", "venue": "arXiv preprint arXiv:1502.05698,", "year": 2015 }, { "authors": [ "Martin White", "Michele Tufano", "Christopher Vendome", "Denys Poshyvanyk" ], "title": "Deep learning code fragments for code clone detection", "venue": "In 2016 31st IEEE/ACM International Conference on Automated Software Engineering (ASE),", "year": 2016 }, { "authors": [ "William E Winkler" ], "title": "String comparator metrics and enhanced decision rules in the fellegi-sunter model of record linkage", "venue": null, "year": 1990 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rémi Louf", "Morgan Funtowicz", "Joe Davison", "Sam Shleifer", "Patrick von Platen", "Clara Ma", "Yacine Jernite", "Julien Plu", "Canwen Xu", "Teven Le Scao", "Sylvain Gugger", "Mariama Drame", "Quentin Lhoest", "Alexander M. Rush" ], "title": "Huggingface’s transformers: Stateof-the-art natural language processing, 2020", "venue": null, "year": 2020 }, { "authors": [ "Pengcheng Yin", "Graham Neubig" ], "title": "A syntactic neural model for general-purpose code generation", "venue": "arXiv preprint arXiv:1704.01696,", "year": 2017 }, { "authors": [ "Pengcheng Yin", "Bowen Deng", "Edgar Chen", "Bogdan Vasilescu", "Graham Neubig" ], "title": "Learning to mine aligned code and natural language pairs from stack overflow", "venue": "In International Conference on Mining Software Repositories,", "year": 2018 }, { "authors": [ "Wojciech Zaremba", "Ilya Sutskever" ], "title": "Learning to execute", "venue": "arXiv preprint arXiv:1410.4615,", "year": 2014 }, { "authors": [ "REPRESENTATIONS A" ], "title": "THE 50K-C DATASET The projects in 50K-C (Martins et al., 2018) where harvested from GitHub, and selected as they included a build script which made automated compilation of the dataset available. We need compilable projects", "venue": null, "year": 2018 }, { "authors": [ "Allamanis" ], "title": "This representation allows us to also extract the AST and token representations, by simply omitting unnecessary edges. 
Note that compiling projects and extracting feature graphs both took several weeks to simply execute. Of note, these feature graphs are at the file level, not the project level. We thus use the Java call graph extractor (https://github.com/gousiosg/java-callgraph) of Georgios Gousios", "venue": null, "year": 2017 }, { "authors": [ "Adam (Kingma", "Ba" ], "title": "optimizer, and use sparse categorical cross-entropy as our loss since we are going to use the same model for classification and generation (this models treat generation as classification over the entire vocabulary). BiLSTM: A model with an embedding layer of vocabulary size 10,000, embedding dimension 64, and input maximum length", "venue": null, "year": 2014 }, { "authors": [ "bAbI Tasks Weston" ], "title": "NLP tasks in simple question-answering format intended to test dialogue agents on natural language understanding. bAbI aimed to provide a yardstick for researchers to assess their NLP models for intelligent dialogue agents. The tasks in bAbI are artificial, but measure specific aspects of reading", "venue": null, "year": 2015 }, { "authors": [ "Fernandes" ], "title": "combine information from multiple sources, such as token sequences, ASTs, control-flow, data-flow graphs etc. of a program to generate feature graphs, which consider long-range dependencies and the structural nature of source code, to reason over source code", "venue": "Feature Graphs Allamanis et al", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years, there has been considerable interest in researching machine learning models on source code artifacts. Machine learning models have been used to address a variety of software engineering tasks, as the inherent rich structure of code has allowed machine learning researchers to explore new models and ideas. However, research has focused on single-purpose application models, targeting a single task each time while using varying source code representations and datasets. This impedes progress towards general-purpose machine learning models of code that can learn and reason across many tasks.\nIn this work, we present GLUECode (Global and Local Understanding Evaluation of Code), with the goal of measuring progress in source code modelling across a range of tasks that account for the diverse characteristics of software and require diverse reasoning capabilities over several thousands of software projects. As GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019) does for natural language, GLUECode highlights important aspects of reasoning about code: (1) since code in software is composed of multiple interacting entities, it includes tasks that leverage both local (single method) and global (multiple inter-related methods, information beyond the local method) reasoning to varying degrees. This is in contrast to most tasks and models that have been introduced so far that focus on local reasoning; (2) since source code mixes structured and unstructured information, GLUECode tasks leverage both kinds of information, and (3) since the space of modelling choices is large, we provide several source code representations ranging from raw text to abstract syntax trees (AST) and graph representations, lowering the barrier to entry and ease of experimentation.\nThe design space for source code models is extremely large and spans a wide range of source code representations. These range from the simplest (software metrics and n-grams), to very complex that fully take advantage of the structure and semantics of source code (such as graph-based representations). Even seemingly simple choices, such as how to preprocess identifiers, can be handled in many different ways and have disproportionate impact (Karampatsis et al., 2020). GLUECode aims to provide a unified benchmark to explore this design space.\nWe provide performance results on a set of baselines, ranging from simple neural architectures such as LSTMs and CNNs, to variants of pre-trained transformers. These models leverage purely local reasoning and limited amounts of structural information. We show that existing models perform well in a few tasks but fail to yield good results in others: In contrast to NLP, where (pre-trained) transformers outperform other models, we find that no single model of code consistently outperforms the others in all tasks.\nFinally, while models can be evaluated on any single task in the benchmark in isolation (as the field is presently doing), a long-term goal of GLUECode is the creation of unified multi-task source code models that perform well across multiple tasks. A source code model that is jointly trained and can perform well on all the task in the benchmark would be a significant step towards more versatile models, that can, beyond the tasks they were trained, also adapt to downstream tasks, especially when there is not enough data. 
Given the performance of our baselines in the single-task scenario, defining a model that performs well across the board is thus very much an open problem." }, { "heading": "2 THE GLUECODE BENCHMARK", "text": "Benchmarks are a common practice in machine learning and NLP, prominently featuring GLUE and SuperGLUE (Wang et al., 2018; 2019) among others. In the domain of machine learning on source code, several benchmarks have been proposed. However, in contrast to GLUECode, they consider relatively local contexts and do not incentivize non-local reasoning: IdBench looks at identifiers (Wainakh et al., 2019), BigCloneBench (Svajlenko & Roy, 2015) and OJClone (Mou et al., 2016) at clone detection, and CodeSearchNet at function-level text-to-code search (Husain et al., 2020). Finally, COSET concerns classifying small programs by their functionality in 38 classes (Wang & Christodorescu, 2019), and CoNaLa is a line-level text-to-code generation benchmark (Yin et al., 2018). In this section, we provide an overview of GLUECode. We first describe the software-specific characteristics that impact the choice of tasks, before detailing the dataset and the tasks involved. Details about other related benchmarks can be found in Appendix D." }, { "heading": "2.1 LOCAL VERSUS GLOBAL CONTEXT", "text": "Most existing machine learning models of source code work at the level of a single function or method. We call these local models, as they reason over the local context of a single software entity. This is in contrast to global models that reason over multiple software entities and scales. Global models are highly desirable, since software systems are composed of multiple entities, such as modules and functions, that communicate with each other. This composition of communicating entities dictates the behavior of a software system. For instance, a function may have a radically different behavior depending on its arguments. Indeed, small local changes can manifest in large changes in behaviour in distant program locations. Only global models will be able to detect that. To push forward the state of the art, it is thus critical to focus on global models.\nFully global models are currently out of reach, but GLUECode incentivizes building models that feature some form of global reasoning in addition to local reasoning. Existing work uses simplified projections of global representations: the GNN works of Allamanis et al. (2017; 2020) look solely at file-level tokens, syntax, data and control flow information. CocoGum (Wang et al., 2020) uses class context represented as abstracted UML diagrams. LambdaNet extracts type dependencies in JavaScript into a single graph (Wei et al., 2020) for a few mid-sized projects (500-10k lines of code), ignoring syntactic information, code comments, etc. Finally, Func2Vec (DeFreez et al., 2018) computes function embeddings over an interprocedural call graph, ignoring local syntax, function arguments, etc. An extended related work discussion can be found in Appendix D.\nInstead, to reason over global contexts, two limitations need to be overcome. First, time-consuming interprocedural static analyses need to be performed at scale; these require compiling projects and resolving all their dependencies. In GLUECode, we take a step in this direction by using the largest publicly available corpus of compilable Java code (Sec. 2.3). Second, existing methods do not operate well on large and sparse inputs, and thus representations are tailored to use only the necessary information.
In GLUECode, we provide access to a variety of representations and propose a set of tasks that cannot be solved by focusing solely on local or global information (Sec 2.2)." }, { "heading": "2.2 FLEXIBILITY IN REPRESENTATIONS OF CODE", "text": "Representations of source code in machine learning are a central topic of research. Source code has a known rich structure, as it can be unambiguously parsed, while valuable information is present in identifiers, literals, and comments, which are unstructured. As a result, there has been sustained work in exploring architectures and representations that leverage the different structural aspects of software, ranging from treating software as a textual artifact to tree- and graph-based representations. These representations come with distinct trade-offs.\nSequence-level models treating source code as text are simpler and easy to scale to large amounts of data, at the expense of obscuring the information carried by the distinct structural inter-relations in code. LSTM (Zaremba & Sutskever, 2014), CNN (Allamanis et al., 2016) and transformer (Husain et al., 2020; Kanade et al., 2020; Feng et al., 2020) based models for source code have been explored. Meanwhile, more structured models commonly learn from less data thanks to the provided structure, but are harder to scale as they require extensive pre-processing. Such models use a program's abstract syntax tree (AST) in Tree-LSTMs (Wei & Li, 2017) and tree-based CNNs (Mou et al., 2014), use linearized forms fed to sequence models (LeClair et al., 2019; Kim et al., 2020), or linearize the AST as bags of AST paths (Alon et al., 2018c;a). Graph representations have been used in conjunction with GNNs (Allamanis et al., 2017; Brockschmidt et al., 2018; Wei et al., 2020) and have been recently combined with RNNs and (relational) transformers (Hellendoorn et al., 2019b).\nYet, most of these works are evaluated on a single task, yielding limited insights on the trade-offs of various representations and models. GLUECode's goal is to ease experimentation across representation and modelling choices on a variety of local and global tasks. To achieve this, we provide several pre-processed representations at the level of source code files: raw text, tokenized code, abstract syntax trees, graph representations (as in Allamanis et al. (2017)), and bags of AST paths (as in Alon et al. (2018c;a)). For global context, we provide project-level call graphs. Across all representations, source code entities (methods and classes) are identified via a Universally Unique Identifier (UUID), and can be linked together. Appendix A provides details and examples.\nModelling decisions have a significant impact on the performance of models, and many different representations are possible, especially when considering models that perform global reasoning. GLUECode tasks are defined as a mapping from the UUID of the entity of interest to its label. Researchers can build their own input representations based on how they want to solve GLUECode; this allows them to combine the preprocessed representations as they see fit. GLUECode provides an API to efficiently access these representations to define the model. We show examples of the representations in Appendix A." }, { "heading": "2.3 DATA", "text": "Performing pre-processing at scale is very challenging and time-consuming. To extract the representations and some of the labels for the tasks, we use a variety of tools.
Some of these tools perform extensive static analyses, and for this reason they require code that is compilable. Automatically compiling large amounts of arbitrary code is surprisingly difficult, as some systems may have convoluted build processes, or depend on a large number of libraries that may need to be present at compile time. We restrict our scope to Java, since it is a popular language with many mature projects and extensive tool support. To ease this task, our starting point is the 50K-C dataset (Martins et al., 2018), a set of 50,000 compilable Java projects extracted from GitHub. Of the 50,000 projects in 50K-C, many are too small to represent realistic software projects, such as projects authored by students. This is why we restrict our scope to projects that have 50 or more Java files. This leaves us with 6,925 projects, of which we were able to compile ∼5,300. These projects have a combined total of 371,492 class files, and 2,361,111 method declarations. Once the projects are compiled, we run additional tools to extract all the representations, and extract some of the labels that the tasks need. Note that the entire process took several months, which we thus spare other researchers: simply trying to compile ∼7,000 projects is a weeks-long endeavour. We provide additional data processing details in Appendix A." }, { "heading": "2.4 THE GLUECODE TASKS", "text": "To incentivize the community to develop models that leverage the structured and unstructured nature of code to perform global reasoning, we define several tasks that cover a spectrum in terms of reliance on the structure of code and the need for non-local reasoning. Thus, each of the five GLUECode tasks is meant to test different reasoning capabilities of a model. An overview is shown in Table 1. We describe the tasks next and provide an extended discussion on the design of each task in Appendix B, including discussion of alternatives we discarded. Figure 1 shows what each task looks like in an artificial snippet. Note that global tasks may need additional context; for instance, a caller of countBlueElements passing a buffer that triggers a null dereference.\nTask Selection Rationale. We selected five tasks: three are inspired by practical scenarios, while two have labels generated by static analyzers. Models that succeed at the Operator Prediction task may be used to spot potential bugs in existing code (Pradel & Sen, 2018); models that succeed at Method Naming may be used to provide refactoring recommendations on legacy code bases; and models that succeed at Code Completion may be integrated in an IDE's code completion engine. For the two tasks that have labels generated by static analyzers (NPath complexity and NullToken), we are not interested in merely replicating these programs. Rather, our goal is to incentivize the development of neural architectures that can demonstrate these forms of reasoning (fine-grained reasoning about the control and data flow of programs, both locally and globally), so that future models may incorporate these forms of reasoning to succeed in more practical tasks.\nTask format and metrics. Two tasks in GLUECode are classification tasks, while the three others are sequence generation tasks. We initially wanted all the tasks to use the same format, for simplicity and uniformity. However, this proved too restrictive, as it severely limited the tasks that we could include, or led to variants of the tasks that were too easy. The sequence generation tasks use different metrics, to fit more closely the scenario they represent. Since all performance metrics range between 0 and 1, we simply average them to obtain an overall score for a given model, as illustrated below.
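As a concrete illustration of the scoring, a minimal sketch; the per-task numbers below are purely illustrative placeholders, not the values from Table 2.

# All GLUECode metrics lie in [0, 1]; the overall score is their mean.
results = {"npath": 0.40, "operators": 0.75, "naming": 0.25,
           "completion": 0.50, "nulltoken": 0.20}  # hypothetical values

gluecode_score = sum(results.values()) / len(results)
print(f"GLUECode score: {gluecode_score:.3f}")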
Unit of interest. In all GLUECode tasks, the unit of interest is a method. Thus, for each task, the dataset is a mapping from a unique method ID to a label. As part of pre-processing, researchers can retrieve the representation they wish, including related source code entities (e.g., callers and callees of the current method). Note that we mask information that could lead to data leakage in these additional source code entities (e.g., for the method naming task, we mask the method call in the callers). To further prevent data leakage, for tasks that rely on global context, the training, validation, and test sets are split at the project level, such that samples from projects in the validation and test sets are unseen during training. We also provide a development set.\nSize of datasets. The size of the dataset is dictated by several factors. Overall, we are limited by the number of projects we have analyzed, as adding more projects requires a significant pre-processing effort. For tasks like Method Naming and Code Completion we have about a million samples per task. For other tasks (e.g., NullToken), the number of available examples is limited, as the analysis is expensive to run and returns a small number of examples. For classification tasks, some classes are less common, and we take as many samples as possible across all classes to have a balanced dataset. While several other works propose larger datasets, which may be more desirable for some purposes, we note that smaller datasets have two advantages: they ease the computational burden, and they incentivize the community to work towards more sample-efficient models. Moreover, other models may use the pre-training paradigm to generate convincing results with limited samples." }, { "heading": "2.4.1 NPATH COMPLEXITY", "text": "NPath complexity prediction is purely structural and local: it can be solved while fully ignoring identifiers and non-local context. We used PMD to extract the NPath code complexity metric (Nejmeh, 1988), which counts the number of distinct paths control flow can take in a method. To succeed at this task, a model needs to keep track of the control structures and how they relate to each other (e.g., via nesting). It needs to do this while considering the entire scope of each method. The task is formulated as a classification task, with a balanced set of 12 complexity buckets. Note that since NPath is unevenly distributed, we use buckets that redistribute the complexity values in our dataset evenly. The target metric is classification accuracy." }, { "heading": "2.4.2 OPERATOR PREDICTION", "text": "The second task involves mostly local reasoning, but in contrast to NPath complexity, it leverages both structured and unstructured information. The task consists of predicting a masked operator in the method body, similar to DeepBugs (Pradel & Sen, 2018). This involves structural reasoning, as the context is useful in determining the type of operators (e.g., is the operator in an if condition?), as well as reasoning over the identifier names, which may embed information valuable in determining the operator type (e.g., an identifier "maxQuantity"). A minimal sketch of how such a sample is constructed follows.
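In this sketch, the mask token name is illustrative; the 12 operator classes are those listed in Appendix B.2. It assumes the tokenized method contains at least one of these operators, which the task construction guarantees.

import random

# The 12 operator classes of the task (see Appendix B.2).
OPERATORS = ["+", "-", "*", "/", "%", "=", "==", "!=", "<", ">", "<=", ">="]

def make_sample(tokens, mask_token="<MASK>"):
    # Mask a single operator occurrence, even if several are present,
    # and return the masked token sequence plus the class label.
    positions = [i for i, tok in enumerate(tokens) if tok in OPERATORS]
    i = random.choice(positions)
    label = OPERATORS.index(tokens[i])
    return tokens[:i] + [mask_token] + tokens[i + 1:], label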
While we expect the task to mostly rely on local reasoning in the method body, non-local reasoning may be helpful too (e.g., getting type information from instance variables or method return types).\nThe task has 12 classes spanning the most common operators: the 5 arithmetic operators (basic operations and modulo), six Boolean comparison operators, and the assignment operator. The classes are balanced, and we use accuracy as a metric. For each method, a single operator is masked, even if there are multiple operators present in the method." }, { "heading": "2.4.3 METHOD NAMING IN CONTEXT", "text": "In the method naming task (Allamanis et al., 2016; Alon et al., 2018c), the method name is masked and needs to be predicted. This can be seen as a summarization task (of the method body). A model must reason over the body, both at the level of the structure (control and data flow) and at the level of identifiers, to succeed at this task.\nWhile most existing formulations of the task have been restricted to using the method body, GLUECode does not impose such a restriction; indeed, we expect adding additional context, such as class-level information and information from the calling contexts, to lead to performance improvements. For instance, having access to the class context may allow a model to better leverage naming conventions of the project. Likewise, useful information may be found in method usages (invocations), such as the names or values given to the parameters or the return value. Thus, GLUECode provides the facilities to incorporate such information in models and representations. Note that to avoid data leakage, we mask the target method name in each caller's context, across representations. In contrast to traditional method naming, we use character-level BLEU as an evaluation metric. The rationale is that it is independent of tokenization (Denoual & Lepage, 2005), and reduces the weight of common but short subwords such as "get" (see Appendix B for details)." }, { "heading": "2.4.4 CODE COMPLETION IN CONTEXT", "text": "Code completion is another task that has been used to evaluate recommendation algorithms (Robbes & Lanza, 2010) and source code models, particularly autoregressive language models (Hellendoorn & Devanbu, 2017; Karampatsis et al., 2020). We recast the task as a masked language modelling task, similar to Alon et al. (2020). Framing code completion as masked language modelling allows models to leverage both the preceding and the following context, which makes the task relevant in a scenario where a programmer modifies existing code. Furthermore, we restrict the task to predicting only method calls, not other types of tokens. This has two benefits: 1) it makes the task more challenging by removing tokens that are very easy to predict, such as parentheses and semicolons, and 2) it emphasizes the tokens for which non-local reasoning is beneficial.\nSince the goal is to predict a method call inside a method body, the whole project scope is relevant. While in method naming models summarize an entire method body in a new (possibly unseen) name, in code completion a model should identify which of the existing method calls fits. These methods could be defined in the same class, in another class or package in the system, or imported from a dependency.
This makes the method completion task much more amenable to performance improvements when the non-local context is taken into account.\nFor this task, GLUECode uses exact match accuracy: models should generate the exact masked token. Unlike method naming, a close match is not valid (in a practical scenario, a close match would likely result in an error). The call graph representation of the system hides any links between the target and the called method, to avoid data leakage." }, { "heading": "2.4.5 NULL DEREFERENCE PREDICTION", "text": "The last task is null dereference prediction. This task should benefit the most from non-local reasoning. To succeed at this task, models should be able to reason across the control flow and the data flow of several methods at once. For this task, we use the Infer static analyzer (Facebook, 2015) to find possible null dereferences. Infer performs full-program static analysis to track the possible values of variables, and emits warnings when it finds a possible execution path in which a null pointer dereference can occur. These execution paths can span several methods, across several files, and point to the line number and exact token at which the null dereference can occur. Most of the warnings emitted by Infer require non-local reasoning (except those where the detected execution path does not exit the method body). We ran Infer on all the projects in the dataset. Since Infer's analysis is precise, it does not produce many warnings (∼20,000 in total), unlike other static analysis tools such as FindBugs (Ayewah et al., 2008), which are more prone to false positives.\nThe goal of the task is to output the token where the null dereference may occur. Similar to code completion, we measure accuracy, considering only exact matches. We also added 20% negative examples, for which the model has to output a special token signifying that no null dereference warning could be found, to incentivize models to account for this eventuality. Thus, a naive baseline always predicting this token would have a maximum accuracy of 20%." }, { "heading": "3 EVALUATION", "text": "We provide performance results for several simple baselines (MLPs, LSTMs and CNNs), as well as a more advanced model: a pre-trained transformer. All these models perform local reasoning and treat the source code as a sequence of tokens. There are, of course, many more advanced models that could be evaluated on GLUECode, starting with models that are limited to local reasoning but also exploit source code's structure, such as Tree-LSTMs, linearized ASTs, or Graph Neural Networks. The space of possibilities grows even further if we consider models that incorporate non-local reasoning; otherwise, there would not be a need for GLUECode in the first place. Thus, the baselines we provide should be taken as a starting point, giving insights on the lower bound exhibited by simple baselines, as well as the performance of pre-trained transformers that are closer to the state of the art. Significant exploration of the performance of models lies ahead, a task for which we welcome the involvement of the community.\nMLP. A simple Multilayer Perceptron with a single hidden layer, intended to represent a very simple but non-naive baseline. The input embedding layer accepts a maximum of 200 tokens. The single dense hidden layer has 64 hidden units. The output layer is a softmax layer over all the classes for classification, or over the entire vocabulary for the generation tasks."
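A minimal Keras sketch of this baseline; the vocabulary size and embedding dimension, the flattening of the embedded sequence, and the optimizer and loss choices are assumptions here, as the text does not pin them down for the MLP.

import tensorflow as tf

num_classes = 12  # e.g., the NPath buckets; the vocabulary size for generation tasks
vocab_size, embed_dim, max_len = 10_000, 64, 200  # assumed hyperparameters

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim, input_length=max_len),
    tf.keras.layers.Flatten(),                       # turn the embedded sequence into one vector
    tf.keras.layers.Dense(64, activation="relu"),    # the single hidden layer
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])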
}, { "heading": "Model NPath Operators Naming Completion NullToken", "text": "CNN. A Convolutional Neural Network, with an embedding layer, followed by a 1D convolution layer of size 5, and by a global average pooling layer. These are followed by a dense hidden layer and an output layer similar to the MLP above. We use it to explore the impact of the inductive bias of convolution on the GLUECode tasks.\nBiLSTMs A Bidirectional sequential model, where the embedding layer is followed by a single bidirectional LSTM layer, a dense layer and the output layer. It also uses a softmax layer for all tasks (predicting tokens over all the vocabulary for sequence generation tasks).\nSeq2Seq/Seq2Tok Another LSTM variant that uses a unidirectional encoder-decoder architecture and predict tokens as sequences of camelCase-separated subtokens (Seq2Seq), or a single token for the classification tasks (Seq2Tok). Both variants allow us to explore the impact of the sequential inductive bias. Seq2Seq and Seq2Tok allow us to reduce the impact of OoV tokens as we use subtokens.\nTransformer. We include a stronger baseline, a Transformer, to explore the impact of the popular NLP pre-training then fine-tune paradigm. CodeBERTa is a pre-trained, 6-layer Transformer trained on the CodeSearchNet challenge dataset (Husain et al., 2020) by HuggingFace. We fine-tune it separately on each task. We chose this as our stronger baseline since pretrained transformers for code have performed very well on other tasks (Kanade et al., 2020)" }, { "heading": "3.1 RESULTS", "text": "The baseline evaluation results on the GLUECode tasks are presented in Table 2 above.\nOverall, we see that the Transformer exhibits higher performance on the first four tasks (NPath prediction, Operator prediction, Method naming), but is only having reasonably acceptable performance on the first two tasks (Npath prediction and Operator prediction), which are the most local ones. For the tasks which have some globalness aspect to it, the transformers have an average accuracy of 40% with highest score being barely above the fifty percent threshold for the method call completion task. Even in the local tasks, where the transformers score well, there is still a margin for improvement of more than 20%.\nIt is important to note here that unlike method naming, completion task has many labels (method api calls) which belong to the Java standard library, such as println(), toString() etc. which are commonly used, and which are easier to predict for DL models (Hellendoorn et al.,2019a). About 20% of the dataset consist of standard library method calls. This might explain why the models perform better in comparison solely against the method naming task.\nWe suspect that we may have over-sampled API methods, which are easier to predict for DL models. We are considering making the task more challenging by using stratified sampling, to force the sample to have more locally defined methods than it has now." }, { "heading": "4 DISCUSSION", "text": "There is ample room for improvement. Our goal was to provide tasks that are challenging for models that employ only local reasoning. None of the models have high performance across all the tasks; struggling on most tasks. While we expect state of the art structured models (e.g., using ASTs or graphs) to perform better on the tasks requiring mostly local reasoning, we do not except that they will reach acceptable performance on the tasks that require non-local reasoning.\nIncorporating non-local reasoning. 
Significant improvements are required to develop models that better handle more global context. We expect that simple solutions, such as growing models to accommodate more context, will hit diminishing returns as the size of the input grows considerably. Better strategies will need to be devised.\nImpact of inductive bias. On some tasks, the performance of the models varies widely. We hypothesize that the inductive bias of some of the models is not a good fit for some tasks. For instance, the Transformer trained with the MLM objective works very well for operator prediction (even without fine-tuning!), but the MLP outperforms it on the NullToken task.\nMulti-task models. While a longer-term goal is to define multi-task models that perform well on all the tasks in the benchmark, the tasks proved challenging enough that we expect most short-term development should be geared towards single-task performance first." }, { "heading": "4.1 LIMITATIONS OF THE BENCHMARK", "text": "Additional software characteristics. With GLUECode, we focus on two principal characteristics of software: the fact that it is structured, and that non-local reasoning is necessary. There are other characteristics we did not take into account, such as the prevalence of natural language comments (Allamanis et al., 2015b), the fact that code can be executed (Wang, 2019), or that it evolves (Hoang et al., 2019). Other benchmarks, or an extension of GLUECode, would be needed to account for these.\nComparison with previous work. Some of our tasks (code completion and method naming) exist in previous work. While comparing with the literature would be insightful, it is difficult, as our task formulation (and our dataset) are quite different.\nShortcuts. Deep learning models can take shortcuts and exploit spurious correlations if they are present in the data (Geirhos et al., 2020). We spent considerable time iterating on the task selection and formulation to avoid these issues (particularly on the NullToken task), by thoroughly investigating when our baselines had suspiciously high performance. However, we cannot guarantee we have found all issues.\nChoice of metrics. We tried to select metrics that present a fair view of performance, sometimes at the expense of reformulating a task (e.g., for method naming). When using accuracy, we were careful to balance the datasets.\nLimited number of baselines. Our principal focus in this work is the definition of the tasks. As a result, we include a limited number of baselines. We plan to evaluate more baselines in future work, and we invite the community to contribute.\nCode duplication. Code duplication is known to be extensive in software (Allamanis, 2019). A simple approach that filters out duplicated code would not work in our case, as it would make the projects incomplete for global contexts. We ensured that the methods in the test set are not seen in the training set, but it is possible that a handful of methods are duplicated, with unknown effects." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "We introduce GLUECode, a benchmark for source code machine learning models that emphasizes that code is composed of interacting entities and has a fundamentally structured nature. The GLUECode tasks include both tasks that require local and global reasoning, to account for source code's interacting entities.
Moreover, to facilitate experimentation on a range of structures, GLUECode includes an exhaustive set of pre-processed source code representations (textual, ASTs, graphs) that researchers are free to leverage when building their models. Collecting and preprocessing the data for the task datasets and generating multiple representations for each data sample, at the scale of thousands of projects, took several months, which we spare the community. We also tested several baselines, ranging from simple neural models to pre-trained transformers. The results indicate that there is a lot of progress to be made on the GLUECode tasks. The design space of models that leverage global reasoning on complex, structured data is even larger than for local models. Thus, we invite the community to download our code representations, write "glue code" to transform these representations as they see fit, and evaluate the resulting source code models on the GLUECode tasks." }, { "heading": "A APPENDIX: ADDITIONAL DETAILS ON THE DATASET AND REPRESENTATIONS", "text": "" }, { "heading": "A.1 THE 50K-C DATASET", "text": "The projects in 50K-C (Martins et al., 2018) were harvested from GitHub, and selected because they included a build script, which made automated compilation of the dataset possible. We need compilable projects, as additional post-processing tools require Java bytecode to work. However, many of the projects are small, so we selected the ∼7,000 projects with 50 or more classes, as a proxy for more mature projects. While trying to compile the projects, we did notice some failures, mainly related to unresolved libraries. Since we still had ∼5,300 projects that compiled successfully, we did not investigate this further. We use Andrew Rice's feature graph extractor (https://github.com/acr31/features-javac) to extract feature graphs similar to the ones in Allamanis et al. (2017), but for Java instead of C#. This representation allows us to also extract the AST and token representations, by simply omitting unnecessary edges. Note that compiling projects and extracting feature graphs both took several weeks to simply execute.\nOf note, these feature graphs are at the file level, not the project level. We thus use the Java call graph extractor (https://github.com/gousiosg/java-callgraph) of Georgios Gousios to extract inter-procedural call graphs. We then link the entities across representations using their UUIDs, and apply further post-processing to disambiguate some method calls between files. In the cases where a method call cannot be disambiguated (e.g., a polymorphic method call), we include all possible edges in the call graph." }, { "heading": "A.2 AVAILABLE REPRESENTATIONS IN GLUECODE", "text": "Here, we present the code representations readily available with our benchmark. We choose a data sample from our dataset, and present the same data sample in various representations. Depending on the machine learning model, different representations of the same data samples are readily available, making evaluation on the tasks versatile across different model types. All representations are stored in a database, where they are accessible via a sample's UUID.\nRaw Text The first text representation we have for every data sample is the raw text.
Each line is comma separated, and even the line breaks and tab spaces are preserved.\npublic static Key getKey(String ahex) , { , try , { , byte[] bytes = CHexString.toByteArr(ahex); , SecretKeySpec skeySpec = new SecretKeySpec(bytes, \"AES\"); , return skeySpec; , } , catch( Exception e ) , { , System.err.println(\"CAesEncrypt.getKey: \" + e); , return null; , } , }\nTokens The second representation is the list of method tokens which are ready to use, or further pre-processed if a model using subword units is desired.\nPUBLIC,STATIC,Key,getKey,LPAREN,String,ahex,RPAREN,LBRACE,TRY,LBRACE,byte, LBRACKET,RBRACKET,bytes,EQ,CHexString,DOT,toByteArr,LPAREN,ahex,RPAREN,SEMI, SecretKeySpec,skeySpec,EQ,NEW,SecretKeySpec,LPAREN,bytes,COMMA,\"AES\",RPAREN, SEMI,RETURN,skeySpec,SEMI,RBRACE,CATCH,LPAREN,Exception,e,RPAREN,LBRACE, System,DOT,err,DOT,println,LPAREN,\"CAesEncrypt.getKey:\",PLUS,e,RPAREN,SEMI, RETURN,null,SEMI,RBRACE,RBRACE\nAST We also have AST representation of every data sample, where the ast labels are the list of nodes of the data sample, and ast edges are the list of tuples with parent-child edges.\n{ \"ast_labels\": [\"METHOD\", \"NAME\", \"MODIFIERS\", \"FLAGS\", \"RETURN_TYPE\",\n\"IDENTIFIER\", \"NAME\", \"PARAMETERS\", \"VARIABLE\", \"NAME\", \"TYPE\", \"IDENTIFIER\", \"NAME\", \"BODY\", \"BLOCK\", \"STATEMENTS\", \"TRY\", \"BLOCK\", \"STATEMENTS\", \"VARIABLE\", \"NAME\", \"TYPE\", \"ARRAY_TYPE\", \"TYPE\", \"PRIMITIVE_TYPE\", \"PRIMITIVE_TYPE_KIND\", \"INITIALIZER\", \"METHOD_INVOCATION\", \"METHOD_SELECT\", \"MEMBER_SELECT\", \"EXPRESSION\", \"IDENTIFIER\", \"NAME\", \"IDENTIFIER\", \"ARGUMENTS\", \"IDENTIFIER\", \"NAME\", \"VARIABLE\", \"NAME\", \"TYPE\", \"IDENTIFIER\", \"NAME\", \"INITIALIZER\", \"NEW_CLASS\", \"ARGUMENTS\", \"IDENTIFIER\", \"NAME\", \"STRING_LITERAL\", \"IDENTIFIER\", \"NAME\", \"RETURN\", \"EXPRESSION\", \"IDENTIFIER\", \"NAME\", \"CATCHES\", \"CATCH\", \"BLOCK\", \"STATEMENTS\", \"EXPRESSION_STATEMENT\", \"EXPRESSION\", \"METHOD_INVOCATION\", \"METHOD_SELECT\", \"MEMBER_SELECT\", \"EXPRESSION\", \"MEMBER_SELECT\", \"EXPRESSION\", \"IDENTIFIER\", \"NAME\", \"IDENTIFIER\", \"IDENTIFIER\", \"ARGUMENTS\", \"PLUS\", \"LEFT_OPERAND\", \"STRING_LITERAL\", \"RIGHT_OPERAND\", \"IDENTIFIER\", \"NAME\", \"RETURN\", \"EXPRESSION\", \"NULL_LITERAL\", \"VALUE\", \"PARAMETER\", \"VARIABLE\", \"NAME\", \"TYPE\", \"IDENTIFIER\", \"NAME\"],\n\"ast_edges\": [ [0, 1], [0, 4], [0, 7], [0, 13], [0, 2], [2, 3], ... [54, 55], [55, 81], [55, 56], [56, 57], ... [79, 80], [81, 82], [82, 83], [82, 84], [84, 85], [85, 86]\n] }\nCode2Vec We have Code2Vec representations for every data sample. Each method is represented as a set of up to 200 AST paths; in case the method has more than 200 possible paths, the 200 paths are selected at random. Each path is a combination of AST node labels, represented as a unique symbol.\nget|key key,362150388,getKey key,714300710,ahex key,-1248995371,string getKey,-1103308019,ahex\ngetKey,1228363196,string ... e,-850278433,println e,910578178,null println,-1488546123,null\nCode2Seq We also have Code2Seq representations for the entire dataset of samples. These are similar to Code2Vec representations, but the identifiers are sequences of camelCase-separated tokens, while the paths are sequences of AST node labels.\nget|key key,Cls0|Mth|Nm1,getKey key,Cls0|Mth|Prm|VDID0,ahex key,Cls0|Mth|Prm|Cls1,string getKey,Nm1|Mth|Prm|VDID0,ahex getKey,Nm1|Mth|Prm|Cls1,string ... 
e,Nm1|Plus2|Cal|Nm3,println e,Nm1|Plus2|Cal|Ex|Bk|Ret|Null0,null
println,Nm3|Cal|Ex|Bk|Ret|Null0,null
Feature Graphs Finally, we have the feature graph representation for each sample of the dataset. The node_labels key lists all nodes in the feature graph, while the edges key maps every edge type to its corresponding connections.
{ "backbone_sequence": [13, 14, 15, 16, 17, 18, 19, 20, 21, 22], "node_labels": ["METHOD", "NAME", "MODIFIERS", "FLAGS", "RETURN_TYPE", "IDENTIFIER", "NAME", "BODY", "BLOCK", "STATEMENTS", "RETURN", "EXPRESSION", "STRING_LITERAL", "PUBLIC", "String", "METH_PLACEHOLDER", "LPAREN", "RPAREN", "LBRACE", "RETURN", "\"Login request processing\"", "SEMI", "RBRACE"],
"edges": { "CH": [
[0, 1], [0, 4], [0, 7], [0, 2], [2, 3], [4, 5], [5, 6], [7, 8], [8, 9], [9, 10], [10, 11], [11, 12]
], "NT": [
[13, 14], [14, 15], [15, 16], [16, 17], [17, 18], [18, 19], [19, 20], [20, 21], [21, 22]
], "LU": [], "LW": [], "CF": [], "LL": [], "RT": [], "FA": [], "GB": [], "GN": []
}, "method_name": ["get", "Servlet", "Info"]
}" }, { "heading": "A.3 COMBINING REPRESENTATIONS FOR GLOBAL CONTEXT", "text": "For global context, we provide project-level call graphs. Across all representations, source code entities (methods and classes) are identified via a Universally Unique Identifier (UUID), and can be linked together.
For every project, we provide a call graph representation of the entire project. This representation is a graph where the nodes are method UUIDs, and the edges represent caller/callee relationships. It can be used to retrieve the callers and callees of a method of interest, or even the entire project’s call graph, should researchers wish to do so." }, { "heading": "B APPENDIX: ADDITIONAL DETAILS ON THE GLUECODE TASKS", "text": "" }, { "heading": "B.1 NPATH", "text": "We used the PMD static analyzer to compute the NPATH complexity of the methods in the dataset. PMD implements a variety of static analysis checks. The detailed description of the NPATH complexity metric, as implemented in PMD, is available at https://pmd.github.io/latest/pmd_java_metrics_index.html#npath-complexity-npath. Of note, NPATH grows exponentially, as consecutive statements have their complexity multiplied. This can lead to very high NPATH values. The distribution of the metric is highly skewed, with many more methods having low complexity values than high ones. In addition, there are peaks in the distribution, as values that are powers of two are more numerous than others. As a result, we defined variable-size bins to obtain an appropriately balanced dataset. Our bins are 1, 2, 3, 4, 5-6, 7-8, 9-10, 11-15, 16-20, 21-30, 31-50, and 51-100.
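As a sketch, the mapping from raw NPATH values to class labels is then:

# Bin edges copied from the list above; the class label is the bin index.
NPATH_BINS = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 6), (7, 8), (9, 10),
              (11, 15), (16, 20), (21, 30), (31, 50), (51, 100)]

def npath_label(npath):
    for label, (lo, hi) in enumerate(NPATH_BINS):
        if lo <= npath <= hi:
            return label
    raise ValueError("NPATH value %d falls outside the binned range" % npath)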
Alternatives we considered. We considered several other tasks that incentivize reasoning over structure at the local level, such as tasks that would involve replicating local static analyses. We considered having four tasks, one for each canonical local static analysis: Live variables (“backwards may”); Reaching definitions (“forwards may”); available expressions (“forwards must”); and very busy expressions (“backwards must”). However, we felt this would have weighted the benchmark too heavily towards local tasks, hence we decided on a single task. We had also considered other common complexity metrics, such as Halstead’s complexity metrics and McCabe’s cyclomatic complexity, and we prototyped a version of this task using McCabe’s complexity. Ultimately, we decided against it, as it did not require models to reason about how control flow statements relate to each other; it was limited to counting operators." }, { "heading": "B.2 OPERATOR PREDICTION", "text": "Since not all operators are equally frequent, we made choices among the most common operators, in order to end up with a balanced dataset. We also had to select operators that could plausibly be mistaken for one another, leading us to discard additional operators. We ended up choosing the following operators: “+”, “-”, “*”, “/”, “%”, “=”, “==”, “!=”, “<”, “>”, “<=”, and “>=”. Thus, we have two larger classes: arithmetic operators on the one hand, and boolean operators on the other. We find that models do pick up on this, and tend to misclassify arithmetic operators as other arithmetic operators, and boolean operators as other boolean operators.
Alternatives we considered. We considered other tasks that, similarly to operator prediction, were mostly local but were more “holistic” in their reasoning. An early candidate was the “VarMisuse” task of Allamanis et al. (2017), where models have to detect whether a variable has been replaced by another, type-compatible variable. However, this requires extensive static analysis that is so far only implemented for C#, not Java. We also considered other “Misuse” variants, such as an “OperatorMisuse” variant of operator prediction. We decided against this as we were concerned that substituting an operator with another may turn out to be too easy a task, and that models may take shortcuts in their reasoning. Another interesting task would be predicting the output of programs, as in Zaremba & Sutskever (2014); this would however diverge from our goal, as that task involves generated code snippets." }, { "heading": "B.3 METHOD NAMING", "text": "We initially considered all the methods in the corpus, after accounting for code duplication. We did find that a significant number of methods had very short names, which inflated performance on the task. Thus, we filtered out most method names that were shorter than 4 characters; we left a small portion of them (around 23,000) in order to arrive at a round number of one million method names. We use the character-level BLEU metric described in Denoual & Lepage (2005), with smoothing “Smoothing1” from Chen & Cherry (2014). We replace the method name with a special mask token, also replacing it in the method body (in case the method is recursive, forwards the call to a similarly named method, or uses super), and also replacing it in the callers of the method, for models that want to use caller context in their global reasoning.
Alternatives we considered. We considered other tasks that involve reasoning over the whole method body, such as a summarization variant in which the task is to predict a method comment (as in LeClair et al. (2019)). This task had the advantage of also requiring models to generate natural language, but we felt this complexified the architecture on the decoding side, and would dilute the focus of the benchmark. We also considered clone detection tasks (Mou et al., 2016; Wei & Li, 2017), but these would require the models to reason over a pair of entities, which would also complexify the models for a single task (a more drastic change, as it is on the encoder side).
We also had extensive discussions on the metric to use. The state of the art evaluates method naming by tokenizing the prediction and the target according to the camelCase convention. This has two disadvantages: 1) it adds a bias towards models that tokenize identifiers in the same way (while recent models tend to use variants of byte-pair encoding (Sennrich et al., 2015) that may not respect the camelCase convention), and 2) it weights common subwords such as “get”, “set”, or “is” too heavily, distorting performance. We instead use a character-level BLEU metric that is independent of the tokenization (Denoual & Lepage, 2005) and reduces the weight of these common, but very short, subwords. This allows researchers to experiment with the tokenization that they prefer, and makes the task more challenging while still rewarding close, but not exact, matches (e.g., similar words but with different endings). We also considered other character-level metrics, such as the Jaro-Winkler string distance (Winkler, 1990). However, we found that it had a “high floor”, giving relatively high scores to very distant guesses, and that it emphasizes similarities in the prefix, which increases the weight of the easy subwords; both issues made it harder to accurately measure progress on the task.
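For reference, a minimal sketch of this character-level BLEU using NLTK, assuming NLTK’s smoothing “method1” corresponds to “Smoothing1” of Chen & Cherry (2014):

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def char_bleu(reference, prediction):
    # BLEU over character sequences, independent of any subword tokenization.
    smooth = SmoothingFunction().method1
    return sentence_bleu([list(reference)], list(prediction),
                         smoothing_function=smooth)

char_bleu("getServletInfo", "getServerInfo")  # close but not exact: partial credit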
" }, { "heading": "B.4 METHOD COMPLETION", "text": "In each method in the dataset (the same one as for method naming), we mask a single method call in the method body, at random. The task is to predict this token, with only exact matches counted as correct: a code completion engine that recommends “near misses” would not be very useful. The method call could be to a method in the same class, to a method in a different class in the same Java package, to a method anywhere in the system, or to a method imported from a library. Each of these cases involves different sizes of context and different kinds of reasoning. Models leveraging only local reasoning will have to generate identifiers from scratch, increasing the probability of these “near misses”. Models that use global reasoning could, on the other hand, learn to copy an identifier from the extended context. Existing work shows that deep learning with local reasoning can be more successful in predicting API method calls (more likely to be seen in training) than method calls found in the project (Hellendoorn et al., 2019a). Beyond masking the method call token, we also mask call edges to the method that might be present in other representations.
Alternatives we considered. While looking for tasks that involve local masking of the method body but require models to take the global context into account, a very close second alternative we considered was type prediction, for which a few more global models already exist (Wei et al., 2020; Allamanis et al., 2020). We ultimately preferred method call completion, as the set of potential candidates (methods) is larger and finer grained than in type prediction (classes). We also discussed variants of method call completion, namely whether to additionally ask models to hide and complete the arguments of the method call, as is done in Alon et al. (2020). However, completing the arguments of the method call would have increased the weight of the local context, as most arguments are variables defined in the context. This would have made the task less aligned with the benchmark’s goal.
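To make the masking concrete, a rough sketch over the token representation; this token-level heuristic is only illustrative, since the benchmark identifies call sites precisely from the AST:

import random

MASK = "<MASK>"

def mask_one_call(tokens):
    # Approximate call sites as a non-keyword identifier followed by LPAREN.
    calls = [i for i in range(len(tokens) - 1)
             if tokens[i + 1] == "LPAREN" and not tokens[i].isupper()]
    i = random.choice(calls)
    return tokens[:i] + [MASK] + tokens[i + 1:], tokens[i]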
" }, { "heading": "B.5 NULLTOKEN", "text": "For each warning, Infer produces a report that contains: an error message, the line number where the null dereference happens, and a trace of the abstract interpretation steps that Infer took to find the potential null dereference. This trace ranges from simple, local cases (e.g., taking a particular if branch while a variable is not yet initialized) to highly complex cases covering dozens of steps across multiple methods, scattered over several files. Over all the projects, Infer took several weeks to execute and produced on the order of 20,000 warnings, showing that these warnings are quite rare. We did filter some of the warnings: some methods had more than one warning, which would make the task ambiguous for the models, so we discarded all warnings in this case.
Alternatives we considered. Infer (Facebook, 2015) has several precise, interprocedural analyses that are strong candidates for tasks that require precise modelling and reasoning over multiple entities. Examples include reachability analysis (finding whether method A can call method B, directly or indirectly), or an analysis that estimates the runtime cost of a method (including the cost of the methods that it calls). All of these tasks have the drawback that we are asking the model to emulate the reasoning of an existing tool. One of the deciding factors was that null dereference detection, while being a task that requires us to emulate the reasoning of a tool, is closer to a practical scenario, as it provides warnings for real bugs. Another alternative in that area would be to use a taint analysis tool, such as Arzt et al. (2014); however, we would expect taint analysis warnings to be even rarer than possible null dereferences.
We initially tried a simpler version of the task, which was a straightforward binary classification at the method level (whether there is a null dereference warning in this method), with a balanced sample of positive and negative methods. However, selecting negative examples proved to be difficult, as even simple models found spurious correlations that led to inflated performance in this simplified version of the task. We thus settled for a generation version of the task, where the goal is to output the token in which the null dereference can occur. We also discussed the proportion of negative examples to include, finding that 20% was a reasonable tradeoff: it required models to envision that having no null dereference was a possibility, while not disproportionately inflating the performance of trivial baselines that always predict this label.
We also considered more complex versions of the task, such as requiring models to predict steps in Infer’s execution traces, but we thought they might prove too difficult at this time. We also considered a variant where the model would need to predict the line number (starting from the beginning of the method) instead of the actual token, but we did not choose this, since the task would then become sensitive to code formatting choices." }, { "heading": "C APPENDIX: DETAILS ON THE BASELINES", "text": "Vocabulary MLP, CNN, and BiLSTM all use a full-token vocabulary of 10,000 elements, initialized on the training set of each task. Tokens that are not in the top 10,000 are replaced by OOV tokens. Seq2Seq splits tokens via the camelCase coding convention to reduce vocabulary size, while the pretrained Transformer uses its original open vocabulary (using byte-pair encoding).
MLP: A model with an embedding layer of vocabulary size 10,000, embedding dimension 64, and input maximum length 200 as its first layer. This converts our words or tokens into meaningful embedding vectors. This is fed into a single, dense hidden layer of size 64. We use ReLU as our activation function.
The output layer has a softmax activation. We compile the model with the Adam (Kingma & Ba, 2014) optimizer, and use sparse categorical cross-entropy as our loss, since we use the same model for classification and generation (these models treat generation as classification over the entire vocabulary).
BiLSTM: A model with an embedding layer of vocabulary size 10,000, embedding dimension 64, and input maximum length 200 as its first layer. This converts our words or tokens into meaningful embedding vectors. Then we add our bidirectional LSTM layer: the standalone LSTM layer is initialized with the value of the embedding dimension, and is then wrapped with a Bidirectional layer wrapper. We then add a densely-connected neural network layer on top of that, with the number of units equal to the embedding dimension, and use ReLU as our activation function. Finally, we add another layer with softmax activation as our output layer. We compile the model with the Adam (Kingma & Ba, 2014) optimizer, and use sparse categorical cross-entropy as our loss, since we use the same model for multi-class classification.
Seq2Seq/Seq2Tok: Same as BiLSTM, but unidirectional, with an encoder/decoder architecture, and using camelCase-separated tokens, reducing OOV.
CNN: For our base CNN model, we use an embedding layer of vocabulary size 10,000, embedding dimension 64, and input maximum length 200 as our first layer. We then add a 1D convolution layer, specifying the dimensionality of the output space (128), the size of the 1D convolution window (5), and the activation function, which we set to ReLU. We then add a 1D global average pooling layer to reduce the data dimensionality, so as to make our model faster. The last two layers on top of the pooling layer are identical to our LSTM model: we add a densely-connected neural network layer with the number of units equal to the embedding dimension and a ReLU activation, followed by another dense layer with a softmax activation as our output layer. We again choose sparse categorical cross-entropy as our loss function, as we use the same model for all the tasks, and we compile the CNN model with the Adam (Kingma & Ba, 2014) optimizer.
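Assembled in Keras (our assumed framework; the description above does not name one), the CNN baseline might look like the following sketch:

import tensorflow as tf

def build_cnn(num_classes, vocab_size=10_000, embed_dim=64, max_len=200):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embed_dim, input_length=max_len),
        tf.keras.layers.Conv1D(128, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(embed_dim, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model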
Transformer: We use CodeBERTa-small1, a pre-trained, 6-layer transformer based on the RoBERTa (Liu et al., 2019) architecture. The model was pre-trained on 2 million functions written in six different languages (including Java) from the CodeSearchNet dataset (Husain et al., 2020), and released by Huggingface (Wolf et al., 2020).
1https://huggingface.co/huggingface/CodeBERTa-small-v1" }, { "heading": "D APPENDIX: RELATED WORK", "text": "" }, { "heading": "D.1 BENCHMARKS", "text": "Many communities create benchmarks to advance the state of the art of their field. Arguably, the ImageNet challenge (Russakovsky et al., 2014) is one of the most well-known benchmarks in the machine learning and computer vision community. In software engineering, Sim et al. (2003) urged the community to adopt benchmarking as an evaluation measure, based on the impact it has on community building, while in the performance community, benchmarks such as the one from Blackburn et al. (2006) have been used. Below we provide a brief overview of some NLP benchmarks, as extended related work, focusing on benchmarks that go beyond a single task.
bAbI Tasks Weston et al. (2015) present several NLP tasks in a simple question-answering format, intended to test dialogue agents on natural language understanding. bAbI aimed to provide a yardstick for researchers to assess their NLP models for intelligent dialogue agents. The tasks in bAbI are artificial, but measure specific aspects of reading comprehension, such as reasoning by chaining facts, simple induction, and deduction, and have well-defined degrees of difficulty.
GLUE Benchmark To make progress towards the generalizability of NLP models, Wang et al. (2018) present the GLUE benchmark to evaluate and analyze the performance of NLP models across a diverse range of existing tasks. They further evaluate baselines for multi-task and transfer learning, comparing them to training a separate model per task.
SuperGLUE Benchmark With the performance of NLP models on the GLUE benchmark surpassing the level of non-expert humans, Wang et al. (2019) reinforce their GLUE benchmark by presenting the SuperGLUE benchmark, with harder tasks and more diverse task formats.
DecaNLP Benchmark Going beyond the paradigm of task-specific NLP models, McCann et al. (2018) present a set of ten tasks to evaluate general NLP models. They cast all tasks in a question-answering format over a given context, and present their own Multitask Question Answering Network (MQAN) that jointly learns all tasks." }, { "heading": "D.2 CODE PROBLEM TASKS", "text": "Here we detail some related problem tasks in the source code domain for machine learning source code models. Several studies have worked on source-code-related tasks (Allamanis et al., 2018), some of which we discuss here. These tasks are examples of problems that can be addressed to a great degree with the aid of modern deep learning methods.
MethodNaming A machine learning model of source code aims to predict the name of a method, given its code body. This problem task was explored by multiple studies (Allamanis et al., 2015a; 2016; Alon et al., 2018a; Fernandes et al., 2018).
VarMisuse The goal of this task is to detect and fix incorrect variable uses within a program. Given the source code, a machine learning model should determine whether a certain variable has been misused at a given location. For example, a developer might use i instead of j in an index. Allamanis et al. (2017) and Hellendoorn et al. (2019b) addressed this task and showed that a graph neural network learns to reason about the correct variable that should be used at a given program location; they could also identify a number of bugs in mature open-source projects.
Defect Prediction Finding a broader set of defects in source code is another task with the potential to be extremely useful. Pradel & Sen (2017) address the problem of defect prediction by training a deep-learning-based model that can distinguish correct from incorrect code. They present a general framework for extracting positive training examples from a code corpus, making simple code transformations to convert them into negative training samples, and then training a model to distinguish the two.
Clone Detection This task deals with the identification of code clones. Given pairs of code fragments, a source code model should be able to indicate whether the pairs are clones. White et al. (2016) utilize a deep learning approach for the classic task of code clone detection, both at the file and the method level, with promising results." }, { "heading": "D.3 SOURCE CODE REPRESENTATIONS", "text": "Representing source code for consumption by machine learning models is an active research area.
In the recent past, programs were generally represented as a bag of tokens to be fed into machine learning models, but multiple studies (Allamanis et al., 2017; Alon et al., 2018a;b; Maddison & Tarlow, 2014) have now shown that leveraging the structured nature of source code helps machine learning models to reason better over code, and that models trained on such representations consistently outperform models trained on sequential or less-structured program representations. Therefore, in our discussion here we include program representations which make use of some form of program structure, whether by extracting information from abstract syntax trees, control-flow or data-flow graphs, or similar structures.
AST The abstract syntax tree (AST) is one of the most commonly used structured representations for code. There are multiple ways to exploit this structure. Some studies directly model the AST as a sequence of applications of a context-free grammar (Bielik et al., 2016; Maddison & Tarlow, 2014), and augment the grammar with long-range information (Yin & Neubig, 2017; Brockschmidt et al., 2018). Various other approaches have considered “summarizing” the tree-like structures recursively, inspired by work in NLP. For example, Büch & Andrzejak (2019) use the AST node type and node content to create node representations of a function. Mou et al. (2016) use a convolutional architecture on ASTs. More recently, Alon et al. (2018b;a) linearize an AST into a bag of AST paths: by sampling paths from one leaf node to another, they generate a set of these paths. Finally, they use representations of the paths for the tasks of MethodNaming (framed as code summarization) and code captioning.
Path-based Embedding of CFGs DeFreez et al. (2018) utilize inter-procedural control flow graphs (CFGs) to generate function embeddings for code. They consider paths from random walks on the inter-procedural control flow graph of a program to generate the embeddings. They then use the embeddings, for C code, to detect function clones.
Feature Graphs Allamanis et al. (2017), Fernandes et al. (2018), and Raychev et al. (2015) combine information from multiple sources of a program, such as token sequences, ASTs, and control-flow and data-flow graphs, to generate feature graphs, which capture long-range dependencies and the structural nature of source code. To learn from these graphs, these works use methods such as conditional random fields (CRFs) and graph neural networks (GNNs).
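The feature graphs shipped with GLUECode (see the sample in Appendix A.2) follow this line of work. As a minimal sketch, one such sample can be loaded into per-edge-type adjacency lists as follows, using the schema shown in Appendix A.2:

import json
from collections import defaultdict

def load_feature_graph(path):
    # "node_labels" is a list of node labels; "edges" maps each edge type
    # (e.g., "CH", "NT") to a list of [src, dst] index pairs.
    with open(path) as f:
        sample = json.load(f)
    adjacency = defaultdict(list)
    for edge_type, pairs in sample["edges"].items():
        for src, dst in pairs:
            adjacency[edge_type].append((src, dst))
    return sample["node_labels"], dict(adjacency)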
" } ]
2020
null
SP:b49b8d0d0ece60538ce7629c6affeefbcdaf2d3c
[ "This paper proposes four novel efficient adversarial attack methods beyond Lp threat models. Together with other two existing attack methods, these six attack methods combine as a framework to evaluate robustness of defenses against unforeseen attacks. In this framework, the novel measure is normalized with the performance of adversarial training. The experiments show that the Linf adversarially trained model may not lead to improvement of robustness against other threat models. It is expected that the framework could help test model robustness." ]
Most existing adversarial defenses only measure robustness to Lp adversarial attacks. Not only are adversaries unlikely to exclusively create small Lp perturbations, but they are also unlikely to remain fixed. Adversaries adapt and evolve their attacks; hence adversarial defenses must be robust to a broad range of unforeseen attacks. We address this discrepancy between research and reality by proposing a new evaluation framework called ImageNet-UA. Our framework enables the research community to test ImageNet model robustness against attacks not encountered during training. To create ImageNet-UA’s diverse attack suite, we introduce a total of four novel adversarial attacks. We also demonstrate that, in comparison to ImageNet-UA, prevailing L∞ robustness assessments give a narrow account of adversarial robustness. By evaluating current defenses with ImageNet-UA, we find they provide little robustness to unforeseen attacks. We hope the greater variety and realism of ImageNet-UA enables development of more robust defenses which can generalize beyond attacks seen during training.
[]
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "arXiv preprint arXiv:1802.00420,", "year": 2018 }, { "authors": [ "Anish Athalye", "Logan Engstrom", "Andrew Ilyas", "Kevin Kwok" ], "title": "Synthesizing robust adversarial examples", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Nicholas Carlini", "Anish Athalye", "Nicolas Papernot", "Wieland Brendel", "Jonas Rauber", "Dimitris Tsipras", "Ian G Goodfellow", "Aleksander Madry" ], "title": "On evaluating adversarial robustness: Principles of rigorous evaluations. 2019a", "venue": null, "year": 2019 }, { "authors": [ "Nicholas Carlini", "Anish Athalye", "Nicolas Papernot", "Wieland Brendel", "Jonas Rauber", "Dimitris Tsipras", "Ian J. Goodfellow", "Aleksander Madry", "Alexey Kurakin" ], "title": "On evaluating adversarial robustness", "venue": "CoRR, abs/1902.06705,", "year": 2019 }, { "authors": [ "Pin-Yu Chen", "Yash Sharma", "Huan Zhang", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "EAD: Elastic-net attacks to deep neural networks via adversarial examples", "venue": "In Thirty-second AAAI conference on artificial intelligence,", "year": 2018 }, { "authors": [ "Kenneth T. Co", "Luis Muñoz-González", "Emil C. Lupu" ], "title": "Sensitivity of deep convolutional networks to Gabor noise. CoRR, abs/1906.03455, 2019", "venue": null, "year": 1906 }, { "authors": [ "Jeremy M. Cohen", "Elan Rosenfeld", "J. Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "CoRR, abs/1902.02918,", "year": 2019 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Yinpeng Dong", "Tianyu Pang", "Hang Su", "Jun Zhu" ], "title": "Evading defenses to transferable adversarial examples by translation-invariant attacks", "venue": "In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Logan Engstrom", "Brandon Tran", "Dimitris Tsipras", "Ludwig Schmidt", "Aleksander Madry" ], "title": "A rotation and a translation suffice: Fooling CNNs with simple transformations", "venue": "arXiv preprint arXiv:1712.02779,", "year": 2017 }, { "authors": [ "Ivan Evtimov", "Kevin Eykholt", "Earlence Fernandes", "Tadayoshi Kohno", "Bo Li", "Atul Prakash", "Amir Rahmati", "Dawn Xiaodong Song" ], "title": "Robust physical-world attacks on deep learning", "venue": null, "year": 2017 }, { "authors": [ "Alain Fournier", "Don Fussell", "Loren Carpenter" ], "title": "Computer rendering of stochastic models", "venue": "Commun. ACM,", "year": 1982 }, { "authors": [ "Marguerite Frank", "Philip Wolfe" ], "title": "An algorithm for quadratic programming", "venue": "Naval research logistics quarterly,", "year": 1956 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A. 
Wichmann", "Wieland Brendel" ], "title": "Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Justin Gilmer", "Ryan P. Adams", "Ian J. Goodfellow", "David Andersen", "George E. Dahl" ], "title": "Motivating the rules of the game for adversarial example research", "venue": null, "year": 2018 }, { "authors": [ "Ian J. Goodfellow" ], "title": "A research agenda: Dynamic models to defend against correlated", "venue": "attacks. ArXiv,", "year": 2019 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: Training ImageNet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Kevin Zhao", "Steven Basart", "Jacob Steinhardt", "Dawn Song" ], "title": "Natural adversarial examples", "venue": "arXiv preprint arXiv:1907.07174,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Norman Mu", "Ekin D. Cubuk", "Barret Zoph", "Justin Gilmer", "Balaji Lakshminarayanan" ], "title": "AugMix: A simple data processing method to improve robustness and uncertainty", "venue": "Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Forrest N. Iandola", "Matthew W. Moskewicz", "Khalid Ashraf", "Song Han", "William J. Dally", "Kurt Keutzer" ], "title": "Squeezenet: AlexNet-level accuracy with 50x fewer parameters and <1mb model", "venue": "size. ArXiv,", "year": 2017 }, { "authors": [ "Jrn-Henrik Jacobsen", "Jens Behrmannn", "Nicholas Carlini", "Florian Tramr", "Nicolas Papernot" ], "title": "Exploiting excessive invariance caused by norm-bounded adversarial robustness, 2019", "venue": null, "year": 2019 }, { "authors": [ "Matt Jordan", "Naren Manoj", "Surbhi Goel", "Alexandros G. Dimakis" ], "title": "Quantifying perceptual distortion of adversarial examples", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Ares Lagae", "Sylvain Lefebvre", "George Drettakis", "Philip Dutré" ], "title": "Procedural noise using sparse Gabor convolution", "venue": "ACM Trans. 
Graph.,", "year": 2009 }, { "authors": [ "Raphael Gontijo Lopes", "Dong Yin", "Ben Poole", "Justin Gilmer", "Ekin Dogus Cubuk" ], "title": "Improving robustness without sacrificing accuracy with patch gaussian augmentation", "venue": null, "year": 1906 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": "Computer Vision – ECCV", "year": 2018 }, { "authors": [ "Arkadi Nemirovski", "D Yudin" ], "title": "On Cezari’s convergence of the steepest descent method for approximating saddle point of convex-concave functions", "venue": "In Soviet Math. Dokl,", "year": 1978 }, { "authors": [ "Arkadi Nemirovski", "D Yudin" ], "title": "Problem Complexity and Method Efficiency in Optimization", "venue": "Intersci. Ser. Discrete Math. Wiley,", "year": 1983 }, { "authors": [ "A. Emin Orhan" ], "title": "Robustness properties of Facebook’s", "venue": "ResNeXt WSL models. ArXiv,", "year": 2019 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow", "Somesh Jha", "Z Berkay Celik", "Ananthram Swami" ], "title": "Practical black-box attacks against machine learning", "venue": "In Proceedings of the 2017 ACM on Asia conference on computer and communications security,", "year": 2017 }, { "authors": [ "Fabio Pierazzi", "Feargus Pendlebury", "Jacopo Cortellazzi", "Lorenzo Cavallaro" ], "title": "Intriguing properties of adversarial ml attacks in the problem space", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2020 }, { "authors": [ "Haifeng Qian", "Mark N. Wegman" ], "title": "L2-nonexpansive neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Chongli Qin", "James Martens", "Sven Gowal", "Dilip Krishnan", "Krishnamurthy Dvijotham", "Alhussein Fawzi", "Soham De", "Robert Stanforth", "Pushmeet Kohli" ], "title": "Adversarial robustness through local linearization, 2019", "venue": null, "year": 2019 }, { "authors": [ "Haonan Qiu", "Chaowei Xiao", "Lei Yang", "Xinchen Yan", "Honglak Lee", "Bo Li" ], "title": "Semanticadv: Generating adversarial examples via attribute-conditional image", "venue": "editing. ArXiv,", "year": 2019 }, { "authors": [ "Edward Raff", "Jared Sylvester", "Steven Forsyth", "Mark McLean" ], "title": "Barrage of random transforms for adversarially robust defense", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "P. Rajpurkar", "R. Jia", "P. Liang" ], "title": "Know what you don’t know: Unanswerable questions for SQuAD", "venue": "In Association for Computational Linguistics (ACL),", "year": 2018 }, { "authors": [ "Benjamin Recht", "Rebecca Roelofs", "Ludwig Schmidt", "Vaishaal Shankar" ], "title": "Do imagenet classifiers generalize to imagenet", "venue": "In ICML,", "year": 2019 }, { "authors": [ "L. Schott", "J. Rauber", "W. Brendel", "M. Bethge" ], "title": "Towards the first adversarially robust neural network model on MNIST", "venue": "URL https://arxiv.org/pdf/1805.09190. pdf", "year": 2019 }, { "authors": [ "Mahmood Sharif", "Sruti Bhagavatula", "Lujo Bauer", "Michael K. 
Reiter" ], "title": "Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition", "venue": "In Proceedings of the 23rd ACM SIGSAC Conference on Computer and Communications Security,", "year": 2016 }, { "authors": [ "Mahmood Sharif", "Sruti Bhagavatula", "Lujo Bauer", "Michael K Reiter" ], "title": "A general framework for adversarial examples with objectives", "venue": "ACM Transactions on Privacy and Security (TOPS),", "year": 2019 }, { "authors": [ "Richard Shin", "Dawn Song" ], "title": "JPEG-resistant adversarial images", "venue": "In NIPS 2017 Workshop on Machine Learning and Computer Security,", "year": 2017 }, { "authors": [ "Yang Song", "Rui Shu", "Nate Kushman", "Stefano Ermon" ], "title": "Constructing unrestricted adversarial examples with generative models", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Florian Tramèr", "Dan Boneh" ], "title": "Adversarial training and robustness for multiple perturbations", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Florian Tramèr", "Pascal Dupré", "Gili Rusak", "Giancarlo Pellegrino", "Dan Boneh" ], "title": "Ad-versarial: Defeating perceptual ad-blocking", "venue": "CoRR, abs/1811.03194,", "year": 2018 }, { "authors": [ "Tong Wu", "Liang Tong", "Yevgeniy Vorobeychik" ], "title": "Defending against physically realizable attacks on image classification", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Chaowei Xiao", "Jun-Yan Zhu", "Bo Li", "Warren He", "Mingyan Liu", "Dawn Song" ], "title": "Spatially transformed adversarial examples", "venue": "arXiv preprint arXiv:1801.02612,", "year": 2018 }, { "authors": [ "Cihang Xie", "Yuxin Wu", "Laurens van der Maaten", "Alan Yuille", "Kaiming He" ], "title": "Feature denoising for improving adversarial robustness", "venue": "arXiv preprint arXiv:1812.03411,", "year": 2018 }, { "authors": [ "Saining Xie", "Ross B. Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric Xing", "Laurent El Ghaoui", "Michael Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Zhengyu Zhao", "Zhuoran Liu", "Marisa Larson" ], "title": "Towards large yet imperceptible adversarial image perturbations with perceptual color", "venue": "distance. ArXiv,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural networks perform well on many datasets (He et al., 2016) yet can be consistently fooled by minor adversarial distortions (Goodfellow et al., 2014). The research community has responded by quantifying and developing adversarial defenses against such attacks (Madry et al., 2017), but these defenses and metrics have two key limitations.\nFirst, the vast majority of existing defenses exclusively defend against and quantify robustness to Lp-constrained attacks (Madry et al., 2017; Cohen et al., 2019; Raff et al., 2019; Xie et al., 2018). Though real-world adversaries are not Lp constrained (Gilmer et al., 2018) and can attack with diverse distortions (Brown et al., 2017; Sharif et al., 2019), the literature largely ignores this and evaluates against the Lp adversaries already seen during training (Madry et al., 2017; Xie et al., 2018), resulting in optimistic robustness assessments. The attacks outside the Lp threat model that have been proposed (Song et al., 2018; Qiu et al., 2019; Engstrom et al., 2017; Evtimov et al., 2017; Sharif et al., 2016) are not intended for general defense evaluation and suffer from narrow dataset applicability, difficulty of optimization, or fragility of auxiliary generative models.\nSecond, existing defenses assume that attacks are known in advance (Goodfellow, 2019) and use knowledge of their explicit form during training (Madry et al., 2017). In practice, adversaries can deploy unforeseen attacks not known to the defense creator. For example, online advertisers use attacks such as perturbed pixels in ads to defeat ad blockers trained only on the previous generation of ads in an ever-escalating arms race (Tramèr et al., 2018). However, current evaluation setups implicitly assume that attacks encountered at test-time are the same as those seen at train-time, which is unrealistic. The reality that future attacks are unlike those encountered during training is akin to a train-test distribution mismatch—a problem studied outside of adversarial robustness (Recht et al., 2019; Hendrycks & Dietterich, 2019)—but now brought to the adversarial setting.\nThe present work addresses these limitations by proposing an evaluation framework ImageNet-UA to measure robustness against unforeseen attacks. ImageNet-UA assesses a defense which may have been created with knowledge of the commonly used L∞ or L2 attacks with six diverse attacks (four of which are novel) distinct from L∞ or L2. We intend these attacks to be used at test-time only and not during training. Performing well on ImageNet-UA thus demonstrates generalization to a diverse set of distortions not seen during defense creation. While ImageNet-UA\ndoes not provide an exhaustive guarantee over all conceivable attacks, it evaluates over a diverse unforeseen test distribution similar to those used successfully in other studies of distributional shift (Rajpurkar et al., 2018; Hendrycks & Dietterich, 2019; Recht et al., 2019). ImageNet-UA works for ImageNet models and can be easily used with our code available at https://github.com/ anon-submission-2020/anon-submission-2020.\nDesigning ImageNet-UA requires new attacks that are strong and varied, since real-world attacks are diverse in structure. To meet this challenge, we contribute four novel and diverse adversarial attacks which are easily optimized. Our new attacks produce distortions with occlusions, spatial similarity, and simulated weather, all of which are absent in previous attacks. 
Performing well on ImageNet-UA thus demonstrates that a defense generalizes to a diverse set of distortions distinct from the commonly used L∞ or L2.\nWith ImageNet-UA, we show weaknesses in existing evaluation practices and defenses through a study of 8 attacks against 48 models adversarially trained on ImageNet-100, a 100-class subset of ImageNet. While most adversarial robustness evaluations use only L∞ attacks, ImageNet-UA reveals that models with high L∞ attack robustness can remain susceptible to other attacks. Thus, L∞ evaluations are a narrow measure of robustness, even though much of the literature treats this evaluation as comprehensive (Madry et al., 2017; Qian & Wegman, 2019; Schott et al., 2019; Zhang et al., 2019). We address this deficiency by using the novel attacks in ImageNet-UA to evaluate robustness to a more diverse set of unforeseen attacks. Our results demonstrate that L∞ adversarial training, the current state-of-the-art defense, has limited generalization to unforeseen adversaries, and is not easily improved by training against more attacks. This adds to the evidence that achieving robustness against a few train-time attacks is insufficient to impart robustness to unforeseen test-time attacks (Jacobsen et al., 2019; Jordan et al., 2019; Tramèr & Boneh, 2019).\nIn summary, we propose the framework ImageNet-UA to measure robustness to a diverse set of attacks, made possible by our four new adversarial attacks. Since existing defenses scale poorly to multiple attacks (Jordan et al., 2019; Tramèr & Boneh, 2019), finding defense techniques which generalize to unforeseen attacks is crucial to create robust models. We suggest ImageNet-UA as a way to measure progress towards this goal." }, { "heading": "2 RELATED WORK", "text": "Adversarial robustness is notoriously difficult to correctly evaluate (Papernot et al., 2017; Athalye et al., 2018a). To that end, Carlini et al. (2019a) provide extensive guidance for sound adversarial robustness evaluation. By measuring attack success rates across several distortion sizes and using a broader threat model with diverse differentiable attacks, ImageNet-UA has several of their recommendations built-in, while greatly expanding the set of attacks over previous work on evaluation.\nWe are only aware of a few prior works which evaluate on unforeseen attacks in specific limited circumstances. Wu et al. (2020) evaluate against physically-realizable attacks from Evtimov et al. (2017) and Sharif et al. (2016), though this limits the threat model to occlusion attacks on narrow datasets. Outside of vision, Pierazzi et al. (2020) proposes constraining attacks by a more diverse set of problem-space constraints in diverse domains such as text and malware or source code generation; however, even in this framework, analytically enumerating all such constraints is impossible.\nWithin vision, prior attacks outside the Lp threat model exist, but they lack the general applicability and fast optimization of ours. Song et al. (2018) and Qiu et al. (2019) attack using variational autoencoders and StarGANs, respectively, resulting in weaker attacks which require simple image distributions suitable for VAEs and GANs. Engstrom et al. (2017) apply Euclidean transformations determined by brute-force search. Zhao et al. (2019) use perceptual color distances to align human perception and L2 perturbations. Evtimov et al. (2017) and Sharif et al. 
(2016) attack stop signs and face-recognition systems with carefully placed patches or modified eyeglass frames, requiring physical object creation and applying only to specific image types." }, { "heading": "3 NEW ATTACKS FOR A BROADER THREAT MODEL", "text": "There are few diverse, easily optimizable, plug-and-play adversarial attacks in the current literature; outside of Elastic (Xiao et al., 2018), most are Lp attacks such as L∞ (Goodfellow et al., 2014), L2 (Szegedy et al., 2013; Carlini & Wagner, 2017), and L1 (Chen et al., 2018). We rectify this deficiency with four novel adversarial attacks: JPEG, Fog, Snow, and Gabor. Our attacks are differentiable and fast, while optimizing over enough parameters to be strong. We show example adversarial images in Figure 1 and compare stochastic and adversarial distortions in Figure 2.

Our novel attacks provide a range of test-time adversaries visually and semantically distinct from L∞ and L2 attacks. Namely, they cause distortions with large L∞ and L2 norm, but result in images that are perceptually close to the original. These attacks are intended as unforeseen attacks not used during training, allowing them to evaluate whether a defense can generalize from L∞ or L2 to a more varied set of distortions than current evaluations. Though our attacks are not exhaustive, performing well against them already demonstrates robustness to occlusion, spatial similarity, and simulated weather, which are absent from previous evaluations.

Our attacks create an adversarial image x′ from a clean image x with true label y. Let model f map images to a softmax distribution, and let ℓ(f(x), y) be the cross-entropy loss. Given a target class y′ ≠ y, our attacks attempt to find a valid image x′ such that (1) the attacked image x′ is obtained by applying a distortion (of size controlled by a parameter ε) to x, and (2) the loss ℓ(f(x′), y′) is minimized. An unforeseen adversarial attack is a white- or black-box adversarial attack unknown to the defense designer which does not change the true label of x according to an oracle or human." }, { "heading": "3.1 FOUR NEW UNFORESEEN ATTACKS", "text": "JPEG. JPEG applies perturbations in a JPEG-encoded space of compressed images rather than in raw pixel space. More precisely, JPEG compression is a linear transform JPEG which applies colorspace conversion, the discrete cosine transform, and then quantization. Our JPEG attack imposes the L∞-constraint

‖JPEG(x) − JPEG(x′)‖∞ ≤ ε

on the attacked image x′. We optimize z = JPEG(x′) under this constraint to find an adversarial perturbation in the resulting frequency space. The perturbed frequency coefficients are quantized, and we then apply a right-inverse of JPEG to obtain the attacked image x′ in pixel space. We use ideas from Shin & Song (2017) to make this differentiable. The resulting attack is conspicuously distinct from Lp attacks.

Fog. Fog simulates worst-case weather conditions. Robustness to adverse weather is a safety-critical priority for autonomous vehicles, and Figure 2 shows Fog provides a more rigorous stress test than stochastic fog (Hendrycks & Dietterich, 2019). Fog creates adversarial fog-like occlusions by adversarially optimizing parameters in the diamond-square algorithm (Fournier et al., 1982) typically used to render stochastic fog effects.

This algorithm starts with random perturbations to the four corner pixels of the image. At step t, it iteratively perturbs pixels at the centers of the squares and diamonds formed by the pixels perturbed at step t−1. The perturbation of a step t pixel is the average of the neighboring step t−1 perturbations plus a parameter value which we adversarially optimize. We continue this process until all pixels have been perturbed; the outcome is a fog-like distortion of the original image.
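A simplified sketch of this recursion in NumPy; here the offsets stand in for the adversarially optimized parameters, and the attack’s actual differentiable renderer may differ:

import numpy as np

def diamond_square(n, offsets):
    # offsets: iterator of (adversarially chosen) perturbations, one per pixel.
    size = 2 ** n + 1
    g = np.zeros((size, size))
    for corner in [(0, 0), (0, -1), (-1, 0), (-1, -1)]:
        g[corner] = next(offsets)
    step = size - 1
    while step > 1:
        half = step // 2
        # Diamond step: the center of each square gets the mean of its
        # four corners plus an offset.
        for i in range(0, size - 1, step):
            for j in range(0, size - 1, step):
                g[i + half, j + half] = (g[i, j] + g[i + step, j]
                                         + g[i, j + step]
                                         + g[i + step, j + step]) / 4 + next(offsets)
        # Square step: the center of each diamond gets the mean of its
        # available axis-aligned neighbors plus an offset.
        for i in range(0, size, half):
            for j in range((i + half) % step, size, step):
                neighbors = [g[i + di, j + dj]
                             for di, dj in ((-half, 0), (half, 0), (0, -half), (0, half))
                             if 0 <= i + di < size and 0 <= j + dj < size]
                g[i, j] = sum(neighbors) / len(neighbors) + next(offsets)
        step = half
    return g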
Snow. Snow simulates snowfall with occlusions of randomly located small image regions representing snowflakes. Because the distortions caused by snowflakes are not differentiable in their locations, we instead place occlusions representing snowflakes at randomly chosen locations and orientations and adversarially optimize their intensities. This choice results in a fast, differentiable, and strong attack. Compared to synthetic stochastic snow (Hendrycks & Dietterich, 2019), our adversarial snow is faster and includes snowflakes at differing angles. Figure 2 shows adversarial snow exposes model weaknesses more effectively than the easier stochastic, average-case snow.

Gabor. Gabor spatially occludes the image with visually diverse Gabor noise (Lagae et al., 2009). Gabor noise is a form of band-limited anisotropic procedural noise which convolves a parameter mask with a Gabor kernel, the product of a Gaussian kernel and a harmonic kernel. We choose the Gabor kernel randomly and adversarially optimize the parameters of the mask, starting from a sparse initialization. We apply spectral variance normalization (Co et al., 2019) to the resulting distortion and add it to the input image to create the attack." }, { "heading": "3.2 IMPROVING EXISTING ATTACKS", "text": "Elastic modifies the attack of Xiao et al. (2018); it warps the image by distortions x′ = Flow(x, V), where V : {1, . . . , 224}² → R² is a vector field on pixel space, and Flow sets the value of pixel (i, j) to the bilinearly interpolated original value at (i, j) + V(i, j). We construct V by smoothing a vector field W with a Gaussian kernel (size 25 × 25, σ ≈ 3 for a 224 × 224 image) and optimize W under ‖W(i, j)‖∞ ≤ ε for all i, j. The resulting attack is suitable for large-scale images. The other three attacks are L1, L2, and L∞ attacks, but we improve the L1 attack. For L∞ and L2 constraints, we use randomly-initialized projected gradient descent (PGD), which applies gradient descent and projection to the L∞ and L2 balls (Madry et al., 2017). Projection is difficult for L1, and previous L1 attacks rely on computationally intensive methods for it (Chen et al., 2018; Tramèr & Boneh, 2019). We replace PGD with the Frank-Wolfe algorithm (Frank & Wolfe, 1956), which optimizes a linear function instead of projecting at each step (pseudocode in Appendix D). This makes our L1 attack more principled than previous implementations.
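To illustrate the idea (the authoritative version is the pseudocode in Appendix D, not reproduced here), a sketch of one Frank-Wolfe step for the L1 attack in PyTorch, where the linear minimization oracle over the L1 ball concentrates all mass on the single largest-magnitude gradient coordinate:

import torch

def frank_wolfe_l1_step(x_adv, x_clean, grad, eps, step):
    gamma = 2.0 / (step + 2.0)  # standard Frank-Wolfe step-size schedule
    flat_grad = grad.view(grad.size(0), -1)
    # Vertex of the L1 ball of radius eps around x_clean that minimizes the
    # linearized target-class loss: move against the gradient on the
    # coordinate with the largest gradient magnitude.
    idx = flat_grad.abs().argmax(dim=1)
    rows = torch.arange(flat_grad.size(0), device=grad.device)
    s = x_clean.clone().view(flat_grad.size(0), -1)
    s[rows, idx] -= eps * flat_grad[rows, idx].sign()
    s = s.view_as(x_clean)
    # A convex combination of feasible points stays inside the L1 ball.
    return (1.0 - gamma) * x_adv + gamma * s

A clip to the valid pixel range would typically follow each step.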
4 ImageNet-UA: MEASURING ROBUSTNESS TO UNFORESEEN ATTACKS

We propose the framework ImageNet-UA and its CIFAR-10 analogue CIFAR-10-UA to measure and summarize model robustness while fulfilling the following desiderata: (1) defenses should be evaluated against a broad threat model through a diverse set of attacks, (2) defenses should exhibit generalization to attacks not exactly identical to train-time attacks, and (3) the range of distortion sizes used for an attack must be wide enough to avoid misleading conclusions caused by overly weak or strong versions of that attack (Figure 3).

The ImageNet-UA evaluation framework aggregates robustness information into a single measure, the mean Unforeseen Adversarial Robustness (mUAR). The mUAR is an average, over six different attacks, of the Unforeseen Adversarial Robustness (UAR), a metric which assesses the robustness of a defense against a specific attack by using a wide range of distortion sizes. UAR is normalized using a measure of attack strength, the ATA, which we now define.

Adversarial Training Accuracy (ATA). The Adversarial Training Accuracy ATA(A, ε) estimates the strength of an attack A against adversarial training (Madry et al., 2017), one of the strongest known defense methods. For a distortion size ε, it is the best adversarial test accuracy against A achieved by adversarial training against A. We allow a possibly different distortion size ε′ during training, since this can improve accuracy, and we choose a fixed architecture for each dataset.

For ImageNet-100, we choose ResNet-50 for the architecture, and for CIFAR-10 we choose ResNet-56. When evaluating a defense with an architecture other than ResNet-50 or ResNet-56, we recommend using ATA values computed with these architectures to enable consistent comparison. To estimate ATA(A, ε) in practice, we evaluate models adversarially trained against distortion size ε′ for ε′ in a large range (we describe this range at the end of this section).

UAR: Robustness Against a Single Attack. The UAR, a building block for the mUAR, averages a model’s robustness to a single attack over six distortion sizes ε1, . . . , ε6 chosen for each attack (we describe the selection procedure at the end of this section). It is defined as

UAR(A) := 100 × [ ∑_{k=1}^{6} Acc(A, εk, M) ] / [ ∑_{k=1}^{6} ATA(A, εk) ],    (1)

where Acc(A, εk, M) is the accuracy of a model M after attack A at distortion size εk. The normalization in (1) makes attacks of different strengths more commensurable in a stable way. We give the values of ATA(A, εk) and εk for our attacks on ImageNet-100 and CIFAR-10 in Tables 4 and 5 (Appendix B), allowing computation of the UAR of a defense against a single attack with six adversarial evaluations and no adversarial training.

mUAR: Mean Unforeseen Attack Robustness. We summarize a defense’s performance on ImageNet-UA with the mean Unforeseen Attack Robustness (mUAR), an average of UAR scores for the L1, Elastic, JPEG, Fog, Snow, and Gabor attacks:

mUAR := (1/6) [ UAR(L1) + UAR(Elastic) + UAR(JPEG) + UAR(Fog) + UAR(Snow) + UAR(Gabor) ].

Our measure mUAR estimates robustness to a broad threat model containing six unforeseen attacks at six distortion sizes each, meaning that a high mUAR requires generalization to several held-out attacks. In particular, it cannot be achieved by the common practice of engineering defenses for a single attack, which Figure 4 shows does not necessarily provide robustness to different attacks.

Our four novel attacks play a crucial role in mUAR by allowing us to estimate robustness to a sufficiently large set of adversarial attacks. As is customary when studying train-test mismatches and distributional shift, we advise against adversarially training with these six attacks when evaluating ImageNet-UA, to preserve the validity of mUAR, though we encourage training with other attacks.
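Spelled out as code, the computation in (1) and the mUAR average amount to the following sketch; the ATA values would come from Tables 4 and 5:

ATTACKS = ["L1", "Elastic", "JPEG", "Fog", "Snow", "Gabor"]

def uar(accuracies, atas):
    # accuracies, atas: adversarial accuracies and ATA values at the six
    # calibrated distortion sizes eps_1..eps_6 for one attack.
    return 100.0 * sum(accuracies) / sum(atas)

def muar(acc_by_attack, ata_by_attack):
    # Both arguments: dicts mapping attack name -> list of six values.
    return sum(uar(acc_by_attack[a], ata_by_attack[a]) for a in ATTACKS) / len(ATTACKS)

Distortion Sizes. We explain the ε′ values used to estimate ATA and the choice of ε1, . . . , ε6 used to define UAR. This calibration of distortion sizes adjusts for the fact (Figure 3) that adversarial robustness against an attack may vary drastically with distortion size.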
Further, the relation between distortion size and attack strength varies between attacks, so too many or too few εk values in a certain range may cause an attack to appear artificially strong or weak according to UAR.\nWe choose distortion sizes between εmin and εmax as follows. The minimum distortion size εmin is the largest ε for which the adversarial accuracy of an adversarially trained model at distortion size ε is comparable to that of a model trained and evaluated on unattacked data (for ImageNet-100, within 3 of 87). The maximum distortion size εmax is the smallest ε which either reduces adversarial accuracy of an adversarially trained model at distortion size ε below 25 or yields images confusing humans (adversarial accuracy can remain non-zero in this case).\nAs is typical in recent work on adversarial examples (Athalye et al., 2018b; Evtimov et al., 2017; Dong et al., 2019; Qin et al., 2019), our attacks can be perceptible at large distortion sizes. We make this choice to reflect perceptibility of attacks in real world threat models per Gilmer et al. (2018).\nFor ATA, we evaluate against models adversarially trained with ε′ increasing geometrically from εmin to εmax by factors of 2. We then choose εk as follows: We compute ATA at ε increasing geometrically from εmin to εmax by factors of 2 and take the size-6 subset whose ATA values have minimum `1-distance to the ATA values of the L∞ attack in Table 4 (Appendix B.1). For example, for Gabor, (εmin, εmax) = (6.25, 3200), so we compute ATAs at the 10 values ε = 6.25, . . . , 3200. Viewing size-6 subsets of the ATAs as vectors with decreasing coordinates, we select εk for Gabor corresponding to the vector with minimum `1-distance to the ATA vector for L∞.\n5 NEW INSIGHTS FROM ImageNet-UA\nWe use ImageNet-UA to assess existing methods for adversarial defense and evaluation. First, ImageNet-UA reveals that L∞ trained defenses fail to generalize to different attacks, indicating substantial weakness in current L∞ adversarial robustness evaluation. We establish a baseline for ImageNet-UA using L2 adversarial training which is difficult to improve upon by adversarial training alone. Finally, we show non-adversarially trained models can still improve robustness on ImageNet-UA over standard models and suggest this as a direction for further inquiry." }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "We adversarially train 48 models against the 8 attacks from Section 3 and evaluate against targeted attacks. We use the CIFAR-10 and ImageNet-100 datasets for ImageNet-UA and CIFAR-10-UA. ImageNet-100 is a 100-class subset of ImageNet-1K (Deng et al., 2009) containing every tenth class by WordNet ID order; we use a subset of ImageNet-1K due to the high compute cost of adversarial training. We use ResNet-56 for CIFAR-10 and ResNet-50 from torchvision for ImageNet-100 (He et al., 2016). We provide training hyperparameters in Appendix A.\nTo adversarially train against attack A, at each mini-batch we select a uniform random (incorrect) target class for each training image. For maximum distortion size ε, we apply targeted attack A to the current model with distortion size ε′ ∼ Uniform(0, ε) and take a SGD step using only the attacked images. Randomly scaling ε′ improves performance against smaller distortions.\nWe train on 10-step attacks for attacks other than Elastic, where we use 30 steps due to a harder optimization. 
For the Lp, JPEG, and Elastic attacks, we use step size ε/√steps; for Fog, Gabor, and Snow, we use step size √(0.001/steps), because the latent space is independent of ε. These choices have optimal rates for non-smooth convex functions (Nemirovski & Yudin, 1978; 1983). We evaluate on 200-step targeted attacks with a uniform random (incorrect) target, using more steps for evaluation than for training per best practices (Carlini et al., 2019b).

Figure 4 summarizes the ImageNet-100 results. Full results for ImageNet-100 and CIFAR-10 are in Appendix E, and robustness checks with respect to random seed and attack iterations are in Appendix F.

5.2 ImageNet-UA REVEALS WEAKNESSES IN L∞ TRAINING AND TESTING

We use ImageNet-UA to reveal weaknesses in the common practices of L∞ robustness evaluation and L∞ adversarial training. We compute the mUAR and UAR(L∞) for models trained against the L∞ attack with distortion size ε and show the results in Figure 5. For small ε ≤ 4, mUAR and UAR(L∞) increase together with ε. For larger ε ≥ 8, UAR(L∞) continues to increase with ε, but the mUAR decreases, a fact which is not apparent from L∞ evaluation.

The decrease in mUAR while UAR(L∞) increases suggests that L∞ adversarial training begins to heavily fit L∞ distortions at the expense of generalization at larger distortion sizes. Thus, while it is the most commonly used defense procedure, L∞ training may not lead to improvements on other attacks or to real-world robustness.

Worse, L∞ evaluation against L∞ adversarial training at higher distortions indicates higher robustness. In contrast, the mUAR reveals that L∞ adversarial training at higher distortions in fact hurts robustness against a more diverse set of attacks. Thus, L∞ evaluation gives a misleading picture of robustness. This is particularly important because L∞ evaluation is the most ubiquitous measure of robustness in deep learning (Goodfellow et al., 2014; Madry et al., 2017; Xie et al., 2018).

5.3 LIMITS OF ADVERSARIAL TRAINING FOR ImageNet-UA

We establish a baseline on ImageNet-UA using L2 adversarial training, but show that a significant performance gap remains even for more sophisticated existing adversarial training methods. To do so, we evaluate several adversarial training methods on ImageNet-UA and show the results in Table 1.

Our results show that L2-trained models outperform L∞-trained models and have significantly improved absolute performance, increasing the mUAR from 14.0 to 50.7 compared to an undefended model. The individual UAR values in Figure 7 (Appendix E.1) improve substantially against all attacks other than Fog, including several (Elastic, Gabor, Snow) of an extremely different nature from L2.

This result suggests pushing adversarial training further by training against multiple attacks simultaneously via joint adversarial training (Jordan et al., 2019; Tramèr & Boneh, 2019), detailed in Appendix C. Table 2 shows that, despite using twice the compute of L2 training, (L∞, L2) joint training only improves the mUAR from 50.7 to 50.9. We thus recommend L2 training as a baseline for ImageNet-UA, though there is substantial room for improvement compared to the highest UARs against individual attacks in Figure 4, which are all above 80 and often above 90.

5.4 ImageNet-UA ROBUSTNESS THROUGH NON-ADVERSARIAL DEFENSES

We find that methods can improve robustness to unforeseen attacks without adversarial training. Table 3 shows the mUAR for SqueezeNet (Iandola et al., 2017), ResNeXts (Xie et al., 2016), and ResNets.
For ImageNet-1K models, we mask 900 logits to predict the ImageNet-100 classes.

A popular defense against average-case distortions (Hendrycks & Dietterich, 2019) is Stylized ImageNet (Geirhos et al., 2019), which modifies training images using image style transfer in the hope of making networks rely less on textural features. Table 3 shows that it provides some improvement on ImageNet-UA. More recently, Lopes et al. (2019) propose training against Gaussian noise applied to small image patches, improving the mUAR by 3% over the ResNet-50 baseline. The second-largest mUAR improvement comes from training a ResNeXt on approximately 1 billion images (Mahajan et al., 2018). This three-orders-of-magnitude increase in training data yields a 5.4% mUAR increase over a vanilla ResNeXt baseline. Finally, Hendrycks et al. (2020) create AugMix, which randomly mixes stochastically generated augmentations. Although AugMix uses neither random nor adversarial noise, it improves robustness to unforeseen attacks by 10%.

These results imply that defenses not relying on adversarial examples can improve ImageNet-UA performance. They indicate that training on more data only somewhat increases robustness on ImageNet-UA, unlike many other robustness benchmarks (Hendrycks & Dietterich, 2019; Hendrycks et al., 2019) where more data helps tremendously (Orhan, 2019). While models with lower clean accuracy (e.g., SqueezeNet and ResNet-18) have higher UAR(L∞) and UAR(L2) than many other models, there is no clear difference in mUAR. Last, these non-adversarial defenses have minimal cost to accuracy on clean examples, unlike adversarial defenses. Much remains to explore, and we hope non-adversarial defenses will be a promising avenue toward adversarial robustness.

6 CONCLUSION

This work proposes a framework, ImageNet-UA, to evaluate the robustness of a defense against unforeseen attacks. Because existing adversarial defense techniques do not scale to multiple attacks, developing models which can defend against attacks not seen at train time is essential for robustness. Our results using ImageNet-UA show that the common practice of L∞ training and evaluation fails to achieve or measure this broader form of robustness. As a result, it can provide a misleading sense of robustness. By incorporating our 4 novel and strong adversarial attacks, ImageNet-UA enables evaluation on the diverse held-out attacks necessary to measure progress towards robustness more broadly.

A TRAINING HYPERPARAMETERS

For ImageNet-100, we trained on machines with 8 NVIDIA V100 GPUs using standard data augmentation (He et al., 2016). Following best practices for multi-GPU training (Goyal et al., 2017), we ran synchronized SGD for 90 epochs with batch size 32×8 and a learning-rate schedule with 5 "warm-up" epochs and decays by a factor of 10 at epochs 30, 60, and 80. The initial learning rate after warm-up was 0.1, momentum was 0.9, and weight decay was 10⁻⁴. For CIFAR-10, we trained on a single NVIDIA V100 GPU for 200 epochs with batch size 32, initial learning rate 0.1, momentum 0.9, and weight decay 10⁻⁴. We decayed the learning rate at epochs 100 and 150. A sketch of the ImageNet-100 schedule is given below.
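A minimal sketch of this warm-up plus step-decay schedule; the 0-indexed epoch convention and the linear warm-up shape are our own assumptions, not taken from the training code.

```python
def lr_at_epoch(epoch, base_lr=0.1, warmup_epochs=5, decay_epochs=(30, 60, 80)):
    # Linear warm-up to base_lr over the first 5 epochs, then decay by a
    # factor of 10 at epochs 30, 60, and 80 (ImageNet-100 schedule).
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    num_decays = sum(1 for e in decay_epochs if epoch >= e)
    return base_lr * (0.1 ** num_decays)
```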
B CALIBRATION OF ImageNet-UA AND CIFAR-10-UA

B.1 CALIBRATION FOR ImageNet-UA

Calibrated distortion sizes and ATA values are given in Table 4.

B.2 CALIBRATION FOR CIFAR-10-UA

The ε calibration procedure for CIFAR-10 was similar to that used for ImageNet-100. We started with small εmin values and increased ε geometrically with ratio 2 until the adversarial accuracy of an adversarially trained model dropped below 40. Note that this threshold is higher for CIFAR-10 than for ImageNet-100 because there are fewer classes. The resulting ATA values for CIFAR-10 are shown in Table 5.

C JOINT ADVERSARIAL TRAINING

Our joint adversarial training procedure for two attacks A and A′ is as follows. At each training step, we compute the attacked image under both A and A′ and backpropagate with respect to the gradients induced by the image with the greater loss. This corresponds to the "max" loss of Tramèr & Boneh (2019). We train ResNet-50 models for (L∞, L2), (L∞, L1), and (L∞, Elastic) on ImageNet-100.

Table 6 shows that training against (L∞, L1) is worse than training against L1 at the same distortion size and performs particularly poorly at large distortion sizes. Table 7 shows that joint training against (L∞, Elastic) also performs poorly, never matching the UAR score of training against Elastic at moderate distortion size (ε = 2).
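A minimal sketch of this "max"-loss step, reading the maximum as taken per image (a batch-level maximum is also a possible reading); the attack functions are placeholders as before, and for brevity we elide the random-target selection shown earlier.

```python
import torch
import torch.nn.functional as F

def joint_adv_train_step(model, attack_a, attack_b, optimizer,
                         images, labels, eps_a, eps_b):
    # Attack the current model with both attacks A and A'.
    adv_a = attack_a(model, images, labels, eps_a)
    adv_b = attack_b(model, images, labels, eps_b)
    # Per-image losses under each attack (no reduction yet).
    loss_a = F.cross_entropy(model(adv_a), labels, reduction="none")
    loss_b = F.cross_entropy(model(adv_b), labels, reduction="none")
    # Backpropagate only the gradients induced by the harder attack.
    loss = torch.maximum(loss_a, loss_b).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that each step runs two attacks, which is the source of the doubled compute mentioned in Section 5.3.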
D THE FRANK-WOLFE ALGORITHM

We chose the Frank-Wolfe algorithm for optimizing the L1 attack, as Projected Gradient Descent would require projecting onto a truncated L1 ball, which is a complicated operation. In contrast, Frank-Wolfe only requires optimizing linear functions g⊤x over a truncated L1 ball; this can be done by sorting coordinates by the magnitude of g and moving the top k coordinates to the boundary of their range (with k chosen by binary search). This is detailed in Algorithm 1.

Algorithm 1 Pseudocode for the Frank-Wolfe algorithm for the L1 attack.

1: Input: function f, initial input x ∈ [0, 1]^d, L1 radius ρ, number of steps T.
2: Output: approximate maximizer x̄ of f over the truncated L1 ball B_1(ρ; x) ∩ [0, 1]^d centered at x.
3:
4: x^(0) ← RandomInit(x) {Random initialization}
5: for t = 1, . . . , T do
6:   g ← ∇f(x^(t−1)) {Obtain gradient}
7:   for k = 1, . . . , d do
8:     s_k ← index of the coordinate of g with the k-th largest magnitude
9:   end for
10:  S_k ← {s_1, . . . , s_k}
11:
12:  {Compute the move to the boundary of [0, 1] for each coordinate.}
13:  for i = 1, . . . , d do
14:    if g_i > 0 then
15:      b_i ← 1 − x_i
16:    else
17:      b_i ← −x_i
18:    end if
19:  end for
20:  M_k ← Σ_{i∈S_k} |b_i| {L1 perturbation from moving the k largest coordinates.}
21:  k* ← max{k | M_k ≤ ρ} {Choose the largest k satisfying the L1 constraint.}
22:
23:  {Compute x̂ maximizing g⊤x over the truncated L1 ball.}
24:  for i = 1, . . . , d do
25:    if i ∈ S_{k*} then
26:      x̂_i ← x_i + b_i
27:    else if i = s_{k*+1} then
28:      x̂_i ← x_i + (ρ − M_{k*}) · sign(g_i)
29:    else
30:      x̂_i ← x_i
31:    end if
32:  end for
33:  x^(t) ← (1 − 1/t) · x^(t−1) + (1/t) · x̂ {Average x̂ with the previous iterate}
34: end for
35: x̄ ← x^(T)
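As a companion to Algorithm 1, here is a NumPy sketch of the inner linear maximization (lines 7 to 32); it is our own illustration, using a greedy scan over sorted coordinates in place of binary search for k.

```python
import numpy as np

def linear_max_step(x, g, rho):
    """Maximize the linear function g . (x_hat - x) over the truncated
    L1 ball {x_hat : ||x_hat - x||_1 <= rho, x_hat in [0, 1]^d}."""
    # Move each coordinate toward the [0, 1] boundary in the gradient direction.
    b = np.where(g > 0, 1.0 - x, -x)
    # Visit coordinates in order of decreasing gradient magnitude.
    order = np.argsort(-np.abs(g))
    x_hat = x.copy()
    budget = rho
    for i in order:
        if abs(b[i]) <= budget:
            x_hat[i] = x[i] + b[i]                    # full move to the boundary
            budget -= abs(b[i])
        else:
            x_hat[i] = x[i] + budget * np.sign(g[i])  # partial move, budget spent
            break
    return x_hat
```

The outer Frank-Wolfe loop then averages, x^(t) ← (1 − 1/t) x^(t−1) + (1/t) x̂, as in line 33 of Algorithm 1.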
E FULL EVALUATION RESULTS

E.1 FULL EVALUATION RESULTS AND ANALYSIS FOR IMAGENET-100

We show the full results of all adversarial attacks against all adversarial defenses for ImageNet-100 in Figure 6. These results also include the L1-JPEG and L2-JPEG attacks, which are modifications of the JPEG attack applying Lp constraints in the compressed JPEG space instead of L∞ constraints. Full UAR scores for ImageNet-100 are provided in Figure 7.

[Figure 6: Accuracy of adversarial attack (column) against adversarially trained model (row) on ImageNet-100. Heatmap; color scale gives adversarial accuracy from 0.0 to 1.0.]

E.2 FULL EVALUATION RESULTS AND ANALYSIS FOR CIFAR-10

We show the results of adversarial attacks and defenses for CIFAR-10 in Figure 8. We experienced difficulty training the L2 and L1 attacks at distortion sizes greater than those shown and have omitted those runs, which we believe may be related to the small size of CIFAR-10 images. Full UAR values for CIFAR-10 are shown in Figure 9.

[Figure 8: Accuracy of adversarial attack (column) against adversarially trained model (row) on CIFAR-10. Heatmap; color scale gives adversarial accuracy from 0.0 to 1.0.]

F ROBUSTNESS OF OUR RESULTS

F.1 REPLICATION

We replicated our results for the first three rows of Figure 6 with different random seeds to see the variation in our results. As shown in Figure 10, deviations in results are minor.

[Figure 10: Replica of the first three block rows of Figure 6 with different random seeds. Deviations in results are minor.]

F.2 CONVERGENCE

We replicated the results in Figure 6 with 50 instead of 200 steps to see how the results changed based on the number of steps in the attack. As shown in Figure 11, the deviations are minor.

[Figure 11: Replica of Figure 6 with 50 steps instead of 200 at evaluation time. Deviations in results are minor.]
2020
null
SP:7d8d860da15b936e3976601cae537e18664c08e8
[ "This paper presents a theoretical analysis of regularization based approaches to the problem of continually learning a sequence of tasks. The point of the paper is to demonstrate shortcomings of these kinds of approaches, in the context of class-incremental learning where classes are observed once and one after another. The authors argue that these kinds of methods require task labels at test time to correctly distinguish classes from different tasks. " ]
In most machine learning algorithms, training data is assumed to be independent and identically distributed (iid). When this is not the case, the algorithms' performance suffers, leading to the well-known phenomenon of catastrophic forgetting. Algorithms dealing with it are gathered in the “Continual Learning” research field. In this paper, we study regularization-based approaches to continual learning and show that those approaches cannot learn to discriminate classes from different tasks in an elemental continual benchmark, the class-incremental setting. We provide theoretical reasoning to prove this shortcoming and illustrate it with experiments. Moreover, we show that it can have important consequences for multi-task reinforcement learning and for pre-trained models used for continual learning. We believe this paper is the first to propose a theoretical description of regularization shortcomings for continual learning.
[]
[ { "authors": [ "Rahaf Aljundi", "Eugene Belilovsky", "Tinne Tuytelaars", "Laurent Charlin", "Massimo Caccia", "Min Lin", "Lucas Page-Caccia" ], "title": "Online continual learning with maximal interfered retrieval", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Eden Belouadah", "Adrian Popescu" ], "title": "Deesil: Deep-shallow incremental learning", "venue": "CoRR, abs/1808.06396,", "year": 2018 }, { "authors": [ "Lucas Caccia", "Eugene Belilovsky", "Massimo Caccia", "Joelle Pineau" ], "title": "Online Learned Continual Compression with Stacked Quantization Module", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Arslan Chaudhry", "Marc’Aurelio Ranzato", "Marcus Rohrbach", "Mohamed Elhoseiny" ], "title": "Efficient lifelong learning with A-GEM", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Tarin Clanuwat", "Mikel Bober-Irizar", "Asanobu Kitamoto", "Alex Lamb", "Kazuaki Yamamoto", "David Ha" ], "title": "Deep learning for classical japanese literature", "venue": null, "year": 2018 }, { "authors": [ "Robert M. French" ], "title": "Catastrophic forgetting in connectionist networks", "venue": "Trends in Cognitive Sciences,", "year": 1999 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Saihui Hou", "Xinyu Pan", "Chen Change Loy", "Zilei Wang", "Dahua Lin" ], "title": "Learning a unified classifier incrementally via rebalancing", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proc. of the national academy of sciences,", "year": 2017 }, { "authors": [ "Matthias De Lange", "Rahaf Aljundi", "Marc Masana", "Sarah Parisot", "Xu Jia", "Ales Leonardis", "Gregory Slabaugh", "Tinne Tuytelaars" ], "title": "Continual learning: A comparative study on how to defy forgetting in classification tasks, 2019", "venue": "URL https://arxiv.org/abs/1909.08383", "year": 1909 }, { "authors": [ "Sang-Woo Lee", "Jin-Hwa Kim", "Jaehyun Jun", "Jung-Woo Ha", "Byoung-Tak Zhang" ], "title": "Overcoming catastrophic forgetting by incremental moment matching", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Timothée Lesort" ], "title": "Continual learning data former, January 2020", "venue": "URL https://doi.org/10", "year": 2020 }, { "authors": [ "Timothée Lesort", "Hugo Caselles-Dupré", "Michael Garcia-Ortiz", "Jean-François Goudou", "David Filliat. Generative Models from the perspective of Continual Learning. In IJCNN - International Joint Conference on Neural Networks", "Budapest", "Hungary", "July" ], "title": "URL https://hal", "venue": "archives-ouvertes.fr/hal-01951954.", "year": 2019 }, { "authors": [ "Timothée Lesort", "Alexander Gepperth", "Andrei Stoian", "David Filliat" ], "title": "Marginal replay vs conditional replay for continual learning. 
In Artificial Neural Networks and Machine Learning - ICANN 2019", "venue": "28th International Conference on Artificial Neural Networks,", "year": 2019 }, { "authors": [ "Timothée Lesort", "Vincenzo Lomonaco", "Andrei Stoian", "Davide Maltoni", "David Filliat", "Natalia Díaz-Rodríguez" ], "title": "Continual Learning for Robotics: Definition, Framework, Learning Strategies, Opportunities and Challenges. working paper or preprint, November 2019c. URL https: //hal.archives-ouvertes.fr/hal-02381343", "venue": null, "year": 2019 }, { "authors": [ "Zhizhong Li", "Derek Hoiem" ], "title": "Learning without forgetting", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2017 }, { "authors": [ "James Martens", "Roger Grosse" ], "title": "Optimizing neural networks with kronecker-factored approximate curvature", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Hippolyt Ritter", "Aleksandar Botev", "David Barber" ], "title": "Online structured laplace approximations for overcoming catastrophic forgetting", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Jonathan Schwarz", "Wojciech Czarnecki", "Jelena Luketina", "Agnieszka Grabska-Barwinska", "Yee Whye Teh", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progress & compress: A scalable framework for continual learning", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Hanul Shin", "Jung Kwon Lee", "Jaehong Kim", "Jiwon Kim" ], "title": "Continual learning with deep generative replay", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "René Traoré", "Hugo Caselles-Dupré", "Timothée Lesort", "Te Sun", "Guanghang Cai", "Natalia Díaz Rodríguez", "David Filliat" ], "title": "Discorl: Continual reinforcement learning via policy distillation", "venue": "URL http://arxiv.org/abs/1907.05855", "year": 1907 }, { "authors": [ "Chenshen Wu", "Luis Herranz", "Xialei Liu", "yaxing wang", "Joost van de Weijer", "Bogdan Raducanu" ], "title": "Memory replay gans: Learning to generate new categories without forgetting", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Yue Wu", "Yinpeng Chen", "Lijuan Wang", "Yuancheng Ye", "Zicheng Liu", "Yandong Guo", "Yun Fu" ], "title": "Large scale incremental learning", "venue": "CoRR, abs/1905.13260,", "year": 2019 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "∼ P (f(x" ], "title": "The diagonal approximation allows to save only card(θ) values in Ft. - K-FAC Fisher approximation Ritter et al. (2018) is very similar to EWC but approximates the Fisher matrices with a Kronecker factorization (K-FAC) Martens & Grosse (2015) to improve the expressiveness of the posterior over the diagonal approximation", "venue": null, "year": 2015 }, { "authors": [ "Zenke" ], "title": "The original idea is to imitate synapse biological activity. Therefore, each synapse accumulates task relevant information over time, and exploits this information to rapidly store new memories without forgetting old", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Continual Learning is a sub-field of machine learning dealing with non-iid (identically and independently distributed) data French (1999); Lesort et al. (2019c). Its goal is to learn the global optima to an optimization problem where the data distribution changes through time. This is typically the case in databases that get regularly augmented with new data or when data is streamed to the algorithms with limited storage possibilities.\nContinual learning (CL) looks for alternative methods to the iid training to avoid the complete retraining with all data each time new data is available. CL algorithms propose different memory storage approaches to collect information from past learning experiences and learning algorithms to continue to learn with this memory and new data.\nIn this paper, we propose to study the class-incremental setting with regularization based methods. The class-incremental setting consists of learning sets of classes incrementally. Each task is composed of new classes. As the training ends, the model should classify data from all classes correctly. Without task labels for inferences, the model needs to both learn the discrimination of intra-task classes and the trans-task classes discrimination (i.e. distinctions between classes from different tasks). On the contrary, if the task label is available for inferences, only the discrimination of intra-task classes needs to be learned. The discrimination upon different tasks is given by the task label. Learning without access to task labels at test time is then much more complex since it needs to discriminate data that are not available at the same time in the data stream.\nIn such setting, we would like to demonstrate that regularization does not help to learn the discrimination between tasks. For example, if a first task is to discriminate white cats vs black cats and the second is the same with dogs, a regularization based method does not provide the learning criteria to learn features to distinguish white dogs from white cats.\nWe consider as regularization methods, those who aim at protecting important weights learned from past tasks without using a buffer of old data or any replay process. Those methods are widely used for continual learning Kirkpatrick et al. (2017); Zenke et al. (2017); Ritter et al. (2018); Schwarz et al. (2018). In this paper, we show that in the classical setting of class-incremental tasks, this approach has theoretical limitations and can not be used alone. Indeed, it does not provide any learning criteria to distinguish classes from different tasks. Therefore, in practice, regularization algorithms need external information to make an inference in class-incremental settings. It is provided by the task\nlabel at test time. However, relying on the task label to make inferences is an important limitation for algorithms’ autonomy, i.e. its capacity to run without external information, in most application scenarios.\nWe believe this paper presents important results for a better understanding of CL which will help practitioners to choose the appropriate approach for practical settings." }, { "heading": "2 RELATED WORKS", "text": "In continual learning, algorithms protect knowledge from catastrophic forgetting French (1999) by saving them into a memory. The memory should be able to incorporate new knowledge and protect existing knowledge from modification. 
In continual learning, we distinguish four types of memorization categories dynamic architecture Rusu et al. (2016); Li & Hoiem (2017), rehearsal Chaudhry et al. (2019); Aljundi et al. (2019); Belouadah & Popescu (2018); Wu et al. (2019); Hou et al. (2019); Caccia et al. (2019), generative replay Shin et al. (2017); Lesort et al. (2019a); Wu et al. (2018) and regularization Kirkpatrick et al. (2017); Zenke et al. (2017); Ritter et al. (2018); Schwarz et al. (2018).\nIn this paper, we are interested in the capacity of making inferences without task labels at test time (test task label). The task label t (typically a simple integer) is an abstract representation built to help continual algorithms to learn. It is designed to index the current task and notify if the task changes Lesort et al. (2019c). Dynamic architecture is a well-known method that needs the task label at test time for an inference. Indeed, since the inference path is different for different tasks, the task test label is needed to use the right path through the neural network Rusu et al. (2016); Li & Hoiem (2017). Rehearsal and Generative Replay methods generally need the task label at training time but not for inferences Lesort et al. (2019a;b). Finally, Regularization methods are often assumed as methods that need task labels only at training time. In this article, we show that in class-incremental settings, it is also necessary at test time.\nTest task labels have been used in many continual learning approaches, in particular in those referred to as “multi-headed” Lange et al. (2019). However, the need for task labels for inferences makes algorithms unable to make autonomous predictions and therefore we believe that this requirement is not in the spirit of continual learning. Continual learning is about creating autonomous algorithms that can learn in dynamic environments Lesort et al. (2019c)." }, { "heading": "3 REGULARIZATION APPROACH", "text": "In this section, we present the formalism we use and we present the class-incremental learning problem with a regularization based approach." }, { "heading": "3.1 FORMALISM", "text": "In this paper, we assume that the data stream is composed of N disjoint tasks learned sequentially one by one (with N >= 2). Task t is noted Tt and Dt is the associated dataset. The task label t is a simple integer indicating the task index. We refer to the full sequence of tasks as the continuum, noted CN . The dataset combining all data until task t is noted Ct. While learning task Tt, the algorithm has access to data from Dt only.\nWe study a disjoint set of classification tasks where classes of each task only appear in this task and never again. We assume at least two classes per task (otherwise a classifier cannot learn).\nLet f be a function parametrized by θ that implement the neural network’s model. At each task t the model learn an optimal set of parameters θ∗t optimizing the task loss `Dt(·). Since we are in a continual learning setting, θ∗t should also be an optima for all tasks Tt′ , ∀t′ ∈ J0, tK. We consider the class-incremental setting with no test task label. It means that an optima θ∗1 for T1 is a set of parameters which at test time will, for any data point x from D0 ∪ D1, classify correctly without knowing if x comes from T0 or T1. Therefore, in our continual learning setting, the loss to optimize when learning a given task t is augmented with a remembering loss:\n`Ct(f(x;θ), y) = `Dt(f(x;θ), y) + λΩ(Ct−1) (1) where `Ct(.) is the continual loss, `Dt(.) 
is the current task loss, Ω(Ct−1) is the remembering loss with Ct−1 represents past tasks, λ is the importance parameter." }, { "heading": "3.2 PROBLEM", "text": "In continual learning, the regularization approach is to define Ω(·) as a regularization term to maintain knowledge fromCt−1 in the parameters θ such as while learning a new task Tt, f(x;θ∗t−1) ≈ f(x;θ), ∀x ∈ Ct−1. In other words, it aims to keep `Ct−1(f(x;θ), y) low ∀x ∈ Ct−1 while learning Tt. The regularization term Ωt−1 act as a memory of θ∗t−1. This memory term depends on the learned parameters θ∗t−1, on `Ct−1 the loss computed on Tt−1 and the current parameters θ. Ωt−1 memorizes the optimal state of the model at Tt−1 and generally the importance of each parameter with regard to the loss `Ct−1 . We note ΩCt−1 the regularization term memorizing past tasks optimal parameters.\nWhen learning the task Tt, the loss to optimize is then:\n`Ct(f(x;θ), y) = `Dt(f(x;θ), y) + λΩCt−1(θ ∗ t−1, `Ct−1 ,θ) (2)\nEq. 2 is similar to eq. 1 but in this case the function Ω(·) is a regularization term depending on past optimal parameters θ∗t−1, loss on previous tasks `Ct−1 and the vector of current model parameters θ only. It could be, for example, a matrix pondering weights importance in previous tasks Kirkpatrick et al. (2017); Ritter et al. (2018); Zenke et al. (2017)." }, { "heading": "4 PROPOSITIONS", "text": "In this section, we present the proposition concerning the shortcomings of regularization methods in class-incremental settings. We first present definitions and lemmas to prepare for the proposition." }, { "heading": "4.1 PRELIMINARY DEFINITION / LEMMA", "text": "Definition 1. Linear separability Let S and S′ be two sets of points in an n-dimensional Euclidean space. S and S′ are linearly separable if there exists n+ 1 real numbers ω1, ω2, ..., ωn, k such that ∀x ∈ S, ∑n\ni=1 ωixi > k and ∀x ∈ S′, ∑n i=1 ωixi < k\nwhere xi the ith component of x. This means that two classes are linearly separable in an embedded space if there exists a hyperplane separating both classes of data points.\nThis property can also be written, ∀x ∈ S and ∀x′ ∈ S′, (q · x + q0) · (q · x′ + q0) < 0. With q = [ω1, ω2, ..., ωn] and q0 = −k respectively the normal vector and position vector of a hyperplane Q. In the case of learning a binary classification with linear model, the model is a hyperplane separating two dataset. As soon as this equation can be solved, then it is possible to define a function f(x, θ) and a loss `(.) to learn a hyperplane that will separate S and S′ perfectly. Definition 2. Interferences In machine learning, interferences are conflicts between two (or more) objective functions leading to prediction errors. There are interferences when optimizing one objective function degrades the optimization of, at least, another one.\nAs such, optimizing one of the objective function increases the error on the other one. In continual learning, interferences happen often after a drift in the data distribution. The loss on previous data is increased with the optimization of the loss for the new data leading to interferences and catastrophic forgetting. Lemma 4.1. ∀(S, S′) bounded set of discrete points in Rn and linearly separable by a hyperplane Q. For any algorithm, it is impossible to assess Q as a separation hyperplane without access to S′ set.\nThe proof is in appendix B, but in an insightful way, for any bounded set of points S, there is a infinite number of linearly separable set of points. 
Thus, there exists an infinite number of potential\nseparating hyperplanes. If the second set of points S′ is not known, then it is not possible to choose among the infinite number of potential separating hyperplane which one is a correct one. And even if one is chosen, there is no way to tell if it is better or not than another.\nIn the context of machine learning, without an assessment criterion for a classification problem, it is not possible to learn a viable solution. Hence, we can not optimize the parameters. For binary classification, the Lemma 4.1 can be interpreted as: “The decision boundary between two classes can not be assessed nor learned if there is no access to data from both simultaneously”. Lemma 4.2. ∀(S, S′) two bounded datasets not linearly separable. For any algorithm, it is impossible to assess a function g(.) as a projection of S and S′ into a space were they are linearly separable without access to S′ set.\nThe proof is in appendix C, but in an insightful way, for any bounded set of points, there is an infinite number of projections of the initial set of point in a space where it could be linearly separable from another set of points. Then, If you don’t know the second set of points S′ you can not choose among the infinite number of potential projections which one is a good one. And if you ever choose one, you have no way to tell if it is better or not than another. In the context of binary classification, the previous lemma can be interpreted as: “Two classes representation cannot be disentangled if there is no access to data from both simultaneously”.\nIn those lemma, the concept of “not having access to” a certain dataset can both be applicable to not being able to sample data point from the distributions and to not have a model of the dataset. It can be generalized to not having access to any representative data distribution of a dataset." }, { "heading": "4.2 SHORTCOMINGS IN CLASS-INCREMENTAL TASKS", "text": "We now prove that in incremental-class tasks, it is not possible to discriminate classes from different tasks using only a regularization based memory. The main point is that, to correctly learn to discriminate classes over different tasks the model needs access to both data distributions simultaneously.\nIn regularization methods, the memory only characterizes the model and the important parameters as explained in Section 3.2. This memorization gives insight on some past data characteristics but it is not a model of their distributions. If we take the cat vs dog example, a model that needs to discriminate white cats from black cats will learn to discriminate black features from white features. This “knowledge” can be saved in Ω but Ω will not save the full characteristics of a cat because the model never has to learn it. We bring then the following proposition: Proposition 4.3. While learning a sequence of disjoint classification tasks, if the memory Ω of the past tasks is only dependent on trained weights and learning criterion of previous task and does not model the past distribution, it is not possible for deep neural networks to learn new tasks without interference.\nProof. 
The proof is organized in the following way: first, we present material necessary for the demonstration, then in a second part, we demonstrate that at any moment the classification task can be reduced to a binary classification task and in a third part we will show that we can not learn to solve this binary classification correctly.\nFirst part: In the context of learning with a deep neural network, we can decompose the model into a non-linear feature extractor g(·) and an output layer to predict a class y = argmax(softmax(A · g(x) + b)). With A and b, respectively the matrix of projection and the bias of the linear layer. softmax(.) is the output function that for a given class i in a logits output z gives softmax(zi) = e\nzi∑N−1 j=0 e zj . The softmax(.) function does not change the argmax result and\nonly regularize the output values and the gradient for later back propagation. We can thus remove it for our demonstration purposes.\nThe non-linear projection g(.) should, therefore, disentangle classes and the linear output layer learns to predict the good class. The output layer allows for all classes i to learn hyperplanes A[:, i] with bias b[i] such as: ∀i ∈ J1, NK\n∀(x, y) ∈ Ct, argmax i (A[:, i]h+ b[i]) = y (3)\nwith h = g(x).\nSecond part For the sake of the demonstration, we would like to reduce the multi-classes classification problem into a binary classification problem. Hence, we can artificially split classes into two groups: classes from the past YCt−1 and current classes YTt .\nWe can then ∀(x, y) ∈ Ct compute which class ŷCt−1 upon the past classes YCt−1 is the most probable and compute which class ŷTt upon the current classes YTt is the most probable.\nŷCt−1 = argmax i∈YCt−1 (A[:, i]h+ b[i]) and ŷTt = argmax i∈YTt (A[:, i]h+ b[i]) (4)\nHence, the equation 3 can be rewritten into a binary operation:\n∀(x, y) ∈ Ct, argmax i∈{ŷCt−1 ,ŷTt} (A[:, i]h+ b[i]) = y (5)\ny = argmax(A[:, ŷCt−1 ] · h+ b[ŷCt−1 ] , A[:, ŷTt ] · h+ b[ŷTt ]) = argmax(0, (A[:, ŷTt ]−A[:, ŷCt−1 ]) · h+ b[ŷTt ]− b[ŷCt−1 ])\n(6)\nEquation 6 can directly be rewritten into the linear separability equation from definition 1. To make a proper decision, we should have ∀(x, y) ∈ Ct, with g(x) = h and y = ŷCt−1 and ∀(x′, y′) ∈ Dt, with g(x′) = h′ and y′ = ŷTt .\n(q · h+ q0) · (q · h′ + q0) < 0 (7)\nThen, by identification, the classes ŷCt−1 and ŷTt need to be separated by the hyperplane Q defined by a normal vector q = A[:, ŷTt ]−A[:, ŷCt−1 ] and a position vector q0 = −(b[ŷTt ]− b[ŷCt−1 ]). This binary classification description highlight that it is essential to be able to discriminate any class ŷCt−1 from the past from any class ŷTt from the present for accurate predictions.\nThird part: In this part, to prove proposition 4.3, we show that the model cannot learn the hyperplane Q from eq. 7. To learn new tasks Tt for 0 < t < N , there are two different cases: first g(·) is already a good projection for Ct tasks, i.e. classes are already disentangled in the embedded space. We assume that if classes are already disentangled, only the output layer has to be trained to solve Ct tasks. Secondly, g(·) needs to be adapted, i.e. classes are not yet disentangled in the embedded space and new features need to be learned by g(·) to fix it. We refer as features, intrinsic characteristics of data that a model needs to detect to distinguish a class from another. 
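As an aside, the binary reduction from the second part above can be checked numerically. The following is a minimal sketch of our own (it is not code from the paper; the dimensions, class split, and random weights are arbitrary assumptions) verifying that the multi-class prediction of Eq. 3 coincides with the binary hyperplane test of Eqs. 6 and 7.

```python
# Minimal numeric check: the multi-class argmax over all classes equals a
# binary decision between the best past class and the best current class.
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim = 6, 8
A = rng.normal(size=(dim, n_classes))   # output-layer weights, one column per class
b = rng.normal(size=n_classes)          # output-layer bias
h = rng.normal(size=dim)                # embedding h = g(x) of some input x

past, current = [0, 1, 2], [3, 4, 5]    # classes of C_{t-1} vs classes of T_t
logits = A.T @ h + b                    # logits[i] = A[:, i] . h + b[i]

y_prev = past[np.argmax(logits[past])]        # most probable past class
y_curr = current[np.argmax(logits[current])]  # most probable current class

# Eqs. 6-7: the full prediction reduces to the sign of one hyperplane score,
# since logits[y_curr] - logits[y_prev] = q . h + q0
q = A[:, y_curr] - A[:, y_prev]
q0 = b[y_curr] - b[y_prev]
binary_pred = y_curr if q @ h + q0 > 0 else y_prev
assert binary_pred == int(np.argmax(logits))
```

The assertion holds because the overall argmax is necessarily either the best past class or the best current class, so the full decision reduces to comparing those two logits.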
We will show that it is not possible to learn to discriminate correctly the classes ŷCt−1 from ŷTt from previous part.\nFirst case: Classes are disentangled Since we are in a regularization setting, at task Tt, we have access to Ωt−1 which contains classification information from previous tasks (Ct−1 tasks). However, by hypothesis, Ωt−1 does not model the data distribution from Ct−1 and therefore it does not model data distribution from Ct−1 classes.\nFollowing from the second part of the proof, ∀x ∈ Ct tasks, to make an accurate prediction, we need the right hyperplane Q that distinguish the most probable class from Ct−1, ŷCt−1 and the most probable class from Tt, ŷTt .\nŷCt−1 and ŷTt classes images are a bounded set of points and ŷCt−1 points are, by definition, not accessible, consequently following Lemma 4.1, it is impossible to assess a boundary between ŷTt and ŷCt−1 even if by hypothesis this boundary exists. Therefore, we can not learn the hyperplane that discriminate ŷCt−1 from ŷTt and ensure an accurate prediction.\nSecond case: g(·) needs to be updated with new features. Let δt−1 be the set features already learned by gt−1(·) the feature extractor from previous task. Ωt−1 should keep δt−1 unchanged while learning Tt. The goal is to make ŷCt−1 and ŷTt linearly separable ∀x ∈ Ct. Then, either δt−1 already solve the problem and we are in first case, or a new set of features δt needs to be learned while learning Tt. In the second case, the set δt contains features to solve Tt, but features δt−1:t that distinguish classes from Tt−1 to classes from Tt should also be learned. Then two cases raise, δt−1:t 6⊂ δt or δt−1:t ⊂ δt. • if δt−1:t 6⊂ δt, then supplementary features δt−1:t need to be learned. ŷCt−1 and ŷTt classes images are a bounded set of points not linearly separable and since Ωt−1 does not give access to Ct−1 data\npoints, from Lemma 4.2 we can not assess a projection that put images from ŷTt and ŷCt−1 into a linearly separable space, i.e. we can not learn the set of features δt−1:t to discriminate ŷCt−1 images from ŷTt images and solve the continual problem.\n• δt−1:t ⊂ δt is possible, however, since data from Ct−1 are not available anymore, there is no way to project them in the new latent space with δt features. Therefore, without access of classes from both Ct−1 and Tt tasks at time t we can not identify δt−1:t features which are in δt features. It is also impossible to know if δt−1:t ⊂ δt. In other words, this case is not detectable and even if detected the features δt−1:t can not be used without data from Tt−1 (which is by definition prohibited).\nIn these two cases, there will be in any way conflict between losses leading to interference in the decision boundaries either because classes are not linearly separable or because a separation hyperplane cannot be found. In other words, the regularization methods can not discriminate classes from different tasks and they are then not suited to class-incremental settings.\nWe can note that proposition 4.3, still holds if tasks are only partially disjoint, i.e. only some classes appear only once in the continual curriculum.\nIndeed, in partially disjoint settings, several classes pairs are never in the same task. If we define two set of disjoint classes Y and Y ′, that will never be in the same task, the demonstration of proposition 4.3 can be applied on Y and Y ′. 
Then, classes Y and Y ′ will suffer from interference showing a shortcoming of regularization methods for this case too.\nTherefore, if there is a class-incremental setting hidden into another setting, the regularization approach will not be able to solve it perfectly either. We could note that in many applications there are latent class-incremental problem to address in the learning curriculum. We mention some applications in Section 6.\nA simple trick used in some regularization approaches to compensate their shortcomings is to use the task label for inferences, it gives a simple way to distinguish tasks from each other. However, it assumes the algorithms rely on a supervision signal for inferences. In the next section, we show that regularization shortcoming is easily highlighted with simple experiments." }, { "heading": "5 EXPERIMENTS", "text": "To support the limitations presented earlier, we experiment with the “MNIST-Fellowship” dataset proposed in Lesort (2020). This dataset is composed of three datasets (Fig. 1): MNIST LeCun & Cortes (2010), Fashion-MNIST Xiao et al. (2017) and KMNIST Clanuwat et al. (2018), each composed of 10 classes, which should be learned sequentially one by one. We choose this dataset because it gathers three easy datasets for prototyping machine learning algorithms but solving those three quite different datasets is still harder than solving only one.\nOur goal is to illustrate the limitation of regularization based methods in disjoint settings. In particular that they can not distinguish classes from different tasks. We would like also to show that the shortcoming happens both in the output layer and in the feature extractor. Thus, we propose three different settings with the MNIST-Fellowship dataset.\n1. Disjoint setting: all tasks have different classes (i.e. from 0 to 9, 10 to 19 and 20 to 29, the output size is 30). This setting highlights shortcomings of regularization methods without test task labels.\n2. Joint setting: all tasks have the same classes ( i.e. from 0 to 9 for each task and the output size is 10) but different data. This scenario is designed as an instance incremental scenario Lesort et al. (2019c). This setting shows that they are interferences even when only the data instances change and not the class labels. Theoretically, this setting requires only the feature extractor to be adapted while learning a new task.\n3. Disjoint setting with test task label: All tasks have different classes but at inference time, we know from which task a data-point is coming from. The output in this setting is a multi-head output with one head of size 10 for each task. This setting shows that regularization methods work when they have access to the test task labels.\nWith those settings, we present two experiments, the first one (Fig. 2) compares disjoint setting with and without a label for inferences. The goal is to bring to light that regularization fails in disjoint settings if the task label is not provided. Secondly, we experiment with the joint setting (Fig. 3), to show that even if the feature extractor only needs to be learned the approach still struggles to learn continually and forget.\nWe present EWC results with diagonal Fisher Matrix Kirkpatrick et al. (2017) and with Kronecker Factorization of the Fisher matrix Ritter et al. (2018). We add an expert model which learned all the datasets at once and a baseline model who learn without any memorization process. 
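For reference, here is a minimal sketch of the diagonal-Fisher EWC penalty used as a baseline above, assuming PyTorch; `model`, `old_loader`, and `old_params` are hypothetical names standing for the network, a loader over the previous task's data, and a stored copy of the previous optimum. This is an illustration under those assumptions, not the code used in the experiments.

```python
import torch
import torch.nn.functional as F

def diagonal_fisher(model, old_loader):
    # Estimate the diagonal Fisher of Eq. 17, E[(d log p(y_hat)/d theta)^2],
    # with y_hat sampled from the model's own predictive distribution.
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x, _ in old_loader:
        logits = model(x)
        y_hat = torch.distributions.Categorical(logits=logits).sample()
        loss = F.cross_entropy(logits, y_hat)  # mean of -log p(y_hat)
        model.zero_grad()
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2  # coarse batch-level estimate
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params):
    # Omega of Eq. 16: (1/2) * sum_k F_{t-1}[k] * (theta*_{t-1,k} - theta_k)^2
    return 0.5 * sum((fisher[n] * (old_params[n] - p) ** 2).sum()
                     for n, p in model.named_parameters())

# While learning task t: total_loss = task_loss + lam * ewc_penalty(...)
```

In this scheme the Fisher estimate and the parameter snapshot are computed once at the end of each task and then held fixed while the next task is learned.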
All models are trained with stochastic gradient descent with a learning rate of 0.01 and a momentum of 0.9. Even if continual learning does not support a-posteriori hyper-parameter selection, for fairness in comparison, the parameter lambda has been tuned. The best lambda upon [0.1; 1; 2; 5; 10; 20; 100; 1000] is selected for each model. Then the model is trained on 5 different seeds.\nThe first experiment (Fig. 2), exhibits that regularization methods performances are significantly reduced when there is no test task label in the disjoint settings. The experiment also shows that without labels for inferences, the model forgets almost instantaneously the first task when switching to the\nsecond one. Those results support the proposition 4.3. Indeed, the low performance of regularization methods without test task labels in disjoint settings illustrates the output layer shortcomings in continual learning (task separability problem example in appendix A).\nIn Experiment 2 (Fig. 3), since the classes are the same in all tasks, only the feature extractor needs to be learned continually. The low performance of the proposed models illustrates the shortcomings in the continual learning of the feature extractor (the latent features problem example in appendix A).\nThese two experiments show that learning continually with regularization is only efficient in the setting with task labels and maintains performance on task 0. The two other settings seem to either have interference in the output layer and in the feature extractor." }, { "heading": "6 APPLICATIONS", "text": "In this section, we point out supplementary shortcomings of regularization in other types of learning situations, namely a classification task with one only class and multi-task continual reinforcement learning. We also use proposition 4.3 for the case of pre-trained models.\n- Learning from one class only: A classification task with only one class might look foolish, however, in a sequence of tasks with varying number of classes, it makes more sense and it seems evident that a CL algorithm should be able to handle this situation. Nevertheless, a classification neural network needs at least two classes to learn discriminative parameters. Hence, in a one-class task, the model learns no useful parameters, a regularization term can then a fortiori not protect any knowledge. As noted in Lesort et al. (2019b), the regularization method is not suited for such setting. It is worth noting that in a real-life settings it is mandatory to be able to learn only one concept at a time.\n- Multi-task Continual Reinforcement Learning: Results from section 4.2 can also be generalized to continual multi-tasks reinforcement learning settings Traoré et al. (2019). In this setting, a model has to learn several policies sequentially to solve different tasks. At test time with no task label, the model needs to both be able to run the policies correctly but also to infer which policy to run. However, since policies are learned separately inferring which one to run is equivalent to a class incremental task. Therefore, following proposition 4.3, the regularization based method will not be able to learn the implicit classification correctly. Hence, in continual multi-tasks RL a regularization method alone will fail if task label is not enabled at test time.\n- Using pre-trained models for continual learning: We showed in Section 4.2 that, in a class incremental classification scenario, regularization methods are not sufficient to learn continually. 
In the case of a pre-trained classification model on N classes that we want to train on new classes without forgetting, if the training data are not available for some reasons, then we don’t even have a regularization term Ω to protect some. Following the proposition 4.3 and a fortiori without the regularization term, the model will forget past knowledge while learning new classes. Using pretrained models can be useful to converge faster to a new task solution but it will undoubtedly forget what it has learn previously." }, { "heading": "7 DISCUSSION AND CONCLUSION", "text": "Regularization is a widespread method for continual learning. However, we prove that for classincremental classification, no regularization method can continually learn properly the decision boundaries. At test time, this shortcoming makes them dependant on the task label for prediction.\nThe class-incremental scenarios are benchmarks measuring the ability of algorithms to learn sequentially different classes. However, being unable to deal with this setting implies that in a more complex learning environment, all sub-tasks assimilable to a class-incremental task will be failed.\nIt is fundamental for continual learning to produce algorithms that can autonomously learn and be deployed. Algorithms that rely on the test task label are not autonomous and therefore should be avoided. This paper shows that in any setting where a class-incremental setting may be hidden, regularization methods alone should be avoided. A fortiori, in continual learning the future is unknown, therefore future tasks might add new classes or not. Then, in order to deal with new tasks whatever their nature, regularization methods alone should always be avoided on continual learning applications." }, { "heading": "A PRACTICAL EXAMPLES", "text": "To illustrate the proposition from section 4.2, we present two insightful examples of regularization limitations.\n- The Task Separability Problem:\nIn the first case of proposition 4.3 proof, we already have a perfect feature extractor. Classes are already linearly separable and only the output layer needs to be learned continually.\nIf we have only two classes in the first task, the model will learn one hyperplane Q0 separating the instances of these two classes (See Figure 4). For the second task, we have two new classes and a regularization protecting Q0. Then, we can learn a hyperplane Q1 that separates our two new classes. In the end, we have learned the hyperplanes Q0 and Q1 to distinguish classes from T0 and classes from T1. But none of those hyperplanes helps to discriminate T0 classes from T1 classes, as illustrated Figure 4. This will lead to error in the neural networks predictions.\n- The Latent Features Problem:\nIn the second case of Proposition 4.3 proof, the feature extractor needs to be updated to learn new features extractors.\nIf we have only two classes in the first task, the model will learn to separate classes instances into two groups with the features extractor g0 and one hyper-plan Q0 separating the two classes instances (See Figure 5).\nFor the second task, we have two new classes and a regularization protectingQ0 and g0. Then, we can learn a features extractor g1 to disentangle new class instances in the latent space and a hyperplane Q1 that separates them. In the end, we can disentangle classes from T0 and classes from T1 and we have two hyperplanes Q0 and Q1 to distinguish classes from T0 and classes from T1. 
But we can not disentangle T0 classes from T1 classes and none of the learned hyperplanes helps to discriminate T0 classes from T1 classes (See Fig. 6). It leads to errors in the neural network predictions. At test time, it will not be possible for the model to discriminate between classes correctly.\nHowever, with the task label for inferences, we could potentially perfectly use g0, g1, Q0 and Q1 to make correct predictions. Nevertheless, assuming that the task label is available for prediction is a strong assumption in continual learning and involves a need of supervision at test time." }, { "heading": "B PROOF OF LEMMA 4.1", "text": "lemma. ∀(S, S′) bounded set of discrete points in Rn and linearly separable by a hyperplane Q. For any algorithm, it is impossible to assess Q as a separation hyperplane without S′ set.\nProof. Let S and S′ be two bounded and linearly separable set of discrete points in Rn. Let Q be a potential linear separation between S and S′. The hyperplane Q can not be assessed as a linear separation between S and S′ if there exists at least one hyperplane indistinguishable from Q and which is not a separation boundary between S and S′. Let P be a hyperplane, defined as a normal vector p and position vector p0, is a separation boundary between S and S′ if all the point of S are on one side of P and all point of S′ are on the other side. It can be formalized as follows: ∀x ∈ S & ∀x′ ∈ S′:\n(p · x+ p0) · (p · x′ + p0) < 0 (8) Where < · > is the scalar product. Without the access of S′, eq. 8 can not be evaluated. However, we can evaluate it, if all the point of S are on the same side of P Eq. 8, verify that S and S′ are each entirely on different side of the P . By definition if all the point of S are above P then: ∀x ∈ S\n(p · x+ p0) > 0 (9)\nIf all the point are under P then: (p · x+ p0) < 0 (10)\nAnd if neither eq. 9 nor eq. 10 are verified then all the points of S are not on the same side of P . Finally, we can merge both 9 and eq. 10 and verify only:\n∀x ∈ S sign(p · x+ p0) = constant (11)\nWhere sign(.) is the function which returns the sign of any real value.\nThe Lemma 4.1 is proven if ∃ P such as eq. 11 is true but not eq. 8, because P would not be a linear separation of S and S′ and would not be distinguishable from Q without access to S′. Now, we will build an hyper-plan P that is unquestionably respect eq. 11 and not eq. 8.\nWe know that S is bounded, then it has both upper and lower bounds in all the direction of Rn. If eq. 11 is respected, then Q is a bound of S in the direction of its normal vector q. If we move Q along the direction of q (i.e. if we change q0 the position vector), we can find at least one other plane P respecting eq. 11: the opposing bound of S along the direction q.\nSince, P and Q are two opposing bounds of S in the same direction q, then:\n∀x ∈ S sign(p · x+ p0) 6= sign(q · x+ q0) (12)\nIf Q is a lowerbound of S in the direction q and an upperbound of S′ in the same direction then, a lowerbound of S′ in the direction q is a lowerbound of S in the same direction and an upperbound of S in the direction q is an upperbound of S′ in the same direction. (We leave the demonstration to the reader).\nTherefore, Q and P are both upperbounds or both lowerbounds of S′ in the direction of q. ∀x′ ∈ S′: sign(p · x′ + p0) = sign(q · x′ + q0) (13)\nThen with 12 and eq. 13:\n(p · x+ b) · (p · x′ + b) > 0 (14)\nConsequently, from eq 11 and eq 14, ∃ a hyperplane P which respects eq. 
11 and not eq 8, P is indistinguishable from Q and is not a separation boundary between S and S′." }, { "heading": "C PROOF OF LEMMA 4.2", "text": "lemma. ∀(S, S′) two bounded datasets not linearly separable. For any algorithm, it is impossible to assess a function g(.) as a projection of S and S′ into a space were they are linearly separable without S′ set.\nProof. g(.) is a projection of S and S′ into a space where they are linearly separable means:\n∀x ∈ S & ∀x′ ∈ S′, then g(x) and g(x′) respect eq. 8. Without access to S′ this condition can not be verified. However, we can verify eq. 11 with g(x).\nThe Lemma 4.2 is proven if ∀x ∈ S & ∀x′ ∈ S′, ∃ a projection f , that respect eq. 11 with f(x) but not eq. 8 with f(x) and f(x′), because then f and g are indistinguishable without access to S′.\nLet f be the identity function, ∀z ∈ R f(z) = z. We define Sf and S′f , the set of point S and S′ after projection by f . Since f is the identity function, S and S′ are respectively identical to Sf and S′f . Since S is bounded, Sf is also bounded. Hence there exists a hyperplane P that verify eq. 11 with f(x) ∀x ∈ S. By hypothesis, S and S′ are not linearly separable so Sf and S′f is also not linearly separable. Then ∃! hyperplane P which respect eq. 8 with f(x) and f(x′). Thus, f exists and therefore it is impossible to assess any function as a projection of S and S′ into a space were they are linearly separable without S′ set." }, { "heading": "D REGULARIZATION METHODS", "text": "To illustrate the previous section, we present several famous regularization methods in our formalism.\n- Elastic Weight Consolidation (EWC) Kirkpatrick et al. (2017) is one of the most famous regularization approaches for continual learning. The loss augmented with a regularization term is at task t:\n`Ct(θ) = `Dt(f(x;θ), y) + λ\n2 ∗ Ft−1(θ∗t−1 − θ)2 (15)\nwith (.)2 the element-wise square function.\nWe can then by identification, extract our function Ωt(θ∗, `D,θ)\nΩt(θ ∗, `Ct−1 ,θ) =\n1 2 ∗ Ft−1(θ∗t−1 − θ)2 (16)\nFt is a tensor of size card(θ)2, specific to task t, characterizing the importance of each parameter θk. Ft is computed at the end of each task and will protect important parameters to learn without forgetting. In EWC, the Ft tensor is implemented as a diagonal approximation of the Fisher Information Matrix:\nFt = E(x,y)∈Dt\n[( ∂log p(ŷ)\n∂θ\n)2] (17)\nwhere ŷ ∼ P (f(x;θ)). The diagonal approximation allows to save only card(θ) values in Ft. - K-FAC Fisher approximation Ritter et al. (2018) is very similar to EWC but approximates the Fisher matrices with a Kronecker factorization (K-FAC) Martens & Grosse (2015) to improve the expressiveness of the posterior over the diagonal approximation. However, the Kronecker factorization saves more values than the diagonal approximation.\n- Incremental Moment Matching (IMM) Lee et al. (2017) proposes two regularization approaches for continual learning which differ in the computation of the mean θ0:t and the variance σ0:t of the parameters on all tasks.\nThe idea is to regularize parameters such that the moments of their posterior distributions are matched in an incremental way. It means that each parameter is approximated as a normal distribution and their mean or standard deviation should match from one task to another. 
This regularization, on the parameters’ low-order moments, helps to protect the model from forgetting.\n- Mean based Incremental Moment Matching (mean-IMM)\nθ0:t = t∑ i=0 αiθ ∗ i and σ0:t = t∑ i=0 αi(σi + (θ ∗ i − θ0:t)2) (18)\nαi are importance hyper-parameters to balance past task weight into the loss function. They sum up to one.\n- Mode based Incremental Moment Matching (mode-IMM)\nθ0:t = σ0:t · t∑\ni=0\n(αiσ −1 i θ ∗ i ) and σ0:t = ( t∑ i=0 αiσ −1 i ) −1 (19)\nσi is computed as the Fisher matrix (eq. 17) at task i.\nThen at task t, with θ0:t−1 and σ0:t−1 we can compute:\nΩt(θ ∗, `Ct−1 ,θ) =\n1 2 σ0:t−1(θ0:t−1 − θ)2 (20)\n- Synaptic Intelligence: (SI) Zenke et al. (2017) The original idea is to imitate synapse biological activity. Therefore, each synapse accumulates task relevant information over time, and exploits this information to rapidly store new memories without forgetting old ones. In this approach, we can identify Ωt as:\nΩt(θ ∗, `Ct−1 ,θ) = Mt(θ ∗ t−1 − θ)2 (21)\nMt is a tensor of size card(θ) specific to task t characterizing the importance of each parameter θk over the all past tasks such as:\nMt = ∑\n0<i<t\nmi ∆2i + ξ\n(22)\nMt is the sum over mi which characterizes the importance of each parameter on task i, with ∆i = θ ∗ i − θ∗i−1. ξ is a supplementary parameter to avoid null discriminator.\nmi = ∫ Ti Ti−1 ∇θδθ(t)dt (23)\nWith δθ(t) the parameter update at time step t.\nE IMPLEMENTATION DETAILS\nE.1 DATA PREPROCESSING\nAll data points were preprocessed to be between 0 and 1 by a division by 255.\nE.2 DATASETS SPLITTING\nFor all datasets used, we selected randomly 20% of the train set for validation and used the original split test/train of the datasets for the test sets and the train sets.\nE.3 COMPUTING INFRASTRUCTURE\nThe experiments were run with a GPU GeForce GTX 1080 Ti with a CPU Intel Core i7-7700K @ 4.2 GHZ x 8.\nE.4 NUMBER OF EVALUATION RUNS\nA single evaluation run have been executed after trainning." }, { "heading": "F MODEL ARCHITECTURE", "text": "" } ]
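As a complement to the regularization methods listed in Appendix D, the following is a minimal sketch of the mean-IMM merge of Eq. 18, assuming per-task parameter dictionaries of tensors and, by default, uniform importance weights; it is our illustration, not the authors' implementation.

```python
def mean_imm(task_params, alphas=None):
    # task_params: list of dicts {name: tensor}, one per task optimum theta*_i.
    if alphas is None:
        # uniform alpha_i summing to one, as in Eq. 18 when tasks are equal
        alphas = [1.0 / len(task_params)] * len(task_params)
    return {name: sum(a * p[name] for a, p in zip(alphas, task_params))
            for name in task_params[0]}
```

Mode-IMM (Eq. 19) differs only in that each task's parameters are additionally weighted by the inverse of its Fisher-based variance before merging.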
2020
null
SP:dae92debea3e0d59d4b74385540ee6f827cfa37e
[ "This paper proposes a new building block for GNNs, called GNN$^+$. This building block trades of depth for width and involves multiple parallel regular GNN processing units. Using the GNN$^+$ architecture, the authors establish bounds for the required network depth (and total parameters) for several combinatorial problems over graphs." ]
Despite their popularity in learning problems over graph structured data, existing Graph Neural Networks (GNNs) have inherent limitations for fundamental graph problems such as shortest paths, k-connectivity, minimum spanning tree and minimum cuts. In all these instances, it is known that one needs GNNs of high depth, scaling at a polynomial rate with the number of nodes n, to provably encode the solution space. This in turn affects their statistical efficiency, thus requiring a significant amount of training data in order to obtain networks with good performance. In this work we propose a new hybrid architecture to overcome this limitation. Our proposed architecture, which we call GNN+ networks, involves a combination of multiple parallel low-depth GNNs along with simple pooling layers involving low-depth fully connected networks. We provably demonstrate that for many graph problems, the solution space can be encoded by GNN+ networks using depth that scales only poly-logarithmically in the number of nodes. This significantly reduces the amount of training data needed, which we establish via improved generalization bounds. Finally, we empirically demonstrate the effectiveness of our proposed architecture for a variety of graph problems.
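As a rough illustration of the architecture sketched in this abstract, the following is a speculative PyTorch rendering of a single GNN+ block: several shallow GNNs run in parallel and a shallow fully connected pooling layer combines their outputs. Every name, shape, and the concatenation-based pooling choice here is our assumption rather than the authors' implementation; `gnn_factory` is a hypothetical callable returning a low-depth GNN with signature (x, edge_index).

```python
import torch
import torch.nn as nn

class GNNPlusBlock(nn.Module):
    def __init__(self, gnn_factory, num_parallel, hidden_dim, out_dim):
        super().__init__()
        # num_parallel independent low-depth GNNs (width traded for depth)
        self.branches = nn.ModuleList([gnn_factory() for _ in range(num_parallel)])
        # simple low-depth fully connected pooling over the branch outputs
        self.pool = nn.Sequential(
            nn.Linear(num_parallel * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x, edge_index):
        outs = [g(x, edge_index) for g in self.branches]  # each: (n, hidden_dim)
        return self.pool(torch.cat(outs, dim=-1))         # per-node output
```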
[]
[ { "authors": [ "Raman Arora", "Amitabh Basu", "Poorya Mianjy", "Anirbit Mukherjee" ], "title": "Understanding deep neural networks with rectified linear units", "venue": "arXiv preprint arXiv:1611.01491,", "year": 2016 }, { "authors": [ "László Babai", "Peter Frankl", "Janos Simon" ], "title": "Complexity classes in communication complexity theory", "venue": "In 27th Annual Symposium on Foundations of Computer Science (sfcs", "year": 1986 }, { "authors": [ "Peter L Bartlett", "Dylan J Foster", "Matus J Telgarsky" ], "title": "Spectrally-normalized margin bounds for neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Filippo Maria Bianchi", "Daniele Grattarola", "Cesare Alippi" ], "title": "Mincut pooling in graph neural networks", "venue": "arXiv preprint arXiv:1907.00481,", "year": 2019 }, { "authors": [ "Jean Bourgain" ], "title": "On lipschitz embedding of finite metric spaces in hilbert space. Israel", "venue": "Journal of Mathematics,", "year": 1985 }, { "authors": [ "Ashok K Chandra", "Prabhakar Raghavan", "Walter L Ruzzo", "Roman Smolensky", "Prasoon Tiwari" ], "title": "The electrical resistance of a graph captures its commute and cover times", "venue": "Computational Complexity,", "year": 1996 }, { "authors": [ "Atish Das Sarma", "Sreenivas Gollapudi", "Marc Najork", "Rina Panigrahy" ], "title": "A sketch-based distance oracle for web-scale graphs", "venue": "In Proceedings of the third ACM international conference on Web search and data mining,", "year": 2010 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Simon S Du", "Kangcheng Hou", "Russ R Salakhutdinov", "Barnabas Poczos", "Ruosong Wang", "Keyulu Xu" ], "title": "Graph neural tangent kernel: Fusing graph neural networks with graph kernels", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Federico Errica", "Marco Podda", "Davide Bacciu", "Alessio Micheli" ], "title": "A fair comparison of graph neural networks for graph classification", "venue": null, "year": 2020 }, { "authors": [ "Matthias Fey", "Jan Eric Lenssen", "Frank Weichert", "Heinrich Müller" ], "title": "Splinecnn: Fast geometric deep learning with continuous b-spline kernels", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Sebastian Forster", "Danupon Nanongkai" ], "title": "A faster distributed single-source shortest paths algorithm", "venue": "IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS),", "year": 2018 }, { "authors": [ "Vikas K. 
Garg", "Stefanie Jegelka", "Tommi Jaakkola" ], "title": "Generalization and representational limits of graph neural networks", "venue": null, "year": 2020 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "arXiv preprint arXiv:1704.01212,", "year": 2017 }, { "authors": [ "Evarist Giné", "Armelle Guillou" ], "title": "On consistency of kernel density estimators for randomly censored data: rates holding uniformly over adaptive intervals", "venue": "In Annales de l’IHP Probabilités et statistiques,", "year": 2001 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Stephan Holzer", "Roger Wattenhofer" ], "title": "Optimal distributed all pairs shortest paths and applications", "venue": "In Proceedings of the 2012 ACM symposium on Principles of distributed computing,", "year": 2012 }, { "authors": [ "David R. Karger", "Clifford Stein" ], "title": "A new approach to the minimum cut problem", "venue": "Journal of the ACM,", "year": 1996 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Nathan Linial", "Eran London", "Yuri Rabinovich" ], "title": "The geometry of graphs and some of its algorithmic applications", "venue": null, "year": 1995 }, { "authors": [ "Philip M. Long", "Hanie Sedghi" ], "title": "Generalization bounds for deep convolutional neural networks, 2020", "venue": null, "year": 2020 }, { "authors": [ "Andreas Loukas" ], "title": "What graph neural networks cannot learn: depth vs width", "venue": "ICLR,", "year": 2020 }, { "authors": [ "Haggai Maron", "Heli Ben-Hamu", "Hadar Serviansky", "Yaron Lipman" ], "title": "Provably powerful graph networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Danupon Nanongkai" ], "title": "Distributed approximation algorithms for weighted shortest paths", "venue": "In Proceedings of the forty-sixth annual ACM symposium on Theory of computing,", "year": 2014 }, { "authors": [ "Rina Panigrahy", "Marc Najork", "Yinglian Xie" ], "title": "How user behavior is related to social affinity", "venue": "In Proceedings of the fifth ACM international conference on Web search and data mining,", "year": 2012 }, { "authors": [ "David Peleg" ], "title": "Distributed computing: a locality-sensitive approach", "venue": null, "year": 2000 }, { "authors": [ "Atish Das Sarma", "Stephan Holzer", "Liah Kor", "Amos Korman", "Danupon Nanongkai", "Gopal Pandurangan", "David Peleg", "Roger Wattenhofer" ], "title": "Distributed verification and hardness of distributed approximation", "venue": "SIAM Journal on Computing,", "year": 2012 }, { "authors": [ "Ryoma Sato", "Makoto Yamada", "Hisashi Kashima" ], "title": "Approximation ratios of graph neural networks for combinatorial problems", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Martin Simonovsky", "Nikos Komodakis" ], "title": "Dynamic edge-conditioned filters in convolutional neural networks on graphs", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Boris Weisfeiler", "Andrei A Lehman" 
], "title": "A reduction of a graph to a canonical form and an algebra arising during this reduction", "venue": "Nauchno-Technicheskaya Informatsia,", "year": 1968 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks? ICLR, 2019a", "venue": null, "year": 2019 }, { "authors": [ "Keyulu Xu", "Jingling Li", "Mozhi Zhang", "Simon S Du", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "What can neural networks reason about", "venue": "arXiv preprint arXiv:1905.13211,", "year": 2019 }, { "authors": [ "Pinar Yanardag", "S.V.N. Vishwanathan" ], "title": "Deep graph kernels", "venue": "In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD", "year": 2015 }, { "authors": [ "Zhitao Ying", "Jiaxuan You", "Christopher Morris", "Xiang Ren", "Will Hamilton", "Jure Leskovec" ], "title": "Hierarchical graph representation learning with differentiable pooling", "venue": "In Advances in neural information processing systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years Graph Neural Networks (GNNs) have become the predominant paradigm for learning problems over graph structured data (Hamilton et al., 2017; Kipf & Welling, 2016; Veličković et al., 2017). Computation in GNNs is performed by each node sending and receiving messages along the edges of the graph, and aggregating messages from its neighbors to update its own embedding vector. After a few rounds of message passing, the computed node embeddings are aggregated to compute the final output (Gilmer et al., 2017). The analogy to message passing leads to a simple and elegant architecture for learning functions on graphs. On the other hand, from a theoretical and practical perspective, we also need these architectures to be sample efficient, i.e., learnable from a small number of training examples, where each training example corresponds to a graph. Recent works have shown that generalization in GNNs depends upon the depth of the architecture, i.e., the number of rounds of message passing, as well as the embedding size for each node in the graph (Garg et al., 2020). However, this requirement is in fundamental conflict with the message passing framework. In particular, using GNNs to compute several fundamental graph problems such as shortest paths, minimum spanning tree, min cut etc., necessarily requires the product of the depth of the GNN and the embedding size to scale as √ n where n is the size of the graph (Loukas, 2020). This in turn places a significant statistical burden when learning these fundamental problems on large scale graphs. The above raises the the following question: Can one develop sample efficient architectures for graph problems while retaining the simplicity of the message passing framework?\nSeveral recent works have tried to address the above question by proposing extensions to the basic GNN framework by augmenting various pooling operations in conjunction with message passing rounds to capture more global structure (Ying et al., 2018; Simonovsky & Komodakis, 2017; Fey et al., 2018). While these works demonstrate an empirical advantage over GNNs, we currently do not know of a general neural architecture that is versatile enough to provably encode the solution space of a variety of graph problems such as shortest paths and minimum spanning trees, while being significantly superior to GNNs in terms of statistical efficiency. In this work we propose a theoretically principled architecture, called GNN+ networks for learning graph problems. While the basic GNN framework is inspired from classical message passing style models studied in distributed computing,\nwe borrow from two fundamental paradigms in graph algorithm design namely, sketching and parallel computation, to design GNN+ networks. As a result of combining these two powerful paradigms, we get a new neural architecture that simultaneously achieve low depth and low embedding size for many fundamental graph problems. As a result our proposed GNN+ architecture have a significantly smaller number of parameters that provably leads to better statistical efficiency than GNNs. Before we present our improved architecture, we briefly describe the standard GNN framework.\nModel for GNNs. In this work we will study GNNs that fall within the message passing framework and using notation from previous works we denote such networks as GNNmp (Loukas, 2020). 
A GNNmp network operates in the AGGREGATE and COMBINE model (Gilmer et al., 2017) that captures many popular variants such as GraphSAGE, Graph Convolutional Networks (GCNs) and GIN networks (Hamilton et al., 2017; Kipf & Welling, 2016; Xu et al., 2019a). Given a graph G = (V,E), let x(k)i denote the feature representation of node i at layer k. Then we have\na (k−1) i = AGGREGATE({x (k−1) j : j ∈ N(i)}) (1)\nx (k) i = COMBINE(x (k−1) i , a (k−1) i ). (2)\nHere N(i) is the set of neighbors for node i. Typically the aggregation and combination is performed via simple one or two layer full connected networks (FNNs), also known as multi layer perceptrons (MLPs). In the rest of the paper we will use the two terms interchangeably.\nGNN+ Networks. Our proposed GNN+ networks consist of one or more layers of a GNN+ block shown in Figure 1. The GNN+ block comprises of r parallel GNNmp networks follows by s parallel fully connected network modules for pooling where r and s are the hyperparameters of the architecture. More importantly we restrict the r GNNmp modules to share the same set of weights. Hence the parallel GNNmp modules only differ in the way the node embeddings are initialized. Furthermore, we restrict each GNNmp to be of low depth. In particular, for degree-d graphs of diameter D, over n nodes, we will restrict the GNNmp to be of depth O((d + D) · polylog(n)). Similarly, we require the s fully connected networks to be of depth O((d + D) · polylog(n)) and share the network weights. We connect the outputs of the GNNmp modules to the fully connected pooling networks in a sparse manner and restrict the input size of each fully connected network to be O((d + D) · polylog(n)). Stacking up L layers of GNN+ blocks results in a GNN+ network that is highly parameter efficient and in total has O((d + D)L · polylog(n)) parameters. For such a network we call the depth as the total number of message passing rounds and the number of MLP layers used across all the L stacks. Since we restrict our MLPs and GNNmp blocks inside a GNN+ network to be of low depth, we will often abuse notation and refer to a GNN+ architecture with L stacks of GNN+ blocks as a depth L architecture. Our proposed design lets us alternate between local computations involving multiple parallel GNN blocks and global post-processing stages, while still being sample efficient due to the enforced parameter sharing. We will show via several applications that optimal or near-optimal solutions to many popular graph problems can indeed be computed via a GNN+ architecture. Below we briefly summarize our main results.\nOverview of Results. To demonstrate the generality of our proposed GNN+ architecture, we study several fundamental graph problems and show how to construct efficient GNN+ networks to compute optimal or near optimal solutions to these problems. In particular, we will focus on degree-d graphs, i.e., graphs of maximum node degree d, with n nodes and diameter D and will construct GNN+ networks of depth polylog(n) and O ( (D + d)polylog(n) ) total parameters.\nShortest Paths. The first problem we consider is the fundamental graph problem of computing (approximate) all pairs shortest paths in undirected graphs. Given a graph G = (V,E), let dG(u, v) be the shortest path between nodes u and v. 
GNN+ Networks. Our proposed GNN+ networks consist of one or more layers of a GNN+ block shown in Figure 1. The GNN+ block comprises $r$ parallel GNNmp networks followed by $s$ parallel fully connected network modules for pooling, where $r$ and $s$ are hyperparameters of the architecture. More importantly, we restrict the $r$ GNNmp modules to share the same set of weights; hence the parallel GNNmp modules only differ in the way the node embeddings are initialized. Furthermore, we restrict each GNNmp to be of low depth. In particular, for degree-$d$ graphs of diameter $D$ over $n$ nodes, we restrict the GNNmp to be of depth $O((d + D) \cdot \mathrm{polylog}(n))$. Similarly, we require the $s$ fully connected networks to be of depth $O((d + D) \cdot \mathrm{polylog}(n))$ and to share the network weights. We connect the outputs of the GNNmp modules to the fully connected pooling networks in a sparse manner and restrict the input size of each fully connected network to $O((d + D) \cdot \mathrm{polylog}(n))$. Stacking up $L$ layers of GNN+ blocks results in a GNN+ network that is highly parameter efficient and in total has $O((d + D)L \cdot \mathrm{polylog}(n))$ parameters. For such a network, we call the depth the total number of message passing rounds and MLP layers used across all the $L$ stacks. Since we restrict the MLPs and GNNmp blocks inside a GNN+ network to be of low depth, we will often abuse notation and refer to a GNN+ architecture with $L$ stacks of GNN+ blocks as a depth-$L$ architecture. Our proposed design lets us alternate between local computations involving multiple parallel GNN blocks and global post-processing stages, while still being sample efficient due to the enforced parameter sharing. We will show via several applications that optimal or near-optimal solutions to many popular graph problems can indeed be computed via a GNN+ architecture. Below we briefly summarize our main results.

Overview of Results. To demonstrate the generality of our proposed GNN+ architecture, we study several fundamental graph problems and show how to construct efficient GNN+ networks to compute optimal or near optimal solutions to these problems. In particular, we will focus on degree-$d$ graphs, i.e., graphs of maximum node degree $d$, with $n$ nodes and diameter $D$, and will construct GNN+ networks of depth $\mathrm{polylog}(n)$ and $O\big((D + d)\,\mathrm{polylog}(n)\big)$ total parameters.

Shortest Paths. The first problem we consider is the fundamental graph problem of computing (approximate) all pairs shortest paths in undirected graphs. Given a graph $G = (V, E)$, let $d_G(u, v)$ be the shortest path distance between nodes $u$ and $v$. We say that an output $\{\tilde{d}_G(u, v) : u, v \in V\}$ is an $\alpha$-approximate solution if for all $u \neq v$ it holds that

$$d_G(u, v) \le \tilde{d}_G(u, v) \le \alpha\, d_G(u, v).$$

We construct efficient GNN+ networks for all pairs shortest paths with the following guarantee.

Theorem 1 (Informal Theorem). For any constant $c > 1$, there is a depth $O(D \log d + \log n)$ GNN+ network with $O\big((n^{2/c} + d)\,\mathrm{polylog}(n)\big)$ parameters that computes $(4c - 2)$-approximate all pairs shortest paths in undirected unweighted degree-$d$ graphs over $n$ nodes. On the other hand, computing $c$-approximate shortest paths using GNNmp networks requires a network of depth $\Omega(n)$.

From the above theorem we see that by setting $c = O(\log n)$ we can encode a $c$-approximate solution using an $O(D \log d + \log n)$ GNN+ network with only $O(d \cdot \mathrm{polylog}(n))$ parameters. This is in stark contrast with the depth requirement of $\Omega(n)$ for traditional GNNmp networks.

Connectivity Measures. Next we consider computing various graph connectivity measures. We first study the popular measure based on graph effective resistances (Chandra et al., 1996).

Definition 1 (Effective Resistance). Let $G$ be a weighted undirected graph with adjacency matrix $A$ and associated Laplacian $L = D - A$. Given an edge $(u, v)$, the effective resistance between $u$ and $v$ is defined as

$$R_{u,v} = \xi_{u,v}^{\top} L^{\dagger} \xi_{u,v}.$$

Here $\xi_{u,v}$ is an $n$-dimensional vector with $+1$ at position $u$, $-1$ at position $v$ and zeros everywhere else. $L^{\dagger}$ refers to the matrix pseudo-inverse.
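Definition 1 translates directly into a few lines of linear algebra; the following sketch (an illustration, not part of the paper's architecture) computes $R_{u,v}$ from a dense adjacency matrix via the Laplacian pseudo-inverse.

```python
import numpy as np

def effective_resistance(A, u, v):
    """Effective resistance R_{u,v} of Definition 1.
    A: symmetric (weighted) adjacency matrix of an undirected graph."""
    L = np.diag(A.sum(axis=1)) - A   # Laplacian L = D - A
    L_pinv = np.linalg.pinv(L)       # pseudo-inverse L^+
    xi = np.zeros(A.shape[0])
    xi[u], xi[v] = 1.0, -1.0         # indicator vector xi_{u,v}
    return xi @ L_pinv @ xi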
We also study the following connectivity measure, proposed by Panigrahy et al. (2012) in the context of web graphs. Given an undirected graph $G$, let $G_p$ be the random graph obtained by sampling each edge with probability $p$.

Definition 2 (Affinity). For any two vertices $u, v$ and for $p \in [0, 1]$, define $A_p(u, v)$ to be the probability that $u, v$ are connected in $G_p$. Then the affinity between $u$ and $v$ is defined as

$$A(u, v) = \mathbb{E}_p[A_p(u, v)],$$

where the expectation is taken over $p$ drawn from the uniform distribution on $[0, 1]$.

For the above measures we show the following.

Theorem 2 (Informal Theorem). There exists a GNN+ architecture with $O(D \log(nd))$ parameters and depth $O(D \log(nd))$ on graphs of diameter $D$ with $n$ nodes and maximum degree $d$ that approximates the above connectivity measures up to constant factors. On the other hand, using GNNmp networks to compute the above measures, even approximately, necessarily requires a network of depth $\Omega(\sqrt{n})$.

Clustering, Minimum Cuts and Minimum Spanning Trees. Finally, we showcase the power of a GNN+ architecture for computing other fundamental graph problems. Given an undirected graph $G$, the spectral clustering of $G$ corresponds to the cut obtained by taking the sign of the eigenvector $v$ corresponding to the second smallest eigenvalue $\lambda_2(L)$, where $L$ is the graph Laplacian. For computing the spectral clustering via GNN+ networks we show the following.

Theorem 3 (Informal Theorem). There is a GNN+ network of depth $\ell = O\big(\frac{1}{\lambda_2(L)\,\epsilon^2} \log n\big)$, with $O(d)$ parameters, that computes an $\epsilon$-approximate spectral clustering on graphs of degree $d$. On the other hand, using GNNmp networks to even approximately compute the spectral clustering requires depth $\Omega(\sqrt{n})$.

Next we consider the classical problems of computing a global minimum cut and minimum spanning trees in undirected graphs.

Theorem 4 (Informal Theorem). There exist GNN+ networks of depth $O((D + \log n)\log n)$ and $O(d)$ parameters for computing a global minimum cut (MINCUT) and minimum spanning tree (MST) in degree-$d$ graphs of diameter $D$. Furthermore, using GNNmp networks to compute these primitives (even approximately) necessarily requires depth $\Omega(\sqrt{n})$.

Generalization Bounds. Our final result concerns the generalization properties of a depth-$L$ GNN+ architecture. For ease of exposition, we state here the results for the case when the GNN+ architecture produces a one dimensional output. More general results are presented in Appendix D. Our generalization bounds depend on the depth $L$ and the total number of parameters $P$ in the GNN+ network. Following recent work on providing generalization bounds for fully connected and convolutional neural networks (Bartlett et al., 2017; Long & Sedghi, 2020) that are based on distance to initialization, we consider the class $\mathcal{F}_\beta$ of depth-$L$ GNN+ networks with $P$ parameters that are at a distance $\beta$ from a reference parameter configuration (typically the parameters at random initialization). Let $y \in \mathbb{R}$ denote the output of the network and consider a Lipschitz loss function $\ell(y, \hat{y})$. Then, we provide the following guarantee.

Theorem 5 (Informal Theorem). Let $\ell(\hat{y}, y)$ be a Lipschitz loss function bounded in $[0, B]$. Then, given $m$ i.i.d. samples $(G_1, y_1), (G_2, y_2), \dots, (G_m, y_m)$ generated from a distribution $\mathcal{D}$, with probability at least $2/3$, it holds that for all $f \in \mathcal{F}_\beta$,

$$\Big| \hat{\mathbb{E}}_{\mathcal{D}}[\ell_f] - \mathbb{E}_{\mathcal{D}}[\ell_f] \Big| \le O\bigg(B \sqrt{\frac{P(\beta + L)}{m}}\bigg).$$

We refer the reader to Theorem 16 in Appendix D for a formal statement and the proof. Notice that the above theorem implies that our proposed GNN+ architectures for the above graph problems can indeed be trained using very few samples, as opposed to traditional GNNmp networks, since the GNN+ network requires much smaller depth and fewer parameters. Furthermore, since a GNNmp network is a special case of a GNN+ architecture, our analysis also leads to an improved bound on the generalization guarantees for GNNmp networks. In particular, the above theorem improves upon the recent work of Garg et al. (2020) that provides generalization guarantees for training GNNs that scale with the branching factor of the graph. Using our improved analysis we are able to remove this dependence on the branching factor. See Appendix D for details." }, { "heading": "2 RELATED WORK", "text": "GNNs operate primarily in the message passing framework where nodes aggregate and combine messages from their neighbors to update their embeddings. Several variants of this basic paradigm have been proposed, with each differing in how the aggregation and combination is performed. Popular variants include GraphSAGE (Hamilton et al., 2017), Graph Convolutional Networks (Kipf & Welling, 2016), GIN networks (Xu et al., 2019a), and graph pooling (Ying et al., 2018).

Various recent works have also studied the representation power of GNNs. The work of Xu et al. (2019a) demonstrates that GNNs as considered in equation 1 are as powerful as the Weisfeiler-Lehman test for graph isomorphism (Weisfeiler & Lehman, 1968). The recent work of Xu et al. (2019b) compares the message passing framework of GNNs in representing computations involving dynamic programming. GNN networks that can capture higher order variants of the WL test have also been proposed recently (Maron et al., 2019).

Several works have also explored the limitations of GNNs for computing graph primitives. The work of Loukas (2020) established a correspondence between the message passing GNN framework and the well studied CONGEST model of distributed computing (Peleg, 2000).
Based on the above correspondence it follows that in order to represent several important graph problems such as shortest paths, minimum cuts and minimum spanning trees, either the depth of the GNN or the embedding size of the nodes has to scale with the graph size at a polynomial rate. Notice that these lower bounds apply to any form of message passing framework, and as a result recent work incorporating non-symmetric node messages (Sato et al., 2019) in GNNs runs into the same barriers.

In order to address the above limitations, recent works have proposed combining the GNN architecture with pooling mechanisms for aggregating more global information (Ying et al., 2018; Defferrard et al., 2016; Simonovsky & Komodakis, 2017; Fey et al., 2018; Bianchi et al., 2019; Du et al., 2019). For example, the work of Ying et al. (2018) proposes a hierarchical approach where a GNN network is followed by a clustering step to compute higher level "nodes" to be used in the subsequent GNN operation. While these approaches show empirical promise, ours is the first work to design a principled architecture with theoretical guarantees that merges local distributed computations with global post-processing stages.

Finally, the question of generalization for GNNs has also been studied in recent works. The most relevant to us is the recent work of Garg et al. (2020) that analyzes the Rademacher complexity of GNNs with the aggregate mechanism being a simple addition and the combine mechanism being a one layer neural network. By analyzing the Rademacher complexity, the authors show that generalization for GNNs depends on the depth, the embedding size and the branching factor of the graph. Our improved analysis in Section D extends the result of Garg et al. (2020). Not only does our generalization bound apply to the more general GNN+ networks; for the case of GNNs considered in (Garg et al., 2020), our analysis shows that the dependence on the branching factor can be eliminated in the generalization bounds. Generalization bounds have also been proved recently for GNN based networks that use the Neural Tangent Kernel (NTK) during the aggregation and combination operations (Du et al., 2019)." }, { "heading": "3 SHORTEST PATHS", "text": "In this section we provide a proof sketch of Theorem 1, showing how to construct an efficient GNN+ architecture for the shortest paths problem. In particular, we study all pairs shortest paths.

All Pairs Shortest Paths. The input is a graph $G = (V, E)$ with $n$ nodes. The desired output is a $\binom{n}{2}$-dimensional vector containing (approximate) shortest path values between each pair of vertices. Given a graph $G$, let $d_G(u, v)$ be the shortest path distance between nodes $u$ and $v$. We say that an output $\{\tilde{d}_G(u, v) : u, v \in V\}$ is an $\alpha$-approximate solution if for all $u \neq v$ it holds that

$$d_G(u, v) \le \tilde{d}_G(u, v) \le \alpha\, d_G(u, v).$$

We first show that GNNmp networks are highly inefficient for learning this problem.

Theorem 6. Consider a GNNmp $N$ of depth $L$ over $n$ nodes where each node has a representation size of $B$. If $N$ encodes $\alpha$-approximate all pairs shortest paths for graphs of diameter bounded by $D$, and for $\alpha < 3/2$, then it must hold that $B \cdot L \ge \Omega(n)$. Furthermore, for any GNNmp that encodes $\alpha(n)$-approximate all pairs shortest paths it must hold that $B \cdot L \ge \Omega\big(\frac{n}{\alpha(n) \log n}\big)$. The lower bound holds for undirected unweighted graphs as well.

Proof.
The recent work of Loukas (2020) established that computation in GNNmp networks is equivalent to the CONGEST model of computation popularly studied in the design of distributed algorithms (Peleg, 2000). In particular, a lower bound on the product of depth ($L$) and representation size ($B$) can be obtained by establishing the corresponding lower bound on the product of the number of rounds and the size of messages in the CONGEST model of computing. Furthermore, the result of Holzer & Wattenhofer (2012) shows that in the CONGEST model, approximating all pairs shortest paths, even on unweighted undirected graphs, requires the product of the number of rounds and the message size to be $\Omega(n)$. This was improved in the work of Nanongkai (2014), which showed that for any $\alpha(n)$-approximation, the product of the number of rounds and the message size must be $\Omega\big(\frac{n}{\alpha(n) \log n}\big)$. Hence the corresponding lower bound on $B \cdot L$ follows.

Circumventing Lower Bounds via GNN+. Next we detail our proposed GNN+ architecture that can encode approximate shortest paths with significantly smaller depth and parameter requirements.

Unweighted Graphs. To illustrate the main ideas we study the case of undirected unweighted graphs. See Appendix A for the more general case of weighted graphs. The starting point of our construction is the following fundamental theorem of Bourgain (1985) regarding metric embeddings.

Theorem 7 ((Bourgain, 1985)). Any $n$-point metric $(X, d)$ can be embedded into the Euclidean metric with dimensionality $O(\log n)$ and distortion $O(\log n)$.

The above theorem suggests that in principle, if we only want to estimate shortest paths up to an approximation of $O(\log n)$, then we only need node embeddings of size $O(\log n)$. If there were a GNNmp network that could produce such embeddings, then one could simply compute the Euclidean distance between each pair of points to get the approximate shortest path. Furthermore, computing the Euclidean distance given the node embeddings can be done easily via a low depth fully connected network. Unfortunately, producing the necessary low dimensional embeddings is exactly the task for which GNNmp networks require large depth, as proved in Theorem 6 above. While there do exist semidefinite programming based algorithms (Linial et al., 1995) for computing the embeddings required for Bourgain's theorem, they are not suitable for implementation via efficient neural architectures. Instead we rely on sketching based algorithms for computing shortest path distances.

In particular, for the unweighted case we adapt the sketch based approximate shortest path algorithms of Das Sarma et al. (2010) for designing an efficient network architecture. The sketch proposed in the work of Das Sarma et al. (2010) computes, for each node $u$, the distance of $u$ from a random subset $S$ of the nodes. This can be done via a simple breadth first search (BFS). Repeating this process $k$ times provides a $k$-dimensional embedding for each vertex, and for an appropriate choice of $k$ these embeddings can be used to compute approximate shortest paths. Notice that this sketch based procedure is highly amenable to implementation in a message passing framework. Overall, the algorithm performs multiple parallel BFS subroutines to compute the embeddings. It is also well known that BFS on a graph of diameter $D$ can be implemented by a GNNmp of depth $O(D)$.
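To make the sketching step concrete, the following sketch estimates a distance from BFS distances to random seed sets, using the max-over-coordinates estimator described below. The helper names and the exact seed-set construction (sizes $1, 2, 4, \dots$ with roughly $n^{1/c}$ sets of each size) are illustrative assumptions following the description in the text, not the paper's implementation.

```python
import random
from collections import deque

def dist_to_set(adj, seeds):
    """BFS distance from every node to its nearest seed (one BFS module).
    adj: dict mapping each node to an iterable of neighbors."""
    dist = {s: 0 for s in seeds}
    q = deque(seeds)
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def estimate_distance(adj, n, c, s, t):
    """Sketch-based estimate of d_G(s, t) on a connected unweighted graph."""
    nodes = list(adj)
    sketches = []
    size = 1
    while size <= n:                                  # sizes 1, 2, 4, ...
        for _ in range(max(1, round(n ** (1.0 / c)))):
            sketches.append(dist_to_set(adj, random.sample(nodes, size)))
        size *= 2
    # the paper's estimator: max_i |v_s^(i) - v_t^(i)|
    return max(abs(d[s] - d[t]) for d in sketches)
```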
Based on the above intuition, our proposed architecture is shown in Figure 2. It consists of $k$ parallel breadth first search (BFS) modules for $k = \Theta(n^{1/c} \log n)$, for a constant $c > 1$. Module $i$ computes the shortest path from each vertex in $G$ to any vertex in the set $S_i$. The sets $S_1, S_2, \dots, S_k$ are randomly chosen subsets of the vertex set $V$ of various sizes. In particular, there are $\Theta(n^{1/c})$ subsets of size $1$, $\Theta(n^{1/c})$ subsets of size $2$, $\Theta(n^{1/c})$ subsets of size $2^2$, and so on up to $\Theta(n^{1/c})$ subsets of size $2^{\lfloor \log n \rfloor}$. The BFS module $i$ produces $n$ distance values $v_1^{(i)}, \dots, v_n^{(i)}$. These modules are followed by $\binom{n}{2}$ fully connected networks, where each module is responsible for computing the approximate shortest path distance between a pair of vertices. In particular, we have $\tilde{d}_G(s, t) = \max_i |v_s^{(i)} - v_t^{(i)}|$.

Notice from the discussion in Section 1 that the architecture in Figure 2 is a GNN+ network with a single GNN+ block. In the next section we will show how we can generalize to a suite of graph problems by stacking up multiple GNN+ blocks. For our proposed network we have the following guarantee.

Theorem 8. For any integer $c > 1$, and for a fixed graph topology over $n$ nodes with maximum degree $d$ and diameter $D$, there exists a neural network as shown in Figure 2 of size $O(n^{2+1/c})$, with $\tilde{O}(n^{2/c})$ parameters and depth $O(D \log d + \log n)$, that encodes $(2c - 1)$-approximate all pairs shortest paths in $G$.

Before proving the theorem above, we establish two supporting lemmas concerning the implementation of the BFS modules and the MLP module in the network architecture described in Figure 2.

Lemma 1. The BFS module in Figure 2 can be implemented by a GNN of depth $O(D)$ and $O(1)$ total parameters, with each node having a representation size of $O(1)$.

Lemma 2. For any $k$, the MLP module in Figure 2 can be implemented by a network of depth $O(\log k)$ and $O(k^2)$ total parameters.

Proof of Theorem 8. The correctness of the network architecture follows from the work of Das Sarma et al. (2010). Next we establish bounds on the total depth, size and number of parameters. We have $k = \Theta(n^{1/c} \log n)$ copies of the BFS module. Each BFS module is of size $O(nd \log d)$, since there are $n$ nodes and each node implements a min function of size $O(d \log d)$. Hence, in total the BFS modules have size $O(n^{1+1/c} d \log d \log n)$. Next we have $\binom{n}{2}$ MLP modules, each of size $O(k \log k)$, for a total size of $O(n^{2+1/c} \log n)$. Hence the total size of the neural network is bounded by $O(n^{2+1/c} \log n)$.

Next we bound the depth and the total number of parameters. The BFS module has $O(D)$ rounds, each requiring a depth of $O(\log d)$, for a total depth of $O(D \log d)$. The MLP module has depth bounded by $O(\log k) = O(\log n)$. Hence the total depth is $O(D \log d + \log n)$. Finally, the BFS module requires $O(1)$ parameters and the MLP module requires $O(k^2)$ parameters. Hence, the total number of parameters in our architecture is bounded by $O(k^2) = \tilde{O}(n^{2/c})$." }, { "heading": "4 MINIMUM CUTS", "text": "To illustrate another application, in this section we design an efficient GNN+ based architecture for computing the minimum cut in an undirected graph. We first argue in Appendix C that even computing an approximate mincut using traditional GNNmp networks requires $\Omega(\sqrt{n})$ rounds. Our efficient GNN+ based architecture is based on the parallel algorithm for computing mincut (Karger & Stein, 1996) and is shown in Figure 3. More importantly, the architecture comprises multiple layers of GNN+ blocks, in contrast to a single GNN+ block in the case of shortest paths.

The algorithm of Karger & Stein (1996) relies on the following lemma.

Lemma 3 ((Karger & Stein, 1996)).
Let $G = (V, E)$ be an undirected unweighted graph with $m$ edges and $n$ vertices. Then with probability at least $\frac{1}{n^2}$, a random ordering $L$ of the edges contains a prefix $L'$ such that the graph $G' = (V, L')$ contains exactly two connected components corresponding to the global minimum cut in the graph.

Using the above, Karger & Stein (1996) proposed a Monte-Carlo randomized algorithm to compute the global minimum cut. For a given ordering $L$, the algorithm estimates the length of the prefix $L'$ corresponding to the cut by using a binary search procedure. This provides the set of active edges, i.e., edges in $L'$. Then one can run a connected components algorithm using the edges in $L'$. If the prefix is too small, it results in more than two connected components; if it is too large, it produces one connected component. If the number of connected components is two, then the algorithm stops. Otherwise it recurses on the appropriate side of $L'$.
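One trial of this binary search is easy to state in code; the sketch below (an illustration using a union-find connected-components counter, not the paper's GNN+ implementation) finds the shortest prefix of a random edge ordering that leaves at most two connected components.

```python
import random

def num_components(n, edges):
    """Number of connected components of (V, edges) via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    comps = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return comps

def cut_prefix(n, edges):
    """One Monte-Carlo trial (assumes G connected): binary-search the
    shortest prefix of a random ordering with exactly two components."""
    order = list(edges)
    random.shuffle(order)
    lo, hi = 0, len(order)
    while lo < hi:
        mid = (lo + hi) // 2
        if num_components(n, order[:mid]) > 2:
            lo = mid + 1   # prefix too small
        else:
            hi = mid       # prefix large enough
    # by Lemma 3, with probability >= 1/n^2 the two components of
    # (V, order[:lo]) are the sides of the global minimum cut
    return order[:lo]
```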
We can implement the above algorithm using a GNN+ architecture of depth $O(\log m)$, where the computation in each pair of (GNN, UpdatePrefix) blocks corresponds to executing one call of the above binary search procedure. During each call one needs to perform a connected components subroutine. This can be done via BFS and is implemented in the GNN block as shown in Figure 3. The GNN is followed by the UpdatePrefix module, an MLP implementing the logic of selecting the appropriate side of the permutation to recurse on.

More formally, at each of the $O(\log m) = O(\log n)$ stages, each vertex in the GNNmp maintains a list of which of its connecting edges are active. This requires $O(d)$ representation size. The goal next is to infer whether the number of connected components induced by the active edges is one, two, or more than two. This in turn decides the part of the ordering the next stage will focus on. The computation of connected components can be carried out using at most two breadth first searches and hence via $O(D)$ rounds of a GNNmp network. Following this intuition we arrive at the proposed architecture in Figure 3. Formally, we have the following guarantee.

Theorem 9. For a fixed graph topology over $n$ nodes with maximum degree $d$ and diameter $D$, the network in Figure 3 produces the minimum cut. Furthermore, the network is of depth $\ell = O(D \log^2 n)$, size $O(n\ell)$, and has $O(d + \log n)$ parameters.

Proof. Each vertex maintains an $O(d)$-sized representation indicating which of its edges are currently active, plus an additional constant number of values to indicate its component id during different runs of BFS. Given a list of active edges, the GNN module simply performs a procedure to compute whether the number of connected components is one, two, or more than two. This can be done with at most two BFS runs over the active edges. As we saw before in Section 3, this requires $O(D)$ depth.

At the end of the GNN module each vertex gets an integer value specifying its component id. The UpdatePrefix module then takes this information and is required to perform two computations: a) check whether the number of connected components is one, two, or more than two; this requires checking the number of distinct elements in a list of $n$ numbers and can be done with an MLP with $O(\log n)$ parameters and depth $O(\log n)$; b) update the set of active edges for each vertex depending on the number of connected components; this requires taking in the $O(d)$-sized representation and producing a new $O(d)$-sized representation for each vertex. This can be achieved again by an MLP using $O(d)$ parameters and depth $O(\log d)$. Once a given round of GNN and UpdatePrefix ends, the computation proceeds to the next layer. Importantly, the set of model parameters is shared across the different layers of the architecture, as each time the required computation is the same. Hence overall we get $O(D \log n)$ depth and $O(d + \log^2 n)$ parameters." }, { "heading": "5 EXPERIMENTS", "text": "We show the efficacy of GNN+ on the aforementioned graph problems: Shortest Paths, Effective Resistance, Affinity, MINCUT and MST, and compare to a state-of-the-art GNNmp model (Xu et al., 2019a).

Dataset. We generated synthetic random graphs with between 500 and 1000 nodes. For the affinity measure, we used graphs with 250 nodes because of the need for very dense graphs to have a reasonable number of alternate paths between any two end points. In general, we generated the datasets as follows: we fix the number of nodes $n$ in the graph to take values in $[250, 1000]$. For each value of $n$ we generate graphs from the Erdos-Renyi model $G(n, p)$ with edge sampling probability $p = \frac{\alpha}{n}$. We vary $\alpha$ depending on the problem. Specifically, we set $\alpha$ to be a constant in $[1, 100]$ to capture varying degrees of sparsity. For each $n, p$ we generate 30,000 training examples consisting of tuples of the form $(G, s, t, d(s, t))$, where $G$ is a random graph drawn from $G(n, p)$, $s, t$ are two vertices drawn uniformly at random, and $d(s, t)$ is one of: the shortest path value, effective resistance, or affinity between the two vertices. In the case of min cut and minimum spanning tree, we generate tuples $(G, v_G)$ where $v_G$ corresponds to the value of the minimum spanning tree or the global minimum cut.

Models and Configurations. For our baseline GNNmp implementation, we used the GIN model proposed in Xu et al. (2019a). This has been empirically shown (Xu et al., 2019a; Loukas, 2020; Errica et al., 2020) to be a state-of-the-art GNNmp model on several datasets. GIN updates the feature representation $x_v^{(k)}$ of each node $v$ at iteration $k$ as:

$$x_v^{(k)} = \mathrm{MLP}\Big( (1 + \epsilon^{(k)}) \cdot x_v^{(k-1)} + \sum_{u \in N(v)} x_u^{(k-1)} \Big),$$

where MLP refers to a multi-layer perceptron, $N(v)$ is the set of neighbors of $v$, and $\epsilon$ is a learnable parameter. For problems that involved weighted graphs (e.g. MST), we incorporated edge weights into the GIN update equation by replacing the sum of neighbor representations with a weighted sum.

Our GNN+ implementation also used the same GIN implementation as its internal GNNmp block. All graphs in our experiments were undirected. For both the baseline and GNN+, we used node degree as the input node features for MINCUT and MST. For Shortest Paths, Effective Resistance and Affinity, we set the input node features to be Booleans indicating whether the node is a source/destination node or not.
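A minimal sketch of this GIN update (illustrative: a dense adjacency matrix and a two-layer MLP are assumptions, not the exact experimental configuration) is:

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """GIN node update: x_v <- MLP((1 + eps) * x_v + sum_{u in N(v)} x_u)."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))  # learnable epsilon
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x, adj):
        # x: (n, dim); adj: (n, n) adjacency matrix. Putting edge weights in
        # adj yields the weighted-sum variant used for weighted graphs (MST).
        return self.mlp((1 + self.eps) * x + adj @ x)
```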
Following Xu et al. (2019a), we performed 10-fold cross-validation for each of our experiments (corresponding to the two models and five problems), and report the average validation mean squared error (MSE) across the 10 folds. We run each 10-fold cross-validation experiment 10 times to compute confidence intervals. We apply batch normalization at each layer, use an Adam optimizer, decay the learning rate by 0.5 every 50 epochs, and train for up to 600 epochs. For both the baseline GNNmp and the GNN+ model, we tune the following parameters: initial learning rate $\in \{0.001, 0.003, 0.005, 0.007, 0.01, 0.03, 0.05\}$, number of hidden units $\in \{8, 16, 32, 64\}$, batch size $\in \{32, 64\}$, and dropout $\in \{0, 0.5\}$. For GNNmp we also tuned the depth (number of layers) $\in \{2, 4, 8, 12\}$. For the GNN+ model, we tuned the number of parallel GNNs in each GNN+ block $\in \{1, 2, 3\}$ with GNN depth $\in \{2, 4\}$. We also tuned the number of GNN+ layers $\in \{1, 2, 3\}$. We fixed the depth of each MLP block in GNNmp and GNN+ to 2.

Results. To validate our theory regarding the better generalization bounds of GNN+ models compared to GNNmp, we compare the test mean squared errors of the two models across the five datasets. For all five problems, Table 1 lists the test MSEs and corresponding standard deviations for the two models. As a sanity check, we also report the variance of the labels in our datasets, which corresponds to the MSE obtained by a naive model that predicts the mean label. We observe significant gains in accuracy, anywhere between 15% relative MSE improvement over the GNNmp baseline (for Shortest Paths) and as much as 108% relative MSE improvement (for Effective Resistance). Note that the naive mean predictor's MSE is at least an order of magnitude larger than all the MSE values for GNNmp and GNN+ (except for the MST dataset, where it is around five times larger; we suspect that the weighted graphs involved in this dataset make it a harder problem).

We posit that these accuracy gains stem directly from the sample efficiency of the GNN+ models as captured in Theorems 1, 2 and 4: the most compact GNN+ networks that can represent these problems are smaller than the corresponding most compact GNNmp networks. Hence, by Theorem 5, such networks will have smaller generalization errors. In the appendix, we also plot the test accuracy as a function of the number of epochs, which suggests that our models also converge faster than the baseline GNNmp models, though we do not have any theoretical justification supporting this observation.

Experiments on Real World Data. We further demonstrate the applicability of our proposed GNN+ architecture for solving classification tasks involving real world graphs. We experiment with the following real world datasets (Yanardag & Vishwanathan, 2015) that have been used in recent works for evaluating various GNN architectures (Xu et al., 2019a): 1) IMDB-BINARY and 2) IMDB-MULTI: movie collaboration datasets with nodes as actors and the class label being the genre; 3) COLLAB: a scientific collaboration dataset with three classes; 4) PROTEINS: a bioinformatics dataset with 3 class labels; 5) PTC, 6) NCI1 and 7) MUTAG: datasets of chemical compounds with two class labels each.

We train our proposed GNN+ architecture on these graphs using the cross-entropy loss and, as before, compare with the GIN architecture of Xu et al. (2019a). We use the same input node features as in Xu et al. (2019a) and follow the same experimental methodology as for the synthetic graphs above. In particular, during hyperparameter tuning we allow the GNNmp architecture to explore depth up to 9, whereas the GNN+ architecture is tuned by restricting the depth to at most 3. The results are summarized in Table 2 below. As can be seen, in each instance GNN+ either outperforms or matches the performance of the GNNmp architecture in terms of final test accuracy." } ]
2020
null
SP:ab6c0eee6eebb90361fa87f9beeaf1722e4ec983
[ "This paper introduces variational dynamic mixtures (VDM), a new variational family, and demonstrates that using VDM to model the approximate posterior in sequential latent variable models can better capture multi-modality in data. VDM includes a distribution over recurrent states in the inference model, such that a sampling-based marginalization of this distribution reduces the approximate posterior to a mixture model. Setting the weights such that only the most probable mixture component is selected allows other mixture components to capture other modes. The authors validate VDM on both synthetic and real multimodal datasets, which outperform baselines with respect to negative log-likelihood and a new empirical Wasserstein distance." ]
Deep probabilistic time series forecasting models have become an integral part of machine learning. While several powerful generative models have been proposed, we provide evidence that their associated inference models are oftentimes too limited and cause the generative model to predict mode-averaged dynamics. Mode-averaging is problematic since many real-world sequences are highly multi-modal, and their averaged dynamics are unphysical (e.g., predicted taxi trajectories might run through buildings on the street map). To better capture multi-modality, we develop variational dynamic mixtures (VDM): a new variational family to infer sequential latent variables. The VDM approximate posterior at each time step is a mixture density network, whose parameters come from propagating multiple samples through a recurrent architecture. This results in an expressive multi-modal posterior approximation. In an empirical study, we show that VDM outperforms competing approaches on highly multi-modal datasets from different domains.
[]
[ { "authors": [ "Ienkaran Arasaratnam", "Simon Haykin" ], "title": "Cubature kalman filters", "venue": "IEEE Transactions on automatic control,", "year": 2009 }, { "authors": [ "Marie Auger-Méthé", "Chris Field", "Christoffer M Albertsen", "Andrew E Derocher", "Mark A Lewis", "Ian D Jonsen", "Joanna Mills Flemming" ], "title": "State-space models’ dirty little secrets: even simple linear gaussian models can have estimation problems", "venue": "Scientific reports,", "year": 2016 }, { "authors": [ "Philipp Becker", "Harit Pandya", "Gregor Gebhardt", "Cheng Zhao", "James Taylor", "Gerhard Neumann" ], "title": "Recurrent kalman networks: Factorized inference in high-dimensional deep feature spaces", "venue": "In Thirty-sixth International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Philip Becker-Ehmck", "Jan Peters", "Patrick Van Der Smagt" ], "title": "Switching linear dynamics for variational bayes filtering", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Apratim Bhattacharyya", "Bernt Schiele", "Mario Fritz" ], "title": "Accurate and diverse sampling of sequences based on a “best of many", "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Apratim Bhattacharyya", "Michael Hanselmann", "Mario Fritz", "Bernt Schiele", "Christoph-Nikolas Straehle" ], "title": "Conditional flow variational autoencoders for structured sequence prediction", "venue": null, "year": 1908 }, { "authors": [ "Christopher M Bishop" ], "title": "Pattern recognition and machine learning", "venue": "springer,", "year": 2006 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Junyoung Chung", "Caglar Gulcehre", "KyungHyun Cho", "Yoshua Bengio" ], "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "venue": "arXiv preprint arXiv:1412.3555,", "year": 2014 }, { "authors": [ "Junyoung Chung", "Kyle Kastner", "Laurent Dinh", "Kratarth Goel", "Aaron C Courville", "Yoshua Bengio" ], "title": "A recurrent latent variable model for sequential data", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Edward De Brouwer", "Jaak Simm", "Adam Arany", "Yves Moreau" ], "title": "Gru-ode-bayes: Continuous modeling of sporadically-observed time series", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Andreas Doerr", "Christian Daniel", "Martin Schiegg", "Duy Nguyen-Tuong", "Stefan Schaal", "Marc Toussaint", "Sebastian Trimpe" ], "title": "Probabilistic recurrent state-space models", "venue": "In Thirty-fifth International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Marco Fraccaro", "Søren Kaae Sønderby", "Ulrich Paquet", "Ole Winther" ], "title": "Sequential neural models with stochastic layers", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Marco Fraccaro", "Simon Kamronn", "Ulrich Paquet", "Ole Winther" ], "title": "A disentangled recognition and nonlinear dynamics model for unsupervised learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Daniel Gedon", "Niklas Wahlström", 
"Thomas B Schön", "Lennart Ljung" ], "title": "Deep state space models for nonlinear system identification", "venue": "arXiv preprint arXiv:2003.14162,", "year": 2020 }, { "authors": [ "Mevlana Gemici", "Chia-Chun Hung", "Adam Santoro", "Greg Wayne", "Shakir Mohamed", "Danilo J Rezende", "David Amos", "Timothy Lillicrap" ], "title": "Generative temporal models with memory", "venue": "arXiv preprint arXiv:1702.04649,", "year": 2017 }, { "authors": [ "Anirudh Goyal Alias Parth Goyal", "Alessandro Sordoni", "Marc-Alexandre Côté", "Nan Rosemary Ke", "Yoshua Bengio" ], "title": "Z-forcing: Training stochastic recurrent networks", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Aditya Grover", "Manik Dhar", "Stefano Ermon" ], "title": "Flow-gan: Combining maximum likelihood and adversarial learning in generative models", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Marcel Hirt", "Petros Dellaportas" ], "title": "Scalable bayesian learning for state space models using variational inference with smc samplers", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Rudolph Emil Kalman" ], "title": "A new approach to linear filtering and prediction problems", "venue": null, "year": 1960 }, { "authors": [ "Maximilian Karl", "Maximilian Soelch", "Justin Bayer", "Patrick Van der Smagt" ], "title": "Deep variational bayes filters: Unsupervised learning of state space models from raw data", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Vineet Kosaraju", "Amir Sadeghian", "Roberto Martı́n-Martı́n", "Ian Reid", "Hamid Rezatofighi", "Silvio Savarese" ], "title": "Social-bigat: Multimodal trajectory forecasting using bicycle-gan and graph attention networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Rahul G Krishnan", "Uri Shalit", "David Sontag" ], "title": "Structured inference networks for nonlinear state space models", "venue": "In Thirty-first aaai conference on artificial intelligence,", "year": 2017 }, { "authors": [ "Alex M Lamb", "Anirudh Goyal Alias Parth Goyal", "Ying Zhang", "Saizheng Zhang", "Aaron C Courville", "Yoshua Bengio" ], "title": "Professor forcing: A new algorithm for training recurrent networks", "venue": "In Advances In Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Tuan Anh Le", "Maximilian Igl", "Tom Rainforth", "Tom Jin", "Frank Wood" ], "title": "Auto-encoding sequential monte carlo", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Namhoon Lee", "Wongun Choi", "Paul Vernaza", "Christopher B Choy", "Philip HS Torr", "Manmohan Chandraker" ], "title": "Desire: Distant future prediction in dynamic scenes with interacting agents", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Yingzhen Li", "Stephan Mandt" ], "title": "Disentangled sequential autoencoder", "venue": "In Thirty-fifth International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yunzhu Li", "Jiaming Song", "Stefano Ermon" ], "title": "Infogail: Interpretable imitation learning from visual demonstrations", "venue": "In 
Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Scott Linderman", "Matthew Johnson", "Andrew Miller", "Ryan Adams", "David Blei", "Liam Paninski" ], "title": "Bayesian learning and inference in recurrent switching linear dynamical systems", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Thomas Lucas", "Konstantin Shmelkov", "Karteek Alahari", "Cordelia Schmid", "Jakob Verbeek" ], "title": "Adaptive density estimation for generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Christian Naesseth", "Scott Linderman", "Rajesh Ranganath", "David Blei" ], "title": "Variational sequential monte carlo", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2018 }, { "authors": [ "Josue Nassar", "Scott Linderman", "Monica Bugallo", "Il Memming Park" ], "title": "Tree-structured recurrent switching linear dynamical systems for multi-scale modeling", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Syama Sundar Rangapuram", "Matthias W Seeger", "Jan Gasthaus", "Lorenzo Stella", "Yuyang Wang", "Tim Januschowski" ], "title": "Deep state space models for time series forecasting", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Marc’Aurelio Ranzato", "Sumit Chopra", "Michael Auli", "Wojciech Zaremba" ], "title": "Sequence level training with recurrent neural networks", "venue": "In 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Joeri Rogelj", "Michel Den Elzen", "Niklas Höhne", "Taryn Fransen", "Hanna Fekete", "Harald Winkler", "Roberto Schaeffer", "Fu Sha", "Keywan Riahi", "Malte" ], "title": "Meinshausen. Paris agreement climate proposals need a boost to keep warming well below 2 c", "venue": null, "year": 2016 }, { "authors": [ "Amir Sadeghian", "Vineet Kosaraju", "Ali Sadeghian", "Noriaki Hirose", "Hamid Rezatofighi", "Silvio Savarese. 
Sophie" ], "title": "An attentive gan for predicting paths compliant to social and physical constraints", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ardavan Saeedi", "Tejas D Kulkarni", "Vikash K Mansinghka", "Samuel J Gershman" ], "title": "Variational particle approximations", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Florian Schmidt", "Thomas Hofmann" ], "title": "Deep state space models for unconditional word generation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Florian Schmidt", "Stephan Mandt", "Thomas Hofmann" ], "title": "Autoregressive text generation beyond feedback loops", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Jakub M Tomczak", "Max Welling" ], "title": "Vae with a vampprior", "venue": "In 21st International Conference on Artifi-cial Intelligence and Statistics,", "year": 2018 }, { "authors": [ "Cédric Villani" ], "title": "Optimal transport: old and new, volume 338", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Eric A Wan", "Rudolph Van Der Merwe" ], "title": "The unscented kalman filter for nonlinear estimation", "venue": "In Proceedings of the IEEE", "year": 2000 }, { "authors": [ "Yuanxin Wu", "Dewen Hu", "Meiping Wu", "Xiaoping Hu" ], "title": "A numerical-integration perspective on gaussian filters", "venue": "IEEE Transactions on Signal Processing,", "year": 2006 }, { "authors": [ "Xun Zheng", "Manzil Zaheer", "Amr Ahmed", "Yuan Wang", "Eric P Xing", "Alexander J Smola" ], "title": "State space lstm models with particle mcmc inference", "venue": "arXiv preprint arXiv:1711.11179,", "year": 2017 }, { "authors": [ "Zachary M Ziegler", "Alexander M Rush" ], "title": "Latent normalizing flows for discrete sequences", "venue": "In Thirty-sixth International Conference on Machine Learning,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Making sense of time series data is an important challenge in various domains, including ML for climate change. One important milestone to reach the climate goals is to significantly reduce the CO2 emissions from mobility (Rogelj et al., 2016). Accurate forecasting models of typical driving behavior and of typical pollution levels over time can help both lawmakers and automotive engineers to develop solutions for cleaner mobility. In these applications, no accurate physical model of the entire dynamic system is known or available. Instead, data-driven models, specifically deep probabilistic time series models, can be used to solve the necessary tasks including forecasting.\nThe dynamics in such data can be highly multi-modal. At any given part of the observed sequence, there might be multiple distinct continuations of the data that are plausible, but the average of these behaviors is unlikely, or even physically impossible. Consider for example a dataset of taxi trajectories1. In each row of Fig. 1a, we have selected 50 routes from the dataset with similar starting behavior (blue). Even though these routes are quite similar to each other in the first 10 way points, the continuations of the trajectories (red) can exhibit quite distinct behaviors and lead to points on any far edge of the map. The trajectories follow a few main traffic arteries, these could be considered the main modes of the data distribution. We would like to learn a generative model of the data, that based on some initial way points, can forecast plausible continuations for the trajectories.\nMany existing methods make restricting modeling assumptions such as Gaussianity to make learning tractable and efficient. But trying to capture the dynamics through unimodal distributions can lead either to “over-generalization”, (i.e. putting probability mass in spurious regions) or on focusing only on the dominant mode and thereby neglecting important structure of the data. Even neural approaches, with very flexible generative models can fail to fully capture this multi-modality because their capacity is often limited through the assumptions of their inference model. To address this, we develop variational dynamic mixtures (VDM). Its generative process is a sequential latent variable model. The main novelty is a new multi-modal variational family which makes learning and inference multi-modal yet tractable. In summary, our contributions are\n• A new inference model. We establish a new type of variational family for variational inference of sequential latent variables. By successively marginalizing over previous latent states, the procedure can be efficiently carried-out in a single forward pass and induces a multi-modal posterior\n1https://www.kaggle.com/crailtap/taxi-trajectory\napproximation. We can see in Fig. 1b, that VDM trained on a dataset of taxi trajectories produces forecasts with the desired multi-modality while other methods overgeneralize.\n• An evaluation metric for multi-modal tasks. The negative log-likelihood measures predictive accuracy but neglects an important aspect of multi-modal forecasts – sample diversity. In Section 4, we derive a score based on the Wasserstein distance (Villani, 2008) which evaluates both sample quality and diversity. This metric complements our evaluation based on log-likelihoods.\n• An extensive empirical study. 
In Section 4, we use VDM to study various datasets, including synthetic data with four modes, a stochastic Lorenz attractor, the taxi trajectories, and a U.S. pollution dataset with measurements of various pollutants over time. We illustrate VDM's ability to model multi-modal dynamics, and provide quantitative comparisons to other methods showing that VDM compares favorably to previous work." }, { "heading": "2 RELATED WORK", "text": "Neural recurrent models. Recurrent neural networks (RNNs) such as LSTMs (Hochreiter & Schmidhuber, 1997) and GRUs (Chung et al., 2014) have proven successful on many time series modeling tasks. However, as deterministic models they cannot capture uncertainties in their dynamic predictions. Stochastic RNNs make these sequence models non-deterministic (Chung et al., 2015; Fraccaro et al., 2016; Gemici et al., 2017; Li & Mandt, 2018). For example, the variational recurrent neural network (VRNN) (Chung et al., 2015) enables multiple stochastic forecasts due to its stochastic transition dynamics. An extension of VRNN (Goyal et al., 2017) uses an auxiliary cost to alleviate the KL-vanishing problem. It improves on VRNN inference by forcing the latent variables to also be predictive of future observations. Another line of related methods relies on particle filtering (Naesseth et al., 2018; Le et al., 2018; Hirt & Dellaportas, 2019), and in particular sequential Monte Carlo (SMC), to improve the evidence lower bound. In contrast, VDM adopts an explicitly multi-modal posterior approximation. Another SMC-based work (Saeedi et al., 2017) employs search-based techniques for multi-modality but is limited to models with finite discrete states. Recent works (Schmidt & Hofmann, 2018; Schmidt et al., 2019; Ziegler & Rush, 2019) use normalizing flows in the latent space to model the transition dynamics. A normalizing flow requires many layers to transform its base distribution into a truly multi-modal distribution in practice. In contrast, mixture density networks (as used by VDM) achieve multi-modality by mixing only one layer of neural networks. A task orthogonal to multi-modal inference is learning disentangled representations. Here too, mixture models are used (Chen et al., 2016; Li et al., 2017). These papers use discrete variables and a mutual information based term to disentangle different aspects of the data.

VAE-like models (Bhattacharyya et al., 2018; 2019) and GAN-like models (Sadeghian et al., 2019; Kosaraju et al., 2019) only have global, time-independent latent variables. Yet, they show good results on various tasks, including forecasting. With a deterministic decoder, these models focus on average dynamics and don't capture local details (including multi-modal transitions) very well. Sequential latent variable models are described next.

Deep state-space models. Classical state-space models (SSMs) are popular due to their tractable inference and interpretable predictions. Similarly, deep SSMs with locally linear transition dynamics enjoy tractable inference (Karl et al., 2017; Fraccaro et al., 2017; Rangapuram et al., 2018; Becker et al., 2019). However, these models are often not expressive enough to capture complex (or highly multi-modal) dynamics. Nonlinear deep SSMs (Krishnan et al., 2017; Zheng et al., 2017; Doerr et al., 2018; De Brouwer et al., 2019; Gedon et al., 2020) are more flexible. Their inference is often no longer tractable and requires variational approximations.
Unfortunately, in order for the inference model to be tractable, the variational approximations are often simplistic and don't approximate multi-modal posteriors well, with negative effects on the trained models. Multi-modality can be incorporated via additional discrete switching latent variables, as in recurrent switching linear dynamical systems (Linderman et al., 2017; Nassar et al., 2018; Becker-Ehmck et al., 2019). However, these discrete states make inference more involved." }, { "heading": "3 VARIATIONAL DYNAMIC MIXTURES", "text": "We develop VDM, a new sequential latent variable model for multi-modal dynamics. Given sequential observations $x_{1:T} = (x_1, \dots, x_T)$, VDM assumes that the underlying dynamics are governed by latent states $z_{1:T} = (z_1, \dots, z_T)$. We first present the generative process and the multi-modal inference model of VDM. We then derive a new variational objective that encourages multi-modal posterior approximations, and we explain how it is regularized via hybrid training. Finally, we introduce a new sampling method used in the inference procedure.

Generative model. The generative process consists of a transition model and an emission model. The transition model $p(z_t \mid z_{<t})$ describes the temporal evolution of the latent states and the emission model $p(x_t \mid z_{\le t})$ maps the states to observations. We assume they are parameterized by two separate neural networks, the transition network $\phi^{\mathrm{tra}}$ and the emission network $\phi^{\mathrm{dec}}$. To give the model the capacity to capture longer range temporal correlations, we parametrize the transition model with a recurrent architecture $\phi^{\mathrm{GRU}}$ (Auger-Méthé et al., 2016; Zheng et al., 2017) such as a GRU (Chung et al., 2014). The latent states $z_t$ are sampled recursively from

$$z_t \mid z_{<t} \sim \mathcal{N}(\mu_{0,t}, \sigma_{0,t}^2 I), \quad \text{where } [\mu_{0,t}, \sigma_{0,t}^2] = \phi^{\mathrm{tra}}(h_{t-1}), \quad h_{t-1} = \phi^{\mathrm{GRU}}(z_{t-1}, h_{t-2}), \quad (1)$$

and are then decoded such that the observations can be sampled from the emission model,

$$x_t \mid z_{\le t} \sim \mathcal{N}(\mu_{x,t}, \sigma_{x,t}^2 I), \quad \text{where } [\mu_{x,t}, \sigma_{x,t}^2] = \phi^{\mathrm{dec}}(z_t, h_{t-1}). \quad (2)$$

This generative process is similar to that of Chung et al. (2015), though we did not incorporate autoregressive feedback due to its negative impact on long-term generation (Ranzato et al., 2016; Lamb et al., 2016). The competitive advantage of VDM comes from a more expressive inference model.
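For concreteness, a minimal sketch of ancestral sampling from this generative process follows; the network shapes (single linear layers for the transition and emission networks, a log-variance parametrization of the variances) are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class VDMGenerative(nn.Module):
    """Ancestral sampling from the VDM generative model, Eqs. (1)-(2)."""
    def __init__(self, z_dim, h_dim, x_dim):
        super().__init__()
        self.gru = nn.GRUCell(z_dim, h_dim)             # phi^GRU
        self.trans = nn.Linear(h_dim, 2 * z_dim)        # phi^tra
        self.dec = nn.Linear(z_dim + h_dim, 2 * x_dim)  # phi^dec

    def sample(self, T, z, h):
        # z: (B, z_dim) initial latent state; h: (B, h_dim) initial hidden state
        xs = []
        for _ in range(T):
            h = self.gru(z, h)                          # h_{t-1} = GRU(z_{t-1}, h_{t-2})
            mu0, logvar0 = self.trans(h).chunk(2, -1)   # Eq. (1)
            z = mu0 + (0.5 * logvar0).exp() * torch.randn_like(mu0)
            mux, logvarx = self.dec(torch.cat([z, h], -1)).chunk(2, -1)
            xs.append(mux + (0.5 * logvarx).exp() * torch.randn_like(mux))  # Eq. (2)
        return torch.stack(xs)
```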
Only a single sample is propagated through the recurrent network, and all other information about the distribution of the previous latent states z_{<t} is lost. In contrast, VDM explicitly maintains s_t as part of the inference model. Through marginalization, the entire distribution is taken into account for inferring the next state z_t. Beyond the factorization assumption and the marginal consistency constraint of Eq. (3), the variational family of VDM needs two more choices to be fully specified: first, one has to choose the parametrizations of q(z_t | s_{t−1}, x_t) and q(s_{t−1} | x_{<=t}), and second, one has to choose a sampling method to approximate the marginalization in Eq. (3). These choices determine the resulting factors q(z_t | x_{<=t}) of the variational family. We assume that the variational distribution of the recurrent state factorizes as q(s_{t−1} | x_{<=t}) = ω(s_{t−1}, x_t) q̃(s_{t−1} | x_{<t}), i.e., it is the distribution of the recurrent state given the past observations (see Footnote 2), re-weighted by a weighting function ω(s_{t−1}, x_t) which involves only the current observations. For VDM, we only need samples from q̃(s_{t−1} | x_{<t}), which are obtained by sampling from the previous posterior approximation q(z_{t−1} | x_{<t}) and transforming the sample with the RNN,

$$s_{t-1}^{(i)} \sim \tilde{q}(s_{t-1} \mid x_{<t}) \;\;\text{equiv. to}\;\; s_{t-1}^{(i)} = \phi^{\mathrm{GRU}}(z_{t-1}^{(i)}, h_{t-2}), \quad z_{t-1}^{(i)} \sim q(z_{t-1} \mid x_{<t}), \qquad (4)$$

where i indexes the samples. The RNN φ^GRU has the same parameters as in the generative model.

Augmenting the variational model with the recurrent state has another advantage: approximating the marginalization in Eq. (3) with k samples from q(s_{t−1} | x_{<=t}) and choosing a Gaussian parametrization for q(z_t | s_{t−1}, x_t) results in a q-distribution q(z_t | x_{<=t}) that resembles a mixture density network (Bishop, 2006), which is a convenient choice to model multi-modal distributions.

$$q(z_t \mid x_{\le t}) = \sum_{i}^{k} \omega_t^{(i)}\, \mathcal{N}(\mu_{z,t}^{(i)}, \sigma_{z,t}^{(i)2} I), \qquad [\mu_{z,t}^{(i)}, \sigma_{z,t}^{(i)2}] = \phi^{\mathrm{inf}}(s_{t-1}^{(i)}, x_t). \qquad (5)$$

We assume q(z_t | s_{t−1}, x_t) to be Gaussian and use an inference network φ^inf to model the effect of the observation x_t and the recurrent state s_{t−1} on the mean and variance of the mixture components.

The mixture weights ω_t^{(i)} := ω(s_{t−1}^{(i)}, x_t)/k come from the variational distribution q(s_{t−1} | x_{<=t}) = ω(s_{t−1}, x_t) q̃(s_{t−1} | x_{<t}) and importance sampling (see Footnote 3). We are free to choose how to parametrize the weights, as long as all variational distributions are properly normalized. Setting

$$\omega_t^{(i)} = \omega(s_{t-1}^{(i)}, x_t)/k := \mathbb{1}\big(i = \arg\max_j\, p(x_t \mid h_{t-1} = s_{t-1}^{(j)})\big), \qquad (6)$$

achieves this. In Appendix A, we explain this choice with importance sampling, and in Appendix H, we compare the performance of VDM under alternative variational choices for the weights.

In the next time step, plugging the variational distribution q(z_t | x_{<=t}) into Eq. (4) yields the next distribution over recurrent states q̃(s_t | x_{<=t}). For this, the expected recurrent state h_{t−1} is required.

Footnote 2: q̃(s_{t−1} | x_{<t}) is the distribution obtained by transforming the previous z_{t−1} ∼ q(z_{t−1} | x_{<t}) through the RNN. It can be expressed analytically using the Kronecker δ to compare whether the stochastic variable s_{t−1} equals the output of the RNN: $\tilde{q}(s_{t-1} \mid x_{<t}) \propto \int \delta\big(s_{t-1} - \phi^{\mathrm{GRU}}(z_{t-1}, h_{t-2})\big)\, q(z_{t-1} \mid x_{t-1}, \lambda_{t-1})\, dz_{t-1}$.

Footnote 3: the ω adjusts for using samples from q̃(s_{t−1} | x_{<t}) when marginalizing over ω(s_{t−1}, x_t) q̃(s_{t−1} | x_{<t}).

We approximate the update using the same k samples (and therefore the same weights) as in Eq. (5):

$$h_{t-1} = \mathbb{E}[s_{t-1}] = \int s_{t-1}\, q(s_{t-1} \mid x_{\le t})\, ds_{t-1} \approx \sum_{i}^{k} \omega_t^{(i)} s_{t-1}^{(i)}. \qquad (7)$$

A schematic view of the generative and inference model of VDM is shown in Fig. 2. 
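To make the recursion of Eqs. (4) to (7) concrete, the following is a minimal PyTorch sketch of a single inference step. It is an illustration under our own assumptions, not the authors' implementation: the module names (phi_gru, phi_inf, phi_dec), the dimensions, and the evaluation of the generative likelihood at the component means are all choices made for the example.

```python
import torch

d_z, d_h, d_x, k = 6, 32, 3, 13  # hypothetical sizes; k samples per step

phi_gru = torch.nn.GRUCell(d_z, d_h)           # shared with the generative model
phi_inf = torch.nn.Linear(d_h + d_x, 2 * d_z)  # outputs [mu_z, log var_z]
phi_dec = torch.nn.Linear(d_h + d_z, 2 * d_x)  # decoder used here for the weights

def inference_step(mu_prev, logvar_prev, h_prev2, x_t):
    # Eq. (4): sample z_{t-1}^(i) from the previous posterior, push through the RNN.
    z_prev = mu_prev + (0.5 * logvar_prev).exp() * torch.randn(k, d_z)
    s_prev = phi_gru(z_prev, h_prev2.expand(k, d_h))          # s_{t-1}^(i), (k, d_h)
    # Eq. (5): every sample parameterizes one Gaussian mixture component.
    mu_z, logvar_z = phi_inf(torch.cat([s_prev, x_t.expand(k, d_x)], -1)).chunk(2, -1)
    # Eq. (6): one-hot weight on the component with the highest generative likelihood;
    # p(x_t | h_{t-1}=s^(j)) is approximated by decoding the component mean.
    mu_x, logvar_x = phi_dec(torch.cat([mu_z, s_prev], -1)).chunk(2, -1)
    loglik = -0.5 * ((x_t - mu_x) ** 2 / logvar_x.exp() + logvar_x).sum(-1)
    w = torch.nn.functional.one_hot(loglik.argmax(), k).float()
    # Eq. (7): expected recurrent state, reused as h_{t-1} in the next step.
    h_prev = (w.unsqueeze(-1) * s_prev).sum(0)
    return mu_z, logvar_z, w, h_prev
```

In the full model, these per-step quantities feed the ELBO of Eq. (8) below.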
In summary, the inference model of VDM alternates between Eqs. (4) to (7). Latent states are sampled from the posterior approximation of the previous time step and transformed by Eq. (4) into samples of the recurrent state of the RNN. These are then combined with the new observation x_t to produce the next variational posterior (Eq. (5)), and the expected recurrent state is updated (Eq. (7)). These are then used in Eq. (4) again. Approximating the marginalization in Eq. (3) with a single sample recovers the inference model of VRNN (Chung et al., 2015) and fails in modeling multi-modal dynamics, as shown in Fig. 3. In comparison, VDM's approximate marginalization over the recurrent states with multiple samples succeeds in modeling multi-modal dynamics.

Variational objective. We develop an objective to optimize the variational parameters of VDM, φ = [φ^tra, φ^dec, φ^GRU, φ^inf]. The evidence lower bound (ELBO) at each time step is

$$\begin{aligned} \mathcal{L}_{\mathrm{ELBO}}(x_{\le t}, \phi) := &\; \frac{1}{k}\sum_{i}^{k} \omega(s_{t-1}^{(i)}, x_t)\, \mathbb{E}_{q(z_t \mid s_{t-1}^{(i)}, x_t)}\Big[\log p(x_t \mid z_t, h_{t-1} = s_{t-1}^{(i)})\Big] \\ &+ \frac{1}{k}\sum_{i}^{k} \omega(s_{t-1}^{(i)}, x_t)\, \mathbb{E}_{q(z_t \mid s_{t-1}^{(i)}, x_t)}\Big[\log \frac{p(z_t \mid h_{t-1} = s_{t-1}^{(i)})}{q(z_t \mid s_{t-1}^{(i)}, x_t)}\Big] \\ &- \frac{1}{k}\sum_{i}^{k} \omega(s_{t-1}^{(i)}, x_t)\Big[\log \omega(s_{t-1}^{(i)}, x_t) + C\Big] \end{aligned} \qquad (8)$$

Claim 1. The ELBO in Eq. (8) is a lower bound on the log evidence log p(x_t | x_{<t}),

$$\log p(x_t \mid x_{<t}) \ge \mathcal{L}_{\mathrm{ELBO}}(x_{\le t}, \phi), \quad \text{(see proof in Appendix B)}. \qquad (9)$$

In addition to the ELBO, the objective of VDM has two regularization terms,

$$\mathcal{L}_{\mathrm{VDM}}(\phi) = \sum_{t=1}^{T} \mathbb{E}_{p_{\mathrm{data}}}\big[-\mathcal{L}_{\mathrm{ELBO}}(x_{\le t}, \phi) - \omega_1 \mathcal{L}_{\mathrm{pred}}(x_{\le t}, \phi)\big] + \omega_2 \mathcal{L}_{\mathrm{adv}}(x_{\le t}, \phi)\,. \qquad (10)$$

In an ablation study in Appendix E, we compare the effect of including and excluding the regularization terms in the objective. VDM is competitive without these terms, but we got the strongest results by setting ω_{1,2} = 1 (this is the only nonzero value we tried; the hyperparameter could be tuned even further). The first regularization term, L_pred, encourages the variational posterior (from the previous time step) to produce samples that maximize the predictive likelihood,

$$\mathcal{L}_{\mathrm{pred}}(x_{\le t}, \phi) = \log \mathbb{E}_{q(s_{t-1} \mid x_{<t})}\big[p(x_t \mid s_{t-1}, x_{<t})\big] \approx \log \frac{1}{k}\sum_{i}^{k} p(x_t \mid s_{t-1}^{(i)})\,. \qquad (11)$$

This regularization term helps to improve the prediction performance, since it depends on the predictive likelihood of samples, which is not involved in the ELBO. The second, optional regularization term L_adv (Eq. (12)) is based on ideas from hybrid adversarial-likelihood training (Grover et al., 2018; Lucas et al., 2019). These training strategies have been developed for generative models of images to generate sharper samples while avoiding “mode collapse”. We adapt these ideas to generative models of dynamics. The adversarial term L_adv uses a forward KL divergence, which enables “quality-driven training” to discourage probability mass in spurious areas.

$$\mathcal{L}_{\mathrm{adv}}(x_{\le t}, \phi) = D_{\mathrm{KL}}\big(p(x_t \mid x_{<t}) \,\|\, p_{\mathcal{D}}(x_t \mid x_{<t})\big) = \mathbb{E}\big[\log p(x_t \mid x_{<t}) - \log p_{\mathcal{D}}(x_t \mid x_{<t})\big] \qquad (12)$$

The expectation is taken w.r.t. p(x_t | x_{<t}). The true predictive distribution p_D(x_t | x_{<t}) is unknown. Instead, we can train the generator of a conditional GAN (Mirza & Osindero, 2014), while assuming an optimal discriminator. As a result, we optimize Eq. (12) in an adversarial manner, conditioning on x_{<t} at each time step. Details about the discriminator are in Appendix G.

Stochastic cubature approximation (SCA). The variational family of VDM is defined by a number of modeling choices, including the factorization and marginal consistency assumptions of Eq. (3), the parametrization of the transition and inference networks (Eqs. (4) and (5)), and the choice of weighting function ω(·). 
It is also sensitive to the choice of sampling method, which we discuss here. In principle, we could use Monte Carlo methods. However, for a relatively small number of samples k, Monte Carlo methods do not have a mechanism to control the quality of samples. We instead develop a semi-stochastic approach based on the cubature approximation (Wan & Van Der Merwe, 2000; Wu et al., 2006; Arasaratnam & Haykin, 2009), which chooses samples more carefully. The cubature approximation proceeds by constructing k = 2d + 1 so-called sigma points, which are optimally spread out on the d-dimensional Gaussian with the same mean and covariance as the distribution we need samples from. In SCA, the deterministic sigma points are infused with Gaussian noise to obtain stochastic sigma variables. A detailed derivation of SCA is in Appendix D.

We use SCA for various reasons: First, it typically requires fewer samples than Monte Carlo methods because the sigma points are carefully chosen to capture the first two moments of the underlying distribution. Second, it ensures a persistence of the mixture components; when we resample, we sample another nearby point from the mixture component and not an entirely new location." }, { "heading": "4 EVALUATION AND EXPERIMENTS", "text": "In this empirical study, we evaluate VDM's ability to model multi-modal dynamics and show its competitive forecasting performance in various domains. We first introduce the evaluation metrics and baselines. Experiments on synthetic data demonstrate that VDM is truly multi-modal, thereby supporting the modeling choices of Section 3, especially for the inference model. Then, experiments on real-world datasets with challenging multi-modal dynamics show the benefit of VDM over state-of-the-art (deep) probabilistic time-series models.

Evaluation metrics. In the experiments, we always create a training set, a validation set, and a test set. During validation and testing, each trajectory is split into two parts: initial observations (given to the models for inference) and continuations of the trajectories (to be predicted and not accessible to the models). The inference models are used to process the initial observations and to infer latent states. These are then processed by the generative models to produce forecasts.

We use three criteria to evaluate these forecasts: (i) multi-step-ahead prediction p(x_{t+1:t+τ} | x_{1:t}), (ii) one-step-ahead prediction p(x_{t+1} | x_{1:t}), and (iii) the empirical Wasserstein distance. As in other work (Lee et al., 2017; Bhattacharyya et al., 2018; 2019), (i) and (ii) are reported in terms of negative log-likelihood. While the predictive distribution for one-step-ahead prediction is in closed form, the long-term forecasts have to be computed using samples. For each ground-truth trajectory x, we generate n = 1000 forecasts x̂_i given initial observations from the beginning of the trajectory,

$$\mathrm{NLL} = -\log\Bigg(\frac{1}{n}\sum_{i}^{n} \frac{1}{\sqrt{2\pi}} \exp\Big(-\frac{(\hat{x}_i - x)^2}{2}\Big)\Bigg), \qquad (13)$$

This evaluates the predictive accuracy but neglects a key aspect of multi-modal forecasts: diversity.

We propose a new evaluation metric which takes both diversity and accuracy of predictions into account. It relies on computing the Wasserstein distance between two empirical distributions P, Q,

$$W(P, Q) = \inf_{\pi}\Big(\frac{1}{n}\sum_{i}^{n} \|x_i - y_{\pi(i)}\|_2\Big), \qquad (14)$$

where x and y are the discrete samples of P and Q, and π ranges over all permutations (Villani, 2008). To use this as an evaluation measure for multi-modal forecasts, we do the following. 
We select n samples from the test set with similar initial observations. If the dynamics in the data are multi-modal, the continuations of those n trajectories will be diverse, and this should be reflected in the forecasts. For each of the n samples, the model generates 10 forecasts, and we get n groups of samples. With Eq. (14), the empirical W-distance between the n true samples and each group of generated samples can be calculated. The averaged empirical W-distance over groups evaluates how well the generated samples match the ground truth. Repeating this procedure with different initial trajectories evaluates the distance between the modeled distribution and the data distribution.

Baselines. We choose baselines from three classes of models. Two stochastic recurrent models are the variational recurrent neural network (VRNN) (Chung et al., 2015) and auto-encoding sequential Monte Carlo (AESMC) (Le et al., 2018). VRNN has a similar but more powerful generative model than VDM, and AESMC uses SMC to achieve a tighter lower bound. But compared to VDM, both methods have a less powerful inference model, which limits their capacity to capture multi-modal distributions. The third baseline is a deep SSM. The recurrent Kalman network (RKN) (Becker et al., 2019) models the latent space with locally linear SSMs, which makes the prediction step and update step analytic (as for Kalman filters (Kalman, 1960)). A final baseline is the conditional flow variational autoencoder (CF-VAE) (Bhattacharyya et al., 2019), which uses conditional normalizing flows to model a global prior for the future continuations and achieves state-of-the-art performance.

To investigate the necessity of taking multiple samples in the VDM inference model, we also compared to VDM(k = 1), which uses only a single sample in Eq. (5). VDM(k = 1) has a simpler generative model than VRNN (it considers no autoregressive feedback of the observations x), but the same inference model. More ablations for the modeling choices of VDM are in Appendix H.

For a fair comparison, we fix the dimension of the latent variables z_t and h_t to be the same for VDM, AESMC, and VRNN, which then have the same resulting model size (except for the additional autoregressive feedback in VRNN). AESMC and VDM always use the same number of particles/samples. RKN does not have recurrent states, so we choose a higher latent dimension to make its model size comparable. In contrast, CF-VAE has only one global latent variable, which needs more capacity, and we make it higher-dimensional than z_t. Details for each experiment are in Appendix G.

Synthetic data with multi-modal dynamics. We generate synthetic data with two dimensions and four modes and compare the performance of VDM with 9 samples (Fig. 3, left), VDM with a single sample (Fig. 3, middle), and AESMC using 9 particles (Fig. 3, right). Since variational inference is known to try to match the aggregated posterior with the predictive prior (Tomczak & Welling, 2018), it is instructive to fit all three models and to look at their predictive prior p(z_2 | x_{<=1}) and the aggregated posterior p(z_2 | D). Because of the multi-modal nature of the problem, all three aggregated posteriors are multi-modal, but only VDM(k = 9) learns a multi-modal predictive prior (thanks to its choice of variational family). Although AESMC achieves a good match between the prior and the aggregated posterior, the predictive prior does not clearly separate into different modes. In contrast, the inference model of VDM successfully uses the weights (Eq. 
(6)), which contain information about the incoming observation, to separate the latent states into distinct modes.

Stochastic Lorenz attractor. The Lorenz attractor is a system governed by ordinary differential equations. We add noise to the transition and emission functions to make it stochastic (details in Appendix F.1). Under certain parameter settings it is chaotic: even small errors can cause considerable differences in the future. This makes forecasting its dynamics very challenging. All models are trained and then tasked to predict 90 future observations given 10 initial observations. Fig. 4 illustrates qualitatively that VDM (Fig. 4b) and AESMC (Fig. 4c) succeed in modeling the chaotic dynamics of the stochastic Lorenz attractor, while CF-VAE (Fig. 4d) and VRNN (Fig. 4e) miss local details, and RKN (Fig. 4f), which lacks the capacity for stochastic transitions, does not work at all. VDM achieves the best scores on all metrics (Table 1). Since the dynamics of the Lorenz attractor are governed by ordinary differential equations, the transition dynamics at each time step are not obviously multi-modal, which explains why all models with stochastic transitions do reasonably well. Next, we will show the advantages of VDM on real-world data with multi-modal dynamics.

Taxi trajectories. The taxi trajectory dataset involves taxi trajectories of variable lengths in Porto, Portugal. Each trajectory is a sequence of two-dimensional locations over time. Here, we cut the trajectories to a fixed length of 30 to simplify the comparison (details in Appendix F.2). The task is to predict the next 20 observations given 10 initial observations. Ideally, the forecasts should follow the street map (though the map is not accessible to the models).

The results in Table 2 show that VDM outperforms the other sequential latent variable models in all evaluations. It turns out that learning global structure is advantageous for multi-step forecasting, and CF-VAE, which is a global latent variable model, achieves the best multi-step scores. However, this value does not match the qualitative results in Fig. 1. Since CF-VAE has to encode the entire structure of the trajectory forecast into a single latent variable, its predictions seem to average over plausible continuations but are locally neither plausible nor accurate. In comparison, VDM and the other models involve a sequence of latent variables. As the forecasting progresses, the methods update their distribution over latent states, and the impact of the initial observations becomes weaker and weaker. As a result, local structure is captured more accurately. While the forecasts are plausible and can be highly diverse, they potentially evolve in other directions than the ground truth. For this reason, their multi-step prediction results are worse in terms of log-likelihood. This is why the empirical W-distance is useful to complement the evaluation of multi-modal tasks: it reflects that the forecasts of VDM are diverse and plausible. Additionally, we illustrate the predictive prior p(z_t | x_{<t}) at different time steps in Fig. 5. VDM(k = 13) learns a multi-modal predictive prior, which VDM(k = 1) and AESMC approximate with a uni-modal Gaussian.

U.S. pollution data. In this experiment, we study VDM on the U.S. pollution dataset (details in Appendix F.3). The data is collected from counties in different states from 2000 to 2016. Each observation has 12 dimensions (mean, max value, and air quality index of NO2, O3, SO2, and CO). 
The goal is to predict monthly pollution values for the coming 18 months, given observations of the previous six months. We ignore the geographical location and time information, treating the development tendency of pollution in different counties and at different times as i.i.d. The unknown context information makes the dynamics multi-modal and challenging to predict accurately. Due to the small size and high dimensionality of the dataset, there are not enough samples with very similar initial observations. Thus, we cannot evaluate the empirical W-distance in this experiment. In multi-step and one-step predictions, VDM outperforms the other methods.

NBA SportVu data. This dataset (Footnote 4) of sequences of 2D coordinates describes the movements of basketball players and the ball. We extract the trajectories and cut them to a fixed length of 30 to simplify the comparisons (details in Appendix F.4). The task is to predict the next 20 observations given 10 initial observations. Players can move anywhere on the court, and hence their movement is less structured than the taxi trajectories, which are constrained by the underlying street map. Due to this, the initial movement patterns are not similar enough to each other to evaluate the empirical W-distance. In multi-step and one-step predictions, VDM outperforms the other baselines (Table 4). Fig. 6 illustrates qualitatively that VDM (Fig. 6b) and CF-VAE (Fig. 6d) succeed in capturing the multi-modal dynamics. The forecasts of AESMC (Fig. 6c) are less plausible (not as smooth as the data), and VRNN (Fig. 6e) and RKN (Fig. 6f) fail in capturing the multi-modality." }, { "heading": "5 CONCLUSION", "text": "We have presented variational dynamic mixtures (VDM), a sequential latent variable model for multi-modal dynamics. The main contribution is a new variational family. It propagates multiple samples through an RNN to parametrize the posterior approximation with a mixture density network. Additionally, we have introduced the empirical Wasserstein distance for the evaluation of multi-modal forecasting tasks, since it accounts for forecast accuracy and diversity. VDM succeeds in learning challenging multi-modal dynamics and outperforms existing work in various applications.

Footnote 4: A version of the dataset is available at https://www.stats.com/data-science/" }, { "heading": "A SUPPLEMENTARY TO WEIGHTING FUNCTION", "text": "In this Appendix, we give intuition for our choice of the weighting function in Eq. (6). Since we approximate the integrals in Eqs. (3) and (7) with samples from q̃(s_{t−1} | x_{<t}) (Footnote 5) instead of samples from q(s_{t−1} | x_{<=t}), importance sampling tells us that the weights should be

$$\omega(s_{t-1}, x_t) = \frac{q(s_{t-1} \mid x_{\le t})}{\tilde{q}(s_{t-1} \mid x_{<t})} = \frac{q(x_t \mid s_{t-1}, x_{<t})}{q(x_t \mid x_{<t})} \cdot \frac{\tilde{q}(s_{t-1} \mid x_{<t})}{\tilde{q}(s_{t-1} \mid x_{<t})} = \frac{q(x_t \mid s_{t-1}, x_{<t})}{q(x_t \mid x_{<t})} \propto q(x_t \mid s_{t-1}, x_{<t}) \qquad (15)$$

This is consistent with our earlier definition of q(s_{t−1} | x_{<=t}) = ω(s_{t−1}, x_t) q̃(s_{t−1} | x_{<t}). The weights are proportional to the likelihood of the variational model, q(x_t | s_{t−1}, x_{<t}). We choose to parametrize it using the likelihood of the generative model, p(x_t | h_{t−1} = s_{t−1}), and get

$$\omega_t^{(i)} = \omega(s_{t-1}^{(i)}, x_t)/k := \mathbb{1}\big(i = \arg\max_j\, p(x_t \mid h_{t-1} = s_{t-1}^{(j)})\big). \qquad (16)$$

With this choice of the weighting function, only the mixture component with the highest likelihood is selected to be in charge of modeling the current observation x_t. As a result, the other mixture components have the capacity to focus on different modes. This helps avoid the effect of mode-averaging. An alternative weight function is given in Appendix H."
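As a concrete companion to Eq. (16) and the categorical alternative of Appendix H, here is a minimal PyTorch sketch of the two weighting rules. It assumes the per-sample log-likelihoods log p(x_t | h_{t−1} = s^{(j)}) have already been computed; this is our own illustration, not the authors' code.

```python
import torch

def delta_weights(loglik: torch.Tensor) -> torch.Tensor:
    """Eq. (16): put all weight on the most likely component (one-hot)."""
    return torch.nn.functional.one_hot(loglik.argmax(), loglik.numel()).float()

def categorical_weights(loglik: torch.Tensor) -> torch.Tensor:
    """Appendix H alternative: sample the non-zero index with probability
    proportional to the likelihood (a softmax over the log-likelihoods)."""
    idx = torch.distributions.Categorical(logits=loglik).sample()
    return torch.nn.functional.one_hot(idx, loglik.numel()).float()
```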
}, { "heading": "B SUPPLEMENTARY TO LOWER BOUND", "text": "Claim. The ELBO in Eq. (8) is a lower bound on the log evidence log p(xt | x<t),\nlog p(xt | x<t) ≥ LELBO(x≤t, φ) . (17)\nProof. We write the data evidence as the double integral over the latent variables zt, and z<t. log p(xt | x<t) = log ∫∫ p(xt | z≤t,x<t)p(zt | z<t,x<t)p(z<t | x<t)dztdz<t (18)\nWe multiply the posterior at the previous time step p(z<t | x<t) with the ratio of the approximated posterior q(z<t|x<t)q(z<t|x<t) and the ratio f(a,b) f(a,b) , where f is any suitable function of two variables a and b. The following equality holds, since the ratios equal to one.\nlog p(xt | x<t)\n= log\n∫ f(a,b)\nf(a,b) q(z<t | x<t) q(z<t | x<t)\np(z<t | x<t) ∫ p(xt | z≤t,x<t)p(zt | z<t,x<t)dztdz<t (19)\nWe move the integral over z<t with respect to f(a,b)q(z<t | x<t) out of the log operation with applying the Jensen’s inequality.\nlog p(xt | x<t) ≥ Ef(a,b)q(z<t|x<t) [ log ∫ p(xt | z≤t,x<t)p(zt | z<t,x<t)dzt ] (20)\n− Ef(a,b)q(z<t|x<t) [ log f(a,b) + log\nq(z<t | x<t) p(z<t | x<t) ] We introduce the variational posterior q(zt | z<t,x≤t), and apply Jensen’s inequality to replace the intractable integral log ∫ p(xt | z≤t,x<t)p(zt | z<t,x<t)dzt with its lower bound.\nlog p(xt | x<t) ≥ Ef(a,b)q(z<t|x<t) [ Eq(zt|z<t,x≤t) [ log\np(xt | z≤t,x<t)p(zt | z<t,x<t) q(zt | z<t,x≤t) ]] − Ef(a,b)q(z<t|x<t) [ log f(a,b) + log\nq(z<t | x<t) p(z<t | x<t)\n] . (21)\n5The ∼ just helps to visually distinguish the two distributions that appear in the main text.\nThe expectation with respect to f(a,b)q(z<t | x<t) is approximated with samples. Instead of resampling the entire history, samples from previous time steps are reused (they have been aggregated by the RNN) and we sample according to Eq. (4). We plugg in the weighting function ω(s(i)t−1,xt) for f(a,b). The term log q(z<t|x<t)p(z<t|x<t) is not affected by the incoming observation xt and can be treated as a constant.\nIn this step, we plug in our generative model and inference model as they are described in the main text for p and q. The conditional independence assumptions can be read of Fig. 2. In the generative model ht−1 and in the inference model st−1 summarize the dependencies of zt on the previous latent variables z<t and observations x<t. In other words, we assume zt is conditionally independent on z<t and x<t given s (i) t−1 in the inference model (or given ht−1 in the generative model).\nlog p(xt | x<t) ≥ 1\nk k∑ i ω(s (i) t−1,xt)Eq(zt|s(i)t−1,xt) [ log p(xt | zt,ht−1 = s(i)t−1) ] + 1\nk k∑ i ω(s (i) t−1,xt)Eq(zt|s(i)t−1,xt)\n[ log\np(zt | ht−1 = s(i)t−1) q(zt | s(i)t−1,xt)\n]\n− 1 k k∑ i ω(s (i) t−1,xt) [ logω(s (i) t−1,xt) +C ] (22)" }, { "heading": "C ALGORITHMS OF GENERATIVE MODEL AND INFERENCE MODEL", "text": "Algorithm 1 Generative model Inputs: [µz,τ , σ2z,τ ],hτ−1 Outputs: xτ+1:T zτ ∼ N (µz,τ , σ2z,τ I) hτ = φ\nGRU(zτ ,hτ−1) for t = τ + 1 : T do\n[µ0,t, σ 2 0,t] = φ tra(ht−1) zt ∼ N (µ0,t, σ20,tI) ht = φ\nGRU(zt,ht−1) [µx,t, σ 2 x,t] = φ dec(zt,ht−1)\nxt ∼ N (µx,t, σ2x,tI) end for\nAlgorithm 2 Inference model Inputs: x1:τ ,h0 Outputs: [µz,1:τ , σ2z,1:τ ],hτ−1 [µz,1, σ 2 z,1] = φ\ninf (h0,x1) for t = 2 : τ do\nz (i) t−1 ∼ N (µz,t−1, σ2z,t−1I) s (i) t−1 = φ GRU(z (i) t−1,ht−2) [µ (i) z,t, σ (i)2 z,t ] = φ inf (s (i) t−1,xt) ω (i) t := 1(i = argmaxj p(xt | ht−1 = s (j) t−1)) [µz,t, σ 2 z,t] = ∑k i ω (i) t N (µ (i) z,t, σ (i)2 z,t I)\nht−1 ≈ ∑k i ω (i) t s (i) t−1\nend for" }, { "heading": "D SUPPLEMENTARY TO STOCHASTIC CUBATURE APPROXIMATION", "text": "Cubature approximation. 
The cubature approximation is widely used in the engineering community as a deterministic method to numerically integrate a nonlinear function f(·) of a Gaussian random variable z ∼ N(μ_z, σ_z² I), with z ∈ R^d. The method proceeds by constructing 2d + 1 sigma points z^{(i)} = μ_z + σ_z ξ^{(i)}. The cubature approximation is simply a weighted sum of the sigma points propagated through the nonlinear function f(·),

$$\int f(z)\, \mathcal{N}(z \mid \mu_z, \sigma_z^2 I)\, dz \approx \sum_{i=1}^{2d+1} \gamma^{(i)} f(z^{(i)})\,. \qquad (23)$$

Simple analytic formulas determine the weights γ^{(i)} and the locations ξ^{(i)}:

$$\gamma^{(i)} = \begin{cases} \frac{1}{2(n+\kappa)}, & i = 1, \dots, 2n \\ \frac{\kappa}{n+\kappa}, & i = 0 \end{cases} \qquad \xi^{(i)} = \begin{cases} \sqrt{n+\kappa}\, e_i, & i = 1, \dots, n \\ -\sqrt{n+\kappa}\, e_{i-n}, & i = n+1, \dots, 2n \\ 0, & i = 0 \end{cases} \qquad (24)$$

where κ is a hyperparameter controlling the spread of the sigma points in the n-dimensional sphere. Further, e_i represents a basis of the n-dimensional space, which is chosen to be a unit vector in Cartesian space, e.g., e_1 = [1, 0, ..., 0].

Stochastic cubature approximation. In SCA, we adopt the computation of ξ^{(i)} in Eq. (24) and infuse the sigma points with standard Gaussian noise ε ∼ N(0, I) to obtain stochastic sigma variables s^{(i)} = μ_z + σ_z(ξ^{(i)} + ε). We choose κ = 0.5 to set the weights γ^{(i)} equal." }, { "heading": "E SUPPLEMENTARY TO ABLATION STUDY OF REGULARIZATION TERMS", "text": "We investigate the effect of the regularization terms using the synthetic data from Fig. 3. As Table 5 shows, VDM(k = 9) can be trained successfully with L_ELBO alone, and both regularization terms improve the performance (negative log-likelihood of multi-step-ahead prediction), while VDM(k = 1) does not work regardless of the regularization terms. Additionally, we tried to train the model with the regularization terms only (each separately or together), but these options diverged during training." }, { "heading": "F SUPPLEMENTARY TO EXPERIMENTS SETUP", "text": "" }, { "heading": "F.1 STOCHASTIC LORENZ ATTRACTOR SETUP", "text": "The Lorenz attractor is a system of three ordinary differential equations:

$$\frac{dx}{dt} = \sigma(y - x), \quad \frac{dy}{dt} = x(\rho - z) - y, \quad \frac{dz}{dt} = xy - \beta z\,, \qquad (25)$$

where σ, ρ, and β are system parameters. We set σ = 10, ρ = 28, and β = 8/3 to make the system chaotic. We simulate the trajectories with RK4 and a step size of 0.01. To make the system stochastic, we add process noise to the transition, which is a mixture of two Gaussians, 0.5 N(m_0, P) + 0.5 N(m_1, P), where

$$m_0 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad m_1 = \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix}, \quad P = \begin{bmatrix} 0.06 & 0.03 & 0.01 \\ 0.03 & 0.03 & 0.03 \\ 0.01 & 0.03 & 0.05 \end{bmatrix}. \qquad (26)$$

In addition, we add Gaussian noise with zero mean and diagonal standard deviation [0.6, 0.4, 0.8] as observation noise. In total, we simulate 5000 sequences as the training set, 200 sequences as the validation set, and 800 sequences as the test set. For the evaluation of the Wasserstein distance, we additionally simulate 10 groups of sequences. Each group has 100 sequences with similar initial observations." }, { "heading": "F.2 TAXI TRAJECTORIES SETUP", "text": "The full dataset is very large, and the lengths of the trajectories vary. We select the trajectories inside the Porto city area with lengths in the range of 30 to 45 and extract only the first 30 coordinates of each trajectory. Thus, we obtain a dataset with a fixed sequence length of 30. We split it into a training set of size 86386, a validation set of size 200, and a test set of size 10000." }, { "heading": "F.3 U.S. POLLUTION DATA SETUP", "text": "The U.S. pollution dataset consists of four pollutants (NO2, O3, SO2, and CO). Each of them has three major values (mean, max value, and air quality index). 
It is collected from counties in different states for every day from 2000 to 2016. Since the daily measurements are too noisy, we first compute the monthly average values of each measurement and then extract non-overlapping segments of length 24 from the dataset. In total, we extract 1639 sequences as the training set, 25 sequences as the validation set, and 300 sequences as the test set." }, { "heading": "F.4 NBA SPORTVU DATA SETUP", "text": "We use a sliding window of width 30 and stride 30 to cut the long sequences into short sequences of fixed length 30. We split them into a training set of size 8324, a validation set of size 489, and a test set of size 980." }, { "heading": "G IMPLEMENTATION DETAILS", "text": "Here, we provide implementation details of the VDM models used across the datasets in the main paper. VDM consists of

• encoder: embeds the first observation x_0 into the latent space as the initial latent state z_0.
• transition network: propagates the latent states z_t.
• decoder: maps the latent states z_t and the recurrent states h_t to observations x_t.
• inference network: updates the latent states z_t given observations x_t.
• latent GRU: summarizes the historic latent states z_{<=t} in the recurrent states h_t.
• discriminator: used for adversarial training.

The optimizer is Adam with a learning rate of 1e−3. In all experiments, the networks have the same architectures but different sizes. The model size depends on the observation dimension d_x, the latent state dimension d_z, and the recurrent state dimension d_h. The number of samples used at each time step during training is 2 d_z + 1. If a model output is a variance, we take its exponential to ensure non-negativity.

• Encoder: input size is d_x; 3 linear layers of size 32, 32, and 2 d_z, with 2 ReLUs.
• Transition network: input size is d_h; 3 linear layers of size 64, 64, and 2 d_z, with 3 ReLUs.
• Decoder: input size is d_h + d_z; 3 linear layers of size 32, 32, and 2 d_x, with 2 ReLUs.
• Inference network: input size is d_h + d_x; 3 linear layers of size 64, 64, and 2 d_z, with 3 ReLUs.
• Latent GRU: one-layer GRU of input size d_z and hidden size d_h.
• Discriminator: one-layer GRU of input size d_x and hidden size d_h to summarize the previous observations as the condition, and a stack of 3 linear layers of size 32, 32, and 1, with 2 ReLUs and one sigmoid as the output activation, whose input size is d_h + d_x.

Stochastic Lorenz attractor. Observation dimension d_x is 3, latent state dimension d_z is 6, and recurrent state dimension d_h is 32.

Taxi trajectories. Observation dimension d_x is 2, latent state dimension d_z is 6, and recurrent state dimension d_h is 32.

U.S. pollution data (Footnote 6). Observation dimension d_x is 12, latent state dimension d_z is 8, and recurrent state dimension d_h is 48.

Footnote 6: https://www.kaggle.com/sogun3/uspollution

NBA SportVu data. Observation dimension d_x is 2, latent state dimension d_z is 6, and recurrent state dimension d_h is 32.

The number of parameters for each model in the different experiments is given in Table 6." }, { "heading": "H ADDITIONAL EVALUATION RESULTS", "text": "We evaluate more variants of VDM in the chosen experiments to investigate different choices of sampling methods (Monte Carlo and SCA) and weighting functions (Eqs. (27) and (28)). In addition to Eq. (27), described in the main text, we define one other choice in Eq. (28):

$$\omega_t^{(i)} = \omega(s_{t-1}^{(i)}, x_t)/k := \mathbb{1}\big(i = \arg\max_j\, p(x_t \mid h_{t-1} = s_{t-1}^{(j)})\big) \qquad (27)$$

$$\omega_t^{(i)} = \omega(s_{t-1}^{(i)}, x_t)/k := \mathbb{1}\big(i = j \sim \mathrm{Cat}(\cdot \mid \omega_1, \dots, \omega_k)\big), \quad \omega_j \propto p(x_t \mid h_{t-1} = s_{t-1}^{(j)}), \qquad (28)$$

We define the weighting function as an indicator function: in Eq. (27), we set the non-zero component by selecting the sample that achieves the highest likelihood, and in Eq. (28), the non-zero index is sampled from a categorical distribution with probabilities proportional to the likelihoods. The first choice (Eq. (27)) is referred to as δ-function and the second choice (Eq. (28)) as categorical distribution. In addition, in VDM-Net, we evaluate the performance of replacing the closed-form inference of the weighting function with an additional inference network. Table 7 shows the choices made in the different variants. All models are trained with L_ELBO and L_pred." }, { "heading": "H.1 STOCHASTIC LORENZ ATTRACTOR", "text": "" }, { "heading": "H.2 TAXI TRAJECTORIES", "text": "" }, { "heading": "H.3 U.S. POLLUTION DATA" } ]
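As a companion to the empirical W-distance numbers reported in Appendices H.1–H.3, the metric of Eq. (14) can be computed exactly by solving an optimal assignment between the two sample sets. The sketch below is our own illustration (the Euclidean ground cost on flattened trajectories is an assumption), not the authors' evaluation code:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def empirical_w_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Eq. (14): inf over permutations pi of (1/n) * sum_i ||x_i - y_pi(i)||_2,
    where x and y are (n, d) arrays of n flattened sample trajectories each."""
    cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)  # (n, n) pairwise distances
    rows, cols = linear_sum_assignment(cost)                       # optimal permutation (Hungarian)
    return float(cost[rows, cols].mean())
```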
2020
null
SP:fd4240e0f2c6faa6783fe5e1d1e53d0d5f0945a0
[ "This paper tackles a timely problem of privacy leakage on the edge devices when applying deep neural networks. Instead of mitigating the leakage of a set of private attributes, the proposed method tries to remove the information irrelevant to the primary task. The proposed method does not need to identify the private attributes. The main contribution of this paper is the two proposed approaches for removing “null content” and “signal content.” The evaluations of the proposed approach are conducted on four image datasets." ]
Splitting network computations between the edge device and the cloud server is a promising approach for enabling low edge-compute and private inference of neural networks. Current methods for providing privacy train the model to minimize information leakage for a given set of private attributes. In practice, however, the test queries might contain private attributes that are not foreseen during training. We propose an alternative solution, in which, instead of obfuscating the information corresponding to a set of attributes, the edge device discards the information irrelevant to the main task. To this end, the edge device runs the model up to a split layer, determined based on its computational capacity, and then removes the activation content that is in the null space of the next layer of the model before sending it to the server. It can further remove the low-energy components of the remaining signal to improve privacy at the cost of reduced accuracy. The experimental results show that our methods provide privacy while maintaining accuracy and introducing only a small computational overhead.
[]
[ { "authors": [ "Sattam S Al-Riyami", "Kenneth G Paterson" ], "title": "Certificateless public key cryptography", "venue": "In International conference on the theory and application of cryptology and information security,", "year": 2003 }, { "authors": [ "Jianfeng Chi", "Emmanuel Owusu", "Xuwang Yin", "Tong Yu", "William Chan", "Patrick Tague", "Yuan Tian" ], "title": "Privacy partitioning: Protecting user data during the deep learning inference phase", "venue": "arXiv preprint arXiv:1812.02863,", "year": 2018 }, { "authors": [ "Julian Chokkattu" ], "title": "How to make face unlock more secure in the Samsung Galaxy S10", "venue": null, "year": 2019 }, { "authors": [ "Gregory Cohen", "Saeed Afshar", "Jonathan Tapson", "Andre Van Schaik" ], "title": "EMNIST: Extending MNIST to handwritten letters", "venue": "In International Joint Conference on Neural Networks,", "year": 2017 }, { "authors": [ "Harrison Edwards", "Amos Storkey" ], "title": "Censoring representations with an adversary", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Clément Feutry", "Pablo Piantanida", "Yoshua Bengio", "Pierre Duhamel" ], "title": "Learning anonymized representations with adversarial neural networks", "venue": "arXiv preprint arXiv:1802.09386,", "year": 2018 }, { "authors": [ "Jihun Hamm" ], "title": "Minimax filter: learning to preserve privacy from inference attacks", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Chiraag Juvekar", "Vinod Vaikuntanathan", "Anantha Chandrakasan" ], "title": "GAZELLE: A low latency framework for secure neural network inference", "venue": "In USENIX Security Symposium,", "year": 2018 }, { "authors": [ "Yiping Kang", "Johann Hauswald", "Cao Gao", "Austin Rovinski", "Trevor Mudge", "Jason Mars", "Lingjia Tang" ], "title": "Neurosurgeon: Collaborative intelligence between the cloud and mobile edge", "venue": "ACM SIGARCH Computer Architecture", "year": 2017 }, { "authors": [ "Alexander Kraskov", "Harald Stögbauer", "Peter Grassberger" ], "title": "Estimating mutual information", "venue": "Physical review E,", "year": 2004 }, { "authors": [ "Ang Li", "Jiayi Guo", "Huanrui Yang", "Yiran Chen" ], "title": "Deepobfuscator: Adversarial training framework for privacy-preserving image classification", "venue": "In Advances in Neural Information Processing Systems Workshops,", "year": 2019 }, { "authors": [ "Hao Li", "Asim Kadav", "Igor Durdanovic", "Hanan Samet", "Hans Peter Graf" ], "title": "Pruning filters for efficient convnets", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Yitong Li", "Timothy Baldwin", "Trevor Cohn" ], "title": "Towards robust and privacy-preserving text representations", "venue": "In Annual Meeting of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Sicong Liu", "Anshumali Shrivastava", "Junzhao Du", "Lin Zhong" ], "title": "Better accuracy with quantified privacy: representations learned via reconstructive adversarial network", "venue": null, "year": 1901 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In International Conference on Computer Vision,", "year": 2015 }, { "authors": [ "Aravindh Mahendran", "Andrea Vedaldi" ], "title": "Understanding deep image representations by inverting them", "venue": "In Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ 
"Fatemehsadat Mireshghallah", "Mohammadkazem Taram", "Prakash Ramrakhyani", "Ali Jalali", "Dean Tullsen", "Hadi Esmaeilzadeh" ], "title": "Shredder: Learning noise distributions to protect inference privacy", "venue": "In International Conference on Architectural Support for Programming Languages and Operating Systems,", "year": 2020 }, { "authors": [ "Daniel Moyer", "Shuyang Gao", "Rob Brekelmans", "Aram Galstyan", "Greg Ver Steeg" ], "title": "Invariant representations without adversarial training", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hong-Wei Ng", "Stefan Winkler" ], "title": "A data-driven approach to cleaning large face datasets", "venue": "In IEEE international conference on image processing,", "year": 2014 }, { "authors": [ "Seyed Ali Osia", "Ali Taheri", "Ali Shahin Shamsabadi", "Kleomenis Katevas", "Hamed Haddadi", "Hamid R Rabiee" ], "title": "Deep private-feature extraction", "venue": "In IEEE Transactions on Knowledge and Data Engineering,", "year": 2018 }, { "authors": [ "M Sadegh Riazi", "Mohammad Samragh", "Hao Chen", "Kim Laine", "Kristin Lauter", "Farinaz Koushanfar" ], "title": "XONN: Xnor-based oblivious deep neural network inference", "venue": "In USENIX Security Symposium,", "year": 2019 }, { "authors": [ "Congzheng Song", "Vitaly Shmatikov" ], "title": "Overlearning reveals sensitive attributes", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Nishant Vishwamitra", "Bart Knijnenburg", "Hongxin Hu", "Yifang P Kelly Caine" ], "title": "Blur vs. block: Investigating the effectiveness of privacy-enhancing obfuscation for images", "venue": "In Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Qizhe Xie", "Zihang Dai", "Yulun Du", "Eduard Hovy", "Graham Neubig" ], "title": "Controllable invariance through adversarial feature learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Zhifei Zhang", "Yang Song", "Hairong Qi" ], "title": "Age progression/regression by conditional adversarial autoencoder", "venue": "In Conference on Computer Vision and Pattern Recognition,", "year": 2017 } ]
[ { "heading": null, "text": "Splitting network computations between the edge device and the cloud server is a promising approach for enabling low edge-compute and private inference of neural networks. Current methods for providing the privacy train the model to minimize information leakage for a given set of private attributes. In practice, however, the test queries might contain private attributes that are not foreseen during training. We propose an alternative solution, in which, instead of obfuscating the information corresponding to a set of attributes, the edge device discards the information irrelevant to the main task. To this end, the edge device runs the model up to a split layer determined based on its computational capacity and then removes the activation content that is in the null space of the next layer of the model before sending it to the server. It can further remove the low-energy components of the remaining signal to improve the privacy at the cost of reducing the accuracy. The experimental results show that our methods provide privacy while maintaining the accuracy and introducing only a small computational overhead." }, { "heading": "1 INTRODUCTION", "text": "The surge in cloud computing and machine learning in recent years has led to the emergence of Machine Learning as a Service (MLaaS), where the compute capacity of the cloud is used to analyze the data that lives on edge devices. One shortcoming of the MLaaS framework is the leakage of the clients’ private data to the cloud server. To address this problem, several cryptographybased solutions have been proposed which provide provable security at the cost of increasing the communication cost and delay of remote inference by orders of magnitude (Juvekar et al. (2018); Riazi et al. (2019)). The cryptography-based solutions are applicable in use-cases such as healthcare where a few minutes of delay is tolerable, but not in scenarios where millions of clients request fast and low-cost responses such as in Amazon Alexa or Apple Siri applications. A light-weight alternative to cryptographic solutions is to manually hide private information on the edge device; For instance, sensitive information in an image can be blurred on the edge device before sending it to the service provider (Vishwamitra et al. (2017)). This approach, however, is task-specific and may not be viable for generic applications.\nThe objective of split inference framework, shown in Figure 1, is to provide a generic and computationally efficient data obfuscation scheme (Kang et al. (2017); Chi et al. (2018)). The service provider trains the model and splits it into two sub-models, M1 and M2, where M1 contains the first few layers of the model and M2 contains the rest. The client runs M1 on the edge device and sends the resulting feature vector z =M1(x) to the server, which computes the public label as ypub =M2(z). To preserve the privacy, the client desires z to only contain information related to the underlying task. For instance, when sending facial features for cell-phone authentication, the client does not want to disclose other information such as their mood. As seen in Figure 1, the privacy leakage is quantified by an adversary that trains the model M3 to extract private label ypri from feature vector z.\nCurrent methods of private split inference aim to censor the information corresponding to a list of known private attributes. For example, Feutry et al. 
(2018) utilize adversarial training to minimize the accuracy of M3 on the private attribute, and Osia et al. (2018) minimize the mutual information between the query z and the private label ypri at training time. The set of private attributes, however, can vary from one query to another. Hence, it is not feasible to foresee all types of attributes that could be considered private for a specific MLaaS application. Moreover, the need to annotate inputs with all possible private attributes significantly increases the cost of model training.

Figure 1: Split inference setup. The client runs M1 locally and sends the features z = M1(x) to the server. The server predicts the intended attribute as ypub = M2(z). An adversary trains a separate model M3 to predict the private attribute as ypri = M3(z).

In this paper, we propose an alternative solution where, instead of censoring the information that is utilized to predict known private attributes, we discard the information that is not used by the main model for predicting the public attribute. Our contributions are summarized in the following.

• We characterize the information that is not relevant to the prediction of the public attribute as the part of the content of the feature vector z that will be removed by the server-side model. We then define the null content of the feature vector, zN, as the content in z that is in the null space of the following linear layer. The remaining content is called the signal content and is denoted by zS. We have M2(z) = M2(zS + zN) = M2(zS).

• We propose to remove zN from the features z and show that it reduces the accuracy of the adversary (M3) while maintaining the accuracy of the main model (M2). To further discard the private information in z, we propose to remove the low-energy components of zS, through which we achieve higher privacy (lower accuracy of M3) at the cost of a small reduction in utility (lower accuracy of M2).

• We show that our methods provide tradeoffs between edge-computation efficiency, privacy, and accuracy. Specifically, with higher edge computation (more layers on the edge device), the client achieves better privacy at the same accuracy. Also, with the same edge computation (a given split layer), removing more components from the signal content provides better privacy at the cost of reduced accuracy.

• We perform extensive experiments on several datasets and show that our methods provide better tradeoffs between accuracy and privacy compared to existing approaches such as adversarial training, despite having no knowledge of the private attribute at training or inference times." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "We consider the supervised learning setting of Figure 1, where the model M2 ◦ M1 is trained with a set of examples $\{x_i\}_{i=1}^N$ and their corresponding public labels $\{y_i^{\mathrm{pub}}\}_{i=1}^N$. At the inference phase, the client runs M1 on their data and sends the intermediate feature vector z = M1(x) to the server. The goal of private inference is to ensure that z does not contain information about private attributes." }, { "heading": "2.1 MEASURING PRIVACY", "text": "Several methods have been proposed to measure the privacy leakage of the feature vector. One approach is computing the mutual information between the query x and the feature vector z (Kraskov et al. (2004)). In practice, measuring the mutual information is not tractable for high-dimensional random variables, unless certain assumptions are made about the probability distribution of the random variables of interest. 
A more practical approach measures privacy based on the reconstruction error, $\|\tilde{x} - x\|$, where $\tilde{x}$ is estimated based on z (Mahendran & Vedaldi (2015)). Finally, attribute privacy is defined based on the accuracy of an adversary model that takes z as input and predicts the private label.

In this paper, we use the attribute privacy notion, as it applies to a wide range of applications. Assume each example $\{x_i\}_{i=1}^N$ has one or multiple private labels $\{y_i^{\mathrm{pri}}\}_{i=1}^N$. The adversary trains a separate model M3 with $(z_i, y_i^{\mathrm{pri}})$, where $z_i = M_1(x_i)$, as shown in Figure 1. Note that M3 is used as a post-hoc process to evaluate the privacy of the model M2 ◦ M1. The split learning framework should achieve high utility, i.e., the server should be able to infer the public attribute from z accurately, while providing privacy, i.e., z should not contain information about ypri. We refer to the accuracy of M2 ◦ M1 on ypub as the public accuracy and the accuracy of M3 ◦ M1 on ypri as the private accuracy." }, { "heading": "2.2 THREAT MODEL", "text": "Honest-but-curious server. The server performs the inference of the public attribute but will potentially try to extract private information from the features z as well.

Client capabilities. Upon providing the service, the server also provides a profile of the utility (accuracy on the public attributes), privacy (accuracy on several private attributes), and computation of the edge device. The client then decides on the best tradeoff based on the computational resources and the desired level of privacy. Such mechanisms are already in use in ML-on-the-edge applications. For example, in the application of unlocking the phone by face recognition, the client can specify the required precision in face recognition, where a lower precision provides higher utility at the cost of lower security (Chokkattu, 2019)." }, { "heading": "2.3 RELATED WORK", "text": "Prior work has shown that the representations learned by deep neural networks can be used to extract private information (Song & Shmatikov (2019)) or even reconstruct the raw data (Mahendran & Vedaldi (2015)). Current methods for private inference can be categorized as follows.

Cryptography-based solutions. Since the server is not trusted, solutions based on public-key encryption (Al-Riyami & Paterson (2003)) are not applicable. We consider scenarios where the server provides service to millions of users (e.g., in the cases of Amazon Alexa or Apple Siri), and users expect low communication and fast responses. Therefore, classic two-party cryptographic solutions for secure function evaluation (Juvekar et al. (2018); Riazi et al. (2019)) are also not applicable to our scenario.

Noise Injection. A line of work suggests obfuscating private attributes by adding noise to the features, i.e., instead of z, the client sends z + μ to the server, with the noise designed to maintain the public accuracy while reducing the private accuracy (Mireshghallah et al. (2020)). While noise addition improves privacy, it has been shown to reduce the public accuracy significantly (Liu et al. (2019)).

Information Bottleneck. The notion of mutual information can be used to train private models. Let I(a, b) denote the mutual information between random variables a and b. The idea is to train M1 to maximize I(z, ypub) while minimizing I(z, ypri) (Osia et al. (2018); Moyer et al. (2018)). The optimization is formulated as follows:

$$\max_{M_1} \; \mathbb{E}_{x, y^{\mathrm{pub}}, y^{\mathrm{pri}}}\big[I(M_1(x), y^{\mathrm{pub}}) - \gamma I(M_1(x), y^{\mathrm{pri}}) - \beta I(M_1(x), x)\big]. \qquad (1)$$
The use of mutual information for privacy, however, has been challenged by practical attacks that extract secret information even when I(z, ypri) is small (Song & Shmatikov (2019)).

Adversarial Training. This defense solves the following min-max optimization problem:

$$\max_{M_1, M_2} \min_{M_3} \; \mathbb{E}_{x, y^{\mathrm{pub}}, y^{\mathrm{pri}}}\big[\gamma L(y^{\mathrm{pri}}, M_3 \circ M_1(x)) - L(y^{\mathrm{pub}}, M_2 \circ M_1(x))\big], \qquad (2)$$

where L denotes the cross-entropy loss and γ is a scalar. The above objective can be achieved through adversarial training (Edwards & Storkey (2016); Hamm (2017); Xie et al. (2017); Li et al. (2018); Feutry et al. (2018); Li et al. (2019)). At convergence, the trained M1 generates z such that M3(z) is not an accurate estimate of ypri, while M2(z) accurately describes ypub.

Existing methods for private split inference have several limitations. First, the underlying assumption in the above learning-based defenses is that a set of private attributes along with the public label are provided at training time. In practice, however, it might not be feasible to foresee and identify all possible private attributes and annotate the training data accordingly. It also contradicts deployment at scale, since whenever a new private attribute is identified, the model M1 needs to be retrained and re-distributed to all edge devices that use the service. Second, current approaches for private inference often provide a poor tradeoff between accuracy and privacy. Moreover, the tradeoff of accuracy and privacy with the client-side computation is not well studied in the split learning framework. In this paper, we characterize this tradeoff and propose an alternative approach in which, instead of obfuscating the information related to the private attributes, the edge device removes the feature content that is irrelevant to the public task. We empirically show that our method successfully reduces the accuracy on private attributes at a small or no cost to the public accuracy." }, { "heading": "3 PROPOSED METHODS", "text": "" }, { "heading": "3.1 SIGNAL AND NULL CONTENTS OF FEATURE VECTOR", "text": "Let $z \in \mathbb{R}^n$ be a feature vector and $W \in \mathbb{R}^{m \times n}$ be a matrix. The operations of fully-connected and convolutional layers can be expressed as matrix-vector and matrix-matrix multiplications, respectively. Herein, we let z represent the vector in a fully-connected layer and a column of the second matrix in a convolutional layer. Let the singular value decomposition (SVD) of W be $W = U \cdot S \cdot V$. Since the rows of V form an orthonormal basis, we can write the feature vector as

$$z = \sum_{i=1}^{n} \alpha_i v_i^T, \qquad \alpha_i = \langle v_i^T, z \rangle, \qquad (3)$$

where $v_i$ is the i-th row of V and $\langle \cdot, \cdot \rangle$ denotes the inner product.

Definition 1. The signal content of z with respect to the matrix W, or simply the signal content of z, denoted by $z_S$, is defined as

$$z_S = \arg\min_{h} \|h\|_2, \;\; \text{s.t.} \;\; W \cdot (z - h) = 0. \qquad (4)$$

The null content of z is then defined as $z_N = z - z_S$.

Lemma 1. We have

$$z_S = \sum_{i=1}^{m} \alpha_i v_i^T \quad \text{and} \quad z_N = \sum_{i=m+1}^{n} \alpha_i v_i^T. \qquad (5)$$

Proof. We write h as a composition of the orthonormal vectors $v_i$ as $h = \sum_{i=1}^{n} \beta_i v_i^T$. We have

$$W(z - h) = \sum_{i=1}^{n} (\alpha_i - \beta_i)\, W v_i^T = \sum_{i=1}^{n} (\alpha_i - \beta_i)\, U S \underbrace{V v_i^T}_{q_i \in \mathbb{R}^n} \qquad (6)$$

Since the rows of V are orthonormal, $q_i = V v_i^T$ is a one-hot vector with its i-th element equal to 1. By substituting $q_i$ in (6), we obtain

$$W(z - h) = \sum_{i=1}^{n} (\alpha_i - \beta_i)\, U S_{[:,i]} = \sum_{i=1}^{m} (\alpha_i - \beta_i)\, U S_{[:,i]} = \sum_{i=1}^{m} s_i (\alpha_i - \beta_i)\, U_{[:,i]}, \qquad (7)$$

where $S_{[:,i]}$ and $U_{[:,i]}$ are the i-th columns of S and U, respectively. Note that, since S is a (rectangular) diagonal matrix, we have $S_{[:,i]} = 0, \forall i \in \{m+1, \dots, n\}$, thus reducing the summation from n to m components. 
Also, for $i \in \{1, \dots, m\}$, $S_{[:,i]}$ is a column vector with only one non-zero element, $s_i$, at the i-th dimension.

As a result, to obtain $W(z - h) = 0$, we must have $\beta_i = \alpha_i, \forall i \in \{1, \dots, m\}$. Since the $v_i$ are orthonormal, we have $\|h\|_2 = \sqrt{\sum_{i=1}^{n} \beta_i^2}$. Hence, to minimize $\|h\|_2$, we set $\beta_i = 0, \forall i \in \{m+1, \dots, n\}$. Therefore, $z_S = \sum_{i=1}^{m} \alpha_i v_i^T$. The null content $z_N$ can then be computed as $z_N = z - z_S = \sum_{i=m+1}^{n} \alpha_i v_i^T$.

Definition 2. The normalized signal and null contents of z are defined as $C_S(z) = \frac{\|z_S\|_2^2}{\|z\|_2^2}$ and $C_N(z) = \frac{\|z_N\|_2^2}{\|z\|_2^2}$, respectively. We have $C_S(z) + C_N(z) = \frac{\sum_{i=1}^{m} \alpha_i^2}{\sum_{i=1}^{n} \alpha_i^2} + \frac{\sum_{i=m+1}^{n} \alpha_i^2}{\sum_{i=1}^{n} \alpha_i^2} = 1$." }, { "heading": "3.2 DEFENSE 1: OBFUSCATING NULL CONTENT OF FEATURE VECTOR", "text": "We propose to remove all the content in the feature vector that is irrelevant to the public attribute. To do so, given a feature vector z, we find a minimum-norm vector z′ that generates the same prediction for the public attribute as z, i.e., M2(z′) = M2(z). Formally,

$$z' = \arg\min_{h} \|h\|_2, \;\; \text{s.t.} \;\; M_2(h) = M_2(z). \qquad (8)$$

Due to the complex (nonlinear) nature of deep networks, finding such a vector would require multiple backpropagations on M2 for each given z. This is, however, not feasible for resource-constrained edge devices. To address this problem, we relax (8) such that the constraint holds for the first layer of M2 (the server-side model), i.e., we modify the constraint to Wz′ = Wz, where W is the weight matrix of the first layer of M2. As discussed in Section 3.1, the solution to this relaxed optimization problem is zS, the signal content of z. Removing or obfuscating the null content of z does not change the model prediction on the public attribute. It might, however, harm the private accuracy, since part of the null content zN might fall into the signal content of the first linear layer of M3. The method is described in Figure 2 (left).

At inference time, to obfuscate zN, the client constructs zo using either of the following methods.
• The client constructs $\mu = \sum_{i=m+1}^{n} \eta_i v_i^T$, with coefficients $\eta_i$ chosen at random, and sends $z_o = z + \mu$ to the server. The adversary can recover $z_S = V_{1:m}^T \cdot V_{1:m} \cdot z_o$ but cannot recover $z_N$.
• The client computes the signal content of z and sends $z_o = z_S = \sum_{i=1}^{m} \alpha_i v_i^T$ to the server.

For the first case, since μ is independent of z, the client can compute it offline, e.g., when the edge device is plugged in and not in use, and store it for later use. The second approach does not require storage on the edge device, but an extra computation, equal to the complexity of computing the first layer of M2, has to be done during inference to extract zS. We next propose a method that reduces the extra cost to only a fraction of the computation cost of the first layer of M2." }, { "heading": "3.3 DEFENSE 2: DISCARDING LOW-ENERGY SIGNAL CONTENT OF FEATURE VECTOR", "text": "In the first defense method, we proposed to discard the content of the feature vector that will be removed by the first layer of M2. The following layers of M2 will further remove more content from the feature vector. Hence, we can potentially discard more content from z and still obtain a prediction similar to that of the original feature vector. For a linear layer, following the same process as in Section 3.1, the output is computed as:

$$W \cdot z = W \cdot z_S = \sum_{i=1}^{m} \alpha_i\, U \cdot S \cdot q_i = \sum_{i=1}^{m} \alpha_i\, U \cdot S_{[:,i]} = \sum_{i=1}^{m} s_i \alpha_i\, U_{[:,i]}, \qquad (9)$$

where $s_i$ is the i-th singular value in S, $\alpha_i$ is defined in (3), and $U_{[:,i]}$ denotes the i-th column of U. 
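Before discussing which components to keep, a minimal numpy sketch of the decomposition in Lemma 1 may help make Eqs. (5) and (9) concrete. The shapes, the random test data, and the full-row-rank assumption on W are our own choices for illustration:

```python
import numpy as np

def signal_null_content(W: np.ndarray, z: np.ndarray):
    """Split z into z_S + z_N with respect to W (Lemma 1). Assumes W is
    (m x n) with full row rank m < n, so the first m right-singular
    vectors span the row space of W."""
    m = W.shape[0]
    _, _, Vt = np.linalg.svd(W, full_matrices=True)  # rows of Vt are the v_i
    alpha = Vt @ z                  # coefficients alpha_i = <v_i, z>, Eq. (3)
    z_s = Vt[:m].T @ alpha[:m]      # signal content, Eq. (5)
    z_n = Vt[m:].T @ alpha[m:]      # null content, Eq. (5)
    return z_s, z_n

# Sanity checks: W @ z_N = 0, and W @ z = W @ z_S (Defense 1 keeps the output).
rng = np.random.default_rng(0)
W, z = rng.standard_normal((16, 64)), rng.standard_normal(64)
z_s, z_n = signal_null_content(W, z)
assert np.allclose(W @ z_n, 0, atol=1e-10) and np.allclose(W @ z, W @ z_s)
```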
From (9) we observe that components with larger siαi are contributing more to the layer output since ||U[:,i]||2 = 1 for all columns of U . As such, we approximate z → z̃ by only keeping m′ < m components of the right-hand-side of (9) that have the largest coefficients. Unlike null content filtering, removing signal content will affect the public accuracy as it changes the main network output, but can further reduce the private accuracy. To improve public accuracy when removing signal content of features, the server fine-tunes M2 on z̃. The method is described in Figure 2 (right).\nSince si and U[:,i] are fixed at the inference time, the client only needs to send the selected αi values along with their indices to the server; the server knows si and U , and can reconstruct z̃ accordingly. The edge-computation cost of this process is m′/m times the computation cost of the first layer of the server model. We do experiments in settings where m′/m is about 1%. Hence, the computation cost of our method is only a small fraction of computing a single layer of the network. Moreover, since m′ m, our method also incurs a much smaller communication cost compared to sending the entire feature vector to the server." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 SETUP", "text": "Datasets. We perform our experiments on four visual datasets listed below.\n• EMNIST (Cohen et al. (2017)) is an extended version of the MNIST dataset where the labels are augmented with writer IDs. We selected 13000 samples from EMNIST written by 100 writers with 130 examples per writer. We then split this dataset into 10000, 1500, and 1500 training, validation, and test sets. We use the digit label and writer ID as the public and private attributes, respectively.\n• FaceScrub (Ng & Winkler (2014); FaceScrub (2020)) is a dataset of celebrity faces labeled with gender and identity. We use gender and identity as the public and private attributes, respectively. In experiments, we cropped the face region using the bounding boxes in the image annotations and resized images to 50× 50.\n• UTKFace (Zhang et al. (2017)) is a dataset of face images labeled with gender and race, which we use as the public and private attributes, respectively. We cropped the face region using the bounding boxes in the image annotations and resized images to 50× 50.\n• CelebA (Liu et al. (2015)) is a dataset of celebrity images. Each image is labeled with 40 binary attributes. Out of these, we select “Smiling” as the public attribute and {Male, Heavy Makeup, High Cheekbones, Mouth Slightly Open, Wearing Lipstick, Attractive} as private attributes. These attributes have near balanced distribution of positive and negative examples. In experiments, we cropped the face region using the bounding boxes in the image annotations and resized images to 73× 60.\nModel architecture. We present the experimental results on a model used in prior work (Song & Shmatikov (2019)). The model architecture and baseline test accuracy results are summarized in Table 1 and Table 2 in Appendix.\nAdversary capabilities. We use the same architecture for adversary’s model M3 as the server model M2. The model M3 is trained using the features extracted by M1 and the associated private labels. We also assume that the adversary knows the parameters of M1 and M2.\nTraining settings. We use Adam optimizer with an initial learning rate of 0.001 and drop the learning rate by a factor of 10 after 20 and 40 epochs. 
All models including the adversary and main models are trained for 50 epochs unless stated otherwise." }, { "heading": "4.2 EVALUATIONS", "text": "We start our analysis by computing the null and signal contents in every layer of M = M2 ◦M1. Figure 3 (left) shows the content of the input remained at each layer; for the i-th layer, this content is computed as ∏i j=1 CS(zj) where zj denotes the activation vector at the j-th layer and CS(·) is defined in Section 3.1. As the feature vector propagates through network layers, more content is gradually removed from z until the model outputs the prediction on the public task. We also split the network at different layers and measure the private accuracy of M3 trained with the feature vector. As seen in Figure 3 (right), the private accuracy also decreases as we get closer to the output layer, indicating that the discarded content contained information relevant to the private attribute.\nTo reduce the privacy leakage, we proposed to filter out the null content of the feature vector. Figure 4 shows the private accuracy for different split layers. Removing the null content reduces the private\naccuracy without affecting the public accuracy. Moreover, splitting the network in deeper layers improves the privacy. To further reduce the private accuracy, we discard the low-energy components of the signal content and only keep m′ features. Figure 5 illustrates the effect of m′ on the public and private accuracy, when the network is split at the CONV-3 layer. As seen, by setting m′ to a small value, our method achieves high privacy (low private accuracy) at the cost of a small reduction in public accuracy. In general, the privacy can be controlled using two factors:\n• The split layer: As we go deeper in the network, i.e., when the edge device performs more computation, better tradeoffs can be achieved. To show this effect, we perform signal-content removal at different layers, with m′ set such that the public accuracy is reduced by 1%. The corresponding private accuracy is shown in Figure 6.\n• Number of signal components sent to the server: For the same edge-computation (a given split layer), the number of preserved features (m′) can be tuned so as to achieve a desired tradeoff between utility (higher public accuracy) and privacy (lower private accuracy). Figure 5 shows the results for the setting that the network is split at the input of the CONV-3 layer.\nComparison to Pruning. Similar to our approach, pruning network layers can eliminate features that do not contribute to the public attributed. In the following, we compare our method with pruning in terms of public and private accuracy. We split the network from the middle layer, i.e., at the input of the FC-1 layer. For our method, we keep the top m′ components of z from its signal content and filter out the rest. For pruning, we keepm′ elements in z and set the rest to zero. We adopt the pruning algorithm proposed by (Li et al. (2016)) which works based on the L1 norm of the columns of the following layer’s weight matrix. After pruning, we fine-tune M2 to improve the public accuracy. Figure 7 shows the public and private accuracy for each dataset. As seen, with small m′, both our method and pruning achieve a low private accuracy. Pruning, however, significantly reduces the public accu-\nracy as well. For example, for the UTKFace dataset, with m′ = 1, both methods result in a private accuracy close to the random guess. 
However, pruning reduces the public accuracy from 92.25% to 53% (also close to random guess), whereas our method keeps the public accuracy at 88.63%.\nComparison with adversarial training. We implement adversarial training framework proposed by (Feutry et al. (2018)) and present the utility-privacy tradeoff in Figure 8. To achieve the best tradeoff for adversarial training, we train the models in multiple settings with different γ parameters (Eq. 2) in range of [0.1, 1]. Note that, unlike our method, adversarial training assumes that the private attribute is known at training time. Despite this, Figure 8 shows that our method achieves a better utility-accuracy tradeoff than adversarial training.\nWe also do experiments for the case with multiple (unseen) private labels. Specifically, we consider the CelebA model trained to detect “smiling” and evaluate two methods, 1) our method: we keep only m′ = 1 component from the signal content of feature vector and then train one adversary model per private attribute, and 2) adversarial training: we first adversarially train an M1 model to obfuscate “gender,” and then train one model for each private attribute.\nFor both of the above methods, the network is split at the input of the FC-1 layer. Figure 9 shows the results. Our method outperforms adversarial training method on both public and private accuracy. In our method, the accuracy on all private attributes are significantly lower than the baseline private accuracy. The only exceptions are “high cheekbones” and “mouth open” attributes, which have correlations with public attribute, that is, a smiling person likely has high cheekbones and their mouth open. The correlation between public and private attributes causes the signal content of server and adversary’s models to have large overlaps and, hence, results in high private accuracy. The adversarially trained model successfully hides the information that it has been trained to obfuscate (the “gender” attribute). Such a model, however, fails to remove information of other attributes such as “makeup” or “lipstick”. This highlights the applicability of our method in practical setting as a generic obfuscator compared to specialized techniques such as adversarial training.\nAblation study on CONV and FC layers. We compare the performance of our method on CONV and FC layers. To do so, we train two networks on the UTKFace task, (1) a network with 10 CONV\nlayers each with 16 output channels, and (2) a network with 10 FC layers each with 2304 neurons. Both networks have an extra FC layer at the end for classification. The number of channels/neurons are chosen such that the total number of output features at each layer is the same for the two networks. The public accuracy of the 10-CONV and 10-FC networks is 89.26% and 89.07%, respectively. Figure 10 shows the public and private accuracy when we remove low-energy components of the signal space at different layers. The number of preserved features, m′, at each layer is chosen such that the public accuracy is maintained. As seen, the 10-FC network achieves a lower private accuracy compared to the 10-CONV network. The reason is that CONV layers are known to be generic feature extractors, while FC layers are more specialized toward the public attribute.\n5 CONCLUSION\nWe proposed a private inference framework, in which edge devices run several layers locally and obfuscate the intermediate feature vector before sending it to the server to execute the rest of the model. 
For obfuscation, we proposed to remove information that is not relevant to the main task or does not significantly change the predictions. Specifically, we developed two methods of removing the content of the feature vector in the null space of the following linear layer and also removing the low-energy content of the remaining signal. We showed that, unlike existing methods, our methods improve privacy without requiring the knowledge of private attributes at training or inference times." }, { "heading": "A APPENDIX", "text": "" } ]
2,020
null
SP:0cb9035abb016fd549b5606e20e2229dace5033d
[ "This paper proposes a method BayesDICE to estimate posteriors over candidate policy values, which can be used for downstream policy selection. Specifically, the authors estimate the posteriors over the correction ratios for state-action pairs, which optimize a combined metric of a chance constraint from collected data and KL from the prior. Computationally, the authors demonstrate the advantages of their approach by having better performances in both coverage and power for policy evaluation and better downstream ranking with respect to different metrics for policy selection." ]
The presence of uncertainty in policy evaluation significantly complicates the process of policy ranking and selection in real-world settings. We formally consider offline policy selection as learning preferences over a set of policy prospects given a fixed experience dataset. While one can select or rank policies based on point estimates of their policy values or high-confidence intervals, access to the full distribution over one’s belief of the policy value enables more flexible selection algorithms under a wider range of downstream evaluation metrics. We propose BayesDICE for estimating this belief distribution in terms of posteriors of distribution correction ratios derived from stochastic constraints (as opposed to explicit likelihood, which is not available). Empirically, BayesDICE is highly competitive to existing state-of-the-art approaches in confidence interval estimation. More importantly, we show how the belief distribution estimated by BayesDICE may be used to rank policies with respect to any arbitrary downstream policy selection metric, and we empirically demonstrate that this selection procedure significantly outperforms existing approaches, such as ranking policies according to mean or high-confidence lower bound value estimates.
[]
[ { "authors": [ "Kamyar Azizzadenesheli", "Emma Brunskill", "Animashree Anandkumar" ], "title": "Efficient exploration through bayesian deep q-networks", "venue": "Information Theory and Applications Workshop (ITA),", "year": 2018 }, { "authors": [ "Aharon Ben-Tal", "Laurent El Ghaoui", "Arkadi Nemirovski" ], "title": "Robust optimization", "venue": null, "year": 2009 }, { "authors": [ "Léon Bottou", "Jonas Peters", "Joaquin Quiñonero-Candela", "Denis X. Charles", "D. Max Chickering", "Elon Portugaly", "Dipankar Ray", "Patrice Simard", "Ed Snelson" ], "title": "Counterfactual reasoning and learning systems: The example of computational advertising", "venue": "Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Bo Dai", "Niao He", "Hanjun Dai", "Le Song" ], "title": "Provable bayesian inference via particle mirror descent", "venue": "In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics,", "year": 2016 }, { "authors": [ "Bo Dai", "Niao He", "Yunpeng Pan", "Byron Boots", "Le Song" ], "title": "Learning from conditional distributions via dual embeddings", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Bo Dai", "Ofir Nachum", "Yinlam Chow", "Lihong Li", "Csaba Szepesvári", "Dale Schuurmans" ], "title": "Coindice: Off-policy confidence interval estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Daniela Pucci De Farias", "Benjamin Van Roy" ], "title": "The linear programming approach to approximate dynamic programming", "venue": "Operations research,", "year": 2003 }, { "authors": [ "Richard Dearden", "Nir Friedman", "David Andre" ], "title": "Model-based bayesian exploration", "venue": "arXiv preprint arXiv:1301.6690,", "year": 2013 }, { "authors": [ "Marc Deisenroth", "Carl E Rasmussen" ], "title": "Pilco: A model-based and data-efficient approach to policy search", "venue": "In Proceedings of the 28th International Conference on machine learning", "year": 2011 }, { "authors": [ "Thomas G Dietterich" ], "title": "The maxq method for hierarchical reinforcement learning", "venue": "In ICML,", "year": 1998 }, { "authors": [ "Shayan Doroudi", "Philip S Thomas", "Emma Brunskill" ], "title": "Importance sampling for fair policy selection", "venue": "Grantee Submission,", "year": 2017 }, { "authors": [ "Miroslav Dudı́k", "John Langford", "Lihong Li" ], "title": "Doubly robust policy evaluation and learning", "venue": "arXiv preprint arXiv:1103.4601,", "year": 2011 }, { "authors": [ "Yaakov Engel", "Shie Mannor", "Ron Meir" ], "title": "Bayes meets bellman: The gaussian process approach to temporal difference learning", "venue": "In Proceedings of the 20th International Conference on Machine Learning", "year": 2003 }, { "authors": [ "Amir-massoud Farahmand", "Csaba Szepesvári" ], "title": "Model selection in reinforcement learning", "venue": "Machine learning,", "year": 2011 }, { "authors": [ "Mahdi M Fard", "Joelle Pineau" ], "title": "Pac-bayesian model selection for reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "Yihao Feng", "Tongzheng Ren", "Ziyang Tang", "Qiang Liu" ], "title": "Accountable off-policy evaluation with kernel bellman statistics", "venue": "arXiv preprint arXiv:2008.06668,", "year": 2020 }, { "authors": [ "Dylan J Foster", "Akshay Krishnamurthy", "Haipeng Luo" ], "title": "Model selection for contextual bandits", "venue": "In Advances in Neural 
Information Processing Systems,", "year": 2019 }, { "authors": [ "Mohammad Ghavamzadeh", "Shie Mannor", "Joelle Pineau", "Aviv Tamar" ], "title": "Bayesian reinforcement learning: A survey", "venue": "arXiv preprint arXiv:1609.04436,", "year": 2016 }, { "authors": [ "Josiah P Hanna", "Peter Stone", "Scott Niekum" ], "title": "Bootstrapping with models: Confidence intervals for off-policy evaluation", "venue": "arXiv preprint arXiv:1606.06126,", "year": 2016 }, { "authors": [ "Rein Houthooft", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel" ], "title": "Vime: Variational information maximizing exploration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Alexander Irpan", "Kanishka Rao", "Konstantinos Bousmalis", "Chris Harris", "Julian Ibarz", "Sergey Levine" ], "title": "Off-policy evaluation via off-policy classification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nan Jiang" ], "title": "A Theory of Model Selection in Reinforcement Learning", "venue": "PhD thesis,", "year": 2017 }, { "authors": [ "Nan Jiang", "Lihong Li" ], "title": "Doubly robust off-policy value evaluation for reinforcement learning", "venue": "arXiv preprint arXiv:1511.03722,", "year": 2015 }, { "authors": [ "Nan Jiang", "Alex Kulesza", "Satinder Singh" ], "title": "Abstraction selection in model-based reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Nathan Kallus", "Masatoshi Uehara" ], "title": "Double reinforcement learning for efficient off-policy evaluation in markov decision processes", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "J Zico Kolter", "Andrew Y Ng" ], "title": "Near-bayesian exploration in polynomial time", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Ilya Kostrikov", "Ofir Nachum" ], "title": "Statistical bootstrapping for uncertainty estimation in off-policy evaluation, 2020", "venue": null, "year": 2020 }, { "authors": [ "Ilja Kuzborskij", "Claire Vernade", "András György", "Csaba Szepesvári" ], "title": "Confident off-policy evaluation and selection through self-normalized importance weighting", "venue": "arXiv preprint arXiv:2006.10460,", "year": 2020 }, { "authors": [ "Michail G Lagoudakis", "Ronald Parr" ], "title": "Least-squares policy iteration", "venue": "Journal of machine learning research,", "year": 2003 }, { "authors": [ "Chandrashekar Lakshminarayanan", "Shalabh Bhatnagar", "Csaba Szepesvári" ], "title": "A linearly relaxed approximate linear program for markov decision processes", "venue": null, "year": 2017 }, { "authors": [ "Qiang Liu", "Lihong Li", "Ziyang Tang", "Dengyong Zhou" ], "title": "Breaking the curse of horizon: Infinitehorizon off-policy estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "S. Murphy", "M. van der Laan", "J. 
Robins" ], "title": "Marginal mean models for dynamic regimes", "venue": "Journal of American Statistical Association,", "year": 2001 }, { "authors": [ "Ofir Nachum", "Bo Dai" ], "title": "Reinforcement learning via fenchel-rockafellar duality", "venue": "arXiv preprint arXiv:2001.01866,", "year": 2020 }, { "authors": [ "Ofir Nachum", "Yinlam Chow", "Bo Dai", "Lihong Li" ], "title": "DualDICE: Behavior-agnostic estimation of discounted stationary distribution corrections", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Arkadi Nemirovski", "Alexander Shapiro" ], "title": "Convex approximations of chance constrained programs", "venue": "SIAM Journal on Optimization,", "year": 2007 }, { "authors": [ "Brendan ODonoghue", "Ian Osband", "Remi Munos", "Volodymyr Mnih" ], "title": "The uncertainty bellman equation and exploration", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Ian Osband", "Benjamin Van Roy", "Daniel J Russo", "Zheng Wen" ], "title": "Deep exploration via randomized value functions", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Aldo Pacchiano", "My Phan", "Yasin Abbasi-Yadkori", "Anup Rao", "Julian Zimmert", "Tor Lattimore", "Csaba" ], "title": "Szepesvari. Model selection in contextual stochastic bandit problems", "venue": "arXiv preprint arXiv:2003.01704,", "year": 2020 }, { "authors": [ "Tom Le Paine", "Cosmin Paduraru", "Andrea Michi", "Caglar Gulcehre", "Konrad Zolna", "Alexander Novikov", "Ziyu Wang", "Nando de Freitas" ], "title": "Hyperparameter selection for offline reinforcement learning", "venue": null, "year": 2007 }, { "authors": [ "Paavo Parmas", "Carl Edward Rasmussen", "Jan Peters", "Kenji Doya" ], "title": "Pipps: Flexible model-based policy search robust to the curse of chaos", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Doina Precup", "Richard S. Sutton", "Satinder P. 
Singh" ], "title": "Eligibility traces for off-policy policy evaluation", "venue": "In Proceedings of the 17th International Conference on Machine Learning,", "year": 2000 }, { "authors": [ "Martin L Puterman" ], "title": "Markov Decision Processes: Discrete Stochastic Dynamic Programming", "venue": null, "year": 1994 }, { "authors": [ "Alexander Shapiro", "Darinka Dentcheva", "Andrzej Ruszczyński" ], "title": "Lectures on stochastic programming: modeling and theory", "venue": null, "year": 2014 }, { "authors": [ "Alex Smola", "Arthur Gretton", "Le Song", "Bernhard Schölkopf" ], "title": "A hilbert space embedding for distributions", "venue": "In International Conference on Algorithmic Learning Theory,", "year": 2007 }, { "authors": [ "Bharath K Sriperumbudur", "Kenji Fukumizu", "Gert RG Lanckriet" ], "title": "Universality, characteristic kernels and rkhs embedding of measures", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Philip Thomas", "Emma Brunskill" ], "title": "Data-efficient off-policy policy evaluation for reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Philip Thomas", "Georgios Theocharous", "Mohammad Ghavamzadeh" ], "title": "High confidence policy improvement", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Philip S Thomas", "Georgios Theocharous", "Mohammad Ghavamzadeh" ], "title": "High-confidence offpolicy evaluation", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Masatoshi Uehara", "Nan Jiang" ], "title": "Minimax weight and q-function learning for off-policy evaluation", "venue": null, "year": 2020 }, { "authors": [ "Tengyang Xie", "Nan Jiang" ], "title": "Batch value-function approximation with only realizability, 2020", "venue": null, "year": 2020 }, { "authors": [ "Mengjiao Yang", "Ofir Nachum", "Bo Dai", "Lihong Li", "Dale Schuurmans" ], "title": "Off-policy evaluation via the regularized lagrangian", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Arnold Zellner" ], "title": "Optimal Information Processing and Bayes’s Theorem", "venue": "The American Statistician,", "year": 1988 }, { "authors": [ "Ruiyi Zhang", "Bo Dai", "Lihong Li", "Dale Schuurmans" ], "title": "GenDICE: Generalized offline estimation of stationary values", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jun Zhu", "Ning Chen", "Eric P. Xing" ], "title": "Bayesian inference with posterior regularization and applications to infinite latent svms", "venue": "J. Mach. Learn. Res.,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Off-policy evaluation (OPE) (Precup et al., 2000) in the context of reinforcement learning (RL) is often motivated as a way to mitigate risk in practical applications where deploying a policy might incur significant cost or safety concerns (Thomas et al., 2015a). Indeed, by providing methods to estimate the value of a target policy solely from a static offline dataset of logged experience in the environment, OPE can help practitioners determine whether a target policy is or is not safe and worthwhile to deploy. Still, in many practical applications the ability to accurately estimate the online value of a specific policy is less of a concern than the ability to select or rank a set of policies (one of which may be the currently deployed policy). This problem, related to but subtly different from OPE, is offline policy selection (Doroudi et al., 2017; Paine et al., 2020; Kuzborskij et al., 2020), and it often arises in practice. For example, in recommendation systems, a practitioner may have a large number of policies trained offline using various hyperparameters, while cost and safety constraints only allow a few of those policies to be deployed as live experiments. Which policies should be chosen to form the small subset that will be evaluated online?\nThis and similar questions are closely related to OPE, and indeed, the original motivations for OPE were arguably with offline policy selection in mind (Precup et al., 2000; Jiang, 2017), the idea being that one can use estimates of the value of a set of policies to rank and then select from this set. Accordingly, there is a rich literature of approaches for computing point estimates of the value of the policy (Dudı́k et al., 2011; Bottou et al., 2013; Jiang & Li, 2015; Thomas & Brunskill, 2016; Nachum et al., 2019; Zhang et al., 2020; Uehara & Jiang, 2020; Kallus & Uehara, 2020; Yang et al., 2020). Because the offline dataset is finite and collected under a logging policy that may be different from the target policy, prior OPE methods also estimate high-confidence lower and upper bounds on a target policy’s value (Thomas et al., 2015a; Kuzborskij et al., 2020; Bottou et al., 2013; Hanna et al., 2016; Feng et al., 2020; Dai et al., 2020; Kostrikov & Nachum, 2020). These existing approaches may be readily applied to our recommendation systems example, by using either mean or lower-confidence bound estimates on each candidate policy to rank the set and picking the top few to deploy online.\nHowever, this naı̈ve approach ignores crucial differences between the problem setting of OPE and the downstream evaluation criteria a practitioner prioritizes. For example, when choosing a few policies out of a large number of available policies, a recommendation systems practitioner may\nhave a number of objectives in mind: The practitioner may strive to ensure that the policy with the overall highest groundtruth value is within the small subset of selected policies (akin to top-k precision). Or, in scenarios where the practitioner is sensitive to large differences in achieved value, a more relevant downstream metric may be the difference between the largest groundtruth value within the k selected policies compared to the groundtruth of the best possible policy overall (akin to top-k regret). 
With these or other potential offline policy selection metrics, it is far from obvious that ranking according to OPE estimates is ideal (Doroudi et al., 2017).\nThe diversity of potential downstream metrics in offline policy selection presents a challenge to any algorithm that yields a point estimate for each policy. Any one approach to computing point estimates will necessarily be sub-optimal for some policy selection criteria. To circumvent this challenge, we propose to compute a belief distribution over groundtruth values for each policy. Specifically, with the posteriors for the distribution over value for each policy calculated, one can use a straightforward procedure that takes estimation uncertainty into account to rank the policy candidates according to arbitrarily complicated downstream metrics. While this belief distribution approach to offline policy selection is attractive, it also presents its own challenge: how should one estimate a distribution over a policy’s value in the pure offline setting?\nIn this work, we propose Bayesian Distribution Correction Estimation (BayesDICE) for off-policy estimation of a belief distribution over a policy’s value. BayesDICE works by estimating posteriors over correction ratios for each state-action pair (correcting for the distribution shift between the off-policy data and the target policy’s on-policy distribution). A belief distribution of the policy’s value may then be estimated by averaging these correction distributions over the offline dataset, weighted by rewards. In this way, BayesDICE builds on top of the state-of-the-art DICE point estimators (Nachum et al., 2019; Zhang et al., 2020; Yang et al., 2020), while uniquely leveraging posterior regularization to satisfy chance constraints in a Markov decision process (MDP). As a preliminary experiment, we show that BayesDICE is highly competitive to existing frequentist approaches when applied to confidence interval estimation. More importantly, we demonstrate BayesDICE’s application in offline policy selection under different utility measures on a variety of discrete and continuous RL tasks. Among other findings, our policy selection experiments suggest that, while the conventional wisdom focuses on using lower bound estimates to select policies (due to safety concerns) (Kuzborskij et al., 2020), policy ranking based on the lower bound estimates does not always lead to lower (top-k) regret. Furthermore, when other metrics of policy selection are considered, such as top-k precision, being able to sample from the posterior enables significantly better policy selection than only having access to the mean or confidence bounds of the estimated policy values." }, { "heading": "2 PRELIMINARIES", "text": "We consider an infinite-horizon Markov decision process (MDP) (Puterman, 1994) denoted asM = 〈S,A,R, T, µ0, γ〉, which consists of a state space, an action space, a deterministic reward function,1 a transition probability function, an initial state distribution, and a discount factor γ ∈ (0, 1]. In this setting, a policy π(at|st) interacts with the environment starting at s0 ∼ µ0 and receives a scalar reward rt = R(st, at) as the environment transitions into a new state st+1 ∼ T (st, at) at each timestep t. The value of a policy is defined as\nρ (π) := (1− γ)Es0,at,st [ ∑∞ t=0 γ trt] . 
(1)" }, { "heading": "2.1 OFFLINE POLICY SELECTION", "text": "We formalize the offline policy selection problem as providing a ranking O ∈ Perm([1, N ]) over a set of candidate policies {πi}Ni=1 given only a fixed dataset D = {x(j) := (s\n(j) 0 , s (j), a(j), r(j), s′(j))}nj=1 where s (j) 0 ∼ µ0, (s(j), a(j)) ∼ dD are samples of an unknown distribution dD, r(j) = R(s(j), a(j)), and s′(j) ∼ T (s(j), a(j)).2 One approach to the offline policy selection problem is to first characterize the value of each policy (Eq. 1, also known as the normalized per-step reward) via OPE under some utility function u(π) that leverages a point estimate (or\n1For simplicity, we restrict our analysis to deterministic rewards, and extending our methods to stochastic reward scenarios is straightforward.\n2This tuple-based representation of the dataset is for notational and theoretical convenience, following Dai et al. (2020); Kostrikov & Nachum (2020), among others. In practice, the dataset is usually presented as finitelength trajectories {(s(j)0 , a (j) 0 , r (j) 0 , s (j) 1 , . . . )}mj=1, and this can be processed into a dataset of finite samples from µ0 and from dD ×R× T . For mathematical simplicity, we assume that the dataset is sampled i.i.d. This\nlower bound) of the policy value; i.e.,\nO ← ArgSortDescending({u(πi)}Ni=1)." }, { "heading": "2.2 SELECTION EVALUATION", "text": "A proposed ranking O will eventually be evaluated according to how well its policy ordering aligns with the policies’ groundtruth values. In this section, we elaborate on potential forms of this evaluation score.\nTo this end, let us denote the groundtruth distribution of returns of policy πi by Z(·|πi). In other words, Z(·|πi) is a distribution over R such that\nz ∼ Z(·|πi) ≡ [ z := (1− γ)\n∞∑ t=0 γt ·R(st, at) ; s0 ∼ µ0, at ∼ πi(st), st+1 ∼ T (st, at)\n] . (2)\nNote that EZ(·|πi) [z] = ρ(πi).\nAs part of the offline policy selection problem, we are given a ranking score S that is a function of a proposed ranking O and groundtruth policy statistics {Z(·|πi)}Ni=1. The ranking score S can take on many forms and is application specific; e.g.,\n• top-k precision: This is an ordinal ranking score. The ranking score considers the top k policies in terms of groundtruth means ρ(πi) and returns the proportion of these which appear in the top k spots of O.\n• top-k accuracy: Another ordinal ranking score, this score considers the top-k policies in sorted order in terms of groundtruth means ρ(πi) and returns the proportion of these which appear in the same ordinal location in O.\n• top-k correlation: Another ordinal ranking score, this represents the Pearson correlation coefficient between the ranking of top-k policies in sorted order in terms of groundtruth means ρ(πi) and the truly best top-k policies.\n• top-k regret: This is a cardinal ranking score. This score respresents the difference in groundtruth means ρ(πi) between the overall best policy – i.e., maxi ρ(πi) – and the best policy among the top-k ranked policies – i.e., maxi∈[1,k] ρ(πO[k]).\n• Beyond expected return: One may define the above ranking scores in terms of statistics of Z(·|πi) other than the groundtruth means ρ(πi). For example, in safety-critical applications, one may be concerned with the variance of the policy return. Accordingly, one may define CVaR analogues to top-k precision and regret.\nFor simplicity, we will restrict our attention to ranking scores which only depend on the average return of πi. 
To this end, we will use ρi as shorthand for ρ(πi) and assume that the ranking score S is a function of O and {ρi}Ni=1." }, { "heading": "2.3 RANKING SCORE SIMULATION FROM THE POSTERIOR", "text": "It is not clear whether ranking according to vanilla OPE (either mean or confidence based) is ideal for any of the ranking scores above, including, for example, top-1 regret in the presence of uncertainty. However, if one has access to an approximate belief distribution over the policy’s values, there is a simple sampling-based approach that can be used to find a near-optimal ranking (optimality depending on how accurate the belief distribution is) with respect to an arbitrary specified downstream ranking score, and we elaborate on this procedure here.\nFirst, note that if we have access to the true groundtruth policy values {ρi}Ni=1, and the ranking score function S, we can calculate the score value of any rankingO and find the rankingO∗ that optimizes this score. However, we are limited to a finite offline dataset and the full return distributions are unknown. In this offline setting, we propose to instead compute a belief distribution q({ρi}Ni=1), and then we can optimize over the expected ranking score Eq [ S(O, {ρi}Ni=1) ] as shown in Algorithm 1. This algorithm simulates realizations of the groundtruth values {ρi}Ni=1 by sampling from the belief distribution q({ρi}Ni=1), and in this way estimates the expected realized ranking score S over all\nis a common assumption in the OPE literature (Uehara & Jiang, 2020) and may be relaxed in some cases by assuming a fast mixing time (Nachum et al., 2019).\npossible rankings O. As we will show empirically, matching the selection process (the S used in Algorithm 1) to the downstream ranking score naturally leads to improved performance. The question now becomes how to effectively learn a belief distribution over {ρi}Ni=1.\nFigure 1: The belief distributions of ρ1 and ρ2 depend on the uncertainty induced from the finite offline data (D andD′). A user might prefer π2 only if p(ρ2 < ρ1) < ξ (a choice of S). OPE based on mean point estimates would select π2 in either case as ρ2 has the greater mean. Sampling from the posterior belief in OfflineSelect allows simulation of any ranking score under S , aligning policy selection with the user’s choice of S.\nAlgorithm 1 OfflineSelect\nInputs Posteriors q({ρi}Ni=1), ranking score Ŝ Initialize O∗;L∗ Track best score for O in Perm([1, ..., N ]) do L = 0 for j = 1 to n do\nsample {ρ̂(j)i }Ni=1 ∼ q({ρi}Ni=1) Sum up sample scores\nL = L+ Ŝ({ρ̂(j)i }Ni=1,O) end for if L < L∗ then\nUpdate best ranking/score L∗ = L; O∗ = O\nend if end for return O∗, L∗" }, { "heading": "3 BAYESDICE", "text": "To learn a belief distribution over {ρi}Ni=1, we pursue a Bayesian approach to infer an approximate posterior distribution given prior beliefs. While model-based Bayesian approaches exist (e.g., (Deisenroth & Rasmussen, 2011) and variants (Parmas et al., 2018)), they typically suffer from compounding error, so a model-free approach is preferable. However, Bayesian inference is challenging in this model-free scenario because the likelihood function is not easy to compute, as it is defined over infinite horizon returns.\nTherefore, we first investigate several approaches to representing policy value, before identifying a novel posterior estimator that is computationally attractive and can support a broad range of ranking scores for downstream tasks." 
}, { "heading": "3.1 POLICY RANKING SCORE REPRESENTATION", "text": "In practice, the downstream task of ranking or selecting policy candidates might require more than the value expectation, but also other properties of the policy value distribution. To ensure that the necessary distribution properties are computable, we first consider the class of ranking scores we would like to support:\n• Offline: Since we focus on ranking policies given only offline data, the ranking score should not depend on on-policy samples.\n• Flexible: Since the downstream task may utilize different ranking scores, the representation of the policy value should be sufficient to support their efficient computation.\nWith these considerations in mind, we review ways to represent the value of a policy π. Define Qπ (s, a) = E [ ∑∞ t=0 γ\ntR(st, at)|s0 = s, a0 = a] and dπ (s, a) = (1− γ) ∑∞ t=0 γ\ntdπt (s, a) , with dπt (s, a) = P (st = s, at = a|s0 ∼ µ0,∀i < t, ai ∼ π (·|si) , si+1 ∼ T (·|si, ai)) ,\nwhich are the state-action values and stationary visitations of π. These satisfy the recursions Qπ(s, a) = R(s, a) + γ · PπQπ(s, a), where PπQ(s, a) := Es′∼T (s,a),a′∼π(s′)[Q(s′, a′)]; (3) dπ(s, a) = (1− γ)µ0(s)π(a|s) + γ · Pπ∗ dπ(s, a), where Pπ∗ d(s, a) := π(a|s) ∑ s̃,ã T (s|s̃, ã)d(s̃, ã). (4)\nFrom these identities, the policy value can be expressed in two equivalent ways: ρ(π) = (1− γ) · Ea0∼π(s0)\ns0∼µ0 [Qπ(s0, a0)] (5)\n= E(s,a)∼dπ [r(s, a)]. (6)\nCurrent OPE methods are generally based on one of the representations (1), (5) or (6). For example, importance sampling (IS) estimators (Precup et al., 2000; Murphy et al., 2001; Dudı́k et al., 2011) are based on (1); LSTDQ (Lagoudakis & Parr, 2003) is a representative algorithm for fitting Qπ and thus based on (5); the recent DICE algorithms (Nachum & Dai, 2020; Yang et al., 2020) estimate the stationary density ratio ζ (s, a) := d\nπ(s,a) dD\nso that ρ (π) = EdD [ζ · r], and are thus based on (6). Among the three strategies, the third is the most promising in our scenario. First, IS suffers from an exponential growth in variance (Liu et al., 2018) and further requires knowledge of the behavior policy. In contrast, the functions Qπ and dπ are duals (Nachum & Dai, 2020; Yang et al., 2020), and share common behavior-agnostic and minimax properties (Uehara & Jiang, 2020), However, estimation of Qπ assumes a ranking score with a linear dependence on R (s, a), and therefore, even if we estimate Qπ accurately, it is still impossible to evaluate ranking scores that involve (1− γ)E [ ∑∞ t=0 γ\ntσ(rt)] such that σ(·) : R → R is a nonlinear function (unless one learns a different Q function for each possible ranking score, which may be computationally expensive). By contrast, ranking scores with such nonlinear components can be easily computed from the stationary density ratio as EdD [ζ · σ (r)]. Given these considerations, the estimator via stationary density ratio satisfies both requirements: it enjoys statistical advantages in the offline setting and is flexible for downstream ranking score calculation. Therefore, we focus on a Bayesian estimator for ζπ next." }, { "heading": "3.2 STATIONARY RATIO POSTERIOR ESTIMATION", "text": "Recall that to apply a simple Bayesian approach to infer the posterior of ζπ , one requires a loglikelihood function, but such a quantity is not readily calculable in our scenario from the given data. 
Therefore, we develop an alternative, computationally tractable approach by considering an optimization view of Bayesian inference under a chance constraint, which allows us to derive the posterior over a set of stochastic equations.\nLet f (·) denote a non-negative convex function with f(0) achieving the minimum 0, e.g., f(x) = x>x. Also let ∆d (s, a) := (1 − γ)µ0(s)π(a|s) + γ · Pπ∗ d(s, a) − d (s, a). Starting with (5) we reduce the |S| |A| many constraints for the stationary distribution of π to a single feature-vectorbased constraint for ζ:\n∆d (s, a) = 0, ∀(s, a) ∈ S ×A⇒ 〈φ,∆d〉 = 0 (7) ⇒ f (〈φ,∆d〉) = 0⇒ max\nβ∈Hφ β> 〈φ,∆d〉 − f∗ (β) = 0 (8)\n⇒ max β∈Hφ\nEdD [ ζ (s, a) · β> (γφ(s′, a′)− φ (s, a)) ] + (1− γ)Eµ0π [ β>φ ] − f∗ (β) = 0,(9)\nwhereHφ denotes the bounded Hilbert space with the feature mappings φ, dD denotes the distribution generating the empirical experience, and we have used Fenchel duality in the middle step. The function φ (·, ·) : S × A → Rm is a feature mapping, with m possibly infinite. Then the condition 〈φ,∆d〉 = 0 can be understood as matching the two distributions (1−γ)µ0(s)π(a|s)+γ ·Pπ∗ d(s, a) and d (s, a) in terms of their embeddings (Smola et al., 2007), which is a generalization of the approximation methods in (De Farias & Van Roy, 2003; Lakshminarayanan et al., 2017). In particular, when |S| |A| is finite and we set φ(s, a) = δs,a, where δs,a ∈ {0, 1}|S||A| is an indicator vector with a single 1 at position (s, a) and 0 otherwise, we are matching the distributions pointwise. The feature map φ (s, a) can also be set to general reproducing kernel k ((s, a), ·) ∈ R∞. As long as the kernel k (·, ·) is characteristic, the embeddings will match if and only if the distributions are identical almost surely (Sriperumbudur et al., 2011).\nGiven that the experience was collected by some other means, i.e., D ∼ dD, the constraint for ζ in (7) might not hold exactly. Therefore, we consider a feasible set ζ ∈ {ζ : ` (ζ,D) 6 } where ` (ζ,D) := maxβ∈Hφ ÊD [ ζ (s, a) · β> (γφ(s′, a′)− φ (s, a))− f∗ (β) ] + (1− γ)Eµ0π [ β>φ ] . (10) Note that ` (ζ) > 0 since Hφ is symmetric. We expect the posterior of ζ, q (ζ), to concentrate most of its mass on this set and balance the prior. Formally, this means\nmin q\nKL (q||p)− λξ, s.t. Pq (` (ζ) 6 ) > ξ, (11)\nwhere the chance constraint considers the probability of the feasibility of ζ under the posterior. This formulation can be equivalently rewritten as\nmin q\nKL (q||p)− λPq (` (ζ) 6 ) (12)\nThen, by applying Markov’s inequality, i.e., Pq (` (ζ) 6 ) = 1− Pq (` (ζ) > ) > 1− Eq [`(ζ)] , we can obtain an upper bound on (12) as\nmin q\nKL (q||p) + λ Eq [`(ζ,D)] (13)\n= min q(ζ) max q(β|ζ)\nKL (q||p) + λ Eq(ζ)q(β|ζ) [ ÊD [ ζ (s, a) · β> (γφ(s′, a′)− φ (s, a))− f∗ (β) ] + (1− γ)Eµ0π [ β>φ ] ] , (14)\nwhere the equality follows by interchangeability (Shapiro et al., 2014; Dai et al., 2017). We amortize the optimization for β w.r.t. each ζ to a distribution q (β|ζ) to reduce the computational effort. Due to the space limitation, we postpone the discussion about the important properties of BayesDICE, including the parametrization of the posteriors, the variants of BayesDICE for undiscounted MDP and alternatives of the log-likelihoods, and the connections to the vanilla Bayesian stochastic processes, to Appendix A. 
Please refer the details there.\nFinally, note that with the posterior approximation for ζi, denoting the estimate for candidate policy i, we can draw posterior samples of ρ̄i by drawing a sample ζi ∼ q(ζi) and computing ρ̂i = 1 n ∑ (s,a,r)∈D ζi(s, a)r. This defines a posterior distribution over ρ̄i and we further assume that the\ndistributions are independent for each policy, so q({ρ̄i}Ni=1) = ∏ i q(ρ̄i). This defines the necessary inputs for OfflineSelect to determine a ranking of the candidate policies." }, { "heading": "4 RELATED WORK", "text": "We categorize the relevant related work into three categories: offline policy selection, off-policy evaluation, and Bayesian inference for policy evaluation.\nOffline policy selection The decision making problem we formalize as offline policy selection is a member of a set of problems in RL referred to as model selection. Previously, this term has been used to refer to state abstraction selection (Jiang, 2017; Jiang et al., 2015) as well as learning algorithm and feature selection (Foster et al., 2019; Pacchiano et al., 2020). More relevant to our proposed notion of policy selection are a number of previous works which use model selection to refer to the problem of choosing a near-optimal Q-function from a set of candidate approximation functions (Fard & Pineau, 2010; Farahmand & Szepesvári, 2011; Irpan et al., 2019; Xie & Jiang, 2020). In this case, the evaluation metric is typically defined as the L∞ norm of difference of Q versus the state-action value function of the optimal policy Q∗. While one can relate this evaluation metric to the sub-optimality (i.e., regret) of the policy induced by the Q-function, we argue that our proposed policy selection problem is both more general – since we allow for the use of policy evaluation metrics other than sub-optimality – and more practically relevant – since in many practical applications, the policy may not be expressible as the argmax of a Q-function. Lastly, the offline policy selection problem we describe is arguably a formalization of the problem approached in Paine et al. (2020) and referred to as hyperparameter selection. In contrast to this previous work, we not only formalize the decision problem, but also propose a method to directly optimize the policy selection evaluation metric. Offline policy selection has also been studied by Doroudi et al. (2017), which considers what properties a point estimator should have in order for it to yield good rankings in terms of a notion of ranking score referred to as fairness.\nOff-policy evaluation Off-policy evaluation (OPE) is a highly active area of research. While the original motivation for OPE was in the pursuit of policy selection (Precup et al., 2000; Jiang, 2017), the field has historically almost exclusively focused on the related but distinct problem of estimating the online value (accumulated rewards) of a single target policy. In addition to a plethora of techniques for providing point estimates of this groundtruth value (Dudı́k et al., 2011; Bottou et al., 2013; Jiang & Li, 2015; Thomas & Brunskill, 2016; Kallus & Uehara, 2020; Nachum et al., 2019; Zhang et al., 2020; Yang et al., 2020), there is also a growing body of literature that uses frequentist principles to derive high-confidence lower bounds for the value of a policy (Bottou et al., 2013; Thomas et al., 2015b; Hanna et al., 2016; Kuzborskij et al., 2020; Feng et al., 2020; Dai et al., 2020; Kostrikov & Nachum, 2020). 
As our results demonstrate, ranking or selecting policies based on either their estimated mean or lower confidence bounds can at times be sub-optimal, depending on the evaluation criteria.\nBayesian inference for policy evaluation Our proposed method for policy selection relies on Bayesian principles to estimate a posterior distribution over the groundtruth policy value. While many Bayesian-inspired methods have been proposed for policy optimization (Deisenroth & Rasmussen, 2011; Parmas et al., 2018), especially in the context of exploration (Houthooft et al., 2016; Dearden et al., 2013; Kolter & Ng, 2009), relatively few have been proposed for policy evaluation. In one instance, Fard & Pineau (2010) derive PAC-Bayesian bounds on estimates of the Bellman error of a candidate Q-value function. In contrast to this work, we use our BayesDICE algorithm to estimate a distribution over target policy value, and this distribution allows us to directly optimize arbitrary downstream policy selection metrics." }, { "heading": "5 EXPERIMENTS", "text": "We empirically evaluate the performance of BayesDICE on confidence interval estimation (which can be used for policy selection) and offline policy selection under linear and neural network posterior parametrizations on tabular – Bandit, Taxi (Dietterich, 1998), FrozenLake (Brockman et al., 2016) – and continuous-control – Reacher (Brockman et al., 2016) – tasks. As we show below, BayesDICE outperforms existing methods for confidence interval estimation, producing accurate coverage while maintaining tight interval width, suggesting that BayesDICE achieves accurate posterior estimation, being robust to approximation errors and potentially misaligned Bayesian priors in practice. Moreover, in offline policy selection settings, matching the selection algorithm (Algorithm 1) to the ranking score (enabled by the estimating the posterior) shows clear advantages over ranking based on point estimates or confidence intervals on a variety of ranking scores. See Appendix C for additional results and implementation details." }, { "heading": "5.1 CONFIDENCE INTERVAL ESTIMATION", "text": "Before applying BayesDICE to policy selection, we evaluate the BayesDICE approximate posterior by computing the accuracy of the confidence intervals it produces. We compare BayesDICE against a known set of confidence interval estimators based on concentration inequalities. To compute these baselines, we first use weighted (i.e., self-normalized) per-step importance sampling (Thomas & Brunskill, 2016) to compute a policy value estimate for each logged trajectory. These trajectories provide a finite sample of value estimates. We use self-normalized importance sampling since it has been found to yield better empirical results in MDPs despite being biased (Liu et al., 2018; Nachum et al., 2019); for Bandit results without self-normalization, see Figure 5 in Appendix C. We then use empirical Bernstein’s inequality (Thomas et al., 2015b), bias-corrected bootstrap (Thomas et al., 2015a), and Student’s t-test to derive lower and upper high-confidence bounds on these estimates. We further consider Bayesian Deep Q-Networks (BDQN) (Azizzadenesheli et al., 2018) with an average empirical reward prior in the function approximation setting, which applies Bayesian linear regression to the last layer of a deep Q-network to learn a distribution of Q-values. 
Both BayesDICE and BDQN output a distribution of parameters, from which we conduct Monte Carlo sampling and use the resulting samples to compute a confidence interval at a given confidence level.\nWe plot the empirical coverage and interval width at different confidence levels in Figure 2. To compute the empirical interval coverage, we conduct 200 trials with randomly sampled datasets. The interval coverage is the proportion of the 200 intervals that contains the true value of the target policy. The interval log-width is the median of the log width of the 200 intervals. As shown in Figure 2, BayesDICE’s coverage closely follows the intended coverage (black dotted line), while maintaining narrow interval width across all tasks considered. This suggests that BayesDICE’s posterior estimation is highly accurate, being robust to approximation errors and potentially misaligned Bayesian priors in practice." }, { "heading": "5.2 POLICY SELECTION", "text": "Next, we demonstrate the benefit of matching the policy selection criteria to the ranking score in offline policy selection. Our evaluation is based on a variety of cardinal and ordinal ranking scores defined in Section 2.2. We begin by considering the use of Algorithm 1 with BayesDICEapproximated posteriors. By keeping the BayesDICE posterior fixed, we focus our evaluation on the performance of Algorithm 1. We plot the groundtruth performance of this procedure applied to Bandit and Reacher in Figure 3. These figures compare using different Ŝ to rank the policies according to Algorithm 1 across different downstream ranking scores S. We find that aligning the criteria Ŝ used in Algorithm 1 with the downstream ranking score S is empirically the best approach (Ŝ = S).\nConfidence interval\n# samples = 50 # samples = 100 # samples = 200\n0.6 0.7 0.8 0.9 0.95\n0.4\n0.6\n0.8\n0.6 0.7 0.8 0.9 0.95\n°3\n°2\n°1\n0\n1\n2\n0.6 0.7 0.8 0.9 0.95\n0.2\n0.4\n0.6\n0.8\n1.0\n0.6 0.7 0.8 0.9 0.95\n°3\n°2\n°1\n0\n1\n2\n0.6 0.7 0.8 0.9 0.95\n0.2\n0.4\n0.6\n0.8\n0.6 0.7 0.8 0.9 0.95\n°3\n°2\n°1\n0\n1\n# trajectories = 50 # trajectories = 100 # trajectories = 20 # trajectories = 50\n0.6 0.7 0.8 0.9 0.95\n0.2\n0.4\n0.6\n0.8\n0.6 0.7 0.8 0.9 0.95\n°5.5\n°5.0\n°4.5\n°4.0\n0.6 0.7 0.8 0.9 0.95\n0.2\n0.4\n0.6\n0.8\n0.6 0.7 0.8 0.9 0.95\n°5.5\n°5.0\n°4.5\n°4.0\n0.6 0.7 0.8 0.9 0.95\n0.5\n0.6\n0.7\n0.8\n0.9\n1.0\n0.6 0.7 0.8 0.9 0.95\n°2\n°1\n0\n1\n2\n3\n0.6 0.7 0.8 0.9 0.95\n0.5\n0.6\n0.7\n0.8\n0.9\n1.0\n0.6 0.7 0.8 0.9 0.95\n°2\n°1\n0\n1\n2\nFrozenlake TaxiBandit\n0.6 0.7 0.8 0.9 0.95\n0.4\n0.6\n0.8\n1.0\n0.6 0.7 0.8 0.9 0.95\n°4\n°3\n°2\n°1\n0\n# trajectories = 25\nBayesDICE (ours) BDQN\nBootstrapping Bernstein\nStudent t Expected coverage\nIn te\nrv al\nc ov\ner ag e In te rv al lo gw id th\nReacher\nConfidence interval (1 − α)\nFigure 2: Confidence interval estimation on Bandit, FrozenLake, Taxi, and Reacher. The y-axis shows the empirical coverage and median log-interval width across 200 trials. BayesDICE exhibits near true coverage while maintaining narrow interval width, suggesting an accurate posterior approximation.\nIn contrast, using point estimates such as Mean or Mean ± Std can yield much worse downstream performance. 
We also see that in the Bandit setting, where we can analytically compute the Bayes-optimal ranking, using aligned ranking scores in conjunction with BayesDICE-approximated posteriors achieves near-optimal performance.\nHaving established BayesDICE’s ability to compute accurate posterior distributions as well as the benefit of appropriately aligning the ranking score used in Algorithm 1, we compare BayesDICE to state-of-the-art OPE methods in policy selection. In these experiments, we use Algorithm 1 with posteriors approximated by BayesDICE and Ŝ = S. We compare the use of BayesDICE in this way to ranking via point estimates of DualDICE (Nachum et al., 2019) and other confidence-interval estimation methods introduced in Section 5.1. We present results in Figure 4, in terms of top-k regret and correlation on bandit and reacher across different sample sizes and behavior data. BayesDICE outperforms other methods on both tasks. See additional ranking results in Appendix C." }, { "heading": "6 CONCLUSION", "text": "In this paper, we formally defined the offline policy selection problem, and proposed BayesDICE to first estimate posterior distributions of policy values before using a simulation-based procedure\nto compute an optimal policy ranking. Empirically, BayesDICE not only provides accurate belief distribution estimation, but also shows excellent performance in policy selection tasks." }, { "heading": "A MORE DISCUSSIONS ON BAYESDICE", "text": "In this section, we provide more details about BayesDICE.\nRemark (parametrization of q (ζ) and q (β|ζ)): We parametrize both q (ζ) (and the resulting q (β|ζ)) as Gaussians with the mean and variance approximated by a multi-layer perceptron (MLP), i.e.: ζ = MLPw(s, a) + σw′ξ, ξ ∼ N (0, 1). w and w′ denote the parameters of the MLP.\nRemark (connection to Bayesian inference for stochastic processes): Recall the posterior can be viewed as the solution to an optimization (Zellner, 1988; Zhu et al., 2014; Dai et al., 2016),\nq (ζ|D) = argmin q∈P 〈q (ζ) , log p (ζ,D)〉+KL (q (ζ) ||p (ζ))\nThe (13) is equivalent to define the log-likelihood proportion to ` (ζ,D), which is a stochastic process, including Gaussian process (GP) by setting f∗ (β) = 12β >β. Specifically, plug f (β) = 12β >β back into (13), we have β∗ = ÊD [ζ (s, a) · (γφ(s′, a′)− φ (s, a))] + (1− γ)Eµ0π [φ], resulting the optimization\nmin q KL (q||p) + λ EqEµ0πÊD\n[ ζ (s1, a1) > k ((s1, a1, s ′ 1, a ′ 1) , (s2, a2, s ′ 2, a ′ 2)) ζ (s2, a2) ] , (15)\nwith the kernel k (x1, x2) := (γφ(s′1, a ′ 1)− φ (s1, a1)) > (γφ(s′2, a ′ 2)− φ (s2, a2)) + (1− γ)2 φ ( s01, a 0 1 )> φ ( s01, a 0 1 ) + 2 (1− γ)φ ( s01, a 0 1 )> (γφ(s′2, a ′ 2)− φ (s2, a2)), which is a GP . Obviously, with different choices of f∗ (·), the BayesDICE framework is far beyond GP . Although the GP has been applied for RL (Engel et al., 2003; Ghavamzadeh et al., 2016; Azizzadenesheli et al., 2018), they all focus on prior on value function; while BayesDICE considers general stochastic processes likelihood, including GP , for the stationary ratio modeling, which as we justified is more flexible for different selection criteria in downstream tasks.\nRemark (auxilary constraints and undiscounted MDP): As Yang et al. (2020) suggested, the non-negative and normalization constraints are important for optimization. We exploit positive neuron to ensure the non-negativity of the mean of the q (ζ). For the normalization, we consider the\nchance constraints P (( ÊD (ζ)− 1 )2 6 1 ) > ξ1. 
Applying the same Markov-inequality technique to the normalization chance constraint above leads to an extra term $\lambda_1 \frac{1}{\epsilon_1} \mathbb{E}_q\big[\max_{\alpha\in\mathbb{R}} \alpha \cdot \hat{\mathbb{E}}_{\mathcal{D}}[\zeta - 1]\big]$ in (13).\nWith the normalization condition introduced, the proposed BayesDICE is ready for the undiscounted MDP by simply setting γ = 1 in (13), together with the above extra term for normalization.\nRemark (variants of log-likelihood): We apply Markov’s inequality to (12) to obtain the upper bound (13). In fact, optimization with chance constraints has a rich literature (Ben-Tal et al., 2009), where plenty of surrogates can be derived with different safe approximations. For example, if q is simple, one can directly calculate the CDF for the probability $P_q(\ell(\zeta) \le \epsilon)$; or one can exploit different probability inequalities to derive other surrogates, e.g., the conditional value-at-risk, i.e.,\n$$\min_q \mathrm{KL}(q\|p) + \lambda \inf_t \Big[ t + \tfrac{1}{\epsilon}\, \mathbb{E}_q\big[\ell(\zeta) - t\big]_+ \Big], \quad (16)$$\nand the Bernstein approximation (Nemirovski & Shapiro, 2007). These surrogates lead to better approximations of the chance probability $P_q(\ell(\zeta) \le \epsilon)$ at extra optimization cost." }, { "heading": "B BAYESDICE FOR EXPLORATION VS. EXPLOITATION TRADEOFF", "text": "In the main text, we mainly consider exploiting BayesDICE for estimating various ranking scores for both discounted and undiscounted MDPs. In fact, with the posterior of the stationary ratio computed, we can also apply it to better balance exploration and exploitation in policy optimization.\nInstead of selecting from a set of policy candidates, policy optimization considers all feasible policies and selects optimistically. Specifically, the feasibility of the stationary state-action distribution can be characterized as\n$$\sum_a d(s,a) = (1-\gamma)\mu_0(s) + \mathcal{P}^* d(s), \quad \forall s \in S, \quad (17)$$\nwhere $\mathcal{P}^* d(s) := \sum_{\bar{s},\bar{a}} T(s|\bar{s},\bar{a})\, d(\bar{s},\bar{a})$. Applying the feature mapping for distribution matching, we obtain the constraint for ζ · π, with $\zeta(s,a) := \frac{d(s)}{d^{\mathcal{D}}(s,a)}$, as\n$$\max_{\beta\in\mathcal{H}_\phi} \beta^\top \mathbb{E}_{d^{\mathcal{D}}}\Big[\sum_a (\zeta(s,a)\pi(a|s))\phi(s) - \gamma(\zeta(s,a)\pi(a|s))\phi(s')\Big] + (1-\gamma)\mathbb{E}_{\mu_0}\big[\beta^\top\phi\big] - f^*(\beta) = 0. \quad (18)$$\nThen the posteriors for all valid policies should satisfy\n$$\lambda P_q(\ell(\zeta\cdot\pi, \mathcal{D}) \le \epsilon) \ge \xi, \quad (19)$$\nwith $\ell(\zeta\cdot\pi, \mathcal{D}) := \max_{\beta\in\mathcal{H}_\phi} \beta^\top \hat{\mathbb{E}}_{\mathcal{D}}\big[\sum_a (\zeta(s,a)\pi(a|s))\phi(s) - \gamma(\zeta(s,a)\pi(a|s))\phi(s')\big] + (1-\gamma)\mathbb{E}_{\mu_0}\big[\beta^\top\phi\big] - f^*(\beta)$. Meanwhile, we select one posterior from among these posteriors of all valid policies optimistically, i.e.,\n$$\max_{q(\zeta)q(\pi)} \mathbb{E}_q[U(\tau, r, \mathcal{D})] + \lambda_1\xi - \lambda_2 \mathrm{KL}(q(\zeta)q(\pi)\,\|\,p(\zeta,\pi)) \quad (20)$$\n$$\text{s.t.} \quad P_q(\ell(\zeta\cdot\pi, \mathcal{D}) \le \epsilon) \ge \xi, \quad (21)$$\nwhere $\mathbb{E}_q[U(\tau, r, \mathcal{D})]$ denotes an optimistic policy score capturing an upper bound of the policy value estimate. For example, the most widely used one is\n$$\mathbb{E}_q[U(\tau, r, \mathcal{D})] = \mathbb{E}_q\hat{\mathbb{E}}_{\mathcal{D}}[\tau\cdot r] + \lambda_u \mathbb{E}_q\Big[\big(\hat{\mathbb{E}}_{\mathcal{D}}[\tau\cdot r] - \mathbb{E}_q\hat{\mathbb{E}}_{\mathcal{D}}[\tau\cdot r]\big)^2\Big],$$\nwhere the second term is the empirical variance, usually known as a kind of “exploration bonus”.\nThe whole algorithm then iterates between solving (20) and using the obtained policy to collect data into D. This Exploration-BayesDICE follows the same philosophy as Osband et al. (2019); O’Donoghue et al. (2018), where the variance of the posterior of the policy value is taken into account for exploration. However, there are several significant differences: i) the first and most significant difference is the modeling object: Osband et al. (2019); O’Donoghue et al. (2018) update a Q-function, while we handle the dual representation; ii) BayesDICE is compatible with arbitrary nonlinear function approximators, while Osband et al. (2019); O’Donoghue et al. (2018) consider tabular or linear functions; iii) BayesDICE considers the infinite-horizon MDP, while Osband et al.
(2019); O’Donoghue et al. (2018) consider the fixed finite-horizon case. Therefore, exploration with BayesDICE paves the way for principled and practical exploration-vs-exploitation algorithms. The regret bound is out of the scope of this paper, and we leave it for future work." }, { "heading": "C EXPERIMENT DETAILS AND ADDITIONAL RESULTS", "text": "C.1 ENVIRONMENTS AND POLICIES.\nBandit. We create a Bernoulli two-armed bandit with binary rewards, where α controls the proportion of the optimal arm (α = 0 and α = 1 mean never and always choosing the optimal arm, respectively). Our selection experiments are based on 5 target policies with α = [0.75, 0.8, 0.85, 0.9, 0.95].\nReacher. We modify the Reacher task to be infinite horizon, and sample trajectories of length 100 in the behavior data. To obtain different behavior and target policies, we first train a deterministic policy on the OpenAI Gym task (Brockman et al., 2016) until convergence, and define various policies by converting the optimal policy into a Gaussian policy with the optimal mean and standard deviation 0.4 − 0.3α. Our selection experiments are based on 5 target policies with α = [0.75, 0.8, 0.85, 0.9, 0.95].\nC.2 DETAILS OF NEURAL NETWORK IMPLEMENTATION\nWe parametrize the distribution correction ratio as a Gaussian using a deep neural network for the continuous control task. Specifically, we use feed-forward networks with two hidden layers of 64 neurons each and ReLU as the activation function. The networks are trained using the Adam optimizer (β1 = 0.99, β2 = 0.999) with batch size 2048.\nC.3 ADDITIONAL EXPERIMENTAL RESULTS\n[Figure: additional policy selection results on Reacher.]" } ]
2020
null
SP:3d8801dc33baf1d1037f26f50be0da1001003cf3
[ "This paper conducts a comprehensive study on the effect of different regularization on Deep RL algorithms. Regularization has been mostly neglected in RL as most benefits were believed to be in generalization to unseen test environments in supervised learning settings. However, this paper shows that regularization does provide benefit even though training/testing is done on the same environment in deep RL settings. " ]
Deep Reinforcement Learning (Deep RL) has been receiving increasingly more attention thanks to its encouraging performance on a variety of control tasks. Yet, conventional regularization techniques in training neural networks (e.g., L2 regularization, dropout) have been largely ignored in RL methods, possibly because agents are typically trained and evaluated in the same environment, and because the deep RL community focuses more on high-level algorithm designs. In this work, we present the first comprehensive study of regularization techniques with multiple policy optimization algorithms on continuous control tasks. Interestingly, we find conventional regularization techniques on the policy networks can often bring large improvement, especially on harder tasks. We also compare these techniques with the more widely used entropy regularization. Our findings are shown to be robust against training hyperparameter variations. In addition, we study regularizing different components and find that only regularizing the policy network is typically the best. Finally, we discuss and analyze why regularization may help generalization in RL from four perspectives: sample complexity, return distribution, weight norm, and noise robustness. We hope our study provides guidance for future practices in regularizing policy optimization algorithms. Our code is available at https://github.com/xuanlinli17/iclr2021_rlreg.
[ { "affiliations": [], "name": "Zhuang Liu" }, { "affiliations": [], "name": "Xuanlin Li" }, { "affiliations": [], "name": "Bingyi Kang" }, { "affiliations": [], "name": "Trevor Darrell" } ]
[ { "authors": [ "Maruan Al-Shedivat", "Trapit Bansal", "Yuri Burda", "Ilya Sutskever", "Igor Mordatch", "Pieter Abbeel" ], "title": "Continuous adaptation via meta-learning in nonstationary and competitive environments", "venue": "arXiv preprint arXiv:1710.03641,", "year": 2017 }, { "authors": [ "Marc G. Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "CoRR, abs/1207.4708,", "year": 2012 }, { "authors": [ "Richard Cheng", "Abhinav Verma", "Gbor Orosz", "Swarat Chaudhuri", "Yisong Yue", "Joel Burdick" ], "title": "Control regularization for reduced variance reinforcement learning", "venue": null, "year": 1905 }, { "authors": [ "Karl Cobbe", "Oleg Klimov", "Chris Hesse", "Taehoon Kim", "John Schulman" ], "title": "Quantifying generalization in reinforcement learning", "venue": "arXiv preprint arXiv:1812.02341,", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Yan Duan", "John Schulman", "Xi Chen", "Peter L Bartlett", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Rl2: Fast reinforcement learning via slow reinforcement learning", "venue": "arXiv preprint arXiv:1611.02779,", "year": 2016 }, { "authors": [ "Amir M. Farahmand", "Mohammad Ghavamzadeh", "Shie Mannor", "Csaba Szepesvári" ], "title": "Regularized policy iteration", "venue": "Advances in Neural Information Processing Systems", "year": 2009 }, { "authors": [ "Jesse Farebrother", "Marlos C Machado", "Michael Bowling" ], "title": "Generalization and regularization in dqn", "venue": "arXiv preprint arXiv:1810.00123,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "David Meger" ], "title": "Addressing function approximation error in actorcritic methods", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Abhishek Gupta", "Russell Mendonca", "YuXuan Liu", "Pieter Abbeel", "Sergey Levine" ], "title": "Metareinforcement learning of structured exploration strategies", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Peter Henderson", "Riashat Islam", "Philip Bachman", "Joelle Pineau", "Doina Precup", "David Meger" ], "title": "Deep reinforcement learning 
that matters", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": null, "year": 2015 }, { "authors": [ "Arno Knapitsch", "Jaesik Park", "Qian-Yi Zhou", "Vladlen Koltun" ], "title": "Tanks and temples: Benchmarking large-scale scene reconstruction", "venue": "ACM Trans. Graph.,", "year": 2017 }, { "authors": [ "Ilya Kostrikov", "Denis Yarats", "Rob Fergus" ], "title": "Image augmentation is all you need: Regularizing deep reinforcement learning", "venue": null, "year": 2020 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Michael Laskin", "Kimin Lee", "Adam Stooke", "Lerrel Pinto", "Pieter Abbeel", "Aravind Srinivas" ], "title": "Reinforcement learning with augmented data, 2020", "venue": null, "year": 2020 }, { "authors": [ "Timothy Lillicrap", "Jonathan Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Gergely Neu", "Anders Jonsson", "Vicenç Gómez" ], "title": "A unified view of entropy-regularized markov decision processes, 2017", "venue": null, "year": 2017 }, { "authors": [ "Charles Packer", "Katelyn Gao", "Jernej Kos", "Philipp Krähenbühl", "Vladlen Koltun", "Dawn Song" ], "title": "Assessing generalization in deep reinforcement learning", "venue": "arXiv preprint arXiv:1810.12282,", "year": 2018 }, { "authors": [ "Simone Parisi", "Voot Tangkaratt", "Jan Peters", "Mohammad Emtiyaz Khan" ], "title": "Td-regularized actor-critic methods", "venue": "In Machine Learning,", "year": 2019 }, { "authors": [ "Anay Pattanaik", "Zhenyi Tang", "Shuijing Liu", "Gautham Bommannan", "Girish Chowdhary" ], "title": "Robust deep reinforcement learning with adversarial attacks", "venue": "In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pp. 2040–2042. 
International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2018 }, { "authors": [ "Aravind Rajeswaran", "Kendall Lowrey", "Emanuel V Todorov", "Sham M Kakade" ], "title": "Towards generalization and simplicity in continuous control", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "René Ranftl", "Katrin Lasinger", "David Hafner", "Konrad Schindler", "Vladlen Koltun" ], "title": "Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer", "venue": null, "year": 2019 }, { "authors": [ "Joseph Redmon", "Santosh Divvala", "Ross Girshick", "Ali Farhadi" ], "title": "You only look once: Unified, real-time object detection", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "High-dimensional continuous control using generalized advantage estimation", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. 
The Journal of Machine Learning Research", "venue": null, "year": 2014 }, { "authors": [ "Mario Srouji", "Jian Zhang", "Ruslan Salakhutdinov" ], "title": "Structured control nets for deep reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Richard S Sutton", "David A McAllester", "Satinder P Singh", "Yishay Mansour" ], "title": "Policy gradient methods for reinforcement learning with function approximation", "venue": "In Advances in neural information processing systems,", "year": 2000 }, { "authors": [ "Yee Teh", "Victor Bapst", "Wojciech M Czarnecki", "John Quan", "James Kirkpatrick", "Raia Hadsell", "Nicolas Heess", "Razvan Pascanu" ], "title": "Distral: Robust multitask reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Josh Tobin", "Rachel Fong", "Alex Ray", "Jonas Schneider", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Domain randomization for transferring deep neural networks from simulation to the real world", "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2017 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "MuJoCo: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ronald J Williams", "Jing Peng" ], "title": "Function optimization using connectionist reinforcement learning algorithms", "venue": "Connection Science,", "year": 1991 }, { "authors": [ "Jianwei Yang", "Jiasen Lu", "Dhruv Batra", "Devi Parikh" ], "title": "A faster pytorch implementation of faster r-cnn", "venue": null, "year": 2017 }, { "authors": [ "Amy Zhang", "Nicolas Ballas", "Joelle Pineau" ], "title": "A dissection of overfitting and generalization in continuous reinforcement learning", "venue": "ArXiv, abs/1806.07937,", "year": 2018 }, { "authors": [ "Chiyuan Zhang", "Oriol Vinyals", "Remi Munos", "Samy Bengio" ], "title": "A study on overfitting in deep reinforcement learning", "venue": "arXiv preprint arXiv:1804.06893,", "year": 2018 }, { "authors": [ "Chenyang Zhao", "Olivier Sigaud", "Freek Stulp", "Timothy M. Hospedales" ], "title": "Investigating generalisation in continuous deep reinforcement learning", "venue": "ArXiv, abs/1902.07015,", "year": 2019 }, { "authors": [ "Brian D. Ziebart", "Andrew Maas", "J. Andrew Bagnell", "Anind K. Dey" ], "title": "Maximum entropy inverse reinforcement learning", "venue": "In Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 3,", "year": 2008 } ]
[ { "heading": "1 INTRODUCTION", "text": "The use of regularization methods to prevent overfitting is a key technique in successfully training neural networks. Perhaps the most widely recognized regularization methods in deep learning are L2 regularization (also known as weight decay) and dropout (Srivastava et al., 2014). These techniques are standard practices in supervised learning tasks across many domains. Major tasks in computer vision, e.g., image classification (Krizhevsky et al., 2012; He et al., 2016), object detection (Ren et al., 2015; Redmon et al., 2016), use L2 regularization as a default option. In natural language processing, for example, the Transformer (Vaswani et al., 2017) uses dropout, and the popular BERT model (Devlin et al., 2018) uses L2 regularization. In fact, it is rare to see state-of-the-art neural models trained without regularization in a supervised setting.\nHowever, in deep reinforcement learning (deep RL), those conventional regularization methods are largely absent or underutilized in past research, possibly because in most cases we are maximizing the return on the same task as in training. In other words, there is no generalization gap from the training environment to the test environment (Cobbe et al., 2018). Heretofore, researchers in deep RL have focused on high-level algorithm design and largely overlooked issues related to network training, including regularization. For popular policy optimization algorithms like Trust Region Policy Optimization (TRPO) (Schulman et al., 2015), Proximal Policy Optimization (PPO) (Schulman et al., 2017), and Soft Actor Critic (SAC) (Haarnoja et al., 2018), conventional regularization methods were not considered. In popular codebases such as the OpenAI Baseline (Dhariwal et al., 2017), L2 regularization and dropout were not incorporated. Instead, a commonly used regularization in RL is the entropy regularization which penalizes the high-certainty output from the policy network to encourage more exploration and prevent the agent from overfitting to certain actions. The entropy\n∗Equal contribution\nregularization was first introduced by (Williams & Peng, 1991) and now used by many contemporary algorithms (Mnih et al., 2016; Schulman et al., 2017; Teh et al., 2017; Farebrother et al., 2018).\nIn this work, we take an empirical approach to assess the conventional paradigm which omits common regularization when learning deep RL models. We study agent performance on current task (the environment which the agent is trained on), rather than its generalization ability to different environments as in many recent works (Zhao et al., 2019; Farebrother et al., 2018; Cobbe et al., 2018). We specifically focus our study on policy optimization methods, which are becoming increasingly popular and have achieved top performance on various tasks. We evaluate four popular policy optimization algorithms, namely SAC, PPO, TRPO, and the synchronous version of Advantage Actor Critic (A2C), on multiple continuous control tasks. Various conventional regularization techniques are considered, including L2/L1 weight regularization, dropout, weight clipping (Arjovsky et al., 2017) and Batch Normalization (BN) (Ioffe & Szegedy, 2015). 
We compare the performance of these regularization techniques to that of training without regularization, as well as to entropy regularization.\nSurprisingly, even though the training and testing environments are the same, we find that many of the conventional regularization techniques, when imposed on the policy networks, can still improve performance, sometimes significantly. Among those regularizers, L2 regularization tends to be the most effective overall. L1 regularization and weight clipping can boost performance in many cases. Dropout and Batch Normalization tend to bring improvements only on off-policy algorithms. Additionally, all regularization methods tend to be more effective on more difficult tasks. We also verify our findings with a wide range of training hyperparameters and network sizes, and the results suggest that imposing proper regularization can sometimes save the effort of tuning other training hyperparameters. We further study which part of the policy optimization system should be regularized, and conclude that generally regularizing only the policy network suffices, as imposing regularization on value networks usually does not help. Finally, we discuss and analyze possible reasons for some experimental observations. Our main contributions can be summarized as follows:\n• To our best knowledge, we provide the first systematic study of common regularization methods in policy optimization, which have been largely ignored in the deep RL literature.\n• We find conventional regularizers can be effective on continuous control tasks (especially on harder ones) with statistical significance, under randomly sampled training hyperparameters. Interestingly, simple regularizers (L2, L1, weight clipping) can perform better than entropy regularization, with L2 generally the best. BN and dropout can only help in off-policy algorithms.\n• We study which part of the network(s) should be regularized. The key lesson is to regularize the policy network but not the value network.\n• We analyze why regularization may help generalization in RL through sample complexity, return distribution, weight norm, and training noise robustness." }, { "heading": "2 RELATED WORKS", "text": "Regularization in Deep RL. There have been many prior works studying the theory of regularization in policy optimization (Farahmand et al., 2009; Neu et al., 2017; Zhang et al., 2020). In practice, conventional regularization methods have rarely been applied in deep RL. One rare case of such use is in Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2016), where Batch Normalization is applied to all layers of the actor and some layers of the critic, and L2 regularization is applied to the critic. Some recent studies have developed more complicated regularization approaches for continuous control tasks. Cheng et al. (2019) regularizes the stochastic action distribution π(a|s) using a control prior and dynamically adjusts the regularization weight based on the temporal difference (TD) error. Parisi et al. (2019) uses TD error regularization to penalize inaccurate value estimation and Generalized Advantage Estimation (GAE) (Schulman et al., 2016) regularization to penalize GAE variance. However, most of these regularizations are rather complicated (Cheng et al., 2019) or catered to certain algorithms (Parisi et al., 2019). Also, these techniques consider regularizing the output of the network, while conventional methods mostly directly regularize the parameters. 
In this work, we focus on studying these simpler but under-utilized regularization methods.\nGeneralization in Deep RL typically refers to how the model performs in a different environment from the one it is trained on. The generalization gap can come from different modes/levels/difficulties of a game (Farebrother et al., 2018), simulation vs. real world (Tobin et al., 2017), parameter variations (Pattanaik et al., 2018), or different random seeds in environment generation (Zhang et al., 2018b). There are a number of methods designed to address this issue, e.g., through training the agent over multiple domains/tasks (Tobin et al., 2017; Rajeswaran et al., 2017), adversarial training (Tobin et al., 2017), designing model architectures (Srouji et al., 2018), adaptive training (Duan et al., 2016), etc. Meta-RL approaches (Finn et al., 2017; Gupta et al., 2018; Al-Shedivat et al., 2017) try to learn generalizable agents by training on many environments drawn from the same family/distribution. There are also some comprehensive studies on RL generalization with interesting findings (Zhang et al., 2018a;b; Zhao et al., 2019; Packer et al., 2018), e.g., that algorithms performing better in the training environment could perform worse under domain shift (Zhao et al., 2019).\nRecently, several studies have investigated conventional regularization’s effect on generalization across tasks. Farebrother et al. (2018) show that in Deep Q-Networks (DQN), L2 regularization and dropout are sometimes beneficial when evaluated on the same Atari game with mode variations. Cobbe et al. (2018) show that L2 regularization, dropout, BN, and data augmentation can improve generalization performance, but to a lesser extent than entropy regularization and ε-greedy exploration. Different from those studies, we focus on regularization’s effect in the same environment, a setting in which conventional regularizations are under-explored." }, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 SETTINGS", "text": "Regularization Methods. We study six regularization methods, namely, L2 and L1 weight regularization, weight clipping, dropout (Srivastava et al., 2014), Batch Normalization (Ioffe & Szegedy, 2015), and entropy regularization. See Appendix A for a detailed introduction. Note that we consider entropy as a separate regularization method because it encourages exploration and helps to prevent premature convergence (Mnih et al., 2016). In Appendix N, we show that in the presence of certain regularizers, adding entropy on top does not lead to a significant performance difference.\nAlgorithms. We evaluate regularization methods on four popular policy optimization algorithms, namely, A2C (Mnih et al., 2016), TRPO (Schulman et al., 2015), PPO (Schulman et al., 2017), and SAC (Haarnoja et al., 2018). The first three algorithms are on-policy while the last one is off-policy. For the first three algorithms, we adopt the code from OpenAI Baseline (Dhariwal et al., 2017), and for SAC, we use the official implementation of (Haarnoja, 2018).\nTasks. The algorithms with different regularizers are tested on nine continuous control tasks: Hopper, Walker, HalfCheetah, Ant, Humanoid, and HumanoidStandup from MuJoCo (Todorov et al., 2012); and Humanoid, AtlasForwardWalk, and HumanoidFlagrun from RoboSchool (OpenAI). Among the MuJoCo tasks, agents for Hopper, Walker, and HalfCheetah are easier to learn, while Ant, Humanoid, and HumanoidStandup are relatively harder (larger state-action space, more training examples). 
The three RoboSchool tasks are even harder than the MuJoCo tasks, as they require more timesteps to converge (Klimov & Schulman, 2017). To better understand how different regularization methods work on tasks of different difficulty, we roughly categorize the first three environments as “easy” tasks and the last six as “hard” tasks. Besides continuous control, we provide results on randomly sampled Atari environments (Bellemare et al., 2012) in Appendix S, which have discrete action spaces and different reward properties. Our observations are mostly similar to those on continuous control tasks.\nTraining. On MuJoCo tasks, we keep all hyperparameters unchanged, as provided in the adopted codebase. Since hyperparameters for the RoboSchool tasks are not included, we briefly tune the hyperparameters for each algorithm before we apply regularization (details in Appendix D). For details on regularization strength tuning, please see Appendix C. The results shown in this section are obtained by regularizing only the policy network; a further study on this will be presented in Section 5. We run each experiment independently with five seeds, then use the average return over the last 100 episodes as the final result. Each regularization method is evaluated independently, with the other regularizers turned off. We refer to the result without any regularization as the baseline. For BN and dropout, we use their training mode when updating the network and their test mode when sampling trajectories. During training, negligible computation overhead is induced when a regularizer is applied. Specifically, the increase in training time for BN is ∼ 10% and for dropout ∼ 5%, while L2, L1, weight clipping, and entropy regularization are all < 1%. We used up to 16 NVIDIA Titan Xp GPUs and 96 Intel Xeon E5-2667 CPUs, and all experiments took roughly 57 days with resources fully utilized.\nAdditional Notes. 1. Entropy regularization is still applicable to SAC, even though SAC already incorporates entropy maximization in its reward. In our experiments, we add the entropy regularization term to the policy loss function in equation (12) of (Haarnoja et al., 2018). 2. In our experiments, the L2 regularization loss is added to the training loss, which is then optimized using Adam (Kingma & Ba, 2015). Loshchilov & Hutter (2019) observe that L2 regularization interacts poorly with Adam and propose AdamW to decouple weight decay from the optimization steps. However, in policy optimization algorithms, we find that the performance of AdamW with decoupled weight decay is slightly worse than the performance of Adam with the L2 loss directly added. Comparisons are shown in Appendix O. 3. Policy network dropout is not applicable to TRPO because, during policy updates, different neurons in the old and new policy networks are dropped out, causing different shifts in the old and new action distributions given the same state, which violates the trust region constraint. In this case, the algorithm fails to perform any update from network initialization." }, { "heading": "3.2 RESULTS", "text": "Training curves. We plot the training curves from four environments (rows) in Figure 1, on four algorithms (columns). Figures for the remaining five environments are deferred to Appendix P. In the figures, different colors are used to denote different regularization methods, e.g., black is the baseline method. Shades denote a ±1 standard deviation range. 
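As a small sketch of the evaluation protocol above (the per-seed `episode_returns` array is a hypothetical input; five seeds per configuration):

```python
import numpy as np

def final_score(episode_returns):
    # Final result of one run: average return over the last 100 episodes.
    return float(np.mean(episode_returns[-100:]))

def curve_stats(returns_per_seed):
    # Aggregate training curves across seeds; the shade in the figures is +/- 1 std.
    r = np.asarray(returns_per_seed)      # shape: (n_seeds, n_points)
    return r.mean(axis=0), r.std(axis=0)  # mean curve, shade half-width
```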
Notably, these conventional regularizers can frequently boost performance across different tasks and algorithms, demonstrating that a study of regularization in deep RL is much needed. We observe that BN always significantly hurts the baseline for on-policy algorithms. The reason will be discussed later. For the off-policy SAC algorithm, dropout and BN sometimes bring large improvements on hard tasks like AtlasForwardWalk and RoboschoolHumanoid. Interestingly, in some cases where the baseline (with the default hyperparameters in the codebase) does not converge to a reasonable solution, e.g., A2C Ant and PPO Humanoid, imposing some regularization can make the training converge to a high level.\nHow often do regularizations help? To quantitatively measure the effectiveness of the regularizations on each algorithm across different tasks, we define the condition under which a regularization is said to “improve” upon the baseline in a certain environment. Denote the baseline mean return over five seeds on an environment as $\mu_{env,b}$, and the mean and standard deviation of the return obtained with a certain regularization method over five seeds as $\mu_{env,r}$ and $\sigma_{env,r}$. We say the performance is “improved” by the regularization if $\mu_{env,r} - \sigma_{env,r} > \max(\mu_{env,b}, T(env))$, where $T(env)$ is the minimum return threshold of an environment. The threshold serves to ensure the return is at least at a reasonable level. We set the threshold to be $10^5$ for HumanoidStandup and $10^3$ for all other tasks.\nThe results are shown in Table 1. Perhaps the most significant observation is that L2 regularization improves upon the baseline most often. The A2C algorithm is an exception, where entropy regularization is the most effective. L1 regularization behaves similarly to L2 regularization, but is outperformed by the latter. Weight clipping’s usefulness is highly dependent on the algorithms and environments. Although in total it helps only 30.6% of the time, it can sometimes outperform entropy regularization by a large margin, e.g., in TRPO Humanoid and PPO Humanoid as shown in Figure 1. BN is not useful at all in the three on-policy algorithms (A2C, TRPO, and PPO). Dropout is not useful in A2C at all, and sometimes helps in PPO. However, BN and dropout can be useful in SAC. All regularization methods generally improve more often when they are used on harder tasks, perhaps because for easier ones the baseline is often sufficiently strong to reach a high performance.\nNote that under our definition, not “improving” does not indicate “hurting”. If we define “hurting” as $\mu_{env,r} + \sigma_{env,r} < \mu_{env,b}$ (the minimum return threshold is not considered here), then the total percentage of hurting is 0.0% for L2, 2.8% for L1, 5.6% for weight clipping, 44.4% for dropout, 66.7% for BN, and 0.0% for entropy. In other words, under our parameter tuning range, L2 and entropy regularization never hurt with appropriate strengths. For BN and dropout, we also note that almost all hurting cases are in on-policy algorithms, except one case for BN in SAC. In sum, all regularizations in our study very rarely hurt performance except for BN/dropout in on-policy methods.\nHow much do regularizations improve? For each algorithm and environment (for example, PPO on Ant), we calculate a z-score for each regularization method and the baseline, by treating the results produced by all regularizations (including the baseline) over all five seeds together as a population, and calculating each method’s average z-score from its five final results (positively clipped). 
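A compact restatement of these comparison criteria, as a sketch (inputs are hypothetical numpy arrays of five-seed final returns; the clip-at-zero convention is our reading of "positively clipped"):

```python
import numpy as np

def improved(returns_reg, returns_base, threshold):
    # "Improved": mu_r - sigma_r > max(mu_b, T(env)).
    return returns_reg.mean() - returns_reg.std() > max(returns_base.mean(), threshold)

def hurt(returns_reg, returns_base):
    # "Hurt": mu_r + sigma_r < mu_b (no threshold here).
    return returns_reg.mean() + returns_reg.std() < returns_base.mean()

def avg_z_scores(results_by_method):
    # All methods' five-seed results (baseline included) form one population;
    # each method's score is its mean z-score, clipped at zero from below.
    pop = np.concatenate([np.asarray(r) for r in results_by_method.values()])
    mu, sigma = pop.mean(), pop.std()
    return {m: float(np.clip((np.asarray(r) - mu) / sigma, 0, None).mean())
            for m, r in results_by_method.items()}
```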
The z-score is also known as the “standard score”: the signed number of standard deviations by which the value of a data point lies above the mean of the population. For each algorithm and environment, a regularizer’s z-score roughly measures its relative performance among the others. The z-scores are then averaged over environments of a certain difficulty (easy/hard), and the results are shown in Table 2. In terms of the average improvement margin, we can draw mostly similar observations to those from the improvement frequency (Table 1): L2 tops the average z-score most often, and by a large margin in total; entropy regularization is best used with A2C; dropout and BN are only useful in the off-policy SAC algorithm; and the improvement over the baseline is larger on hard tasks. Notably, for all algorithms, any regularization on average outperforms the baseline on hard tasks, except dropout and BN in on-policy algorithms. On hard tasks, besides L2, L1 and weight clipping also perform better than entropy in total. To further verify our observations, we present z-scores for MuJoCo environments in Appendix G, where we increase the number of seeds from 5 to 10. Our observations are consistent with those in Table 2.\nBesides the improvement percentage (Table 1) and the z-score (Table 2), we provide more comparison metrics (e.g., average ranking, min-max scaled return) to comprehensively compare the different regularization methods. We also conduct statistical significance tests on these metrics, and the improvements are mostly statistically significant (p < 0.05). We believe evaluating under a variety of metrics makes our conclusions more reliable. Detailed results are in Appendices F, I, and J. In addition, we provide detailed justification in Appendix K that, because we test on the entire set of environments instead of on a single environment, our sample size is large enough to satisfy the conditions of significance tests and provide reliable results." }, { "heading": "4 ROBUSTNESS WITH HYPERPARAMETER CHANGES", "text": "In the previous section, the experiments are conducted mostly with the default hyperparameters in the codebase we adopt, which are not necessarily optimized. For example, the PPO Humanoid baseline performs poorly using default hyperparameters, not converging to a reasonable solution. Meanwhile, it is known that RL algorithms are very sensitive to hyperparameter changes (Henderson et al., 2018). Thus, our findings could be vulnerable to such variations. To further confirm our findings, we evaluate the regularizations under a variety of hyperparameter settings. For each algorithm, we sample five hyperparameter settings for the baseline and apply regularization on each of them. Due to the heavy computation cost, we only evaluate on five environments: Hopper, Walker, Ant, Humanoid, and HumanoidStandup. Under our sampled hyperparameters, poor baselines are mostly significantly improved. See Appendix E/Q for details on sampling and curves. The z-scores are shown in Table 3. We note that our main findings in Section 3 still hold. Interestingly, compared to the previous section, L2, L1, and weight clipping all tend to be better than entropy regularization by larger margins. For the p-values of statistical significance and improvement percentages, see Appendix F/H.\nTo better visualize the robustness against changes of hyperparameters, we show the result when a single hyperparameter is varied in Figure 2. We note that certain regularizations can consistently improve the baseline under different hyperparameters. 
In these cases, proper regularization can ease the hyperparameter tuning process, as it raises the performance of baselines with suboptimal hyperparameters above that of baselines with better-tuned ones." }, { "heading": "5 POLICY AND VALUE NETWORK REGULARIZATION", "text": "Our experiments so far only impose regularization on the policy network. To investigate the relationship between policy and value network regularization, we compare four options: 1) no regularization, and regularizing 2) the policy network, 3) the value network, 4) both policy and value networks. For 2) and 3) we tune the regularization strengths independently and then use the appropriate ones for 4) (more details in Appendix C). We evaluate all four algorithms on the six MuJoCo tasks and present the improvement percentage in Table 4. Note that entropy regularization is not applicable to the value network. We observe that, generally, regularizing only the policy network most often improves performance, across almost all algorithms and regularizations. Regularizing the value network alone does not bring improvement as often as the other options. Though regularizing both is better than regularizing the value network alone, it is worse than regularizing only the policy network. For detailed training curves, refer to Appendix R.\nWe also note that the policy optimization algorithms in our study have adopted multiple techniques to train the value function. For example, SAC uses the replay buffer and clipped double-Q learning. A2C, TRPO, and PPO adopt multi-step roll-outs, and the sum of discounted rewards is used as the value network objective. However, analyzing the individual effects of these techniques is not the main focus of our current work. We would like to leave the interaction between these techniques and value network regularization for future work." }, { "heading": "6 ANALYSIS AND CONCLUSION", "text": "Why does regularization benefit policy optimization? In RL, when we are training and evaluating on the same environment, there is no generalization gap across different environments. However, there is still generalization between samples: the agent is only trained on the limited trajectories it has experienced, which cannot cover the whole state-action space of the environment. A successful policy needs to generalize from seen samples to unseen ones, which potentially makes regularization necessary. This might also explain why regularization could be more helpful on harder tasks, which have larger state spaces, where the portion of the space that has appeared in training tends to be smaller. We study how regularization helps generalization through the following perspectives:\nSampling Complexity. We compare the return with varying numbers of training samples/timesteps, since the performance of learning from fewer samples is closely related to generalization ability. From the results in Figure 3, we find that regularized models need far fewer training samples than the baseline to reach the same return level. This suggests that certain regularizers can significantly reduce the sampling complexity of the baseline and thus lead to better generalization.\nReturn Distribution. We evaluate agents trained with and without regularization on 100 different trajectories and plot the return distributions over trajectories in Figure 4. These trajectories represent unseen samples during training, since the state space is continuous (so it is impossible to traverse identical trajectories). 
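A minimal sketch of this 100-trajectory evaluation (an old-style gym API and a deterministic `policy` callable are assumptions here):

```python
def return_distribution(env, policy, n_episodes=100):
    # Roll out the trained agent and record each trajectory's (undiscounted)
    # return; the histogram of `returns` is what Figure 4 plots.
    returns = []
    for _ in range(n_episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(policy(obs))
            total += reward
        returns.append(total)
    return returns
```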
For the baseline, some trajectories yield relatively high returns, while others yield low returns, demonstrating that the baseline cannot stably generalize to unseen examples; for regularized models, the returns are more concentrated at a high level, demonstrating that they can more stably generalize to unseen samples. This suggests that certain conventional regularizers can improve the model’s generalization ability to a larger portion of unseen samples.\nWeight Norm. We observe that on many tasks, a smaller policy weight norm correlates with better generalization ability. An example is illustrated in Table 5 and Figure 5. We observe that L2 regularization accomplishes the effect of entropy regularization and, at the same time, limits the policy norm. Even though both the entropy-regularized model and the L2-regularized model have similar final policy entropy, the L2-regularized model has a much higher final performance, which suggests that simply increasing the policy entropy is not enough. We conjecture that the small weight norm encouraged by L2 makes the network less prone to overfitting and provides a better optimization landscape for the model.\nTable 6: Effect of data augmentation (DA) on final performance on PPO Humanoid.\n         Baseline    L2\nw/o DA   3485±302    8148±335\nw/ DA    3483±293    9006±145\n[Figure 5 panels: return, policy weight norm, and policy entropy vs. timesteps (1e7) for baseline, L2, and entropy on Humanoid.]\nFigure 5: Return, policy network L2 norm, and policy entropy for PPO Humanoid.\nRobustness to Training Noise. Recent works (Kostrikov et al., 2020; Laskin et al., 2020) have applied data augmentation (DA) to RL, mainly on image-based inputs, to improve data efficiency and generalization. Laskin et al. (2020) adds noise to state-based input observations by randomly scaling them as a form of DA. We apply this technique to both the baseline and L2 regularization on PPO Humanoid. At each time step, we randomly scale the input state by a factor of s, where s ∼ Unif(1 − k, 1 + k), k ∈ {0.05, 0.1, 0.2, 0.4, 0.6, 0.8}. We select the k with the highest performance on the original environment and report the results in Table 6. Interestingly, while DA cannot improve the baseline performance, it can significantly improve the performance of the L2-regularized model. This suggests the L2 regularizer can make the model robust to, or even benefit from, noisy/augmented input during training.\nWhy do BN and dropout work only with off-policy algorithms? One finding in our experiments is that BN and dropout can sometimes improve the off-policy algorithm SAC, but mostly hurt on-policy algorithms. We further confirm this observation through experiments on Deep Deterministic Policy Gradient (DDPG, Lillicrap et al. (2016)), another off-policy algorithm, and present the results in Appendix M. We hypothesize two possible reasons: 1) for both BN and dropout, training mode is used to train the network, and testing mode is used to sample actions during interaction with the environment, leading to a discrepancy between the sampling policy and the optimization policy (the same holds if we always use training mode). For on-policy algorithms, if such a discrepancy is large, it can cause severe “off-policy issues”, which hurt the optimization process or even crash it, since their theory necessitates that the data is “on-policy”, i.e., that the data sampling and optimization policies are the same. For off-policy algorithms, this discrepancy is not an issue, since they sample data from the replay buffer and do not require the two policies to be the same. 
2) BN can be sensitive to input distribution shifts, since the mean and std statistics depend on the input, and if the input distribution changes too quickly in training, the mapping functions of the BN layers can change quickly too, which can possibly destabilize training. One piece of evidence for this is that in supervised learning, when transferring an ImageNet-pretrained model to other vision datasets, sometimes the BN layers are fixed (Yang et al., 2017) and only the other layers are trained. In off-policy algorithms, the sample distributions are relatively slow-changing, since we always draw from the whole replay buffer, which holds cumulative data; in on-policy algorithms, we always use the samples generated from the latest policy, and the faster-changing input distribution for on-policy algorithms could be harmful to BN.\nIn summary, we conducted the first systematic study of regularization methods on multiple policy optimization algorithms. We found that conventional regularizations (L2, L1, weight clipping) could be effective at improving performance, sometimes more so than entropy regularization. BN and dropout could be useful, but only on off-policy algorithms. Our findings were confirmed with multiple sampled hyperparameters. Further experiments have shown that, generally, the best practice is to regularize the policy network but not the value network or both. Finally, we analyzed why regularization can help in RL with experiments and discussions." }, { "heading": "A Regularization Methods", "text": "" }, { "heading": "B Policy Optimization Algorithms", "text": "" }, { "heading": "C Regularization Implementation & Tuning Details", "text": "" }, { "heading": "D Default Hyperparameter Settings", "text": "" }, { "heading": "E Hyperparameter Sampling Details", "text": "F Statistical Significance Test of z-scores\nG z-score Statistics under More Random Seeds on MuJoCo\nH Improvement Percentage for Hyperparameter Experiments\nI Statistical Significance Test of z-scores (Entropy Regularization)" }, { "heading": "J Additional Metrics", "text": "J.1 Ranking all regularizers\nJ.2 Scaled Returns" }, { "heading": "K Justification of Methodology and Statistical Significance", "text": "" }, { "heading": "L Regularization with a Fixed Strength", "text": "" }, { "heading": "M DDPG Results", "text": "" }, { "heading": "N Regularizing with L2 and Entropy", "text": "" }, { "heading": "O L2 Regularization vs. Fixed Weight Decay (AdamW)", "text": "" }, { "heading": "P Additional Training Curves (Default Hyperparameters)", "text": "" }, { "heading": "Q Training Curves for Hyperparameter Experiments", "text": "" }, { "heading": "R Training Curves for Policy vs. Value Experiments", "text": "S Atari Experiments" }, { "heading": "A REGULARIZATION METHODS", "text": "There are in general two types of common approaches for imposing regularization. One is to discourage complex models (e.g., weight regularization, weight clipping), and the other is to inject certain noise into network activations (e.g., dropout and Batch Normalization). Here we briefly introduce the methods we investigate in our experiments.\nL2 / L1 Weight Regularization. Suppose L is the original empirical loss we want to minimize. 
When applying L2 regularization, we add an additional L2-norm squared loss term $\frac{1}{2}\lambda\|\theta\|_2^2$ to L, where θ are the model parameters and λ is a hyperparameter. Similarly, in the case of L1 weight regularization, the additional loss term is $\lambda\|\theta\|_1$. In our experiments, the total loss is optimized using Adam (Kingma & Ba, 2015). Using L2/L1 regularization can encourage the model to be simpler/sparser. From a Bayesian view, they impose certain prior distributions on the model weights.\nWeight Clipping. Weight clipping is a simple operation: after each gradient update step, each individual weight is clipped to the range [−c, c], where c is a hyperparameter. This can be formally described as $\theta_i \leftarrow \max(\min(\theta_i, c), -c)$. In Wasserstein GANs (Arjovsky et al., 2017), weight clipping is used to enforce the constraint of Lipschitz continuity, which plays an important role in stabilizing the training of GANs (Goodfellow et al., 2014). Weight clipping can also be seen as a regularizer, since it reduces the complexity of the model space by preventing any weight’s magnitude from being larger than c.\nDropout. Dropout (Srivastava et al., 2014) is one of the most successful regularization techniques developed specifically for neural networks. During training, a certain percentage of neurons is deactivated; during testing, all neurons in the neural network are kept, and rescaling is applied to ensure the scale of the activations is the same as in training. One explanation for its effectiveness in reducing overfitting is that it can prevent the “co-adaptation” of neurons. In the policy optimization algorithms we investigate, when the policy or the value network performs updates using minibatches of trajectory data or replay buffer data, we use the training mode of dropout. When the policy network samples trajectories from the environment, we use the testing mode of dropout.\nBatch Normalization. Batch Normalization (BN) (Ioffe & Szegedy, 2015) was invented to address the problem of “internal covariate shift”, and it performs the following transformation: $\hat{z} = \frac{z_{in} - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}$, $z_{out} = \gamma\hat{z} + \beta$, where $\mu_B$ and $\sigma_B$ are the mean and standard deviation of the input activations over the batch B, and γ and β are trainable affine transformation parameters. BN turns out to greatly accelerate convergence and improve accuracy. It also acts as a regularizer (Ioffe & Szegedy, 2015): during training, the statistics $\mu_B$ and $\sigma_B$ depend on the current batch, so BN subtracts and divides by different values in each iteration. This stochasticity can encourage subsequent layers to be robust against such input variation. In policy optimization algorithms, we switch between training and testing modes in the same way as we do for dropout.\nEntropy Regularization. In a policy optimization framework, the policy network is used to model a conditional distribution over actions, and entropy regularization is widely used to prevent the learned policy from overfitting to one or some of the actions. More specifically, at each step, the output distribution of the policy network is penalized for having low entropy. The policy entropy is calculated at each step as $H_{s_i} = -\mathbb{E}_{a_i \sim \pi(a_i|s_i)} \log \pi(a_i|s_i)$, where $(s_i, a_i)$ is the state-action pair. Then the per-sample entropy is averaged within the batch of state-action pairs to obtain the regularization term $L^H = \frac{1}{N}\sum_{s_i} H_{s_i}$. A coefficient λ is also needed, and $\lambda L^H$ is added to the policy objective J(θ). The sum is then maximized during policy updates. 
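Minimal PyTorch sketches of the regularizers above (the function names and the diagonal-Gaussian policy in the entropy term are our own illustrative assumptions):

```python
import math
import torch

def l2_l1_penalty(model, l2_coef=0.0, l1_coef=0.0):
    # Added to the empirical loss L; the total is optimized with Adam.
    l2 = sum(p.pow(2).sum() for p in model.parameters())
    l1 = sum(p.abs().sum() for p in model.parameters())
    return 0.5 * l2_coef * l2 + l1_coef * l1

def clip_weights(model, c):
    # After each gradient step: theta_i <- max(min(theta_i, c), -c).
    with torch.no_grad():
        for p in model.parameters():
            p.clamp_(-c, c)

def gaussian_entropy_bonus(log_std):
    # Batch-averaged entropy L^H of a diagonal Gaussian policy; the policy
    # objective becomes J(theta) + lambda * L^H (maximized).
    per_state = (0.5 * math.log(2 * math.pi * math.e) + log_std).sum(dim=-1)
    return per_state.mean()

# Dropout/BN mode switching: model.train() for minibatch updates,
# model.eval() when sampling trajectories from the environment.
```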
Entropy regularization also encourages exploration and prevents premature convergence due to the increased randomness in actions, leading to better performance in the long run." }, { "heading": "B POLICY OPTIMIZATION ALGORITHMS", "text": "The policy optimization family of algorithms is one of the most popular approaches to solving reinforcement learning problems. It directly parameterizes and optimizes the policy to gain more cumulative reward. Below, we give a brief introduction to the algorithms we evaluate in our work.\nA2C. Sutton et al. (2000) developed a policy gradient to update the parametric policy in a gradient descent manner. However, the gradient estimated in this way suffers from high variance. Advantage Actor Critic (A3C) (Mnih et al., 2016) was proposed to alleviate this problem by introducing a function approximator for values and replacing the Q-values with advantage values. A3C also utilizes multiple actors to parallelize training. The only difference between A2C and A3C is that in a single training iteration, A2C waits for the parallel actors to finish sampling trajectories before updating the neural network parameters, while A3C updates in an asynchronous manner.\nTRPO. Trust Region Policy Optimization (TRPO) (Schulman et al., 2015) proposes to constrain each update within a safe region defined by KL divergence to guarantee policy improvement during training. Though TRPO is promising at obtaining reliable performance, approximating the KL constraint is quite computationally heavy.\nPPO. Proximal Policy Optimization (PPO) (Schulman et al., 2017) simplifies TRPO and improves computational efficiency by developing a surrogate objective that involves clipping the probability ratio to a reliable region, so that the objective can be optimized using first-order methods.\nSAC. Soft Actor Critic (SAC) (Haarnoja et al., 2018) optimizes the maximum entropy objective in the reward (Ziebart et al., 2008), which is different from the objective of the on-policy methods above. SAC combines soft policy iteration, which maximizes the maximum entropy objective, and clipped double-Q learning (Fujimoto et al., 2018), which prevents overestimation bias, during actor and critic updates, respectively." }, { "heading": "C REGULARIZATION IMPLEMENTATION & TUNING DETAILS", "text": "As mentioned in the paper, in Section 3 we only regularize the policy network; in Section 5, we investigate regularizing both the policy and value networks.\nFor L1 and L2 regularization, we add $\lambda\|\theta\|_1$ and $\frac{\lambda}{2}\|\theta\|_2^2$, respectively, to the loss of the policy or value network of each algorithm (for SAC’s value regularization, we apply regularization only to the V network instead of also to the two Q networks). The L1 and L2 losses are applied to all the weights of the policy or value network. For A2C, TRPO, and PPO, we tune λ in the range [1e-5, 5e-5, 1e-4, 5e-4] for L1 and [5e-5, 1e-4, 5e-4, 1e-3] for L2. For SAC, we tune λ in the range [5e-4, 1e-3, 5e-3, 1e-2] for L1 and [1e-3, 5e-3, 1e-2, 5e-2] for L2.\nFor weight clipping, the OpenAI Baseline implementation of the policy network of A2C, TRPO, and PPO outputs the mean of the policy action from a two-layer fully connected network (MLP). The log standard deviation of the policy action is represented by a standalone trainable vector. We find that when applied only to the weights of the MLP, weight clipping makes the performance much better than when applied only to the logstd vector or to both. 
Thus, for these three algorithms, the policy network weight clipping results shown in all the sections above come from clipping only the MLP part of the policy network. On the other hand, in the SAC implementation, both the mean and the log standard deviation come from the same MLP, and there is no standalone log standard deviation vector. Thus, we apply weight clipping to all the weights of the MLP. For all algorithms, we tune the policy network clipping range in [0.1, 0.2, 0.3, 0.5]. For the value network, the MLP produces a single output of the estimated value given a state, so we clip all the weights of the MLP. For A2C, TRPO, and PPO, we tune the clipping range in [0.1, 0.2, 0.3, 0.5]. For SAC, we only clip the V network and do not clip the two Q networks for simplicity. We tune the clipping range in [0.3, 0.5, 0.8, 1.0], since its weights have larger magnitude.\nFor BatchNorm/dropout, we apply it before the activation function of each hidden layer/immediately after the activation function. When the policy or the value network is performing updates using minibatches of trajectory data or minibatches of replay buffer data, we use the training mode of regularization and update the running mean and standard deviation. When the policy is sampling trajectories from the environment, we use the testing mode of regularization and use the existing running mean and standard deviation to normalize data. For Batch Normalization/dropout on the value network, only training mode is applied, since the value network does not participate in sampling trajectories. Note that adding policy network dropout on TRPO causes the KL divergence constraint E_{s∼ρ_{θ_old}}[D_KL(π_{θ_old}(·|s) ‖ π_θ(·|s))] ≤ δ to be violated almost every time during the policy network update. Thus, policy network dropout causes training to fail on TRPO, as the policy network cannot be updated.\nFor entropy regularization, we add −λL^H to the policy loss. λ is tuned from [5e-5, 1e-4, 5e-4, 1e-3] for A2C, TRPO, and PPO and from [0.1, 0.5, 1.0, 5.0] for SAC. Note that for SAC, our entropy regularization is added directly to the optimization objective (equation 12 in Haarnoja et al. (2018)), and is different from the original maximum-entropy objective inside the reward term.\nNote that for the three on-policy algorithms (A2C, TRPO, PPO) we use the same tuning range, and the only exception is the off-policy SAC. The reason why SAC's tuning range is different is that SAC uses a hyperparameter that controls the scaling of the reward signal, while A2C, TRPO, and PPO do not. In the original implementation of SAC, the reward signals are pre-tuned to be scaled up by a factor ranging from 5 to 100, according to the specific environment. Also, unlike A2C, TRPO, and PPO, SAC uses unnormalized rewards, because if the reward magnitude is small, then, according to the original paper, the policy becomes almost uniform. For the above reasons, the reward magnitude of SAC is much higher than the magnitude of rewards used by A2C, TRPO, and PPO. Thus, the policy network loss and the value network loss have larger magnitude than those of A2C, TRPO, and PPO, so the appropriate regularization strengths become higher. Considering SAC's much larger reward magnitude, we selected a different range of hyperparameters for SAC before running the full experiments.\nThe optimal policy network regularization strength we selected for each algorithm and environment used in Section 3 can be seen in the legends of Appendix R.
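In code, the L1/L2 penalties and the post-update weight clipping described above amount to the following. This is a minimal PyTorch sketch of our own, not the exact implementation; function names are illustrative.

```python
import torch

def add_weight_regularization(loss, network, lam, kind="l2"):
    """Add lam*||theta||_1 (L1) or (1/2)*lam*||theta||_2^2 (L2) over all
    weights of the policy or value network to the original loss."""
    reg = 0.0
    for p in network.parameters():
        reg = reg + (p.abs().sum() if kind == "l1" else 0.5 * p.pow(2).sum())
    return loss + lam * reg

def clip_weights(mlp, c):
    """After each gradient step: theta_i <- max(min(theta_i, c), -c),
    applied to the MLP weights only (not the standalone logstd vector)."""
    with torch.no_grad():
        for p in mlp.parameters():
            p.clamp_(-c, c)
```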
In addition to the results with environment-specific strengths presented in Section 3, we also present the results when the regularization strength is fixed across all environments for the same algorithm. The results are shown in Appendix L.\nIn Section 5, to investigate the effect of regularizing both policy and value networks, we combine the tuned optimal policy and value network regularization strengths. The detailed training curves are presented in Appendix R.\nAs a side note, when training A2C, TRPO, and PPO on the HalfCheetah environment, the results have very large variance. Thus, for each regularization method, after we obtain the best strength, we rerun it for another five seeds as the final result in Tables 1 and 2." }, { "heading": "D DEFAULT HYPERPARAMETER SETTINGS", "text": "Training timesteps. For A2C, TRPO, and PPO, we run 5e6 timesteps for Hopper, Walker, and HalfCheetah; 2e7 timesteps for Ant, Humanoid (MuJoCo), and HumanoidStandup; 5e7 timesteps for Humanoid (RoboSchool); and 1e8 timesteps for AtlasForwardWalk and HumanoidFlagrun. For SAC, since its simulation speed is much slower than that of A2C, TRPO, and PPO (as SAC updates its policy and value networks using a minibatch of replay buffer data at every timestep), and since it takes fewer timesteps to converge, we run 1e6 timesteps for Hopper and Walker; 3e6 timesteps for HalfCheetah and Ant; 5e6 timesteps for Humanoid and HumanoidStandup; and 1e7 timesteps for the RoboSchool environments.\nHyperparameters for RoboSchool. In the original PPO paper (Schulman et al., 2017), hyperparameters for the RoboSchool tasks are given, so we apply the same hyperparameters to our training, except that instead of linearly annealing the log standard deviation of the action distribution from −0.7 to −1.6, we let it be learned by the algorithm, as implemented in OpenAI Baselines (Dhariwal et al., 2017). For TRPO, due to its proximity to PPO, we copy PPO's hyperparameters if they exist in both algorithms. We then tune the value update step size in [3e-4, 5e-4, 1e-3]. For A2C, we keep the original hyperparameters and tune the number of actors in [32, 128] and the number of timesteps for each actor between consecutive policy updates in [5, 16, 32]. For SAC, we tune the reward scale from [5, 20, 100].\nThe detailed hyperparameters used in our baselines for both MuJoCo and RoboSchool are listed in Tables 7–10." }, { "heading": "E HYPERPARAMETER SAMPLING DETAILS", "text": "In Section 4, we present results based on five hyperparameter settings. To obtain such hyperparameter variations, we vary the learning rates and the hyperparameters that each algorithm is very sensitive to. For A2C, TRPO, and PPO, we consider a range of rollout timesteps between consecutive policy updates by varying the number of actors or the number of trajectory sampling timesteps for each actor. For SAC, we consider a range of reward scales and a range of target smoothing coefficients.\nMore concretely, for A2C, we sample the learning rate from [2e-4, 7e-4, 2e-3] with linear decay, the number of trajectory sampling timesteps (nsteps) for each actor from [3, 5, 16, 32], and the number of actors (nenvs) from [1, 4]. For TRPO, we sample the learning rate of the value network (vf_stepsize) from [3e-4, 5e-4, 1e-3] and the number of trajectory sampling timesteps for each actor (nsteps) from [1024, 2048, 4096, 8192]. The policy update uses conjugate gradient descent and is controlled by the max KL divergence.
For PPO, we sample the learning rate from [1e-4 linear, 3e-4 constant], the number of actors (nenvs) from [1, 2, 4, 8], and the probability ratio clipping range (cliprange) from [0.1, 0.2]. For SAC, we sample the learning rate from [1e-4, 3e-4, 1e-3], the target smoothing coefficient (τ) from [0.001, 0.005, 0.01], and the reward scale from small, default, and large modes. The default reward scale of 5 is changed to (3, 5, 20); 20 to (4, 20, 100); and 100 to (20, 100, 400) for each mode, respectively. The sampled hyperparameters 1-5 for each algorithm are listed in Tables 11–14.\nF STATISTICAL SIGNIFICANCE TEST OF z-SCORES\nFor each regularization method, we collect the z-scores produced by all seeds and all environments of a certain difficulty (e.g., for L2 on PPO and hard environments, we have 6 envs × 5 seeds = 30 z-scores), and perform Welch's t-test (the two-sample t-test with unequal variance) against the corresponding z-scores produced by the baseline. The resulting p-values for Table 2 in Section 3 and Table 3 in Section 4 are presented in Table 15 and Table 16, respectively. Note that whether the significance indicates improvement or harm depends on the relative mean z-score in Table 2 and Table 3. For example, for BN and dropout in on-policy algorithms, the statistical significance denotes harm, and in most other cases it denotes improvement. From the results, we observe that the improvement is statistically significant (p < 0.05) for hard tasks in general, with only a few exceptions. In total, L2, L1, entropy, and weight clipping are all statistically significantly better than the baseline. For Welch's t-test between entropy regularization and the other regularizers, see Appendix I.\nG z-SCORE STATISTICS UNDER MORE RANDOM SEEDS ON MUJOCO\nTo further verify our results, we increase the number of seeds from 5 to 10 and present z-scores for the six MuJoCo environments (easy: Hopper, Walker, HalfCheetah; hard: Ant, Humanoid, HumanoidStandup) in Table 17. We also present tests of statistical significance in Table 18. Our observations are consistent with those in Table 2. Due to the large computational cost required, we do not include the three hard RoboSchool environments in the calculation of z-scores.\nH IMPROVEMENT PERCENTAGE FOR HYPERPARAMETER EXPERIMENTS\nWe provide the improvement percentage results in Table 19 as a complement to Table 3, for the experiments with multiple sampled hyperparameters.\nI STATISTICAL SIGNIFICANCE TEST OF z-SCORES (ENTROPY REGULARIZATION)\nAs a complement to Table 2 in Section 3 and Table 3 in Section 4, we present the p-values from Welch's t-test comparing the z-scores of entropy regularization with the other regularizers in Table 20 and Table 21. Note that whether the significance indicates improvement or harm over entropy regularization depends on the relative mean z-score in Table 2 under the default hyperparameter setting and Table 3 under the sampled hyperparameter setting. We observe that, in total, L2 shows a significant improvement over entropy in both the default and the sampled hyperparameter settings. L1 and weight clipping are significantly better than entropy under the sampled hyperparameter setting. In general, the improvement over entropy is statistically more significant for hard tasks." }, { "heading": "J ADDITIONAL METRICS", "text": "J.1 RANKING ALL REGULARIZERS\nWe compute the “average ranking” metric to compare the relative effectiveness of different regularization methods.
Note that the average ranking of different methods across a set of tasks/datasets has been adopted as a metric before, as in Ranftl et al. (2019) and Knapitsch et al. (2017). Here, we rank the performance of all the regularization methods, together with the baseline, for each algorithm and task, and present the average ranks in Table 22 and Table 23, with statistical significance tests in Tables 24 and 25. The ranks of returns among different regularizers are collected for each environment (after averaging over 5 random seeds), and then the mean rank over all environments is calculated. From Table 22 and Table 23, we observe that, except for BN and dropout in on-policy algorithms, all regularizations on average outperform the baseline. Again, L2 regularization is the strongest in most cases. Other observations similar to those from previous tables can be made. For every algorithm, the baseline ranks lower on harder tasks than on easier ones; in total, it ranks 3.50 for easier tasks and 5.25 for harder tasks. This indicates that regularization is more effective when the tasks are harder.\nJ.2 SCALED RETURNS\nMin-max scaling is a linear mapping operation that maps values ranging over [min(x), max(x)] to [0, 1], using x′ = (x − min(x)) / (max(x) − min(x)). For each environment and policy optimization algorithm (for example, PPO on Ant), we calculate a "scaled return" for each regularization method and the baseline, using the maximum mean return obtained by any regularization method (including the baseline) as max(x) and 0 as min(x), on positively clipped returns. We then average the scaled returns of the mean return over environments of a certain difficulty (easy/hard). We present the results under the default hyperparameter setting in Tables 26–28 and the results under the sampled hyperparameter settings in Tables 29–31. To analyze whether regularization significantly improves over the baseline and whether conventional regularizers significantly improve over entropy, we perform Welch's t-test on the scaled returns, using an approach identical to the one we used for z-scores. Our observations are similar to the ones we made in Section 3 and Section 4." }, { "heading": "K JUSTIFICATION OF METHODOLOGY AND STATISTICAL SIGNIFICANCE", "text": "In this section, we provide rigorous justification that (1) when the sample size is large enough (n ≥ 30), the normality assumption on the sampling distribution is not needed (loc); and (2) since we test on the entire set of environments instead of on a single environment, our sample size is large enough to satisfy the condition of Welch's t-test and provide reliable results.\nConsider two distributions with mean and variance pairs (μ_1, σ_1^2) and (μ_2, σ_2^2), respectively, where neither distribution needs to be normal, and the means and variances are unknown. Let H_0 : μ_1 = μ_2 be the null hypothesis, and H_1 : μ_1 ≠ μ_2 be the alternative hypothesis. Let (X_1, X_2, . . . , X_n) and (Y_1, Y_2, . . . , Y_n) be independent samples from the two distributions. Then, under the null hypothesis, the t statistic from Welch's t-test converges in distribution to N(0, 1) as n → ∞. We formalize the above statement below.\nTheorem K.1. Consider two distributions with mean and variance pairs (μ_1, σ_1^2) and (μ_2, σ_2^2), where the means and variances are unknown. Define H_0 : μ_1 = μ_2 and H_1 : μ_1 ≠ μ_2. Let (X_1, X_2, . . . , X_n) and (Y_1, Y_2, . . . , Y_n) be independent samples from the two distributions. Then, under H_0, the t statistic from Welch's t-test converges in distribution to the standard normal distribution as n → ∞.
That is, t_n = √n (X̄_n − Ȳ_n) / √(S_{X,n}^2 + S_{Y,n}^2) →_d N(0, 1), where X̄_n and Ȳ_n are the sample means of (X_1, X_2, . . . , X_n) and (Y_1, Y_2, . . . , Y_n), and S_{X,n}^2 and S_{Y,n}^2 are the sample variances.\nProof. We have S_{X,n}^2 →_p σ_1^2 and S_{Y,n}^2 →_p σ_2^2. Then, due to independence, (S_{X,n}^2, S_{Y,n}^2) →_p (σ_1^2, σ_2^2). By the continuous mapping theorem, √(S_{X,n}^2 + S_{Y,n}^2) →_p √(σ_1^2 + σ_2^2). The rejection/acceptance region of t_n is based on the null hypothesis. Under the null hypothesis, according to the Central Limit Theorem, √n (X̄_n − μ_1) →_d N(0, σ_1^2) and √n (Ȳ_n − μ_1) →_d N(0, σ_2^2). Then, due to independence, (√n (X̄_n − μ_1), √n (Ȳ_n − μ_1)) →_d (N(0, σ_1^2), N(0, σ_2^2)). By Slutsky's theorem, √n (X̄_n − Ȳ_n) →_d N(0, σ_1^2 + σ_2^2). Again by Slutsky's theorem, t_n = √n (X̄_n − Ȳ_n) / √(S_{X,n}^2 + S_{Y,n}^2) →_d N(0, 1).\nTherefore, if n ≥ 30 (i.e., the sample size is large), we do not need a normality assumption on our distributions to apply Welch's t-test, and we can use our t-statistic to obtain the p-value the same way as in the z-test (loc) (i.e., the p-value equals 2 · Φ(−|t|), where Φ is the cumulative distribution function (CDF) of the standard normal distribution). The t-test can also be applied when n grows much larger than 30 (lar).\nWe now show that our sample size is large enough to apply the above theorem. For each algorithm and regularizer, we calculate the average z-score, the average ranking, and the average scaled return over a set of environments and all seeds. We then test whether the performance of a regularizer is significantly different from that of the baseline. We take the average z-score metric as an example. Let E be the set of environments, with a uniform distribution over the environments, and let S be the set of seeds. For e ∈ E and s ∈ S, let f_reg(e, s) denote the z-score of a certain regularizer under environment e and seed s, and let f_baseline(e, s) denote the z-score under the baseline. We use Welch's t-test to test whether μ_reg ≠ μ_baseline, given unknown σ_reg and σ_baseline, on a policy optimization algorithm.\nFor the experiments in Section 3 (e.g., Table 2), for each policy optimization algorithm, we test the distribution of {f_reg(e, s) : e ∈ E, s ∈ S} versus {f_baseline(e, s) : e ∈ E, s ∈ S} on 9 environments (3 easy, 6 hard). We obtain 5 seeds * 3 envs = 15 data samples for "easy" environments, and 5 seeds * 6 envs = 30 data samples for "hard" environments, so that the "total" column has 15 + 30 = 45 data samples. In Appendix G, we increase the number of seeds from 5 to 10, so that we obtain 30 data samples for "easy" environments. In the last three columns, we aggregate the data from each policy optimization algorithm and test whether a regularizer performs significantly differently from the baseline across algorithms, environments, and seeds. Since there are 4 algorithms, we obtain 15 * 4 = 60 samples for "easy", 30 * 4 = 120 for "hard", and 60 + 120 = 180 for "total". The sample size is large enough and satisfies our condition for Welch's t-test.\nFor the experiments in Section 4 (e.g., Table 3), for each policy optimization algorithm, we test {f_reg(h, e, s) : h ∈ H, e ∈ E, s ∈ S} versus {f_baseline(h, e, s) : h ∈ H, e ∈ E, s ∈ S}, where H is the set of training hyperparameters. In other words, we test whether a regularizer's performance over training hyperparameters, environments, and seeds is significantly different from that of the baseline. We conducted experiments on 2 easy environments and 3 hard environments.
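Concretely, the test on the pooled z-scores can be run as follows. This is an illustrative sketch with made-up sample values; `equal_var=False` selects Welch's unequal-variance t-test in SciPy.

```python
import numpy as np
from scipy import stats

# Pool the z-scores of one regularizer and of the baseline across
# environments and seeds (e.g., 6 envs x 5 seeds = 30 samples each).
rng = np.random.default_rng(0)
z_reg = rng.normal(0.5, 1.0, size=30)        # placeholder values
z_baseline = rng.normal(-0.3, 1.2, size=30)  # placeholder values

t_stat, p_value = stats.ttest_ind(z_reg, z_baseline, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # significant if p < 0.05
```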
We obtain 5 hyperparameters * 5 seeds * 2 envs = 50 data samples for "easy" environments, 5 * 5 * 3 = 75 for "hard", and 50 + 75 = 125 samples for "total". In the last three columns, we aggregate the data from each policy optimization algorithm. We obtain 50 * 4 = 200 data samples for "easy", 75 * 4 = 300 for "hard", and 200 + 300 = 500 for "total". The sample size is large enough and satisfies our condition for Welch's t-test.\nWe further plot the distribution of our z-score metric in the quantile-quantile (Q-Q) plots in Figure 6 and Figure 7. A Q-Q plot plots the quantiles of two distributions X and Y against each other, where in our case X is normal. If the plot approximately follows the line y = x, then the two distributions have approximately the same cumulative distribution function (CDF). In our case, this means that Y is approximately normal. We observe that, empirically, the distribution of our performance metric is close to normal. As a result, the t-statistic we calculate from our samples is close to the t-distribution with parameter n, which converges to N(0, 1) quickly as n increases.\nWe have presented the mean values for our performance metrics (z-scores in Tables 2 and 3; average ranking in Tables 22 and 23; scaled return in Tables 26 and 29). Given statistical significance, whether a regularizer improves upon the baseline depends on whether its performance metric is higher than the baseline's. For example, in the "Total" column and "hard" subcolumn of Table 2, the z-score of L2 regularization is 0.58, while the z-score of the baseline is -0.27. The p-value in the corresponding entry of Table 15 is 0.00. Therefore, L2 regularization significantly improves over the baseline on hard tasks. Note that the p-value is not a standalone performance metric. It only serves as a complement to our metrics and indicates whether the performance of a regularizer differs significantly from our baseline.\nIn addition, we note that Figure 5 in Henderson et al. (2018) shows that, under the same hyperparameter configuration, two sets of 5 different runs on the HalfCheetah environment can be significantly different from each other. We find that a unique property of the HalfCheetah environment contributes to this observation. For A2C, PPO, and TRPO on HalfCheetah, there is a certain probability that the policy found is suboptimal, where the half cheetah robot runs upside-down using its head. In this case, the final return never rises above 2200. In other cases, the half cheetah robot runs using its legs, and the final return is almost always above 4000. Therefore, it is possible that in a set of 5 runs, 4 of the runs have final returns above 4000, while for another set of 5 runs, 4 of the runs have final returns below 2200. This causes a significant performance difference between the two sets of runs. However, for all other environments, the final return is approximately normally distributed with respect to seeds, instead of categorically distributed like HalfCheetah. The variance on the other environments is much smaller than that of HalfCheetah. For example, according to Table 3 of Henderson et al. (2018), PPO Walker's 95% confidence interval for the final return has a range of 800, while HalfCheetah's has a range of 2200. Thus, other environments do not exhibit as much fluctuation as HalfCheetah.
In fact, in our experiments under the default hyperparameter setting in Section 3, we do not find any regularization on any algorithm to "improve" upon HalfCheetah, according to our definition in Section 3 of when a regularizer "improves" upon the baseline. Thus, our observations do not change if we take away the HalfCheetah environment." }, { "heading": "L REGULARIZATION WITH A FIXED STRENGTH", "text": "In previous sections, we tune the strength of regularization for each algorithm and environment, as described in Appendix C. Now we restrict the regularization methods to a single strength for each algorithm, across different environments. The results are shown in Tables 32 and 33. The selected strengths are presented in Table 34. We see that L2 regularization is still generally the best performing one, but SAC is an exception, where BN is better. This can be explained by the fact that in SAC, the reward scaling coefficient is different for each environment, which potentially causes the optimal L2 and L1 strengths to vary a lot across different environments, while BN does not have a strength parameter." }, { "heading": "M DDPG RESULTS", "text": "To study the effect of regularization on off-policy algorithms, besides the SAC results, we also present results on DDPG (Lillicrap et al., 2016) in Table 35. We run DDPG on 5 MuJoCo environments: Hopper, Walker, Ant, Humanoid, and HumanoidStandup. We did not run DDPG on HalfCheetah due to its large variance. We then analyze the performance through the calculation of z-scores, and we also perform Welch's t-test. Note that entropy regularization is not applicable here because DDPG's policy network outputs a deterministic action. We obtain similar observations as we did for SAC. Notably, Dropout and Batch Normalization can be useful in DDPG, as indicated by their higher average z-scores than the baseline, which supports our hypothesis that they can be helpful in off-policy algorithms." }, { "heading": "N REGULARIZING WITH L2 AND ENTROPY", "text": "We also investigate the effect of combining L2 regularization with entropy regularization, given that applying either of them alone yields performance improvement. We take the optimal strengths of L2 regularization and entropy regularization together and compare with applying L2 regularization or entropy regularization alone. From Figure 8, we find that the performance increases for PPO HumanoidStandup, approximately stays the same for TRPO Ant, and decreases for A2C HumanoidStandup. Thus, the regularization benefits are not always additive. This phenomenon is possibly caused by the fact that the algorithms already achieve good performance using only L2 regularization or entropy regularization, and further performance improvement is restrained by the intrinsic capabilities of the algorithms." }, { "heading": "O L2 REGULARIZATION VS. FIXED WEIGHT DECAY (ADAMW)", "text": "For the Adam optimizer (Kingma & Ba, 2015), “fixed weight decay” (AdamW in Loshchilov & Hutter (2019)) differs from L2 regularization in that the gradient of (1/2)λ||θ||^2 is not computed together with the gradient of the original loss; instead, the weights are directly “decayed” at the end of the gradient update. For Adam, these two procedures are very different (see Loshchilov & Hutter (2019) for more details). In this section, we compare the effect of adding L2 regularization with that of using AdamW, with PPO on Humanoid and HumanoidStandup. The result is shown in Figure 9.
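The distinction can be made concrete with PyTorch's built-in optimizers. This is a sketch of our own; the stand-in network and the value of `lam` are placeholders.

```python
import torch

model = torch.nn.Linear(8, 2)   # stand-in for a policy network
lam = 1e-4

# torch's Adam folds weight_decay into the gradient (classic L2), so the
# penalty's gradient lam*p is rescaled by Adam's adaptive step sizes:
opt_l2 = torch.optim.Adam(model.parameters(), lr=3e-4, weight_decay=lam)

# AdamW decouples the decay: p <- p - lr*adam_update(g) - lr*lam*p,
# i.e., the weights are "decayed" directly, outside the adaptive rescaling.
opt_adamw = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=lam)
```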
Similar to L2, we briefly tune the strength of the weight decay in AdamW, and the optimal one is used. We find that while both L2 regularization and AdamW can significantly improve the performance over the baseline, the performance of AdamW tends to be slightly lower than that of L2 regularization." }, { "heading": "P ADDITIONAL TRAINING CURVES (DEFAULT HYPERPARAMETERS)", "text": "As a complement to Figure 1 in Section 3, we plot the training curves of the other five environments in Figure 10." }, { "heading": "Q TRAINING CURVES FOR HYPERPARAMETER EXPERIMENTS", "text": "In this section, we plot the full training curves of the experiments in Section 4 with five sampled hyperparameter settings for each algorithm in Figures 11 to 14. The strength of each regularization is tuned according to the ranges in Appendix C." }, { "heading": "R TRAINING CURVES FOR POLICY VS. VALUE EXPERIMENTS", "text": "We plot the training curves from our study in Section 5 on policy and value network regularization in Figures 15–18.\nFigure 15: The interaction between policy and value network regularization for A2C. The optimal policy regularization and value regularization strengths are listed in the legends. Results of regularizing both policy and value networks are obtained by combining the optimal policy and value regularization strengths." }, { "heading": "S ATARI EXPERIMENTS", "text": "We present results on 5 randomly sampled Atari environments (Asteroids, Pacman, Qbert, Roadrunner, Riverraid) in Table 36. Note that SAC is not applicable here because it requires the environment to have a continuous action space, while Atari environments have discrete action spaces. We find that L2 regularization can still significantly improve over the baseline, while L1 and weight clipping are slightly less effective. Interestingly, while BN still significantly harms performance for on-policy algorithms (A2C, TRPO, PPO), dropout can significantly outperform the baseline. We also observe that, unlike in continuous control tasks, entropy regularization can improve substantially over the baseline, perhaps due to the action space being discrete." } ]
2021
CONTINUOUS CONTROL
SP:824f8e8bc7c19ac46059d53c2ad192a2f905fd90
[ "The authors propose methodology for sharing learned differencing coefficients for estimating spatial derivatives between multiple spatio-temporal modeling tasks. They show that increased number of tasks improves learning. Additionally, the authors propose a meta-initialization procedure by which the differencing coefficients are initialized to values obtained from synthetic data. They show that this initialization procedure improves performance. " ]
Modeling the dynamics of real-world physical systems is critical for spatiotemporal prediction tasks, but challenging when data is limited. The scarcity of real-world data and the difficulty in reproducing the data distribution hinder directly applying meta-learning techniques. Although knowledge of the governing partial differential equations (PDE) of the data can be helpful for fast adaptation to few observations, it is mostly infeasible to exactly find the equation for observations in real-world physical systems. In this work, we propose a framework, physics-aware meta-learning with auxiliary tasks, whose spatial modules incorporate PDE-independent knowledge and whose temporal modules utilize the generalized features from the spatial modules to adapt to the limited data. The framework is inspired by a local conservation law expressed mathematically as a continuity equation and does not require the exact form of the governing equation to model the spatiotemporal observations. The proposed method mitigates the need for a large number of real-world tasks for meta-learning by leveraging spatial information in simulated data to meta-initialize the spatial modules. We apply the proposed framework to both synthetic and real-world spatiotemporal prediction tasks and demonstrate its superior performance with limited observations.
[]
[ { "authors": [ "Ferran Alet", "Tomás Lozano-Pérez", "Leslie P Kaelbling" ], "title": "Modular meta-learning", "venue": "arXiv preprint arXiv:1806.10166,", "year": 2018 }, { "authors": [ "Ferran Alet", "Erica Weng", "Tomás Lozano-Pérez", "Leslie Pack Kaelbling" ], "title": "Neural relational inference with fast modular meta-learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jacob Andreas", "Marcus Rohrbach", "Trevor Darrell", "Dan Klein" ], "title": "Neural module networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Marcin Andrychowicz", "Misha Denil", "Sergio Gomez", "Matthew W Hoffman", "David Pfau", "Tom Schaul", "Brendan Shillingford", "Nando De Freitas" ], "title": "Learning to learn by gradient descent by gradient descent", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Antreas Antoniou", "Harrison Edwards", "Amos Storkey" ], "title": "How to train your maml", "venue": "arXiv preprint arXiv:1810.09502,", "year": 2018 }, { "authors": [ "Yohai Bar-Sinai", "Stephan Hoyer", "Jason Hickey", "Michael P Brenner" ], "title": "Learning data-driven discretizations for partial differential equations", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Peter Battaglia", "Razvan Pascanu", "Matthew Lai", "Danilo Jimenez Rezende" ], "title": "Interaction networks for learning about objects, relations and physics", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Lex Berman" ], "title": "National aqi observations (2014-05 to 2016-12)", "venue": "Harvard Dataverse,", "year": 2017 }, { "authors": [ "Wei Cao", "Dong Wang", "Jian Li", "Hao Zhou", "Lei Li", "Yitan Li" ], "title": "Brits: Bidirectional recurrent imputation for time series", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Michael B Chang", "Tomer Ullman", "Antonio Torralba", "Joshua B Tenenbaum" ], "title": "A compositional object-based approach to learning physical dynamics", "venue": "arXiv preprint arXiv:1612.00341,", "year": 2016 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Yutian Chen", "Abram L Friesen", "Feryal Behbahani", "David Budden", "Matthew W Hoffman", "Arnaud Doucet", "Nando de Freitas" ], "title": "Modular meta-learning with shrinkage", "venue": null, "year": 1909 }, { "authors": [ "Gautier Cosne", "Guillaume Maze", "Pierre Tandeo" ], "title": "Coupling oceanic observation systems to study mesoscale ocean dynamics", "venue": "arXiv preprint arXiv:1910.08573,", "year": 2019 }, { "authors": [ "Emmanuel de Bezenac", "Arthur Pajot", "Patrick Gallinari" ], "title": "Deep learning for physical processes: Incorporating prior scientific knowledge", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Michaël 
Defferrard", "Martino Milani", "Frédérick Gusset", "Nathanaël Perraudin" ], "title": "Deepsphere: a graph-based spherical cnn", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Ján Drgona", "Lieve Helsen", "Draguna Vrabie" ], "title": "Stripping off the implementation complexity of physics-based model predictive control for buildings via deep learning", "venue": "https://www. nips. cc/,", "year": 2019 }, { "authors": [ "Shengdong Du", "Tianrui Li", "Yan Yang", "Shi-Jinn Horng" ], "title": "Deep air quality forecasting using hybrid deep learning framework", "venue": "arXiv preprint arXiv:1812.04783,", "year": 2018 }, { "authors": [ "Yan Duan", "Marcin Andrychowicz", "Bradly Stadie", "OpenAI Jonathan Ho", "Jonas Schneider", "Ilya Sutskever", "Pieter Abbeel", "Wojciech Zaremba" ], "title": "One-shot imitation learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Akira Fukui", "Dong Huk Park", "Daylen Yang", "Anna Rohrbach", "Trevor Darrell", "Marcus Rohrbach" ], "title": "Multimodal compact bilinear pooling for visual question answering and visual grounding", "venue": "arXiv preprint arXiv:1606.01847,", "year": 2016 }, { "authors": [ "Erin Grant", "Chelsea Finn", "Sergey Levine", "Trevor Darrell", "Thomas Griffiths" ], "title": "Recasting gradient-based meta-learning as hierarchical bayes", "venue": "arXiv preprint arXiv:1801.08930,", "year": 2018 }, { "authors": [ "Samuel Greydanus", "Misko Dzamba", "Jason Yosinski" ], "title": "Hamiltonian neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask r-cnn", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Gregory Koch", "Richard Zemel", "Ruslan Salakhutdinov" ], "title": "Siamese neural networks for one-shot image recognition", "venue": "In ICML deep learning workshop,", "year": 2015 }, { "authors": [ "Yijun Lin", "Nikhit Mago", "Yu Gao", "Yaguang Li", "Yao-Yi Chiang", "Cyrus Shahabi", "José Luis Ambite" ], "title": "Exploiting spatiotemporal patterns for accurate air quality forecasting using deep learning", "venue": "In Proceedings of the 26th ACM SIGSPATIAL International Conference on 
Advances in Geographic Information Systems,", "year": 2018 }, { "authors": [ "Zichao Long", "Yiping Lu", "Xianzhong Ma", "Bin Dong" ], "title": "Pde-net: Learning pdes from data", "venue": "arXiv preprint arXiv:1710.09668,", "year": 2017 }, { "authors": [ "Zichao Long", "Yiping Lu", "Bin Dong" ], "title": "Pde-net 2.0: Learning pdes from data with a numeric-symbolic hybrid deep network", "venue": "Journal of Computational Physics,", "year": 2019 }, { "authors": [ "Michael Lutter", "Christian Ritter", "Jan Peters" ], "title": "Deep lagrangian networks: Using physics as model prior for deep learning", "venue": null, "year": 1907 }, { "authors": [ "A Manepalli", "A Albert", "A Rhoades", "D Feldman", "M Prabhat" ], "title": "Emulating numeric hydroclimate models with physics-informed conditional generative adversarial networks. Environmetrics, 2019", "venue": null, "year": 2019 }, { "authors": [ "Nikhil Mishra", "Mostafa Rohaninejad", "Xi Chen", "Pieter Abbeel" ], "title": "A simple neural attentive meta-learner", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Devang K Naik", "Richard J Mammone" ], "title": "Meta-neural networks that learn by learning", "venue": "[Proceedings", "year": 1992 }, { "authors": [ "Alex Nichol", "Joshua Achiam", "John Schulman" ], "title": "On first-order meta-learning algorithms", "venue": "arXiv preprint arXiv:1803.02999,", "year": 2018 }, { "authors": [ "Evan Racah", "Christopher Beckham", "Tegan Maharaj", "Samira Kahou", "Mr. Prabhat", "Chris Pal" ], "title": "Extremeweather: A large-scale climate dataset for semi-supervised detection, localization, and understanding of extreme weather events", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Evan Racah", "Christopher Beckham", "Tegan Maharaj", "Samira Ebrahimi Kahou", "Mr Prabhat", "Chris Pal" ], "title": "Extremeweather: A large-scale climate dataset for semi-supervised detection, localization, and understanding of extreme weather events", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Aniruddh Raghu", "Maithra Raghu", "Samy Bengio", "Oriol Vinyals" ], "title": "Rapid learning or feature reuse? 
towards understanding the effectiveness of maml", "venue": null, "year": 1909 }, { "authors": [ "Maziar Raissi", "Paris Perdikaris", "George E Karniadakis" ], "title": "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations", "venue": "Journal of Computational Physics,", "year": 2019 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Andrei A Rusu", "Dushyant Rao", "Jakub Sygnowski", "Oriol Vinyals", "Razvan Pascanu", "Simon Osindero", "Raia Hadsell" ], "title": "Meta-learning with latent embedding optimization", "venue": "arXiv preprint arXiv:1807.05960,", "year": 2018 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Adam Santoro", "Sergey Bartunov", "Matthew Botvinick", "Daan Wierstra", "Timothy Lillicrap" ], "title": "Meta-learning with memory-augmented neural networks", "venue": "In Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Evolutionary principles in self-referential learning, or on learning how to learn: The meta-meta-.", "venue": null, "year": 1987 }, { "authors": [ "Sungyong Seo", "Yan Liu" ], "title": "Differentiable physics-informed graph networks", "venue": "arXiv preprint arXiv:1902.02950,", "year": 2019 }, { "authors": [ "Sungyong Seo", "Chuizheng Meng", "Yan Liu" ], "title": "Physics-aware difference graph networks for sparselyobserved dynamics", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ping-Wei Soh", "Jia-Wei Chang", "Jen-Wei Huang" ], "title": "Adaptive deep learning-based air quality prediction model using the most relevant spatial-temporal relations", "venue": "Ieee Access,", "year": 2018 }, { "authors": [ "Xianfeng Tang", "Huaxiu Yao", "Yiwei Sun", "Charu Aggarwal", "Prasenjit Mitra", "Suhang Wang" ], "title": "Joint modeling of local and global temporal dynamics for multivariate time series forecasting with missing", "venue": null, "year": 1911 }, { "authors": [ "Sebastian Thrun", "Lorien Pratt" ], "title": "Learning to learn: Introduction and overview", "venue": "In Learning to learn,", "year": 1998 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Jiawei Zhuang", "Dmitrii Kochkov", "Yohai 
Bar-Sinai", "Michael P Brenner", "Stephan Hoyer" ], "title": "Learned discretizations for passive scalar advection in a 2-d turbulent flow", "venue": "arXiv preprint arXiv:2004.05477,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning has recently shown promise to play a major role in devising new solutions to applications with natural phenomena, such as climate change (Manepalli et al., 2019; Drgona et al., 2019), ocean dynamics (Cosne et al., 2019), air quality (Soh et al., 2018; Du et al., 2018; Lin et al., 2018), and so on. Deep learning techniques inherently require a large amount of data for effective representation learning, so their performance is significantly degraded when there are only a limited number of observations. However, in many tasks in physical systems in the real-world we only have access to a limited amount of data. One example is air quality monitoring (Berman, 2017), in which the sensors are irregularly distributed over the space – many sensors are located in urban areas whereas there are much fewer sensors in vast rural areas. Another example is extreme weather modeling and forecasting, i.e., temporally short events (e.g., tropical cyclones (Racah et al., 2017b)) without sufficient observations over time. Moreover, inevitable missing values from sensors (Cao et al., 2018; Tang et al., 2019) further reduce the number of operating sensors and shorten the length of fullyobserved sequences. Thus, achieving robust performance from a few spatiotemporal observations in physical systems remains an essential but challenging problem.\nLearning on a limited amount of data from physical systems can be considered as a few shot learning. While recently many meta-learning techniques (Schmidhuber, 1987; Andrychowicz et al., 2016; Ravi & Larochelle, 2017; Santoro et al., 2016; Snell et al., 2017; Finn et al., 2017) have been developed to address this few shot learning setting, there are still some challenges for the existing meta-learning methods to be applied in modeling natural phenomena. First, it is not easy to find a set of similar meta-tasks which provide shareable latent representations needed to understand targeted observations. For instance, while image-related tasks (object detection (He et al., 2017) or visual-question-answering tasks (Andreas et al., 2016; Fukui et al., 2016)) can take advantage of an image-feature extractor pre-trained by a large set of images (Deng et al., 2009) and well-designed architecture (Simonyan & Zisserman, 2014; He et al., 2016; Sandler et al., 2018), there is no such large data corpus that is widely applicable for understanding natural phenomena. Second, unlike computer vision or natural language processing tasks where a common object (images or words) is clearly de-\nfined, it is not straightforward to find analogous objects in the spatiotemporal data. Finally, exact equations behind natural phenomena are usually unknown, leading to the difficulty in reproducing the similar dataset via simulation. For example, although there have been some works (de Bezenac et al., 2018; Lutter et al., 2019; Greydanus et al., 2019) improving data efficiency via explicitly incorporating PDEs as neural network layers when modeling spatiotemporal dynamics, it is hard to generalize for modeling different or unknown dynamics, which is ubiquitous in real-world scenario.\nIn this work, we propose physics-aware modules designed for meta-learning to tackle the few shot learning challenges in physical observations. 
One of the fundamental equations in physics describing the transport of a physical quantity over space and time is the continuity equation:\n∂ρ/∂t + ∇·J = σ, (1)\nwhere ρ is the amount of the target quantity (u) per unit volume, J is the flux of the quantity, and σ is a source or sink. This fundamental equation can be used to derive more specific transport equations such as the convection-diffusion equation, Navier-Stokes equations, and the Boltzmann transport equation. Thus, the continuity equation is the starting point for modeling spatiotemporal (conservative) observations which are accessible from sensors. Based on the form of ρ and J with respect to a particular quantity u, Eq. 1 can be generalized as:\n∂u/∂t = F(∇u, ∇²u, . . .), (2)\nwhere the function F(·) describes how the target u changes over time as a function of its spatial derivatives. Inspired by the form of Eq. 2, we propose two modules: spatial derivative modules (SDM) and time derivative modules (TDM). Since spatial derivatives such as ∇, ∇·, and ∇² are commonly used across different PDEs, the spatial modules are PDE-independent and can be meta-initialized from synthetic data. Then, the PDE-specific temporal module is trained to learn the unknown function F(·) from few observations in real-world physical systems. This approach can effectively leverage a large amount of simulated data to train the spatial modules, as the modules are PDE-independent, thus mitigating the need for a large number of real-world tasks to extract shareable features. In addition, since the spatial modules are universally used in physics equations, the representations from the modules can be conveniently integrated with data-driven models for modeling natural phenomena. Based on the modularized PDEs, we introduce a novel approach that marries physics knowledge in spatiotemporal prediction tasks with meta-learning by providing shareable modules across spatiotemporal observations in the real world.\nOur contributions are summarized below:\n• Modularized PDEs and auxiliary tasks: Inspired by the forms of PDEs in physics, we decompose PDEs into shareable (spatial) and adaptation (temporal) parts. The shareable one is PDE-independent and specified by auxiliary tasks: supervision of spatial derivatives.\n• Physics-aware meta-learning: We provide a framework for physics-aware meta-learning, which consists of PDE-independent/-specific modules. The framework is flexible enough to be applied to the modeling of different or unknown dynamics.\n• Synthetic data for shareable modules: We extract shareable parameters in the spatial modules from synthetic data, which can be generated easily from different dynamics." }, { "heading": "2 MODULARIZED PDES AND META-LEARNING", "text": "In this section, we describe how the physics equations for conserved quantities are decomposable into two parts and how the meta-learning approach tackles the task by utilizing synthetic data when the data are limited." }, { "heading": "2.1 DECOMPOSABILITY OF VARIANTS OF A CONTINUITY EQUATION", "text": "In physics, a continuity equation (Eq. 1) describes how a locally conserved quantity such as temperature, fluid density, heat, and energy is transported across space and time. This equation underlies many specific equations such as the convection-diffusion equation and Navier-Stokes equations:\nu̇ = ∇·(D∇u) − ∇·(vu) + R, (Convection-Diffusion eqn.)\nu̇ = −(u·∇)u + ν∇²u − ∇ω + g, (Incompressible Navier-Stokes eqn.)\nwhere the scalar u and vector field u are the variables of interest (e.g., temperature, flow velocity, etc.). A dot over a variable denotes a time derivative. The common feature of these equations is that their form can be digested as (Bar-Sinai et al., 2019; Zhuang et al., 2020):\nu̇ = F(u_x, u_y, u_xx, u_yy, . . .), (3)\nwhere the right-hand side denotes a function of spatial derivatives. As the time derivative can be seen as a Euler discretization (Chen et al., 2018), it is notable that the next state is a function of the current state and spatial derivatives. Thus, knowing spatial derivatives at time t is a key step for spatiotemporal prediction at time t + 1 for locally conserved quantities. According to Eq. 3, the spatial derivatives are universally used in variants of Eq. 1 and only the updating function F(·) is specifically defined for a particular equation. This property implies that PDEs for physical quantities are decomposable into two modules: spatial and temporal derivative modules." }, { "heading": "2.2 SPATIAL DERIVATIVE MODULES: PDE-INDEPENDENT MODULES", "text": "The finite difference method (FDM) is widely used to discretize a d-th order derivative as a linear combination of function values on an n-point stencil:\n∂^d u / ∂x^d ≈ Σ_{i=1}^{n} α_i u(x_i), (4)\nwhere n > d. In FDM, the computation of spatial derivatives, which are the input components of F(·) in Eq. 3, is independent of the form of the PDE. Thus, we can modularize spatial derivatives as PDE-independent modules. Modules that learn to infer the coefficients (α_i) in a data-driven manner have been proposed recently (Bar-Sinai et al., 2019; Seo et al., 2020). The data-driven coefficients are particularly useful when the discretization of the n-point stencil is irregular and low-resolution, where fixed coefficients cause substantial numerical errors." }, { "heading": "2.3 TIME DERIVATIVE MODULE: PDE-SPECIFIC MODULE", "text": "Once derivatives up to order d are modularized by learnable parameters, the approximated spatial derivatives from the spatial modules are fed into an additional module to learn the function F(·) in Eq. 3. This module is PDE-specific, as the function F describes how the spatiotemporal observations change. Since the exact form of the ground truth PDE is not given, the time derivative module is data-driven and is adapted to observations instead." }, { "heading": "2.4 META-LEARNING WITH PDE-INDEPENDENT/-SPECIFIC MODULES", "text": "Recently, Raghu et al. (2019) investigated the effectiveness of model-agnostic meta-learning (MAML, Finn et al. (2017)) and found that the outer loop of MAML is more likely to learn parameters for reusable features rather than rapid adaptation. The finding that feature reuse is the predominant reason for the efficient learning of MAML allows us to use additional information which is beneficial for learning better representations. Previously, the objective in meta-training has been considered to be matched with the one in meta-test, as the purpose of meta-learning is to learn good initial parameters applicable across similar tasks (e.g., image classification to image classification). We are now able to incorporate auxiliary tasks under a meta-learning setting to reinforce reusable features for a main task. As described in Sec. 2.1, the spatial modules are reusable across different observations, and thus, we can meta-initialize the spatial modules first with spatial derivatives provided by synthetic datasets.
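As an aside, to ground Eq. 4: the classical fixed coefficients that the learnable modules generalize can be recovered by solving a small Taylor-expansion system. The snippet below is a minimal illustration of our own (standard finite differences on a 1-D stencil), not the proposed module.

```python
import math
import numpy as np

def fd_coefficients(stencil, d):
    """Solve for alpha_i in Eq. 4 so that sum_i alpha_i * u(x_i) approximates
    the d-th derivative at x = 0, via the Taylor/Vandermonde system
    sum_i alpha_i * x_i^k = d! * delta_{k,d} for k = 0, ..., n-1."""
    n = len(stencil)
    A = np.vander(stencil, n, increasing=True).T   # A[k, i] = x_i ** k
    b = np.zeros(n)
    b[d] = math.factorial(d)
    return np.linalg.solve(A, b)

# Centered 3-point stencil, second derivative: recovers [1, -2, 1] (h = 1).
print(fd_coefficients(np.array([-1.0, 0.0, 1.0]), d=2))
```

On an irregular stencil, the same system yields node-dependent coefficients, which is precisely the quantity the data-driven spatial modules predict from the graph.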
Then, we can integrate the spatial modules with the task-specific temporal module during meta-test to help the adaptation of TDM on few observations. Since the spatial modules are trained on readily available synthetic datasets, a large number of similar tasks for meta-training is not required." }, { "heading": "3 PHYSICS-AWARE META-LEARNING WITH AUXILIARY TASKS", "text": "In this section, we develop a physics-aware meta-learning framework for the modularized PDEs. Fig. 1 describes the proposed framework and its computational process." }, { "heading": "3.1 SPATIAL DERIVATIVE MODULE", "text": "Algorithm 1 Spatial derivative module (SDM)\nInput: Graph signals u_i and edge features e_ij = x_j − x_i on G, where x_i is the coordinate of node i. Output: Spatial derivatives {û_{k,i} | i ∈ V and k ∈ K}, where K = {∇_x, ∇_y, ∇_x^2, ∇_y^2}. Require: Spatial derivative modules {φ_k | k ∈ K}\n1: for k ∈ K do 2: {a_{k,i}, b_{k,(i,j)} | i ∈ V, (i, j) ∈ E} = φ_k({u}, {e}, G) 3: for i ∈ V do 4: û_{k,i} = a_{k,i} u_i + Σ_{(j,i)∈E} b_{k,(j,i)} u_j 5: end for 6: end for\nAs we focus on the modeling and prediction of sensor-based observations, where the available data points are inherently on a spatially sparse irregular grid, we use graph networks for each module φ_k to learn the finite difference coefficients (Bar-Sinai et al., 2019). Given a graph G = (V, E) where V = {1, . . . , N} and E = {(i, j) : i, j ∈ V}, a node i denotes a physical location x_i = (x_i, y_i) where a function value u_i = u(x_i, y_i) is observed. Then, the graph signals, with positional relative displacements as edge features, are fed into the spatial modules to approximate spatial derivatives by Alg. 1. The coefficients (a_i, b_(i,j)) on each node i and edge (i, j) are the outputs of φ, and they are linearly combined with the function values u_i and u_j. K denotes a set of finite difference operators. For example, if we set K = {∇_x, ∇_y, ∇_x^2, ∇_y^2}, we have 4 modules which approximate the first- and second-order spatial derivatives in 2 dimensions, respectively." }, { "heading": "3.2 TIME DERIVATIVE MODULE", "text": "Algorithm 2 Time derivative module (TDM)\nInput: Graph signals u and approximated spatial derivatives û_k, where k ∈ K, on G. Time interval Δt. Output: Prediction of the signals at the next time step, û(t). Require: Time derivative module\n1: û_t = TDM({u_i, û_{k,i} | i ∈ V and k ∈ K}) 2: û(t) = u(t − 1) + û_{t−1} · Δt\nOnce the spatial derivatives are approximated, another learnable module is required to combine them for a target task. The form of line 2 in Alg. 2 comes from Eq. 3, and TDM is adapted to learn the unknown function F(·) in the equation. As our target task is the regression of graph signals, we use a recurrent graph network for TDM." }, { "heading": "3.3 META-LEARNING WITH AUXILIARY OBJECTIVE", "text": "As discussed in Sec. 2.1, it is important to know the spatial derivatives at time t to predict the next signals at time t + 1 for locally conserved physical quantities; however, it is impractical to access the spatial derivatives in sensor-based observations, as they are highly discretized over space. In this section, we propose a physics-aware meta-learning framework to meta-initialize a spatial module by leveraging synthetic datasets with auxiliary tasks, providing reusable features for the main task: predicting spatiotemporal observations in the real world.\nThe meta-initialization with the auxiliary tasks from synthetic datasets is particularly important. First, the spatial modules can be universal feature extractors for modeling observations following unknown physics-based PDEs.
Unlike other domains such as computer vision, it has been considered that there is no particular shareable architecture for learning spatiotemporal dynamics from physical systems. We propose that the PDE-independent spatial modules can be applicable as feature extractors across different dynamics as long as the dynamics follow a local form of conservation laws. Second, we can utilize synthetic data to meta-train the spatial modules as they are PDE-agnostic. This property allows us to utilize a large amount of synthetic data which is readily generated by numerical methods regardless of the exact form of the PDE of the targeted observations. Finally, we can provide a stronger inductive bias which is beneficial for modeling real-world observations but not available in the observations explicitly.\nAlgorithm 3 Meta-initialization with auxiliary tasks: Supervision of spatial derivatives\nInput: A set of meta-train task datasets D = {D_1, . . . , D_B} where D_b = (D_b^tr, D_b^te). D_b = {(u_i^b, e_ij^b, y_i^(a_1,b), . . . , y_i^(a_K,b)) : i ∈ V_b, (i, j) ∈ E_b}, where y_i^(a_k,·) is the k-th auxiliary task label for the i-th node, given node/edge features u^b and e^b, respectively. Learning rates α and β. Output: Meta-initialized spatial modules Φ.\n1: Initialize auxiliary modules Φ = (φ_1, . . . , φ_K) 2: while not converged do 3: for D_b in D do 4: Φ'_b = Φ − α∇_Φ Σ_{k=1}^{K} L_k^aux(D_b^tr; φ_k) 5: end for 6: Φ ← Φ − β∇_Φ Σ_{b=1}^{B} Σ_{k=1}^{K} L_k^aux(D_b^te; φ'_{b,k}) 7: end while\nAlg. 3 describes how the spatial modules are meta-initialized by MAML under the supervision of K different spatial derivatives. First, we generate values and spatial derivatives on a 2D regular grid from an analytical function. Then, we sample a finite number of points from the regular grid to represent discretized nodes and build a graph from the sampled nodes. Each graph signal and its discretization becomes the input features of a meta-train task, and the corresponding spatial derivatives are the auxiliary task labels. Fig. 2 visualizes graph signals and spatial derivatives for meta-initialization.\nOnce the spatial modules are initialized throughout meta-training, we reuse the modules for meta-test, where the temporal module (the head of the network) is adapted on few observations from real-world sensors (Alg. 4). Although the standard MAML updates the body of the network (the spatial modules) as well, we only adapt the head layer (θ), like the almost-no-inner-loop method in Raghu et al. (2019).\nAlgorithm 4 Adaptation on meta-test tasks\nInput: A set of meta-test task datasets D = {D_1, . . . , D_M} where D_m = (D_m^tr, D_m^te). Meta-initialized SDM (Φ). Learning rate α. Output: Adapted TDM θ'_m on the m-th task.\n1: Initialize temporal modules (θ_1, . . . , θ_M) 2: for D_m in D do 3: θ'_m = θ_m − α∇_{θ_m} L(D_m^tr; Φ, θ_m) 4: end for\nThe task at test time is graph signal prediction, and the temporal modules (θ) are adapted by a regression loss function L = Σ_{t=1}^{T} ||u(t) − û(t)||^2 on a length-T sequence (D_m^tr) and evaluated on the held-out (t > T) sequence (D_m^te) with the adapted parameters." }, { "heading": "4 SPATIAL DERIVATIVE MODULES: REUSABLE MODULES", "text": "We have claimed that the spatial modules provide reusable features associated with spatial derivatives such as ∇_x u, ∇_y u, and ∇_x^2 u across different dynamics or PDEs.
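Before testing this claim, the two-level update of Alg. 3 above can be made concrete. The following is a first-order sketch of our own; the module, task, and loss names are placeholders, and the released implementation may differ.

```python
import copy
import torch

def meta_initialize(sdm, tasks, aux_loss, alpha=1e-3, beta=1e-3, steps=1000):
    """First-order sketch of Alg. 3: adapt the spatial module on each task's
    support set, evaluate on the query set, and use the query-set gradients
    of the adapted copies to update the meta-parameters.
    aux_loss(model, batch) sums the K spatial-derivative regression losses."""
    meta_opt = torch.optim.SGD(sdm.parameters(), lr=beta)
    for _ in range(steps):
        meta_grads = [torch.zeros_like(p) for p in sdm.parameters()]
        for support, query in tasks:
            adapted = copy.deepcopy(sdm)                 # Phi -> Phi'_b
            inner_opt = torch.optim.SGD(adapted.parameters(), lr=alpha)
            inner_opt.zero_grad()
            aux_loss(adapted, support).backward()        # inner (support) step
            inner_opt.step()
            adapted.zero_grad()
            aux_loss(adapted, query).backward()          # outer (query) loss
            for g, p in zip(meta_grads, adapted.parameters()):
                g.add_(p.grad)                           # first-order gradient
        meta_opt.zero_grad()
        for p, g in zip(sdm.parameters(), meta_grads):
            p.grad = g.div_(len(tasks))
        meta_opt.step()
```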
While it has been shown that the data-driven approximation of spatial derivatives is more precise than that of finite difference method (Seo et al., 2020; Bar-Sinai et al., 2019), it is not guaranteed that the modules effectively provide transferrable parameters for different spatial resolution, discretization, and fluctuation of function values. We explore whether the proposed spatial derivative modules based on graph networks can be used as a feature provider for different spatial functions and discretization.\nWe perform two sets of experiments: evaluate few-shot learning performance (1) when SDM is trained from scratch; (2) when SDM is metainitialized. Fig. 2 shows how the graph signal and its discretization is changed over the different settings. If the number of nodes is large, it can provide spatially high-resolution and thus, the spatial derivatives can be more precisely ap-\nproximated. Table 1 shows the parameters we used to generate synthetic datasets. Note that metatest data is designed to evaluate interpolation/extrapolation properties. Initial frequency decides the degree of fluctuation (In Fig. 2, the middle one has higher F than that of the left one.). For each parameter combination, we generate 100 different snapshots from the following form in Long et al. (2017):\nui = ∑\n|k|,|l|≤F\nλk,l cos(kxi + lyi) + γk,l sin(kxi + lyi), λk,l, γk,l ∼ N (0, 0.02) , (5)\nwhere the index i denotes the i-th node whose coordinate is (xi, yi) in the 2D space ([0, 2π] × [0, 2π]) and k, l are randomly sampled integers. From the synthetic data, the first- and second-order derivatives are analytically given and SDM is trained to approximate them.\nThe prediction results for spatial derivatives are shown in Table 2. The results show that the proposed module (SDM) is efficiently adaptable to different configuration on few samples from metainitialized parameters compared to learning from scratch. The finding implies that the parameters for spatial derivatives can be generally applicable across different spatial resolution, discretization, and function fluctuation." }, { "heading": "5 EXPERIMENTAL EVALUATION", "text": "" }, { "heading": "5.1 PRELIMINARY: WHICH SYNTHETIC DYNAMICS NEED TO BE GENERATED?", "text": "While Table 2 demonstrates that the PDE-independent representations are reusable across different configurations, it is still an open question: which topological configuration needs to be used to construct the synthetic dynamics? According to Table 2, the most important factor affecting error is\nan initial frequency (F ), which determines min/max scales and fluctuation of function values, and it implies that the synthetic dynamics should be similarly scaled to a target dynamics. We use the same topological configuration in Table 1 to generate synthetic datasets for a task in Section 5.2 and adapted configuration for a task in Section 5.3. We describe more details in Appendix B." }, { "heading": "5.2 MULTI-STEP GRAPH SIGNAL GENERATION", "text": "Task: We adopt a set of multi-step spatiotemporal sequence generation tasks to evaluate our proposed framework. In each task, the data is a sequence of L frames, where each frame is a set of observations on N nodes in space. 
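The synthetic snapshots of Eq. (5) above and their analytic derivative labels can be generated in a few lines. The sketch below produces the x-derivative only and treats the 0.02 in N(0, 0.02) as a variance, which is an assumption on our part.

```python
import numpy as np

def sample_snapshot(xy, F=5, var=0.02, rng=None):
    """One snapshot of Eq. (5) on sampled coordinates xy with shape (N, 2),
    together with the analytic first derivative w.r.t. x as an
    auxiliary-task label."""
    rng = rng or np.random.default_rng()
    x, y = xy[:, 0], xy[:, 1]
    u = np.zeros(len(xy))
    du_dx = np.zeros(len(xy))
    for k in range(-F, F + 1):
        for l in range(-F, F + 1):
            lam, gam = rng.normal(0.0, np.sqrt(var), size=2)
            u     += lam * np.cos(k * x + l * y) + gam * np.sin(k * x + l * y)
            # d/dx: -k*lam*sin(kx+ly) + k*gam*cos(kx+ly)
            du_dx += k * (gam * np.cos(k * x + l * y) - lam * np.sin(k * x + l * y))
    return u, du_dx

# e.g. 250 nodes sampled uniformly from the domain [0, 2*pi]^2
xy = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, size=(250, 2))
u, du_dx = sample_snapshot(xy)
```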
Then, we train an auto-regressive model with the first T frames (T-shot) and generate the following L−T frames repeatedly from a given initial input (the T-th frame) to evaluate its performance.

Datasets: For all experiments, we generate meta-train tasks with the parameters described in Table 1, and the target observations are two real-world datasets: (1) AQI-CO: national air quality index (AQI) observations (Berman, 2017); (2) ExtremeWeather: the extreme weather dataset (Racah et al., 2017b). For the AQI-CO dataset, we construct 12 meta-test tasks with the carbon monoxide (CO) ppm records from the first week of each month in 2015 at land-based stations. For the extreme weather dataset, we select the top-10 extreme weather events with the longest lasting time from the year 1984 and construct a meta-test task from each event with the observed surface temperatures at randomly sampled locations. Since each event lasts fewer than 20 frames, each task has a very limited amount of available data. In both datasets, graph signals are univariate. Note that all quantities have fluidic properties such as diffusion and convection. Fig. 3 shows the spatiotemporal dynamics of the extreme weather observations and sampled points. More details are in the supplementary material.

Baselines: We evaluate the performance of a physics-aware architecture (PA-DGN) (Seo et al., 2020), which also consists of spatial derivative modules and recurrent graph networks (RGN), to see how the additional spatial information affects prediction performance for the same architecture. Note that PA-DGN has the same modules as PiMetaL; the difference is that PiMetaL utilizes meta-initialized spatial modules, whereas PA-DGN is randomly initialized and learns from scratch on meta-test tasks. Additionally, the spatial modules in PA-DGN are replaced by the finite difference method (FDM+RGN) to see if the numerical method provides better PDE-agnostic representations. The baselines and PiMetaL are trained on the meta-test support set only to demonstrate how the additional spatial information is beneficial for few-shot learning tasks.

Discussion: Table 3 shows the multi-step prediction performance of our proposed framework against the baselines on real-world datasets. Overall, PA-DGN and PiMetaL show a similar trend: the prediction error decreases as longer series become available for few-shot adaptation. There are three important findings. First, with similar expressive power in terms of the number of learnable parameters, the meta-initialized spatial modules provide high-quality representations that are easily adaptable across different spatiotemporal dynamics in the real world. This performance gap demonstrates that we can obtain a stronger inductive bias from synthetic datasets without knowing PDE-specific information. Second, the contribution of the meta-initialization is more significant when the available sequence is shorter (T = 5), which demonstrates when the meta-initialization is particularly effective. Finally, the finite difference method provides proxies of exact spatial derivatives whose representations are useful particularly when T = 5, but its performance saturates rapidly, which stems from the gap between the learnable spatial modules and fixed numerical coefficients. The results provide a new point of view on how to utilize synthetic or simulated datasets to handle challenges caused by limited data." }, { "heading": "5.3 GRAPH SIGNAL REGRESSION", "text": "Task, datasets, and baselines: Defferrard et al.
(2019) conducted a graph signal regression task: predict the temperature xt from the temperature on the previous 5 days (xt−5 : xt−1). We split the GHCN dataset1 spatially into two regions: (1) the USA (1,705 stations) and (2) Europe (EU) (703 stations) where there are many weather stations full functioning. In this task, the number of shots is defined as the number of input and output pairs to train a model. As the input length is fixed, more variants of graph neural networks are considered as baselines. We concatenate the 5- step signals and feed it into Graph convolutional networks (GCN) (Kipf & Welling, 2017), Graph attention networks (GAT) (Veličković et al., 2018), GraphSAGE (Hamilton et al., 2017), and Graph networks (GN) (Battaglia et al., 2018) to predict next signals across all nodes.\nDiscussion: Table 4 shows the results of the graph signal regression task across different baselines and the proposed method. There are two patterns in the results. First, although in general we observe an improvement in performance for all methods when we move from the 5-shot setting to the 10-shot setting, PiMetaL’s performance yields the smallest error. Second, for the EU dataset, while 5-shot seems enough to achieve stable performance, it demonstrates that the PDE-independent\n1Global Historical Climatology Network (GHCN) provided by National Oceanic and Atmospheric Administration (NOAA). https://www.ncdc.noaa.gov/ghcn-daily-description\nrepresentations make the regression error converge to a lower level. Overall, the experimental results prove that the learned spatial representations from simulated dynamics are beneficial for learning on limited data." }, { "heading": "6 RELATED WORK", "text": "Physics-informed learning Since physics-informed neural networks are introduced in Raissi et al. (2019), which find that a solution of a PDE can be discovered by neural networks, physical knowledge has been used as an inductive bias for deep neural networks. Advection-diffusion equation is incorporated with deep neural networks for sea-surface temperature dynamics (de Bezenac et al., 2018). Lutter et al. (2019); Greydanus et al. (2019) show that Lagrangian/Hamiltonian mechanics can be imposed to learn the equations of motion of a mechanical system and Seo & Liu (2019) regularizes a graph neural network with a specific physics equation. Rather than using explicitly given equations, physics-inspired inductive bias is also used for reasoning dynamics of discrete objects (Battaglia et al., 2016; Chang et al., 2016) and continuous quantities (Seo et al., 2020). Long et al. (2017; 2019) propose a numeric-symbolic hybrid deep neural network designed to discover PDEs from observed dynamic data. While there are many physics-involved works, to the best of our knowledge, we are the first to provide a framework to use the physics-inspired inductive bias under the meta-learning settings to tackle the limited data issue which is pretty common for real-world data such as extreme weather events (Racah et al., 2017b).\nMeta-learning The aim of meta-learning is to enable learning parameters which can be used for new tasks unknown at the time of learning, leading to agile models which adapt to a new task utilizing only a few samples (Schmidhuber, 1987; Naik & Mammone, 1992; Thrun & Pratt, 1998). 
Based on how the knowledge from the related tasks is used, meta-learning methods have been classified as optimization-based (Andrychowicz et al., 2016; Ravi & Larochelle, 2017; Duan et al., 2017; Finn et al., 2017; Nichol et al., 2018; Antoniou et al., 2018; Rusu et al., 2018; Grant et al., 2018), modelbased (Santoro et al., 2016; Munkhdalai & Yu, 2017; Duan et al., 2017; Mishra et al., 2018), and metric-based (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017). Recently, another branch of meta-learning has been introduced to more focus on finding a set of reusable modules as components of a solution to a new task. Alet et al. (2018; 2019) provide a framework, structured modular meta-learning, where a finite number of modules are introduced as task-independent modules and an optimal structure combining the modules is found from a limited number of data. Chen et al. (2019) introduces techniques to automatically discover task-independent/dependent modules based on Bayesian shrinkage to find more adaptable modules. To our knowledge, none of the above works provide a solution to use meta-learning for modeling physics-related spatiotemporal dynamics where it is hard to generate enough tasks for meta-initialization." }, { "heading": "7 CONCLUSION", "text": "In this paper, we propose a framework for physics-aware meta-learning with auxiliary tasks. By incorporating PDE-independent/-invariant knowledge (spatial derivatives) from simulated data, the framework provide reusable features to meta-test tasks with a limited amount of data. Experiments show that auxiliary tasks and physics-aware meta-learning help construct reusable modules that improve the performance of spatiotemporal predictions in real-world tasks where data is limited. Although introducing auxiliary tasks based on synthetic datasets improves the prediction performance, they need to be chosen and constructed manually and intuitively. Designing and identifying the most useful auxiliary tasks and data will be the focus of our future work." }, { "heading": "A TASK 1: MULTI-STEP GRAPH SIGNAL GENERATION", "text": "" }, { "heading": "A.1 META-TRAIN", "text": "Data: For all experiments, we generate the data for meta-train tasks from a sum of sinusoidal functions with different spatial frequencies (Eq. 6).\nu(x, y) = ∑\n|k|,|l|≤F\nλk,l cos(kx+ ly) + γk,l sin(kx+ ly), λk,l, γk,l ∼ N (0, 0.02) , (6)\nwhere (x, y) in the 2D space ([0, 2π] × [0, 2π]) and k, l are randomly sampled integers. Once the spatially continuous function values are generated, we uniformly sample different number of locations from all grid points as observed nodes to simulate the case where the observations are irregularly distributed in space. We then construct a k-Nearest Neighbor graph based on the Euclidean distance as the input of graph neural networks. The combination of parameters to generate the synthetic dataset is given in Table 5. We construct 100 snapshots per a combination of the parameters (N,E, F ) using a unique random seed. 75 snapshots per each combination are used for Dtr and 25 snapshots are for Dte.\nTasks: For each node, we have the first and second order derivatives. We meta-train the spatial derivative modules (Sec. 3.1) to predict the spatial derivatives by feeding node and edge features (function value at a node and relative displacement, respectively) as input." }, { "heading": "A.2 META-TEST", "text": "" }, { "heading": "A.2.1 SYNTHETIC", "text": "Data: We generate the synthetic meta-test data from Eq. 
6 but set different parameters to simulate the realistic scenario where meta-train tasks and meta-test tasks do not share the same distribution.\nTasks: We reuse the spatial modules in A.1 to evaluate how the meta-initialized parameters are easily adaptable to unseen graph signals with different spatial resolution, discretization, and the degree of function fluctuation. We use 15 snapshots for the adaptation in meta-test and 75 snapshots are used to evaluate the proposed model." }, { "heading": "A.2.2 REAL-WORLD DATASET", "text": "Data:\nAQI-CO (Berman, 2017): There are multiple pollutants in the dataset and we choose carbon monoxide (CO) ppm as a target pollutant in this paper. We select sensors located in between latitude (26, 33) and longitude (115,125) (East region of China). In this region, we sample multiple multivariate time series whose length should be larger than 12 steps (12 hours) for multiple meta-tasks. There are around 60 working sensors and the exact number of the working sensors is varying over different tasks. Fig. 4 shows the locations of selected AQI sensors.\nExtremeWeather: We select the data in the year 1984 from the extreme weather dataset in (Racah et al., 2017a). The data is an array of shape (1460, 16, 768, 1152), containing 1460 frames (4 per day, 365 days in the year). 16 channels in each frame correspond to 16 spatiotemporal variables. Each channel has a size of 768×1152 corresponding to one measurement per 25 square km on earth. For each frame, the dataset provides fewer than or equal to 15 bounding boxes, each of which labels the region affected by an extreme weather event and one of the four types of the extreme weather: (1) tropical depression, (2) tropical cyclone, (3) extratropical cyclone, (4) atmospheric river. In the single feature setting, we only utilize the channel of surface temperature (TS).\nTasks:\nAQI-CO: We select the first sequence of carbon monoxide (CO) ppm records from each month in the year 2015 at land-based stations, and set up the meta-test task on each sequence as the prediction of CO ppm. We construct a 6-NN graph based on the geodesic distances among stations.\nExtremeWeather: First, we aggregate all bounding boxes into multiple sequences. In each sequence, all bounding boxes (1) are in consecutive time steps, (2) are affected by the same type of extreme weather, and (3) have an intersection over union (IoU) ratio above 0.25 with the first bounding box in the sequence. Then we select the top-10 longest sequences. For each sequence, we consider its first bounding box A as the region affected by an extreme weather event, and extend it to a new sequence of 20 frames by cropping and appending the same region A from successive frames. For each region we uniformly sample 10% of available pixels as observed nodes to simulate irregularly spaced weather stations and build a 4-NN graph based on the Euclidean distance. Fig. 3 visualizes the first 5 frames of one extended sequence. In the single feature experiment, we set up a meta-test task on each extended sequence as the prediction of the surface temperature (TS) on all observed nodes with the initial TS given only." }, { "heading": "A.3 EXPERIMENTAL DETAILS", "text": "" }, { "heading": "A.3.1 BASELINES", "text": "PA-DGN (train from scratch) (Seo et al., 2020): For each meta-test task, initialize one PA-DGN model randomly and train it on the single task. The spatial derivative layer uses a message passing neural network (MPNN) with 2 GN blocks using 2-layer MLPs as update functions. 
The forward network part uses a recurrent graph neural network with 2 recurrent GN blocks using 2-layer GRU cells as update functions. We set its hidden dimension to 64, in which case PA-DGN has a similar number of parameters to RGN. The PA-DGN model has 384,653 learnable parameters.

A.3.2 OURS

PiMetaL: Meta-train the spatial derivative modules (SDM) with our proposed Alg. 3 on the meta-train tasks generated in A.1. Then, for each meta-test task, initialize one time derivative module (TDM) randomly; the output of SDM is fed into the TDM to train it on the single task. The architectures of SDM and TDM are identical to the spatial derivative layer and the recurrent graph network in PA-DGN, respectively." }, { "heading": "A.3.3 TRAINING SETTINGS", "text": "Training hyperparameters: For all meta-train and meta-test tasks, we use the Adam optimizer with learning rate 1e-3. In each training epoch, we sample 1 task from all available tasks.

Environments: All experiments are implemented with Python 3.6 and PyTorch 1.3.0, and are conducted with NVIDIA GTX 1080 Ti GPUs.

Runtime: The baselines RGN (train from scratch) and PA-DGN (train from scratch) finish within 30 minutes. All other baselines finish the meta-train stage in 4 hours and the meta-test stage in 2 hours. The runtime is measured in the environments described above." }, { "heading": "B TASK 2: GRAPH SIGNAL REGRESSION", "text": "" }, { "heading": "B.1 META-TRAIN", "text": "Data: For the graph signal regression task, we generate synthetic dynamics for meta-train tasks, and the synthetic data is adapted to a target dataset. Before setting the topological configuration for the synthetic dynamics, we first examine the target dataset to understand its topological properties. Based on the number of stations and the scale of records, we tune the topological configuration for the synthetic dataset. We use (N,F) = (1700, 2) for the USA records and (N,F) = (700, 1.5) for the Europe records, respectively, and 100 different initial values are generated to define different tasks. Fig. 5 visualizes how the regional stations are distributed, and Fig. 6 demonstrates how the spatial distribution of synthetic nodes and the scales of synthetic values are adapted to the corresponding target dynamics." }, { "heading": "B.2 META-TEST", "text": "Data: The GHCN-Daily summaries from land surface stations across the globe provide daily climate records from numerous sources. As the records come from 100,000 stations in 180 countries and territories, the distribution of the weather stations is spatially non-uniform. We sample sensors from two different regions, (1) the USA and (2) Europe, and construct a graph structure from the regional stations based on the k-NN algorithm (k = 4) as described in Defferrard et al. (2019). There are 1,705 and 703 fully functioning sensors in the USA and Europe, respectively. We use the 2010 records; the first few daily records are used for few-shot training (5 and 10 shots) and the next 100/150 days for validation and test. Note that the number of learnable parameters is significantly reduced compared to those of the previous task to minimize overfitting as well as to be comparable to other variants of graph neural networks." }, { "heading": "B.3 EXPERIMENTAL DETAILS", "text": "" }, { "heading": "B.3.1 BASELINES", "text": "We concatenate the 5-step signals and feed them into Graph convolutional networks (GCN) (Kipf & Welling, 2017), Graph attention networks (GAT) (Veličković et al., 2018), GraphSAGE (Hamilton et al., 2017), and Graph networks (GN) (Battaglia et al., 2018) to predict next signals across all nodes.
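As an illustration of this baseline setup, a minimal sketch combining the k-NN graph construction of B.2 with a GCN regressor is given below; the layer widths and two-layer depth are assumptions chosen for brevity (the paper's baselines are sized for comparable expressive power), and plain Euclidean distances are used throughout.

```python
import torch
from sklearn.neighbors import kneighbors_graph
from torch_geometric.nn import GCNConv

def build_knn_edge_index(coords, k=4):
    """Directed k-NN edge index over station coordinates (k=4 as in B.2;
    AQI-CO uses k=6 on geodesic distances, here Euclidean for brevity)."""
    adj = kneighbors_graph(coords, n_neighbors=k, mode="connectivity")
    row, col = adj.nonzero()
    return torch.tensor([row, col], dtype=torch.long)  # shape (2, num_edges)

class GCNBaseline(torch.nn.Module):
    """Regress the next daily signal from the concatenated previous
    5 days per node (Sec. 5.3 / B.3.1). Two layers shown for brevity."""
    def __init__(self, window=5, hidden=32):
        super().__init__()
        self.conv1 = GCNConv(window, hidden)
        self.conv2 = GCNConv(hidden, 1)

    def forward(self, x, edge_index):          # x: (num_nodes, window)
        h = torch.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index).squeeze(-1)
```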
For the baselines, we commonly consider the 3-hop neighbors of the i-th node to predict the value of the i-th node, and the number of learnable parameters is kept similar to provide similar expressive power." }, { "heading": "C SENSITIVITY ANALYSIS OF SYNTHETIC DYNAMICS", "text": "It is important to study how much the model's performance depends on the synthetic topology. In this section, we conduct an ablation study to see whether different choices of the synthetic topology affect the performance significantly. According to Table 4, the regression error converges fairly quickly from a few samples for the Europe records. Thus, we apply a different synthetic topology for the data to see if the saturated regression error changes significantly. For this ablation study, we reuse the synthetic dynamics adapted for the USA records and generate one more synthetic dynamics for spatially low-resolution cases.

Table 7 shows that the regression performance across different topologies is stable regardless of the number of shots; however, it is significantly degraded when we change the synthetic topology from the one adapted for meta-training ((N,F) = (700, 1.5)). When we increase the spatial resolution (N = 700 → 1700), the meta-initialized spatial modules are adapted to learn spatial derivatives defined at a spatially higher resolution. In that case, SDM likely assigns high weights to directly adjacent nodes as well as farther nodes (e.g., 3-hop nodes), as all neighbor nodes are strongly associated with exact spatial derivatives. On the other hand, if SDM is meta-initialized at a lower resolution (N = 700 → 128), farther nodes are underestimated too much. Thus, it is important to construct a proper topology for transferring the PDE-independent representations from synthetic dynamics to target dynamics." } ]
2020
null
SP:0d632e93235a2e5b3016ba66b339e0141d510f1f
[ "This paper benchmarks popular optimizers for training neural networks. The experiments consider all possible combinations of 3 different tuning budgets, and 4 different fixed learning rate schedules on 8 deep learning workloads for 14 optimizers. The paper highlights two main observations: 1) there is no clear dominating optimizer, and 2) selecting from a pool of optimizers with their default parameters is often as good as tuning a fixed optimizer." ]
Choosing the optimizer is considered to be among the most crucial design decisions in deep learning, and it is not an easy one. The growing literature now lists hundreds of optimization methods. In the absence of clear theoretical guidance and conclusive empirical evidence, the decision is often made based on anecdotes. In this work, we aim to replace these anecdotes, if not with a conclusive ranking, then at least with evidence-backed heuristics. To do so, we perform an extensive, standardized benchmark of more than a dozen particularly popular deep learning optimizers while giving a concise overview of the wide range of possible choices. Analyzing almost 35,000 individual runs, we contribute the following three points: (i) Optimizer performance varies greatly across tasks. (ii) We observe that evaluating multiple optimizers with default parameters works approximately as well as tuning the hyperparameters of a single, fixed optimizer. (iii) While we can not discern an optimization method clearly dominating across all tested tasks, we identify a significantly reduced subset of specific algorithms and parameter choices that generally lead to competitive results in our experiments. This subset includes popular favorites and some lesser-known contenders. We have open-sourced all our experimental results, making them directly available as challenging and well-tuned baselines.1 This allows for more meaningful comparisons when evaluating novel optimization methods without requiring any further computational efforts.
[]
[ { "authors": [ "Laurence Aitchison" ], "title": "Bayesian filtering unifies adaptive and non-adaptive neural network optimization methods", "venue": "In Advances in Neural Information Processing Systems", "year": 2020 }, { "authors": [ "Rohan Anil", "Vineet Gupta", "Tomer Koren", "Kevin Regan", "Yoram Singer" ], "title": "Second Order Optimization Made Practical", "venue": null, "year": 2002 }, { "authors": [ "Imen Ayadi", "Gabriel Turinici" ], "title": "Stochastic Runge-Kutta methods and adaptive SGD-G2 stochastic gradient descent", "venue": null, "year": 2002 }, { "authors": [ "Kiwook Bae", "Heechang Ryu", "Hayong Shin" ], "title": "Does Adam optimizer keep close to the optimal point", "venue": "arXiv preprint:", "year": 2019 }, { "authors": [ "Jiyang Bai", "Jiawei Zhang" ], "title": "BGADAM: Boosting based Genetic-Evolutionary ADAM for Convolutional Neural Network Optimization", "venue": "arXiv preprint:", "year": 2019 }, { "authors": [ "Lukas Balles", "Philipp Hennig" ], "title": "Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients", "venue": "In 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Irwan Bello", "Barret Zoph", "Vijay Vasudevan", "Quoc V. Le" ], "title": "Neural Optimizer Search with Reinforcement Learning", "venue": "In 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Jeremy Bernstein", "Yu-Xiang Wang", "Kamyar Azizzadenesheli", "Animashree Anandkumar" ], "title": "SIGNSGD: Compressed Optimisation for Non-Convex Problems", "venue": "In 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Leonard Berrada", "Andrew Zisserman", "M. Pawan Kumar" ], "title": "Training Neural Networks for and by Interpolation", "venue": "In 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Oleksandr Borysenko", "Maksym Byshkin" ], "title": "CoolMomentum: A Method for Stochastic Optimization by Langevin Dynamics with Simulated Annealing", "venue": null, "year": 2005 }, { "authors": [ "Aleksandar Botev", "Hippolyt Ritter", "David Barber" ], "title": "Practical Gauss-Newton Optimisation for Deep Learning", "venue": "In 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Léon Bottou" ], "title": "Stochastic gradient descent tricks", "venue": "In Neural networks: Tricks of the trade. Springer,", "year": 2012 }, { "authors": [ "Chia-Yu Chen", "Jungwook Choi", "Daniel Brand", "Ankur Agrawal", "Wei Zhang", "Kailash Gopalakrishnan" ], "title": "AdaComp: Adaptive Residual Gradient Compression for Data-Parallel Distributed Training", "venue": "In 32nd AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Jinghui Chen", "Dongruo Zhou", "Yiqi Tang", "Ziyan Yang", "Yuan Cao", "Quanquan Gu" ], "title": "Closing the generalization gap of adaptive gradient methods in training deep neural networks", "venue": "In 29th International Joint Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Xiangyi Chen", "Sijia Liu", "Ruoyu Sun", "Mingyi Hong" ], "title": "On the Convergence of A Class of AdamType Algorithms for Non-Convex Optimization", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yushu Chen", "Hao Jing", "Wenlai Zhao", "Zhiqiang Liu", "Ouyi Li", "Liang Qiao", "Wei Xue", "Haohuan Fu", "Guangwen Yang" ], "title": "An Adaptive Remote Stochastic Gradient Method for Training", "venue": "Neural Networks. 
arXiv preprint:", "year": 2019 }, { "authors": [ "Yushu Chen", "Hao Jing", "Wenlai Zhao", "Zhiqiang Liu", "Liang Qiao", "Wei Xue", "Haohuan Fu", "Guangwen Yang" ], "title": "NAMSG: An Efficient Method For Training", "venue": "Neural Networks. arXiv preprint:", "year": 2019 }, { "authors": [ "Ziyi Chen", "Yi Zhou" ], "title": "Momentum with Variance Reduction for Nonconvex Composition Optimization", "venue": null, "year": 2005 }, { "authors": [ "Dami Choi", "Christopher J. Shallue", "Zachary Nado", "Jaehoon Lee", "Chris J. Maddison", "George E. Dahl" ], "title": "On Empirical Comparisons of Optimizers for Deep Learning", "venue": null, "year": 1910 }, { "authors": [ "Aditya Devarakonda", "Maxim Naumov", "Michael Garland" ], "title": "AdaBatch: Adaptive Batch Sizes for Training Deep Neural Networks", "venue": "arXiv preprint:", "year": 2017 }, { "authors": [ "Jianbang Ding", "Xuancheng Ren", "Ruixuan Luo", "Xu Sun" ], "title": "An Adaptive and Momental Bound Method for Stochastic Learning", "venue": null, "year": 1910 }, { "authors": [ "Timothy Dozat" ], "title": "Incorporating Nesterov Momentum into Adam", "venue": "In 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Shiv Ram Dubey", "Soumendu Chakraborty", "Swalpa Kumar Roy", "Snehasis Mukherjee", "Satish Kumar Singh", "Bidyut Baran Chaudhuri" ], "title": "diffGrad: An Optimization Method for Convolutional Neural Networks", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization", "venue": "Journal of Machine Learning Research, JMLR,", "year": 2011 }, { "authors": [ "Abraham J. Fetterman", "Christina H. Kim", "Joshua Albrecht" ], "title": "SoftAdam: Unifying SGD and Adam for better stochastic gradient descent, 2019", "venue": null, "year": 2019 }, { "authors": [ "Boris Ginsburg", "Patrice Castonguay", "Oleksii Hrinchuk", "Oleksii Kuchaiev", "Vitaly Lavrukhin", "Ryan Leary", "Jason Li", "Huyen Nguyen", "Jonathan M. Cohen" ], "title": "Stochastic Gradient Methods with Layer-wise Adaptive Moments for Training of Deep Networks", "venue": null, "year": 1905 }, { "authors": [ "Donald Goldfarb", "Yi Ren", "Achraf Bahamou" ], "title": "Practical Quasi-Newton Methods for Training Deep Neural Networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2020 }, { "authors": [ "Ian Goodfellow", "Yoshua Bengio", "Aaron Courville" ], "title": "Deep Learning", "venue": null, "year": 2016 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour", "venue": "arXiv preprint:", "year": 2017 }, { "authors": [ "Vineet Gupta", "Tomer Koren", "Yoram Singer" ], "title": "Shampoo: Preconditioned Stochastic Tensor Optimization", "venue": "In 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": "In IEEE Computer Society Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "João F. 
Henriques", "Sébastien Ehrhardt", "Samuel Albanie", "Andrea Vedaldi" ], "title": "Small Steps and Giant Leaps: Minimal Newton Solvers for Deep Learning", "venue": "In IEEE/CVF International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Byeongho Heo", "Sanghyuk Chun", "Seong Joon Oh", "Dongyoon Han", "Sangdoo Yun", "Youngjung Uh", "Jung-Woo Ha" ], "title": "Slowing Down the Weight Norm Increase in Momentum-based Optimizers", "venue": null, "year": 2006 }, { "authors": [ "Jeremy Howard", "Sebastian Ruder" ], "title": "Universal Language Model Fine-tuning for Text Classification", "venue": "In 56th Annual Meeting of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Yifan Hu", "Siqi Zhang", "Xin Chen", "Niao He" ], "title": "Biased Stochastic First-Order Methods for Conditional Stochastic Optimization and Applications in Meta Learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2020 }, { "authors": [ "Yuzheng Hu", "Licong Lin", "Shange Tang" ], "title": "Second-order Information in First-order Optimization Methods", "venue": "arXiv preprint:", "year": 2019 }, { "authors": [ "Haiwen Huang", "Chang Wang", "Bin Dong" ], "title": "Nostalgic Adam: Weighting More of the Past Gradients When Designing the Adaptive Learning Rate", "venue": "In 28th International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Xunpeng Huang", "Hao Zhou", "Runxin Xu", "Zhe Wang", "Lei Li" ], "title": "Adaptive Gradient Methods Can Be Provably Faster than SGD after Finite Epochs", "venue": null, "year": 2006 }, { "authors": [ "Yasutoshi Ida", "Yasuhiro Fujiwara", "Sotetsu Iwamura" ], "title": "Adaptive Learning Rate via Covariance Matrix Based Preconditioning for Deep Neural Networks", "venue": "In 26th International Joint Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Wendyam Eric Lionel Ilboudo", "Taisuke Kobayashi", "Kenji Sugimoto" ], "title": "TAdam: A Robust Stochastic Gradient Optimizer", "venue": null, "year": 2003 }, { "authors": [ "Zhanhong Jiang", "Aditya Balu", "Sin Yong Tan", "Young M Lee", "Chinmay Hegde", "Soumik Sarkar" ], "title": "On Higher-order Moments in Adam", "venue": null, "year": 1910 }, { "authors": [ "Tyler B. Johnson", "Pulkit Agrawal", "Haijie Gu", "Carlos Guestrin" ], "title": "AdaScale SGD: A User-Friendly Algorithm for Distributed Training, 2020", "venue": null, "year": 2020 }, { "authors": [ "Dominic Kafka", "Daniel Wilke" ], "title": "Gradient-only line searches: An Alternative to Probabilistic Line Searches", "venue": "arXiv preprint:", "year": 2019 }, { "authors": [ "Chad Kelterborn", "Marcin Mazur", "Bogdan V. Petrenko" ], "title": "Gravilon: Applications of a New Gradient Descent Method to Machine Learning", "venue": null, "year": 2008 }, { "authors": [ "Nitish Shirish Keskar", "Richard Socher" ], "title": "Improving Generalization Performance by Switching from Adam to SGD", "venue": "arXiv preprint:", "year": 2017 }, { "authors": [ "Mohammad Emtiyaz Khan", "Didrik Nielsen", "Voot Tangkaratt", "Wu Lin", "Yarin Gal", "Akash Srivastava" ], "title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam", "venue": "In 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Diederik P. 
Kingma", "Jimmy Ba" ], "title": "Adam: A Method for Stochastic Optimization", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Kfir Yehuda Levy", "Alp Yurtsever", "Volkan Cevher" ], "title": "Online Adaptive Methods, Universality and Acceleration", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Wenjie Li", "Zhaoyang Zhang", "Xinjiang Wang", "Ping Luo" ], "title": "AdaX: Adaptive Gradient Descent with Exponential Long Term Memory", "venue": "arXiv preprint: 2004.09740,", "year": 2020 }, { "authors": [ "Zhize Li", "Hongyan Bao", "Xiangliang Zhang", "Peter Richtárik" ], "title": "PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization", "venue": "arXiv preprint: 2008.10898,", "year": 2020 }, { "authors": [ "Liang Liu", "Xiaopeng Luo" ], "title": "A New Accelerated Stochastic Gradient Method with Momentum", "venue": null, "year": 2006 }, { "authors": [ "Liyuan Liu", "Haoming Jiang", "Pengcheng He", "Weizhu Chen", "Xiaodong Liu", "Jianfeng Gao", "Jiawei Han" ], "title": "On the variance of the adaptive learning rate and beyond", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Pietro Longhi" ], "title": "Wall crossing invariants from spectral networks", "venue": "Annales Henri Poincaré,", "year": 2017 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "SGDR: Stochastic Gradient Descent with Warm Restarts", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Liangchen Luo", "Yuanhao Xiong", "Yan Liu", "Xu Sun" ], "title": "Adaptive Gradient Methods with Dynamic Bound of Learning Rate", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jerry Ma", "Denis Yarats" ], "title": "Quasi-hyperbolic momentum and Adam for deep learning", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Maren Mahsereci" ], "title": "Probabilistic Approaches to Stochastic Optimization", "venue": "Ph.D. 
Thesis, University of Tuebingen,", "year": 2018 }, { "authors": [ "Itzik Malkiel", "Lior Wolf" ], "title": "MTAdam: Automatic Balancing of Multiple Training Loss Terms", "venue": null, "year": 2006 }, { "authors": [ "James Martens", "Roger Grosse" ], "title": "Optimizing Neural Networks with Kronecker-Factored Approximate Curvature", "venue": "In 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Mahesh Chandra Mukkamala", "Matthias Hein" ], "title": "Variants of RMSProp and Adagrad with Logarithmic Regret Bounds", "venue": "In 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Maximus Mutschler", "Andreas Zell" ], "title": "Parabolic Approximation Line Search for DNNs", "venue": "In Advances in Neural Information Processing Systems", "year": 2020 }, { "authors": [ "Parvin Nazari", "Davoud Ataee Tarzanagh", "George Michailidis" ], "title": "DADAM: A Consensus-based Distributed Adaptive Gradient Method for Online Optimization", "venue": null, "year": 1901 }, { "authors": [ "Yurii Nesterov" ], "title": "A method for solving the convex programming problem with convergence rate O(1/kˆ2)", "venue": "Soviet Mathematics Doklady,", "year": 1983 }, { "authors": [ "Francesco Orabona", "Dávid Pál" ], "title": "Scale-Free Algorithms for Online Linear Optimization", "venue": "In Algorithmic Learning Theory - 26th International Conference,", "year": 2015 }, { "authors": [ "Antonio Orvieto", "Jonas Köhler", "Aurélien Lucchi" ], "title": "The Role of Memory in Stochastic Optimization", "venue": "In 35th Conference on Uncertainty in Artificial Intelligence,", "year": 2019 }, { "authors": [ "B.T. Polyak" ], "title": "Some methods of speeding up the convergence of iteration methods", "venue": "USSR Computational Mathematics and Mathematical Physics,", "year": 1964 }, { "authors": [ "Konpat Preechakul", "Boonserm Kijsirikul" ], "title": "CProp: Adaptive Learning Rate Scaling from Past Gradient Conformity", "venue": "arXiv preprint:", "year": 2019 }, { "authors": [ "Sashank J. Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the Convergence of Adam and Beyond", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A Stochastic Approximation Method", "venue": "The Annals of Mathematical Statistics,", "year": 1951 }, { "authors": [ "Michal Rolínek", "Georg Martius. L" ], "title": "Practical loss-based stepsize adaptation for deep learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Arnold Salas", "Samuel Kessler", "Stefan Zohren", "Stephen Roberts" ], "title": "Practical Bayesian Learning of Neural Networks via Adaptive Subgradient Methods", "venue": "arXiv preprint:", "year": 2018 }, { "authors": [ "Pedro Savarese", "David McAllester", "Sudarshan Babu", "Michael Maire" ], "title": "Domain-independent Dominance of Adaptive Methods", "venue": "arXiv preprint:", "year": 2019 }, { "authors": [ "Tom Schaul", "Yann LeCun" ], "title": "Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients", "venue": "In 1st International Conference on Learning Representations,", "year": 2013 }, { "authors": [ "Tom Schaul", "Sixin Zhang", "Yann LeCun" ], "title": "No more pesky learning rates", "venue": "In 30th International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Fanhua Shang", "Kaiwen Zhou", "Hongying Liu", "James Cheng", "Ivor W. 
Tsang", "Lijun Zhang", "Dacheng Tao", "Licheng Jiao" ], "title": "VR-SGD: A Simple Stochastic Variance Reduction Method for Machine Learning", "venue": "IEEE Trans. Knowl. Data Eng.,", "year": 2020 }, { "authors": [ "Leslie N. Smith" ], "title": "Cyclical Learning Rates for Training Neural Networks", "venue": "In IEEE Winter Conference on Applications of Computer Vision,", "year": 2017 }, { "authors": [ "Leslie N. Smith", "Nicholay Topin" ], "title": "Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates", "venue": "arXiv preprint:", "year": 2017 }, { "authors": [ "Huikang Sun", "Lize Gu", "Bin Sun" ], "title": "Adathm: Adaptive Gradient Method Based on Estimates of Third-Order Moments", "venue": "In 4th IEEE International Conference on Data Science in Cyberspace,", "year": 2019 }, { "authors": [ "Wonyong Sung", "Iksoo Choi", "Jinhwan Park", "Seokhyun Choi", "Sungho Shin" ], "title": "S-SGD: Symmetrical Stochastic Gradient Descent with Weight Noise Injection for Reaching Flat Minima", "venue": null, "year": 2009 }, { "authors": [ "Conghui Tan", "Shiqian Ma", "Yu-Hong Dai", "Yuqiu Qian" ], "title": "Barzilai-Borwein Step Size for Stochastic Gradient Descent", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Zeyi Tao", "Qi Xia", "Qun Li" ], "title": "A new perspective in understanding of Adam-Type algorithms and beyond, 2019", "venue": null, "year": 2019 }, { "authors": [ "Brian Teixeira", "Birgi Tamersoy", "Vivek Singh", "Ankur Kapoor" ], "title": "Adaloss: Adaptive Loss Function for Landmark Localization", "venue": "arXiv preprint:", "year": 2019 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5—RMSProp: Divide the gradient by a running average of its recent magnitude", "venue": null, "year": 2012 }, { "authors": [ "Qianqian Tong", "Guannan Liang", "Jinbo Bi" ], "title": "Calibrating the Adaptive Learning Rate to Improve Convergence of ADAM", "venue": "arXiv preprint:", "year": 2019 }, { "authors": [ "Phuong Thi Tran", "Le Trieu Phong" ], "title": "On the Convergence Proof of AMSGrad and a New Version", "venue": "IEEE Access,", "year": 2019 }, { "authors": [ "Rasul Tutunov", "Minne Li", "Alexander I. Cowen-Rivers", "Jun Wang", "Haitham Bou-Ammar" ], "title": "Compositional ADAM: An Adaptive Compositional Solver", "venue": null, "year": 2002 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention Is All You Need", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Sharan Vaswani", "Aaron Mishkin", "Issam H. Laradji", "Mark Schmidt", "Gauthier Gidel", "Simon Lacoste-Julien" ], "title": "Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Thijs Vogels", "Sai Praneeth Karimireddy", "Martin Jaggi" ], "title": "PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Bao Wang", "Tan M. Nguyen", "Andrea L. Bertozzi", "Richard G. Baraniuk", "Stanley J. 
Osher" ], "title": "Scheduled restart momentum for accelerated stochastic gradient descent", "venue": "arXiv preprint: 2002.10583,", "year": 2020 }, { "authors": [ "Dong Wang", "Yicheng Liu", "Wenwo Tang", "Fanhua Shang", "Hongying Liu", "Qigong Sun", "Licheng Jiao" ], "title": "signADAM++: Learning Confidences for Deep Neural Networks", "venue": "In International Conference on Data Mining Workshops,", "year": 2019 }, { "authors": [ "Guanghui Wang", "Shiyin Lu", "Quan Cheng", "Weiwei Tu", "Lijun Zhang" ], "title": "SAdam: A Variant of Adam for Strongly Convex Functions", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jiaxuan Wang", "Jenna Wiens" ], "title": "AdaSGD: Bridging the gap between SGD and Adam", "venue": null, "year": 2006 }, { "authors": [ "Shipeng Wang", "Jian Sun", "Zongben Xu" ], "title": "HyperAdam: A Learnable Task-Adaptive Adam for Network Training", "venue": "In 33rd AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Xiaoxia Wu", "Rachel Ward", "Léon Bottou" ], "title": "WNGrad: Learn the Learning Rate in Gradient Descent", "venue": "arXiv preprint:", "year": 2018 }, { "authors": [ "Cong Xie", "Oluwasanmi Koyejo", "Indranil Gupta", "Haibin Lin" ], "title": "Local AdaAlter: CommunicationEfficient Stochastic Gradient Descent with Adaptive Learning Rates", "venue": null, "year": 1911 }, { "authors": [ "Chen Xing", "Devansh Arpit", "Christos Tsirigotis", "Yoshua Bengio" ], "title": "A Walk with SGD", "venue": "arXiv preprint:", "year": 2018 }, { "authors": [ "Yangyang Xu" ], "title": "Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization", "venue": null, "year": 2006 }, { "authors": [ "Minghan Yang", "Dong Xu", "Yongfeng Li", "Zaiwen Wen", "Mengyun Chen" ], "title": "Structured Stochastic Quasi-Newton Methods for Large-Scale Optimization Problems", "venue": null, "year": 2006 }, { "authors": [ "Zhewei Yao", "Amir Gholami", "Sheng Shen", "Kurt Keutzer", "Michael W. Mahoney" ], "title": "ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning", "venue": null, "year": 2006 }, { "authors": [ "Yang You", "Jing Li", "Sashank Reddi", "Jonathan Hseu", "Sanjiv Kumar", "Srinadh Bhojanapalli", "Xiaodan Song", "James Demmel", "Kurt Keutzer", "Cho-Jui Hsieh" ], "title": "Large Batch Optimization for Deep Learning: Training BERT in 76 minutes", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jihun Yun", "Aurelie C. Lozano", "Eunho Yang" ], "title": "Stochastic Gradient Methods with Block Diagonal Matrix Adaptation", "venue": "arXiv preprint:", "year": 2019 }, { "authors": [ "Manzil Zaheer", "Sashank J. Reddi", "Devendra Singh Sachan", "Satyen Kale", "Sanjiv Kumar" ], "title": "Adaptive Methods for Nonconvex Optimization", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Matthew D. Zeiler" ], "title": "ADADELTA: An Adaptive Learning Rate Method", "venue": "arXiv preprint:", "year": 2012 }, { "authors": [ "Guodong Zhang", "Shengyang Sun", "David Duvenaud", "Roger Grosse" ], "title": "Noisy Natural Gradient as Variational Inference", "venue": "In 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jian Zhang", "Ioannis Mitliagkas" ], "title": "YellowFin and the Art of Momentum Tuning", "venue": "In Machine Learning and Systems,", "year": 2019 }, { "authors": [ "Jiawei Zhang", "Fisher B. 
Gouza" ], "title": "GADAM: Genetic-Evolutionary ADAM for Deep Neural Network Optimization", "venue": "arXiv preprint:", "year": 2018 }, { "authors": [ "Jingzhao Zhang", "Sai Praneeth Karimireddy", "Andreas Veit", "Seungyeon Kim", "Sashank J Reddi", "Sanjiv Kumar", "Suvrit Sra" ], "title": "Why are adaptive methods good for attention models", "venue": "In Advances in Neural Information Processing Systems", "year": 2020 }, { "authors": [ "Michael R. Zhang", "James Lucas", "Geoffrey Hinton", "Jimmy Ba" ], "title": "Lookahead Optimizer: k steps forward, 1 step back", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Zijun Zhang", "Lin Ma", "Zongpeng Li", "Chuan Wu" ], "title": "Normalized Direction-preserving Adam", "venue": "arXiv preprint:", "year": 2017 }, { "authors": [ "Shen-Yi Zhao", "Yin-Peng Xie", "Wu-Jun Li" ], "title": "Stochastic Normalized Gradient Descent with Momentum for Large Batch Training", "venue": null, "year": 2007 }, { "authors": [ "Bingxin Zhou", "Xuebin Zheng", "Junbin Gao" ], "title": "ADAMT: A Stochastic Optimization with Trend Correction Scheme", "venue": null, "year": 2001 }, { "authors": [ "Zhiming Zhou", "Qingru Zhang", "Guansong Lu", "Hongwei Wang", "Weinan Zhang", "Yong Yu" ], "title": "AdaShift: Decorrelation and Convergence of Adaptive Learning Rate Methods", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Liu Ziyin", "Zhikang T. Wang", "Masahito Ueda" ], "title": "LaProp: a Better Way to Combine Momentum with Adaptive Gradient", "venue": null, "year": 2002 } ]
[ { "heading": "1 INTRODUCTION", "text": "Large-scale stochastic optimization drives a wide variety of machine learning tasks. Because choosing the right optimization algorithm and effectively tuning its hyperparameters heavily influences the training speed and final performance of the learned model, doing so is an important, every-day challenge to practitioners. Hence, stochastic optimization methods have been a focal point of research (cf. Figure 1), engendering an ever-growing list of algorithms, many of them specifically targeted towards deep learning. The hypothetical machine learning practitioner who is able to keep up with the literature now has the choice among hundreds of methods (cf. Table 2 in the appendix)—each with their own set of tunable hyperparameters—when deciding how to train their model.\nThere is limited theoretical analysis that would clearly favor one of these choices over the others. Some authors have offered empirical comparisons on comparably small sets of popular methods (e.g. Wilson et al., 2017; Choi et al., 2019; Sivaprasad et al., 2020); but for most algorithms, the only formal empirical evaluation is offered by the original work introducing the method. Many practitioners and researchers, meanwhile, rely on personal and anecdotal experience, and informal discussion on social media or with colleagues. The result is an often unclear, perennially changing “state of the art” occasionally driven by hype. The key obstacle for an objective benchmark is the combinatorial cost of such an endeavor posed by comparing a large number of methods on a large number of problems, with the high resource and time cost of tuning each method’s parameters and repeating each (stochastic) experiment repeatedly for fidelity.\nOffering our best attempt to construct such a comparison, we conduct a large-scale benchmark of optimizers to further the debate about deep learning optimizers, and to help understand how the choice of optimization method and hyperparameters influences the training performance. Specifically,\n1https://github.com/AnonSubmitter3/Submission543\nwe examine whether recently proposed methods show an improved performance compared to more established methods such as SGD or ADAM. Additionally, we are interested in assessing whether optimization methods with well-working default hyperparameters exist that are able to keep up with tuned optimization methods. To this end, we evaluate more than a dozen optimization algorithms, largely selected for their perceived popularity, on a range of representative deep learning problems (see Figure 4) drawing conclusions from tens of thousands of individual training runs.\nRight up front, we want to state clearly that it is impossible to include all optimizers (cf. Table 2 in the appendix), and to satisfy any and all expectations readers may have on tuning and initialization procedures, or the choice of benchmark problems—not least because everyone has different expectations in this regard. In our personal opinion, what is needed is an empirical comparison by a third party not involved in the original works. As a model reader of our work, we assume a careful practitioner who does not have access to near-limitless resources, nor to a broad range of personal experiences. 
As such, the core contributions (in order of appearance, not importance) of our work are:\nA concise summary of optimization algorithms and schedules A partly automated, mostly manual literature review provides a compact but extensive list of recent advances in stochastic optimization. We identify more than a hundred optimization algorithms (cf. Table 2 in the appendix) and more than 20 families of hyperparameter schedules (cf. Table 3 in the appendix) published at least as pre-prints.\nAn extensive optimizer benchmark on deep learning tasks We conduct a large-scale optimizer benchmark, specifically focusing on optimization problems arising in deep learning. We evaluate 14 optimizers on eight deep learning problems using four different schedules, tuning over dozens of hyperparameter settings, to our knowledge, this is the most comprehensive empirical evaluation of deep learning optimizers to date (cf. Section 1.1 on related work).\nAn analysis of thousands of optimization runs Our empirical experiments indicate that an optimizer’s performance highly depends on the test problem (see Figure 4). But some high-level trends emerge, too: (1) Evaluating multiple optimizers with default hyperparameters works approximately as well as tuning the hyperparameters for a fixed optimizer. (2) Using an additional untuned learning rate schedule helps on average, but its effect varies greatly depending on the optimizer and the test problem. (3) While there is no optimizer that clearly dominates across all tested workloads, some of the algorithms we tested exhibited highly variable performance. Others demonstrated decent performance consistently. We deliberately refrain from recommending a single one among them, because we could not find a clear winner with statistical confidence.\nAn open-source baseline for future optimizer benchmarks Our results are accessible online in an open and easily accessible form (see footnote on Page 1). These results can thus be used as competitive and well-tuned baselines for future benchmarks of new algorithms, drastically reducing the amount of computational budget required for a meaningful optimizer comparison. Our baselines can easily be expanded, and we encourage others to contribute to this collection.\nThe high-level result of our benchmark is, perhaps expectedly, not a clear winner. Instead, our comparison shows that, while some optimizers are frequently decent, they also generally perform similarly, switching their relative positions in the ranking which can partially be explained by the\nNo Free Lunch Theorem (Wolpert & Macready, 1997). A key insight of our comparison is that a practitioner with a new deep learning task can expect to do about equally well by taking almost any method from our benchmark and tuning it, as they would by investing the same computational resources into running a set of optimizers with their default settings and picking the winner.\nPossibly the most important takeaway from our comparison is that “there are now enough optimizers.” Methods research in stochastic optimization should focus on significant (conceptual, functional, performance) improvements—such as methods specifically suited for certain problem types, innerloop parameter tuning or structurally novel methods. We make this claim not to discourage research but, quite on the contrary, to offer a motivation for more meaningful, non-incremental research." 
}, { "heading": "1.1 RELATED WORK", "text": "Following the rapid increase in publications on optimizers, benchmarking these methods for the application in deep learning has only recently attracted significant interest. Schneider et al. (2019) introduced a benchmarking framework called DEEPOBS, which includes a wide range of realistic deep learning test problems together with standardized procedures for evaluating optimizers. Metz et al. (2020) presented TASKSET, another collection of optimization problems focusing on smaller but many more test problems. For the empirical analysis presented here, we use DEEPOBS as it provides optimization problems closer to real-world deep learning tasks. In contrast to our evaluation of existing methods, TASKSET and its analysis focuses on meta-learning new algorithms or hyperparameters.\nBoth Choi et al. (2019) and Sivaprasad et al. (2020) analyzed specific aspects of benchmarking process. Sivaprasad et al. (2020) used DEEPOBS to illustrate that the relative performance of an optimizer depends significantly on the used hyperparameter tuning budget. The analysis by Choi et al. (2019) supports this point, stating that “the hyperparameter search space may be the single most important factor explaining the rankings.” They further stress a hierarchy among optimizers, demonstrating that, given sufficient hyperparameter tuning, more general optimizers can never be outperformed by special cases. In their study, however, they manually chose a hyperparameter search space per optimizer and test problem basing it either on prior published results, prior experiences, or pre-tuning trials. Here we instead aim to identify well-performing optimizers in the case of a less extensive tuning budget and especially when there is no prior knowledge about well-working hyperparameter values for each specific test problem. We further elaborate on the influence of our chosen hyperparameter search strategy in Section 4 discussing the limitations of our empirical study.\nOur work is also related to empirical generalization studies of adaptive methods, such as that of Wilson et al. (2017) which sparked an extensive discussion whether adaptive methods (e.g. ADAM) tend to generalize worse than standard first-order methods (i.e. SGD)." }, { "heading": "2 BENCHMARKING PROCESS", "text": "Any benchmarking effort requires tricky decisions on the experimental setup that influence the result. Evaluating on a specific task or picking a certain tuning budget, for example, may favor or disadvantage certain algorithms (Sivaprasad et al., 2020). It is impossible to avoid these decisions or to cover all possible choices. Aiming for generality, we evaluate the performance on eight diverse real-world deep learning problems from different disciplines (Section 2.1). From a collection of more than a hundred deep learning optimizers (Table 2 in the appendix) we select 14 of the most popular and most promising choices (cf. Figure 1) for this benchmark (Section 2.2). For each test problem and optimizer we evaluate all possible combinations of three different tuning budgets (Section 2.3) and four selected learning rate schedules (Section 2.4), thus covering the following combinatorial space:\nProblem P1 P2 . . .\nP8 8 × Optimizer AMSBound AMSGrad . . . SGD 14 × Tuning one-shot small budget large budget 3 × Schedule constant cosine decay cosine warm restarts trapezoidal 4 .\nCombining those options results in 1,344 possible configurations and roughly 35,000 individual runs." 
}, { "heading": "2.1 TEST PROBLEMS", "text": "We consider the eight optimization tasks summarized in Table 1, available as the “small” (P1–P4) and “large” (P5–P8) problem sets, respectively, together forming the default collection of DEEPOBS. A detailed description of these problems, including architectures, training parameters, etc., can be found in the work of Schneider et al. (2019).2 DEEPOBS’ test problems provide several performance metrics, including the training and test loss, the validation accuracy, etc. While these are all relevant, any comparative evaluation of optimizers requires picking only a few, if not just one particular performance metric. For our analysis (Section 3), we focus on the final test accuracy (or the final test loss, if no accuracy is defined for the problem). This metric captures, for example, the optimizer’s ability to generalize and is thus highly relevant for practical use. Our publicly released results include all metrics for completeness. An example of training loss performance is shown in Figure 16 in the appendix. Accordingly, the tuning (Section 2.3) is done with respect to the validation metric. We discuss possible limitations resulting from these choices in Section 4." }, { "heading": "2.2 OPTIMIZER SELECTION", "text": "In Table 2 in the appendix we collect over a hundred optimizers introduced for, suggested for, or used in deep learning. This list was manually and incrementally collected by multiple researchers trying to keep up with the field over recent years. It is thus necessarily incomplete, although it may well represent one of the most exhaustive of such collections. Even this incomplete list, though, contains too many entries for a meaningful benchmark with the degrees of freedom collected above. This is a serious problem for research: Even an author of a new optimizer, let alone a practitioner, could not possibly be expected to compare their work with every possible competing method.\nWe thus selected a subset of 14 optimizers, which we consider to be currently the most popular choices in the community (see Table 4 in the appendix). These do not necessarily reflect the “best” algorithms, but are either commonly used by practitioners and researchers, or have recently generated enough attention to garner interest. Our selection is focused on first-order optimization methods, both due to their prevalence for non-convex continuous optimization problems in deep learning as well as to simplify the comparison. Whether there is a significant difference between these optimizers or whether they are inherently redundant is one of the questions this work investigates.\nWith our list, we tried to focus on optimization algorithms over techniques, although we acknowledge that the line is blurry. Techniques such as averaging weights (e.g., Izmailov et al., 2018) or ensemble methods (e.g., Garipov et al., 2018) have been shown to be simple but effective at improving optimization performance. Those methods, however, can be applied to all methods in our list, similar to regularization techniques, learning rate schedules, or tuning methods, and we have therefore decided to omit them from Table 2.\n2All experiments were performed using version 1.2.0-beta of DEEPOBS and TensorFlow version 1.15 (Abadi et al., 2015)." }, { "heading": "2.3 TUNING", "text": "Budget: Optimization methods for deep learning regularly expose hyperparameters to the user.
The user sets them either by relying on the default suggestion, by using experience from previous experiments, or by running additional tuning runs to find the best-performing setting. All optimizers in our benchmark have tunable hyperparameters, and we consider three different tuning budgets.\nThe first budget consists of just a single run. This one-shot budget uses the default values proposed by the original authors, where available (Table 4 in the appendix lists the default parameters). If an optimizer performs well in this setting, this has great practical value, as it drastically reduces the computational resources required for training. The other budgets consist of 25 and 50 tuning runs for what we call the small and large budget settings, respectively.\nWe only use a single seed for tuning, then repeat the best setting 10 times using different seeds. This allows us to report standard deviations in addition to means, assessing stability. Proceeding in this way has the “feature” that our tuning process can sometimes pick “lucky” seeds, which do not perform as well when averaging over multiple runs. This is arguably a good reflection of reality. Stable optimizers should be preferred in practice, and this is thus reflected in our benchmark. See Appendix C for further analysis. By contrast, using all 10 random seeds for tuning as well would drastically increase the cost, not only for this benchmark, rendering it practically infeasible, but also for any user adopting this approach in practice. Appendix D explores this aspect further: if anything, re-tuning would further broaden the distribution of results.\nTuning method: We tune parameters by random search, for both the small and the large budget. Random search is a common choice in practice due to its efficiency advantage over grid search (Bergstra & Bengio, 2012) and its ease of implementation and parallelization compared to Bayesian optimization (see also Section 4). A minor complication of random search is that the sampling distribution affects the optimizer’s performance. One can think of the sampling distribution as a prior over good parameter settings, and bad priors consequently ruin performance. We followed the mathematical bounds and intuition provided by the optimizers’ authors for the relevant hyperparameters. The resulting sampling distributions can be found in Table 4 in the appendix. Where the cited work provided no prior knowledge, we chose similar distributions for similar hyperparameters across different optimizers. Even though a hyperparameter might have the same name across different optimization algorithms (e.g. the learning rate α), its appropriate search space can differ between optimizers. Without grounded heuristics on how the hyperparameters differ between optimizers, however, the most straightforward approach for any user is to use the same search space.
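A minimal sketch of this tuning procedure is given below; `run_trial` is a hypothetical stand-in for one full training-plus-validation run, and the search bounds shown are examples rather than the per-optimizer distributions of Table 4:

```python
import numpy as np

def sample_log_uniform(low, high, rng):
    # Log-uniform sampling, the usual prior for scale parameters such as
    # the learning rate.
    return float(np.exp(rng.uniform(np.log(low), np.log(high))))

def random_search(run_trial, budget, rng):
    # `budget` is 25 (small) or 50 (large) trials; a single seed is used
    # for all tuning runs, as described in the paragraphs above.
    best_params, best_score = None, -np.inf
    for _ in range(budget):
        params = {"learning_rate": sample_log_uniform(1e-5, 1e1, rng)}
        score = run_trial(params, seed=42)  # validation metric of one run
        if score > best_score:
            best_params, best_score = params, score
    return best_params

# Dummy objective only, to show the call pattern:
best = random_search(lambda p, seed: -p["learning_rate"],
                     budget=25, rng=np.random.default_rng(0))
```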
What should be considered a hyperparameter? There is a fuzzy boundary between (tunable) hyperparameters and (fixed) design parameters. A recently contentious example is the ε in adaptive learning rate methods like ADAM. It was originally introduced as a safeguard against division by zero, but has recently been re-interpreted as a problem-dependent hyperparameter choice (see Choi et al. (2019) for a discussion). Under this view, one can actually consider several separate optimizers called ADAM: From an easy-to-tune but potentially limited ADAMα, only tuning the learning rate, to the tricky-to-tune but all-powerful ADAMα,β1,β2,ε, which subsumes SGD as a corner case in its hyperparameter space. In our benchmark, we include ADAMα,β1,β2 as a popular choice. Although these variants share the same update rule, we consider them to be different optimizers." }, { "heading": "2.4 SCHEDULES", "text": "The literature on learning rate schedules is now nearly as extensive as that on optimizers (cf. Table 3 in the appendix). In theory, schedules can be applied to all hyperparameters of an optimization algorithm, but to keep our configuration space feasible, we only apply schedules to the learning rate, by far the most popular practical choice (Goodfellow et al., 2016; Zhang et al., 2020). We choose four different learning rate schedules, trying to cover all major types of schedules (see Appendix E):\n• A constant learning rate schedule;\n• A cosine decay (Loshchilov & Hutter, 2017) as an example of a smooth decay;\n• A cosine with warm restarts schedule (Loshchilov & Hutter, 2017) as a cyclical schedule;\n• A trapezoidal schedule (Xing et al., 2018) from the family of warm-up schedules (Goyal et al., 2017)." }, { "heading": "3 RESULTS", "text": "How well do optimizers work out-of-the-box? By comparing each optimizer’s one-shot results against the tuned versions of all 14 optimizers, we can construct a 14 × 14 matrix of performance gains. Figure 2 illustrates this on five test problems, showing improvements with a positive sign and a green cell. Detailed plots for all problems are in Figures 9 and 10 in the appendix. For example, the bottom left cell of the largest matrix in Figure 2 shows that AMSBOUND (1) tuned using a small budget performs 2.5% better than SGD (14) with default parameters on this specific problem.\nA green row in Figure 2 indicates that an optimizer’s default setting is performing badly, since it can be beaten by any well-tuned competitor. We observe badly-performing default settings for MOMENTUM, NAG and SGD, supporting the intuition that non-adaptive optimization methods require more tuning, but also for AMSGRAD and ADADELTA. This is just a statement about the default parameters suggested by the authors or the popular frameworks; well-working default parameters might well exist for those methods. Conversely, a white and red row signals a well-performing default setting, since even tuned optimizers cannot significantly outperform this algorithm. ADAM, NADAM and RADAM, as well as AMSBOUND and ADABOUND, all have white or red rows on several (but not all!) test problems, supporting the rule of thumb that adaptive methods have well-working default parameters. Analogously, green (or red) columns highlight optimizers that, when tuned, perform better (or worse) than all untuned optimization methods. We do not observe such columns consistently across tasks. This supports the conclusion that an optimizer’s performance is heavily problem-dependent and that there is no single best optimizer across workloads.\nFigures 9 to 12 in the appendix and our conclusions from them suggest an interesting alternative approach for machine learning practitioners: instead of picking a single optimizer and tuning its hyperparameters, trying out multiple optimizers with their default settings and picking the best one should yield competitive results with less computational effort and fewer tuning choices, as the sketch below illustrates.
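In pseudocode, this strategy of running defaults and keeping the winner amounts to no more than the following (`train_and_validate` is a hypothetical helper performing one full training run and returning the validation metric):

```python
def best_default_optimizer(default_settings, train_and_validate):
    # default_settings: e.g. {"Adam": {...}, "AdaBound": {...}, ...}, holding
    # the hyperparameter defaults recommended by each optimizer's authors.
    scores = {name: train_and_validate(name, params)
              for name, params in default_settings.items()}
    return max(scores, key=scores.get)  # winner by validation metric
```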
The similarity of those two approaches might be due to the fact that optimizers have implicit learning rate schedules, and trying out different optimizers is similar to trying out different (well-tested) schedules (Agarwal et al., 2020).\nHow much do tuning and schedules help? We consider the final performance achieved by varying budgets and schedules to quantify the usefulness of tuning and of applying parameter-free schedules (Figure 3). While there is no clear trend for any individual setting (gray lines), in the median we observe that increasing the budget improves performance, albeit with diminishing returns. For example, using the large budget without any schedule leads to a median relative performance improvement of roughly 3.4% compared to the default parameters (without schedule).\nSimilarly, applying a parameter-free (i.e. untuned) schedule improves median performance. For example, the large tuning budget coupled with a trapezoidal learning rate schedule leads to a median relative improvement of roughly 5.3% compared to the default parameters. However, while these trends hold in the median, their individual effect varies wildly among optimizers and test problems, as is apparent from the noisy structure of the individual lines shown in Figure 3.\nFigure 3: Gray lines (smoothed by cubic splines for visual guidance only) show the relative improvement for a certain tuning and schedule (compared to the one-shot tuning without schedule) for all 14 optimizers on all eight test problems. The median over all lines is plotted in orange, with the shaded area indicating the range between the 25th and 75th percentile.\nWhich optimizers work well after tuning? Figure 4 compares the optimizers’ performance across the test problems. There is no single optimizer that dominates its competitors across all tasks. Nevertheless, some optimizers generally perform well, while others vary wildly in their behavior. Further supporting the hypothesis of previous sections, we note that taking the best out of a small set of untuned optimizers — for example, ADAM and ADABOUND — frequently results in competitive overall performance, even compared to well-tuned optimizers. Combining these runs with a tuned version of ADAM (or variants thereof) generally yields competitive results in our benchmark. Nevertheless, achieving (or getting close to) the absolute best performance still requires testing multiple optimizers. Which optimizer wins in the end, though, is problem-dependent: optimizers that achieve top scores on one problem can perform rather badly on other tasks. We note in passing that the individual optimizer rankings can change when considering e.g. a smaller budget or an additional learning rate schedule (see Figures 13 to 15 in the appendix). However, the overall trends described here are consistent."
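For reference, the quantity plotted in Figure 3 can be computed as in the minimal sketch below (placeholder arrays instead of our released results; a larger-is-better metric is assumed, so the sign flips for losses):

```python
import numpy as np

def relative_improvement(perf, baseline):
    # Improvement of one (budget, schedule) combination over the
    # one-shot baseline without schedule, per optimizer and problem.
    return (perf - baseline) / np.abs(baseline)

perf = np.random.rand(14, 8)      # placeholder: 14 optimizers x 8 problems
baseline = np.random.rand(14, 8)  # placeholder one-shot results
improvements = relative_improvement(perf, baseline)

# Every entry corresponds to one gray line in Figure 3; the orange line
# is the median over all of them.
print(f"median relative improvement: {np.median(improvements):+.1%}")
```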
}, { "heading": "4 LIMITATIONS", "text": "Any empirical benchmark has constraints and limitations. Here we highlight some of them and characterize the context within which our results should be considered.\nGeneralization of the results: By using the test problems from DEEPOBS, which span models and data sets of varying complexity, size, and different domains, we aim for generalization. Our results are, despite our best efforts, reflective not just of these setups, but also of the chosen training parameters, the software framework, and further unavoidable choices. The design of our comparisons aims to be close to what an informed practitioner would encounter in practice. It goes without saying that even a carefully curated range of test problems cannot cover all challenges of machine learning or even just deep learning. In particular, our conclusions may not generalize to other types of workloads such as GANs, reinforcement learning, or applications where e.g. memory usage is crucial. Similarly, our benchmark does not cover more large-scale problems such as ImageNet (Deng et al., 2009) or transformer models (Vaswani et al., 2017) for machine translation. Studying whether there are systematic differences between these types of optimization problems presents an interesting avenue for further research.\nWe do not consider this study the definitive work on benchmarking deep learning optimizers, but rather an important step in the right direction. While our comparison includes many “dimensions” of deep learning optimization, e.g. by considering different problems, tuning budgets, and learning rate schedules, there are many more. To keep the benchmark feasible, we chose to use the fixed L2-regularization and batch size that DEEPOBS suggests for each problem. We also did not include optimization techniques such as weight averaging or ensemble methods, as they can be combined with all evaluated optimizers. Future work could study how these techniques interact with different optimization methods. However, to keep our benchmark feasible, we have selected what we believe to be the most important aspects affecting an optimizer comparison. We hope that our study lays the groundwork so that other works can build on it and analyze these questions.\nInfluence of the hyperparameter search strategy: As noted by, e.g., Choi et al. (2019) and Sivaprasad et al. (2020), the hyperparameter tuning method, its budget, and its search domain can significantly affect performance. By reporting results from three different hyperparameter optimization budgets (including the tuning-free one-shot setting) we try to quantify the effect of tuning. We argue that our random search process presents a realistic setting for many, but certainly not all, deep learning practitioners. One may criticize our approach as simplistic, but note that more elaborate schemes, in particular Bayesian optimization, would multiply the number of design decisions (kernels, search utilities, priors, and scales) and thus significantly complicate the analysis.\nThe individual hyperparameter sampling distributions significantly affect the relative rankings of the optimizers. A badly chosen search space can make tuning next to impossible. Note, though, that this problem is inherited by practitioners. It is arguably an implicit flaw of an optimizer not to come with well-identified search spaces for its hyperparameters, and this should thus be reflected in a benchmark."
}, { "heading": "5 CONCLUSION", "text": "Faced with an avalanche of research developing new stochastic optimization methods, practitioners are left with the near-impossible task of not just picking a method from this ever-growing list, but also of guessing or tuning hyperparameters for it, and even of continuously tuning them during optimization. Despite efforts by the community, there is currently no method that clearly dominates the competition.\nWe have provided an extensive empirical benchmark of optimization methods for deep learning. It reveals structure in the crowded field of optimization for deep learning: first, although many methods perform competitively, a subset of methods tends to come up near the top across the spectrum of problems. Secondly, tuning helps about as much as trying other optimizers. Our open data set allows many more technical observations, e.g., that stability to re-runs is an often overlooked challenge.\nPerhaps the most important takeaway from our study is hidden in plain sight: the field is in danger of being drowned by noise. Different optimizers exhibit a surprisingly similar performance distribution compared to a single method that is re-tuned or simply re-run with different random seeds. It is thus questionable how much insight the development of new methods yields, at least if they are conceptually and functionally close to the existing population. We hope that benchmarks like ours can help the community to rise beyond inventing yet another optimizer and to focus on key challenges, such as automatic, inner-loop tuning for truly robust and efficient optimization. We are releasing our data to allow future authors to ensure that their method contributes to such ends." }, { "heading": "A LIST OF OPTIMIZERS AND SCHEDULES CONSIDERED", "text": "" }, { "heading": "B LIST OF OPTIMIZERS SELECTED", "text": "" }, { "heading": "C ROBUSTNESS TO RANDOM SEEDS", "text": "Data subsampling, random weight initialization, dropout, and other aspects of deep learning introduce stochasticity to the training process. As such, judging the performance of an optimizer on a single run may be misleading due to random fluctuations. In our benchmark, we re-run the final setting of each budget with 10 different seeds in order to judge the stability of the optimizer and the results. However, to keep the magnitude of this benchmark feasible, we only use a single seed while tuning, analogously to how a single user would proceed. This means that our tuning process can sometimes choose hyperparameter settings which might not even converge for seeds other than the one used for tuning.\nFigure 5 illustrates this behavior on an example problem where we used 10 seeds throughout a tuning process using grid search. The figure shows that in the beginning performance increases when increasing the learning rate, followed by an area where it sometimes works but other times diverges. Picking hyperparameters from this “danger zone” can lead to unstable results. In this case, where we only consider the learning rate, it is clear that decreasing the learning rate a bit to get away from this “danger zone” would lead to a more stable, but equally well-performing algorithm. In more complicated cases, however, we are unable to use a simple heuristic such as this. This might be the case, for example, when tuning multiple hyperparameters or when the effect of the hyperparameter on the performance is less straightforward. Thus, this is a problem created not by improperly using the tuning method, but by an unstable optimization method.
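As a compact summary of this protocol, the sketch below shows the tune-once, re-run-with-fresh-seeds procedure (the `tune` and `train` callables are hypothetical stand-ins for our tuning loop and a single training run; the seed values are illustrative):

```python
import numpy as np

def evaluate_final_setting(tune, train, n_seeds=10):
    # Hyperparameters are tuned with one fixed seed only; the winning
    # setting is then re-run with `n_seeds` fresh seeds so that we can
    # report mean and standard deviation (and catch unstable settings).
    best_params = tune(seed=42)
    scores = [train(best_params, seed=s) for s in range(n_seeds)]
    finite = [s for s in scores if np.isfinite(s)]
    n_diverged = len(scores) - len(finite)  # seeds that diverged
    return np.mean(finite), np.std(finite), n_diverged
```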
In our benchmark, we observe in total 49 divergent seeds for the small budget and 56 for the large budget, or roughly 1% of the runs in each budget. Most of them occur when using SGD (23 and 18 cases for the small and large budget, respectively), MOMENTUM (13 and 17 cases, respectively) or NAG (7 and 12 cases, respectively), which might indicate that adaptive methods are less prone to this kind of behavior. For the small budget tuning, none of these cases occur when using a constant schedule (4 for the large budget), and most of them occur when using the cosine with warm restarts schedule (27 and 25 cases for the small and large budget, respectively). However, our data on diverging seeds is too limited to support solid conclusions." }, { "heading": "D RE-TUNING EXPERIMENTS", "text": "In order to test the stability of our benchmark, and especially of the tuning method, we selected two optimizers from our benchmark and re-tuned them on all test problems a second time. We used completely independent random seeds for both the tuning and the 10 repetitions with the final setting. Figure 6 and Figure 7 show the distribution of all 10 random seeds for both the original tuning and the re-tuning runs for RMSPROP and ADADELTA. It is evident that re-tuning results in a shift of this distribution, since small (stochastic) changes during tuning can result in a different chosen hyperparameter setting.\nThese differences also highlight how crucial it is to look at multiple test problems. Individually, small changes, such as re-doing the tuning with different seeds, can lead to optimization methods changing rankings. However, they tend to average out when looking at an unbiased list of multiple problems. These results also further support the statement made in Section 3 that there is no optimization method clearly dominating the competition, as small performance margins might vanish when re-tuning." }, { "heading": "E LIST OF SCHEDULES SELECTED", "text": "The schedules selected for our benchmark are illustrated in Figure 8. All learning rate schedules are multiplied by the initial learning rate found via tuning or picked as the default choice.\nWe use a cosine decay (Loshchilov & Hutter, 2017) that starts at 1 and decays in the form of a half period of a cosine to 0. As an example of a cyclical learning rate schedule, we test a cosine with warm restarts schedule with a cycle length ∆t = 10, which increases by a factor of 2 after each cycle, without any discount factor. Depending on the number of epochs we train our model, it is possible that training stops shortly after one of those warm restarts. Since performance typically declines shortly after increasing the learning rate, we don't report the final performance for this schedule, but instead the performance achieved after the last complete period (just before the next restart). This approach is suggested by the original work of Loshchilov & Hutter (2017). However, we still use the final performance while tuning.\nA representative schedule including warm-up is the trapezoidal schedule from Xing et al. (2018). For our benchmark we set a warm-up and cool-down period of 1/10 of the training time.
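For concreteness, the three non-constant schedules described in this appendix can be written as multiplicative factors on the tuned (or default) initial learning rate, roughly as follows (a minimal re-implementation from the descriptions above, not the exact benchmark code):

```python
import numpy as np

def cosine_decay(t, T):
    # Smooth decay from 1 to 0 over T training steps
    # (Loshchilov & Hutter, 2017).
    return 0.5 * (1.0 + np.cos(np.pi * t / T))

def cosine_warm_restarts(t, delta_t=10, factor=2):
    # Cycles start at length delta_t (here in epochs) and grow by
    # `factor` after every restart; no discount factor is applied.
    length = delta_t
    while t >= length:
        t, length = t - length, length * factor
    return 0.5 * (1.0 + np.cos(np.pi * t / length))

def trapezoidal(t, T, frac=0.1):
    # Linear warm-up over the first frac*T steps, linear cool-down over
    # the last frac*T steps, constant in between (Xing et al., 2018);
    # we use frac = 1/10.
    if t < frac * T:
        return t / (frac * T)
    if t > (1.0 - frac) * T:
        return (T - t) / (frac * T)
    return 1.0

# The scheduled learning rate at step t is then alpha_0 * schedule(t, T).
```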
F IMPROVEMENT AFTER TUNING\nWhen looking at Figure 2, one might notice that a few diagonal entries contain negative values. Since diagonal entries reflect the intra-optimizer performance change when tuning on the respective task, this might feel quite counterintuitive at first. In theory, this can occur if the respective tuning distribution is chosen poorly, the tuning randomness simply got “unlucky”, or we observe significantly worse results for our additional seeds (see Figure 5).\nIf we compare Figures 9 and 10 to Figures 11 and 12, we can see that most negative diagonal entries vanish or at least diminish in magnitude. For the latter two figures we allow for more tuning runs and only consider the seed that has been used for this tuning process. The fact that the effect of negative diagonal entries shrinks is an indication that they mostly result from the latter two reasons mentioned." }, { "heading": "G OPTIMIZER PERFORMANCE ACROSS TEST PROBLEMS", "text": "Similarly to Figure 4, we show the corresponding plots for the small budget with no learning rate schedule in Figure 13, and for the large budget with the cosine and trapezoidal learning rate schedules in Figures 14 and 15. Additionally, in Figure 16 we show the same setting as Figure 4 but with the training loss instead of the test loss/accuracy.\nThe high-level trends mentioned in Section 3 also hold for the smaller tuning budget in Figure 13. Namely, taking the winner among several untuned algorithms (here marked for ADAM and ADABOUND) will result in decent performance on most test problems with much less effort. Adding a tuned version of ADAM (or variants thereof) to this selection would result in very competitive performance. The absolute top performance, however, is achieved by changing optimizers across different test problems.\nNote that although the large budget is a true superset of the small budget, it is not guaranteed to always perform better. Our tuning procedure guarantees that the validation performance on the seed that has been used for tuning is at least as good on the large budget as on the small budget. But due to averaging over multiple seeds and reporting test performance instead of validation performance, this hierarchy is no longer guaranteed. We discuss the possible effects of averaging over multiple seeds further in Appendix C.\nThe same high-level trends also emerge when considering the cosine or trapezoidal learning rate schedule in Figures 14 and 15. We can also see that the top performance generally increases when adding a schedule (cf. Figure 4 and Figure 15).\nComparing Figure 4 and Figure 16, we can assess the generalization performance of the optimization methods not only to an unseen test set, but also to a different performance metric (accuracy instead of loss). Again, the overall picture of varying performance across different test problems remains consistent when considering the training loss performance. Similarly to the figures showing test set performance, we cannot identify a clear winner, although ADAM and its variants, such as RADAM, perform near the top consistently. Note that while Figure 16 shows the training loss, the optimizers have still been tuned to achieve the best validation performance (i.e. accuracy if available, else the loss)." }, { "heading": "H TABULAR VERSION", "text": "" }, { "heading": "APPENDIX REFERENCES", "text": "Laurence Aitchison. Bayesian filtering unifies adaptive and non-adaptive neural network optimization methods. In Advances in Neural Information Processing Systems 33, NeurIPS, 2020.\nRohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, and Yoram Singer.
Second Order Optimization Made Practical. arXiv preprint: 2002.09018, 2020.\nImen Ayadi and Gabriel Turinici. Stochastic Runge-Kutta methods and adaptive SGD-G2 stochastic gradient descent. arXiv preprint: 2002.09304, 2020.\nKiwook Bae, Heechang Ryu, and Hayong Shin. Does Adam optimizer keep close to the optimal point?. arXiv preprint: 1911.00289, 2019.\nJiyang Bai and Jiawei Zhang. BGADAM: Boosting based Genetic-Evolutionary ADAM for Convolutional Neural Network Optimization. arXiv preprint: 1908.08015, 2019.\nLukas Balles and Philipp Hennig. Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients. In 35th International Conference on Machine Learning, ICML, 2018.\nIrwan Bello, Barret Zoph, Vijay Vasudevan, and Quoc V. Le. Neural Optimizer Search with Reinforcement Learning. In 34th International Conference on Machine Learning, ICML, 2017.\nJeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, and Animashree Anandkumar. SIGNSGD: Compressed Optimisation for Non-Convex Problems. In 35th International Conference on Machine Learning, ICML, 2018.\nLeonard Berrada, Andrew Zisserman, and M. Pawan Kumar. Training Neural Networks for and by Interpolation. In 37th International Conference on Machine Learning, ICML, 2020.\nOleksandr Borysenko and Maksym Byshkin. CoolMomentum: A Method for Stochastic Optimization by Langevin Dynamics with Simulated Annealing. arXiv preprint: 2005.14605, 2020.\nAleksandar Botev, Hippolyt Ritter, and David Barber. Practical Gauss-Newton Optimisation for Deep Learning. In 34th International Conference on Machine Learning, ICML, 2017.\nLéon Bottou. Stochastic gradient descent tricks. In Neural networks: Tricks of the trade. Springer, 2012.\nChia-Yu Chen, Jungwook Choi, Daniel Brand, Ankur Agrawal, Wei Zhang, and Kailash Gopalakrishnan. AdaComp: Adaptive Residual Gradient Compression for Data-Parallel Distributed Training. In 32nd AAAI Conference on Artificial Intelligence, AAAI, 2018.\nJinghui Chen, Dongruo Zhou, Yiqi Tang, Ziyan Yang, Yuan Cao, and Quanquan Gu. Closing the generalization gap of adaptive gradient methods in training deep neural networks. In 29th International Joint Conference on Artificial Intelligence, IJCAI, 2020.\nXiangyi Chen, Sijia Liu, Ruoyu Sun, and Mingyi Hong. On the Convergence of A Class of AdamType Algorithms for Non-Convex Optimization. In 7th International Conference on Learning Representations, ICLR, 2019a.\nYushu Chen, Hao Jing, Wenlai Zhao, Zhiqiang Liu, Ouyi Li, Liang Qiao, Wei Xue, Haohuan Fu, and Guangwen Yang. An Adaptive Remote Stochastic Gradient Method for Training Neural Networks. arXiv preprint: 1905.01422, 2019b.\nYushu Chen, Hao Jing, Wenlai Zhao, Zhiqiang Liu, Liang Qiao, Wei Xue, Haohuan Fu, and Guangwen Yang. NAMSG: An Efficient Method For Training Neural Networks. arXiv preprint: 1905.01422, 2019c.\nZiyi Chen and Yi Zhou. Momentum with Variance Reduction for Nonconvex Composition Optimization. arXiv preprint: 2005.07755, 2020.\nDami Choi, Christopher J. Shallue, Zachary Nado, Jaehoon Lee, Chris J. Maddison, and George E. Dahl. On Empirical Comparisons of Optimizers for Deep Learning. arXiv preprint: 1910.05446, 2019.\nAditya Devarakonda, Maxim Naumov, and Michael Garland. AdaBatch: Adaptive Batch Sizes for Training Deep Neural Networks. arXiv preprint: 1712.02029, 2017.\nJianbang Ding, Xuancheng Ren, Ruixuan Luo, and Xu Sun. An Adaptive and Momental Bound Method for Stochastic Learning. arXiv preprint: 1910.12249, 2019.\nTimothy Dozat. Incorporating Nesterov Momentum into Adam. 
In 4th International Conference on Learning Representations, ICLR, 2016.\nShiv Ram Dubey, Soumendu Chakraborty, Swalpa Kumar Roy, Snehasis Mukherjee, Satish Kumar Singh, and Bidyut Baran Chaudhuri. diffGrad: An Optimization Method for Convolutional Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 2020.\nJohn Duchi, Elad Hazan, and Yoram Singer. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, JMLR, 12, 2011.\nAbraham J. Fetterman, Christina H. Kim, and Joshua Albrecht. SoftAdam: Unifying SGD and Adam for better stochastic gradient descent, 2019.\nBoris Ginsburg, Patrice Castonguay, Oleksii Hrinchuk, Oleksii Kuchaiev, Vitaly Lavrukhin, Ryan Leary, Jason Li, Huyen Nguyen, and Jonathan M. Cohen. Stochastic Gradient Methods with Layer-wise Adaptive Moments for Training of Deep Networks. arXiv preprint: 1905.11286, 2019.\nDonald Goldfarb, Yi Ren, and Achraf Bahamou. Practical Quasi-Newton Methods for Training Deep Neural Networks. In Advances in Neural Information Processing Systems 33, NeurIPS, 2020.\nIan Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.\nPriya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour. arXiv preprint: 1706.02677, 2017.\nMikhail Grankin. RangerLars. https://github.com/mgrankin/over9000, 2020.\nVineet Gupta, Tomer Koren, and Yoram Singer. Shampoo: Preconditioned Stochastic Tensor Optimization. In 35th International Conference on Machine Learning, ICML, 2018.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016.\nJoão F. Henriques, Sébastien Ehrhardt, Samuel Albanie, and Andrea Vedaldi. Small Steps and Giant Leaps: Minimal Newton Solvers for Deep Learning. In IEEE/CVF International Conference on Computer Vision, ICCV, 2019.\nByeongho Heo, Sanghyuk Chun, Seong Joon Oh, Dongyoon Han, Sangdoo Yun, Youngjung Uh, and Jung-Woo Ha. Slowing Down the Weight Norm Increase in Momentum-based Optimizers. arXiv preprint: 2006.08217, 2020.\nJeremy Howard and Sebastian Ruder. Universal Language Model Fine-tuning for Text Classification. In 56th Annual Meeting of the Association for Computational Linguistics, 2018.\nYifan Hu, Siqi Zhang, Xin Chen, and Niao He. Biased Stochastic First-Order Methods for Conditional Stochastic Optimization and Applications in Meta Learning. In Advances in Neural Information Processing Systems 33, NeurIPS, 2020.\nYuzheng Hu, Licong Lin, and Shange Tang. Second-order Information in First-order Optimization Methods. arXiv preprint: 1912.09926, 2019.\nHaiwen Huang, Chang Wang, and Bin Dong. Nostalgic Adam: Weighting More of the Past Gradients When Designing the Adaptive Learning Rate. In 28th International Joint Conference on Artificial Intelligence, IJCAI, 2019.\nXunpeng Huang, Hao Zhou, Runxin Xu, Zhe Wang, and Lei Li. Adaptive Gradient Methods Can Be Provably Faster than SGD after Finite Epochs. arXiv preprint: 2006.07037, 2020.\nYasutoshi Ida, Yasuhiro Fujiwara, and Sotetsu Iwamura. Adaptive Learning Rate via Covariance Matrix Based Preconditioning for Deep Neural Networks. In 26th International Joint Conference on Artificial Intelligence, IJCAI, 2017.\nWendyam Eric Lionel Ilboudo, Taisuke Kobayashi, and Kenji Sugimoto. TAdam: A Robust Stochastic Gradient Optimizer. 
arXiv preprint: 2003.00179, 2020.\nZhanhong Jiang, Aditya Balu, Sin Yong Tan, Young M Lee, Chinmay Hegde, and Soumik Sarkar. On Higher-order Moments in Adam. arXiv preprint: 1910.06878, 2019.\nTyler B. Johnson, Pulkit Agrawal, Haijie Gu, and Carlos Guestrin. AdaScale SGD: A User-Friendly Algorithm for Distributed Training, 2020.\nDominic Kafka and Daniel Wilke. Gradient-only line searches: An Alternative to Probabilistic Line Searches. arXiv preprint: 1903.09383, 2019.\nChad Kelterborn, Marcin Mazur, and Bogdan V. Petrenko. Gravilon: Applications of a New Gradient Descent Method to Machine Learning. arXiv preprint: 2008.11370, 2020.\nNitish Shirish Keskar and Richard Socher. Improving Generalization Performance by Switching from Adam to SGD. arXiv preprint: 1712.07628, 2017.\nMohammad Emtiyaz Khan, Didrik Nielsen, Voot Tangkaratt, Wu Lin, Yarin Gal, and Akash Srivastava. Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam. In 35th International Conference on Machine Learning, ICML, 2018.\nDiederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In 3rd International Conference on Learning Representations, ICLR, 2015.\nKfir Yehuda Levy, Alp Yurtsever, and Volkan Cevher. Online Adaptive Methods, Universality and Acceleration. In Advances in Neural Information Processing Systems 31, NeurIPS, 2018.\nWenjie Li, Zhaoyang Zhang, Xinjiang Wang, and Ping Luo. AdaX: Adaptive Gradient Descent with Exponential Long Term Memory. arXiv preprint: 2004.09740, 2020a.\nZhize Li, Hongyan Bao, Xiangliang Zhang, and Peter Richtárik. PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization. arXiv preprint: 2008.10898, 2020b.\nLiang Liu and Xiaopeng Luo. A New Accelerated Stochastic Gradient Method with Momentum. arXiv preprint: 2006.00423, 2020.\nLiyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. On the variance of the adaptive learning rate and beyond. In 8th International Conference on Learning Representations, ICLR, 2020.\nPietro Longhi. Wall crossing invariants from spectral networks. Annales Henri Poincaré, 19(3), 2017.\nIlya Loshchilov and Frank Hutter. SGDR: Stochastic Gradient Descent with Warm Restarts. In 5th International Conference on Learning Representations, ICLR, 2017.\nIlya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR, 2019.\nLiangchen Luo, Yuanhao Xiong, Yan Liu, and Xu Sun. Adaptive Gradient Methods with Dynamic Bound of Learning Rate. In 7th International Conference on Learning Representations, ICLR, 2019.\nJerry Ma and Denis Yarats. Quasi-hyperbolic momentum and Adam for deep learning. In 7th International Conference on Learning Representations, ICLR, 2019.\nMaren Mahsereci. Probabilistic Approaches to Stochastic Optimization. Ph.D. Thesis, University of Tuebingen, 2018.\nItzik Malkiel and Lior Wolf. MTAdam: Automatic Balancing of Multiple Training Loss Terms. arXiv preprint: 2006.14683, 2020.\nJames Martens and Roger Grosse. Optimizing Neural Networks with Kronecker-Factored Approximate Curvature. In 32nd International Conference on Machine Learning, ICML, 2015.\nMahesh Chandra Mukkamala and Matthias Hein. Variants of RMSProp and Adagrad with Logarithmic Regret Bounds. In 34th International Conference on Machine Learning, ICML, 2017.\nMaximus Mutschler and Andreas Zell. Parabolic Approximation Line Search for DNNs. 
In Advances in Neural Information Processing Systems 33, NeurIPS, 2020.\nParvin Nazari, Davoud Ataee Tarzanagh, and George Michailidis. DADAM: A Consensus-based Distributed Adaptive Gradient Method for Online Optimization. arXiv preprint: 1901.09109, 2019.\nYurii Nesterov. A method for solving the convex programming problem with convergence rate O(1/k^2). Soviet Mathematics Doklady, 27, 1983.\nFrancesco Orabona and Dávid Pál. Scale-Free Algorithms for Online Linear Optimization. In Algorithmic Learning Theory - 26th International Conference, ALT, 2015.\nAntonio Orvieto, Jonas Köhler, and Aurélien Lucchi. The Role of Memory in Stochastic Optimization. In 35th Conference on Uncertainty in Artificial Intelligence, UAI, 2019.\nB. T. Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5), 1964.\nKonpat Preechakul and Boonserm Kijsirikul. CProp: Adaptive Learning Rate Scaling from Past Gradient Conformity. arXiv preprint: 1912.11493, 2019.\nSashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the Convergence of Adam and Beyond. In 6th International Conference on Learning Representations, ICLR, 2018.\nHerbert Robbins and Sutton Monro. A Stochastic Approximation Method. The Annals of Mathematical Statistics, 22(3), 1951.\nMichal Rolínek and Georg Martius. L4: Practical loss-based stepsize adaptation for deep learning. In Advances in Neural Information Processing Systems 31, NeurIPS, 2018.\nArnold Salas, Samuel Kessler, Stefan Zohren, and Stephen Roberts. Practical Bayesian Learning of Neural Networks via Adaptive Subgradient Methods. arXiv preprint: 1811.03679, 2018.\nPedro Savarese, David McAllester, Sudarshan Babu, and Michael Maire. Domain-independent Dominance of Adaptive Methods. arXiv preprint: 1912.01823, 2019.\nTom Schaul and Yann LeCun. Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients. In 1st International Conference on Learning Representations, ICLR, 2013.\nTom Schaul, Sixin Zhang, and Yann LeCun. No more pesky learning rates. In 30th International Conference on Machine Learning, ICML, 2013.\nFanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor W. Tsang, Lijun Zhang, Dacheng Tao, and Licheng Jiao. VR-SGD: A Simple Stochastic Variance Reduction Method for Machine Learning. IEEE Trans. Knowl. Data Eng., 32(1), 2020.\nLeslie N. Smith. Cyclical Learning Rates for Training Neural Networks. In IEEE Winter Conference on Applications of Computer Vision, WACV, 2017.\nLeslie N. Smith and Nicholay Topin. Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates. arXiv preprint: 1708.07120, 2017.\nHuikang Sun, Lize Gu, and Bin Sun. Adathm: Adaptive Gradient Method Based on Estimates of Third-Order Moments. In 4th IEEE International Conference on Data Science in Cyberspace, DSC, 2019.\nWonyong Sung, Iksoo Choi, Jinhwan Park, Seokhyun Choi, and Sungho Shin. S-SGD: Symmetrical Stochastic Gradient Descent with Weight Noise Injection for Reaching Flat Minima. arXiv preprint: 2009.02479, 2020.\nConghui Tan, Shiqian Ma, Yu-Hong Dai, and Yuqiu Qian. Barzilai-Borwein Step Size for Stochastic Gradient Descent. In Advances in Neural Information Processing Systems 29, NIPS, 2016.\nZeyi Tao, Qi Xia, and Qun Li. A new perspective in understanding of Adam-Type algorithms and beyond, 2019.\nBrian Teixeira, Birgi Tamersoy, Vivek Singh, and Ankur Kapoor. Adaloss: Adaptive Loss Function for Landmark Localization.
arXiv preprint: 1908.01070, 2019.\nTijmen Tieleman and Geoffrey Hinton. Lecture 6.5—RMSProp: Divide the gradient by a running average of its recent magnitude, 2012.\nQianqian Tong, Guannan Liang, and Jinbo Bi. Calibrating the Adaptive Learning Rate to Improve Convergence of ADAM. arXiv preprint: 1908.00700, 2019.\nPhuong Thi Tran and Le Trieu Phong. On the Convergence Proof of AMSGrad and a New Version. IEEE Access, 7, 2019.\nRasul Tutunov, Minne Li, Alexander I. Cowen-Rivers, Jun Wang, and Haitham Bou-Ammar. Compositional ADAM: An Adaptive Compositional Solver. arXiv preprint: 2002.03755, 2020.\nAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. In Advances in Neural Information Processing Systems 30, NIPS, 2017.\nSharan Vaswani, Aaron Mishkin, Issam H. Laradji, Mark Schmidt, Gauthier Gidel, and Simon Lacoste-Julien. Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates. In Advances in Neural Information Processing Systems 32, NeurIPS, 2019.\nThijs Vogels, Sai Praneeth Karimireddy, and Martin Jaggi. PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization. In Advances in Neural Information Processing Systems 32, NeurIPS, 2019.\nBao Wang, Tan M. Nguyen, Andrea L. Bertozzi, Richard G. Baraniuk, and Stanley J. Osher. Scheduled restart momentum for accelerated stochastic gradient descent. arXiv preprint: 2002.10583, 2020a.\nDong Wang, Yicheng Liu, Wenwo Tang, Fanhua Shang, Hongying Liu, Qigong Sun, and Licheng Jiao. signADAM++: Learning Confidences for Deep Neural Networks. In International Conference on Data Mining Workshops, ICDM, 2019a.\nGuanghui Wang, Shiyin Lu, Quan Cheng, Weiwei Tu, and Lijun Zhang. SAdam: A Variant of Adam for Strongly Convex Functions. In 8th International Conference on Learning Representations, ICLR, 2020b.\nJiaxuan Wang and Jenna Wiens. AdaSGD: Bridging the gap between SGD and Adam. arXiv preprint: 2006.16541, 2020.\nShipeng Wang, Jian Sun, and Zongben Xu. HyperAdam: A Learnable Task-Adaptive Adam for Network Training. In 33rd AAAI Conference on Artificial Intelligence, AAAI, 2019b.\nLess Wright. Deep Memory. https://github.com/lessw2020/Best-Deep-Learning-Optimizers/tree/master/DeepMemory, 2020a.\nLess Wright. Ranger. https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer, 2020b.\nXiaoxia Wu, Rachel Ward, and Léon Bottou. WNGrad: Learn the Learning Rate in Gradient Descent. arXiv preprint: 1803.02865, 2018.\nCong Xie, Oluwasanmi Koyejo, Indranil Gupta, and Haibin Lin. Local AdaAlter: Communication-Efficient Stochastic Gradient Descent with Adaptive Learning Rates. arXiv preprint: 1911.09030, 2019.\nChen Xing, Devansh Arpit, Christos Tsirigotis, and Yoshua Bengio. A Walk with SGD. arXiv preprint: 1802.08770, 2018.\nYangyang Xu. Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization. arXiv preprint: 2006.00425, 2020.\nMinghan Yang, Dong Xu, Yongfeng Li, Zaiwen Wen, and Mengyun Chen. Structured Stochastic Quasi-Newton Methods for Large-Scale Optimization Problems. arXiv preprint: 2006.09606, 2020.\nZhewei Yao, Amir Gholami, Sheng Shen, Kurt Keutzer, and Michael W. Mahoney. ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning. arXiv preprint: 2006.00719, 2020.\nYang You, Igor Gitman, and Boris Ginsburg. Large Batch Training of Convolutional Networks.
arXiv preprint: 1708.03888, 2017.\nYang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large Batch Optimization for Deep Learning: Training BERT in 76 minutes. In 8th International Conference on Learning Representations, ICLR, 2020.\nJihun Yun, Aurelie C. Lozano, and Eunho Yang. Stochastic Gradient Methods with Block Diagonal Matrix Adaptation. arXiv preprint: 1905.10757, 2019.\nManzil Zaheer, Sashank J. Reddi, Devendra Singh Sachan, Satyen Kale, and Sanjiv Kumar. Adaptive Methods for Nonconvex Optimization. In Advances in Neural Information Processing Systems 31, NeurIPS, 2018.\nMatthew D. Zeiler. ADADELTA: An Adaptive Learning Rate Method. arXiv preprint: 1212.5701, 2012.\nGuodong Zhang, Shengyang Sun, David Duvenaud, and Roger Grosse. Noisy Natural Gradient as Variational Inference. In 35th International Conference on Machine Learning, ICML, 2018.\nJian Zhang and Ioannis Mitliagkas. YellowFin and the Art of Momentum Tuning. In Machine Learning and Systems, MLSys, 2019.\nJiawei Zhang and Fisher B. Gouza. GADAM: Genetic-Evolutionary ADAM for Deep Neural Network Optimization. arXiv preprint: 1805.07500, 2018.\nJingzhao Zhang, Sai Praneeth Karimireddy, Andreas Veit, Seungyeon Kim, Sashank J Reddi, Sanjiv Kumar, and Suvrit Sra. Why are adaptive methods good for attention models? In Advances in Neural Information Processing Systems 33, NeurIPS, 2020.\nMichael R. Zhang, James Lucas, Geoffrey Hinton, and Jimmy Ba. Lookahead Optimizer: k steps forward, 1 step back. Advances in Neural Information Processing Systems 32, NeurIPS, 2019.\nZijun Zhang, Lin Ma, Zongpeng Li, and Chuan Wu. Normalized Direction-preserving Adam. arXiv preprint: 1709.04546, 2017.\nShen-Yi Zhao, Yin-Peng Xie, and Wu-Jun Li. Stochastic Normalized Gradient Descent with Momentum for Large Batch Training. arXiv preprint: 2007.13985, 2020.\nBingxin Zhou, Xuebin Zheng, and Junbin Gao. ADAMT: A Stochastic Optimization with Trend Correction Scheme. arXiv preprint: 2001.06130, 2020.\nZhiming Zhou, Qingru Zhang, Guansong Lu, Hongwei Wang, Weinan Zhang, and Yong Yu. AdaShift: Decorrelation and Convergence of Adaptive Learning Rate Methods. In 7th International Conference on Learning Representations, ICLR, 2019.\nLiu Ziyin, Zhikang T. Wang, and Masahito Ueda. LaProp: a Better Way to Combine Momentum with Adaptive Gradient. arXiv preprint: 2002.04839, 2020." } ]
2020
DESCENDING THROUGH A CROWDED VALLEY — BENCHMARKING DEEP LEARNING OPTIMIZERS
SP:86a3f8091d534d50e25612cbb933819d2a090941
[ "Recently, pretrained Transformer language models have been shown to capture world knowledge (using testbeds containing facts). What if you want to update a fact, for example, the current president of the USA? This paper investigates different approaches to updating the weights of a Transformer model such that the model works for the modified facts but does not catastrophically forget unmodified facts. The main proposal is a simple regularization technique (which they call constrained fine-tuning) to minimize weight changes while fine-tuning on the supporting factual sentences that represent the modified facts." ]
Large Transformer models have achieved impressive performance in many natural language tasks. In particular, Transformer-based language models have been shown to have great capabilities in encoding factual knowledge in their vast amount of parameters. While the tasks of improving the memorization and generalization of Transformers have been widely studied, it is not well known how to make Transformers forget specific old facts and memorize new ones. In this paper, we propose a new task of explicitly modifying specific factual knowledge in Transformer models while ensuring the model performance does not degrade on the unmodified facts. This task is useful in many scenarios, such as updating stale knowledge, protecting privacy, and eliminating unintended biases stored in the models. We benchmarked several approaches that provide natural baseline performances on this task. This leads to the discovery of key components of a Transformer model that are especially effective for knowledge modifications. The work also provides insights into the role that different training phases (such as pretraining and fine-tuning) play towards memorization and knowledge modification.
[]
[ { "authors": [ "Su Lin Blodgett", "Solon Barocas", "Hal Daumé III", "Hanna Wallach" ], "title": "Language (technology) is power: A critical survey of “bias", "venue": "in NLP. arXiv preprint arXiv:2005.14050,", "year": 2020 }, { "authors": [ "Tolga Bolukbasi", "Kai-Wei Chang", "James Y Zou", "Venkatesh Saligrama", "Adam T Kalai" ], "title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Shikha Bordia", "Samuel Bowman" ], "title": "Identifying and reducing gender bias in word-level language models", "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop,", "year": 2019 }, { "authors": [ "Ilya Sutskever", "Dario Amodei" ], "title": "Language models are few-shot learners, 2020", "venue": null, "year": 2020 }, { "authors": [ "Nicola De Cao", "Michael Schlichtkrull", "Wilker Aziz", "Ivan Titov" ], "title": "How do decisions emerge across layers in neural models? interpretation with differentiable masking", "venue": "arXiv preprint arXiv:2004.14992,", "year": 2020 }, { "authors": [ "Nicholas Carlini", "Chang Liu", "Úlfar Erlingsson", "Jernej Kos", "Dawn Song" ], "title": "The secret sharer: Evaluating and testing unintended memorization in neural networks", "venue": "In 28th USENIX Security Symposium,", "year": 2019 }, { "authors": [ "Yung-Sung Chuang", "Shang-Yu Su", "Yun-Nung Chen" ], "title": "Lifelong language knowledge distillation", "venue": "arXiv preprint arXiv:2010.02123,", "year": 2020 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Bhuwan Dhingra", "Manzil Zaheer", "Vidhisha Balachandran", "Graham Neubig", "Ruslan Salakhutdinov", "William W Cohen" ], "title": "Differentiable reasoning over a virtual knowledge base", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Hady Elsahar", "Pavlos Vougiouklis", "Arslen Remaci", "Christophe Gravier", "Jonathon Hare", "Frederique Laforest", "Elena Simperl" ], "title": "T-REx: A large scale alignment of natural language with knowledge base triples", "venue": "In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC),", "year": 2018 }, { "authors": [ "Manaal Faruqui", "Jesse Dodge", "Sujay Kumar Jauhar", "Chris Dyer", "Eduard Hovy", "Noah A Smith" ], "title": "Retrofitting word vectors to semantic lexicons", "venue": "In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2015 }, { "authors": [ "Vitaly Feldman" ], "title": "Does learning require memorization? 
a short tale about a long tail", "venue": "In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing,", "year": 2020 }, { "authors": [ "Vitaly Feldman", "Chiyuan Zhang" ], "title": "What neural networks memorize and why: Discovering the long tail via influence estimation", "venue": "arXiv preprint arXiv:2008.03703,", "year": 2020 }, { "authors": [ "Thibault Févry", "Livio Baldini Soares", "Nicholas FitzGerald", "Eunsol Choi", "Tom Kwiatkowski" ], "title": "Entities as experts: Sparse memory access with entity supervision", "venue": "arXiv preprint arXiv:2004.07202,", "year": 2020 }, { "authors": [ "Kelvin Guu", "Kenton Lee", "Zora Tung", "Panupong Pasupat", "Ming-Wei Chang" ], "title": "Realm: Retrievalaugmented language model pre-training", "venue": "arXiv preprint arXiv:2002.08909,", "year": 2020 }, { "authors": [ "Neil Houlsby", "Andrei Giurgiu", "Stanislaw Jastrzebski", "Bruna Morrone", "Quentin De Laroussilhe", "Andrea Gesmundo", "Mona Attariyan", "Sylvain Gelly" ], "title": "Parameter-efficient transfer learning for nlp", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Shaoxiong Ji", "Shirui Pan", "Erik Cambria", "Pekka Marttinen", "Philip S Yu" ], "title": "A survey on knowledge graphs: Representation, acquisition and applications", "venue": "arXiv preprint arXiv:2002.00388,", "year": 2020 }, { "authors": [ "Zhengbao Jiang", "Frank F. Xu", "Jun Araki", "Graham Neubig" ], "title": "How can we know what language models know", "venue": "arXiv preprint arXiv:1911.12543,", "year": 2020 }, { "authors": [ "Nanda Kambhatla" ], "title": "Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations", "venue": "pp. 22–es, USA,", "year": 2004 }, { "authors": [ "Nora Kassner", "Hinrich Schütze" ], "title": "Bert-knn: Adding a knn search component to pretrained language models for better qa", "venue": "arXiv preprint arXiv:2005.00766,", "year": 2020 }, { "authors": [ "Urvashi Khandelwal", "Omer Levy", "Dan Jurafsky", "Luke Zettlemoyer", "Mike Lewis" ], "title": "Generalization through memorization: Nearest neighbor language models", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the national academy of sciences,", "year": 2017 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "Albert: A lite bert for self-supervised learning of language representations", "venue": null, "year": 1909 }, { "authors": [ "Omer Levy", "Minjoon Seo", "Eunsol Choi", "Luke Zettlemoyer" ], "title": "Zero-shot relation extraction via reading comprehension", "venue": "arXiv preprint arXiv:1706.04115,", "year": 2017 }, { "authors": [ "Patrick Lewis", "Ethan Perez", "Aleksandara Piktus", "Fabio Petroni", "Vladimir Karpukhin", "Naman Goyal", "Heinrich Küttler", "Mike Lewis", "Wen-tau Yih", "Tim Rocktäschel" ], "title": "Retrieval-augmented generation for knowledge-intensive nlp tasks", "venue": null, "year": 2005 }, { "authors": [ "Tianlin Liu", "Lyle Ungar", "João Sedoc" ], "title": "Continual learning for sentence representations using conceptors", "venue": "In NAACL,", "year": 2019 }, { "authors": [ "David 
Lopez-Paz", "Marc’Aurelio Ranzato" ], "title": "Gradient episodic memory for continual learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Fei Mi", "Liangwei Chen", "Mengjie Zhao", "Minlie Huang", "Boi Faltings" ], "title": "Continual learning for natural language generation in task-oriented dialog systems", "venue": "arXiv preprint arXiv:2010.00910,", "year": 2020 }, { "authors": [ "Pandu Nayak" ], "title": "Understanding searches better than ever before, 2019", "venue": "URL https://blog. google/products/search/search-language-understanding-bert/", "year": 2019 }, { "authors": [ "Matthew E. Peters", "Mark Neumann", "Robert Logan", "Roy Schwartz", "Vidur Joshi", "Sameer Singh", "Noah A. Smith" ], "title": "Knowledge enhanced contextual word representations", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Fabio Petroni", "Tim Rocktäschel", "Patrick Lewis", "Anton Bakhtin", "Yuxiang Wu", "Alexander H Miller", "Sebastian Riedel" ], "title": "Language models as knowledge bases", "venue": null, "year": 1909 }, { "authors": [ "Fabio Petroni", "Aleksandra Piktus", "Angela Fan", "Patrick Lewis", "Majid Yazdani", "Nicola De Cao", "James Thorne", "Yacine Jernite", "Vassilis Plachouras", "Tim Rocktäschel" ], "title": "KILT: a benchmark for knowledge intensive language tasks", "venue": "arXiv preprint arXiv:2009.02252,", "year": 2020 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J. 
Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "arXiv preprint arXiv:1910.10683,", "year": 2019 }, { "authors": [ "Roshan Rao", "Nicholas Bhattacharya", "Neil Thomas", "Yan Duan", "Peter Chen", "John Canny", "Pieter Abbeel", "Yun Song" ], "title": "Evaluating protein transfer learning with TAPE", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Adam Roberts", "Colin Raffel", "Noam Shazeer" ], "title": "How much knowledge can you pack into the parameters of a language model?", "venue": "arXiv preprint arXiv:2002.08910,", "year": 2020 }, { "authors": [ "Dan Roth", "Wen-tau Yih" ], "title": "Probabilistic reasoning for entity & relation recognition", "venue": "In COLING 2002: The 19th International Conference on Computational Linguistics,", "year": 2002 }, { "authors": [ "Sainbayar Sukhbaatar", "arthur szlam", "Jason Weston", "Rob Fergus" ], "title": "End-to-end memory networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Fan-Keng Sun", "Cheng-Hao Ho", "Hung-Yi Lee" ], "title": "LAMOL: Language modeling for lifelong language learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yu Sun", "Shuohuan Wang", "Yu-Kun Li", "Shikun Feng", "Xuyi Chen", "Han Zhang", "Xin Tian", "Danxiang Zhu", "Hao Tian", "Hua Wu" ], "title": "ERNIE: Enhanced representation through knowledge integration", "venue": "arXiv preprint arXiv:1904.09223,", "year": 2019 }, { "authors": [ "Mihai Surdeanu", "Heng Ji" ], "title": "Overview of the English slot filling track at the TAC 2014 knowledge base population evaluation", "venue": null, "year": 2014 }, { "authors": [ "Betty van Aken", "Benjamin Winter", "Alexander Löser", "Felix A Gers" ], "title": "How does BERT answer questions? A layer-wise analysis of transformer representations", "venue": "In Proceedings of the 28th ACM International Conference on Information and Knowledge Management,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Pat Verga", "Haitian Sun", "Livio Baldini Soares", "William W. Cohen" ], "title": "Facts as experts: Adaptable and interpretable neural memory over symbolic knowledge", "venue": "arXiv preprint arXiv:2007.00849,", "year": 2020 }, { "authors": [ "J. Zelle", "R.
Mooney" ], "title": "Learning to parse database queries using inductive logic programming", "venue": "In AAAI/IAAI,", "year": 1996 }, { "authors": [ "Luke S Zettlemoyer", "Michael Collins" ], "title": "Learning to map sentences to logical form: structured classification with probabilistic categorial grammars", "venue": "In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence,", "year": 2005 }, { "authors": [ "Zhengyan Zhang", "Xu Han", "Zhiyuan Liu", "Xin Jiang", "Maosong Sun", "Qun Liu" ], "title": "ERNIE: Enhanced language representation with informative entities", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2019 }, { "authors": [ "Yukun Zhu", "Ryan Kiros", "Rich Zemel", "Ruslan Salakhutdinov", "Raquel Urtasun", "Antonio Torralba", "Sanja Fidler" ], "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Large-scale Transformer based language models (Vaswani et al., 2017; Devlin et al., 2018; Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020) have not only pushed the state of the art on standard natural language processing (NLP) benchmarks such as GLUE and SQuAD, but they have also been crucial for improving various real-world systems (see, e.g., Nayak, 2019; Rao et al., 2019).

Given that these models are pretrained on large corpora of text such as Wikipedia and BookCorpus (Zhu et al., 2015), it is quite conceivable that they are able to implicitly memorize factual knowledge in their large number of parameters. Recent works (Petroni et al., 2019; Roberts et al., 2020) have verified this hypothesis by evaluating the pretrained language models on factual-knowledge-based tasks. This line of work shows that large pretrained Transformer based language models achieve non-trivial performance on various open-domain question answering (QA) tasks that probe the factual knowledge stored in the model parameters.

The aforementioned memorization capability of Transformers opens up many exciting opportunities. In addition to improving generalization with better language understanding, Transformers may also replace or assist traditional knowledge bases (KBs) that are either manually curated or require a significant amount of supervision (Roth & Yih, 2002; Kambhatla, 2004; Surdeanu & Ji, 2014). Different from conventional KBs that explicitly memorize factual knowledge, Transformers memorize knowledge implicitly in their model parameters. As a result, Transformers lack one key advantage of conventional databases: efficiently modifying the factual knowledge stored in the model. Unlike Transformers, conventional databases such as SQL and NoSQL stores, which explicitly hold knowledge in the form of structured tables, key-value pairs, wide columns, graphs, or documents, make updating knowledge straightforward. Knowledge-augmented Transformers, which leverage factual knowledge bases to improve their feature representations, cannot effectively modify their predictions by only updating the symbolic knowledge, as doing so conflicts with the implicit memorization in their parameters (Verga et al., 2020).

This raises a natural question: can Transformers cope with an ever-changing world where knowledge is continuously being added, updated, and deprecated? To answer this question, we propose a new task of explicitly modifying specific factual knowledge in Transformer models while ensuring that model performance does not degrade on the unaltered facts. This task is useful in many scenarios. For example, the factual knowledge stored by the model can become stale over time and needs to be updated periodically, e.g., a sports player may play for different teams over time. Users may ask a Transformer-based assistant model to update certain knowledge (factual or otherwise) that they asked the model to memorize in the past, e.g., their favorite tourist destination. In the context of privacy, one may need to overwrite unintendedly memorized sensitive information without retraining the model (Carlini et al., 2019).
Furthermore, language models are susceptible to various biases present in the large corpora of text used for their training, and such biases may need to be eliminated to ensure fair application of such models in the real world (Bolukbasi et al., 2016; Bordia & Bowman, 2019; Blodgett et al., 2020).

To the best of our knowledge, this is the first work studying reliable and efficient modification of the factual knowledge memorized by Transformers. The paper makes the following contributions.

• We create a new benchmark to evaluate the ability of a candidate method to modify the factual knowledge of a Transformer model as desired while preserving the model’s performance on the unmodified factual knowledge (§ 3.1).

• We formulate knowledge modification as a constrained optimization problem with a constraint on the loss on the unmodified facts, and explore better baseline methods to approximately enforce this constraint (§ 3.3).

• We show that constrained layer-wise fine-tuning is a simple yet effective way to modify the knowledge memorized by Transformers (§ 4).

• We find that it is not necessarily easier to modify factual knowledge in models that employ explicit memory modules, e.g., FaE (Verga et al., 2020), as compared to Transformer models that rely solely on implicit memorization." }, { "heading": "2 RELATED WORKS", "text": "Traditionally, KBs are commonly utilized to store and access relational knowledge in the NLP domain (Ji et al., 2020; Zelle & Mooney, 1996; Zettlemoyer & Collins, 2005, inter alia). However, the recent success of Transformer-based language models on a multitude of NLP tasks has fueled an increasing number of efforts to explore the ability of these language models to serve as unstructured/non-symbolic KBs.

Language models as a source of factual knowledge. To assess the performance of off-the-shelf modern language models as KBs, Petroni et al. (2019) introduced the LAMA (LAnguage Model Analysis) probe, which converts various facts and fact-seeking question-answer pairs into cloze sentences. Petroni et al. (2019) concluded that pretrained BERT (Devlin et al., 2018) exhibits factual knowledge that is competitive with KBs generated using traditional off-the-shelf techniques. Further, Roberts et al. (2020) probed the knowledge within T5 models (Raffel et al., 2019) and found very promising results. Another line of work (Sun et al., 2019; Zhang et al., 2019; Peters et al., 2019) focuses on leveraging readily available structured KBs to further complement the knowledge possessed by language models. Earlier works on retrofitting improve word representation learning with relation information (Faruqui et al., 2015). Recently, there have been attempts to develop novel Transformer models and/or training procedures that leverage both available high-quality KBs and large corpora of (unstructured) text (Dhingra et al., 2019; Guu et al., 2020; Lewis et al., 2020), further broadening the scope of factual knowledge. However, unlike structured KBs, which are accompanied by infrastructure for querying, inferring, or updating facts, neural language models do not possess such capabilities directly. Jiang et al. (2020) explored designs for better prompts to query the knowledge implicitly stored in the model parameters of a neural language model. To the best of our knowledge, however, there has been no work on designing efficient ways of modifying knowledge in a neural language model, which is the focus of our present work.

Memory augmented models.
Multiple recent research efforts augment Transformer models with explicit long-term memory modules to increase their factual knowledge. The use of knowledge-augmented neural networks had been explored in the pre-Transformer era as well (Weston et al., 2014; Sukhbaatar et al., 2015). More recently, in the context of Transformers, Févry et al. (2020) utilized an explicit key-value memory to store entity representations, which are trained along with the rest of the model in an end-to-end manner. Verga et al. (2020) built on Févry et al. (2020) and introduced the Facts as Experts (FaE) model with an explicit symbolic memory of (subject, relation, object) triples based on end-to-end trained entity representations. Notably, one of the motivations behind FaE is the ease of updating knowledge by directly modifying the content of the explicit symbolic memory. However, even though FaE has successfully demonstrated injecting new facts into its knowledge base, it exhibits poor performance when one tries to modify facts that the model encountered during training, due to contradictions between the implicit knowledge of the underlying Transformer model and the explicit content of the symbolic memory (Verga et al., 2020, §5.3). Modifying the value tokens in the datastore of kNN-LM (Khandelwal et al., 2020) is another non-parametric way to update facts. However, this approach tends to cause wrong predictions for all other facts that shared the same object before modification, resulting in low accuracy on the unmodified facts (cf. Appendix F). Thus, our work on modifying the implicit memory of Transformer models also has utility for the task of updating knowledge in memory-augmented Transformer models.

Generalization often requires memorization. In general, without specifically focusing on language models, Feldman (2020) and Feldman & Zhang (2020) have presented both theoretical results and empirical evidence implying that close-to-optimal generalization requires memorization of labels for samples from low-frequency sub-populations. This line of work is further supported by recent efforts on adding a k-NN component to language models to improve their generalization via memorization (Kassner & Schütze, 2020; Khandelwal et al., 2020). We believe that our work on modifying the implicit memories in Transformer models can improve their generalization by boosting their factual knowledge in specific domains.

Memory modification vs. continual learning. Continual learning, with recent extensions to language models (Sun et al., 2020; Liu et al., 2019; Mi et al., 2020; Chuang et al., 2020), aims to learn a new task while preserving performance on previous tasks without access to their data. Similar to continual learning, memory modification also expects the predictions to be updated efficiently (potentially without access to the unmodified facts) while preserving the accuracy on the unmodified facts. Both settings suffer from catastrophic forgetting (Kirkpatrick et al., 2017), but memory modification further requires the model to memorize new facts that conflict with previously learned facts, posing new challenges to existing continual learning approaches; e.g., we may need to update the Gradient Episodic Memory (Lopez-Paz & Ranzato, 2017) or the Conceptors (Liu et al., 2019). Furthermore, our benchmark and the evaluated models are at larger scales than in the works mentioned above, posing a stricter requirement on the scalability of the proposed solution."
}, { "heading": "3 MODIFYING IMPLICIT FACTUAL KNOWLEDGE OF TRANSFORMER MODELS", "text": "In this section, we define a new knowledge modification task. We then present several approaches to solve this task with different computational costs. We focus on a constrained optimization-based approach that is highly effective and efficient." }, { "heading": "3.1 MODIFICATION OF IMPLICIT KNOWLEDGE", "text": "We propose a new task of modifying specific pieces of knowledge in a model that are stored implicitly in its weights. Specifically, we would like to change the model's weights in a way such that a pre-selected subset of its knowledge is updated, while the rest of its knowledge is preserved. Such modifications can be challenging, as each fact is stored non-locally across a large number of weights and each weight can affect a large number of implicitly memorized facts.

More formally, a pretrained Transformer-based language model is defined by its parameters $\theta_0 \in \Theta$, which encode a collection of facts $\mathcal{F}$ that the model has implicitly memorized. We would like to update a desired subset of facts $\mathcal{S} \subset \mathcal{F}$ to a new set of facts $\mathcal{M}$. At the end of the modification process, we should arrive at a model $\theta_{\text{new}}$ that implicitly stores the collection $\mathcal{F}' = (\mathcal{F} \setminus \mathcal{S}) \cup \mathcal{M}$. Ideally, the new model $\theta_{\text{new}}$ not only stores the desired modified knowledge, but also retains the performance of $\theta_0$ on the unmodified knowledge $\mathcal{F} \setminus \mathcal{S}$. For example, a Transformer model may have memorized ‘Eliud Kipchoge’ given the context ‘The marathon world record is held by [MASK]’. When another athlete breaks this record, we will need to update this specific piece of knowledge while keeping most of the remaining knowledge intact." }, { "heading": "3.2 BASELINE APPROACHES", "text": "In this subsection we discuss several natural baseline approaches and set up our notation.

Retraining the model on the modified training set. A natural and reliable approach to solve the aforementioned knowledge modification task is to update all the training data, including both the pretraining corpora and the fine-tuning dataset, to be consistent with the new facts, and then fine-tune the model on the modified training set, or even train a new model from scratch to potentially obtain a higher success rate. This approach, however, is not practical for modifying a small amount of knowledge: identifying and updating the modified facts in the unstructured datasets is highly non-trivial, and retraining the model from scratch is too expensive. Further, the test performance on the modified facts should be approximately the same as the test performance on other facts in expectation, which means we may not achieve high accuracy on the modified facts if the model does not have high overall accuracy to begin with.

Fine-tuning on modified facts. Another natural and efficient approach is to fine-tune the model on the supporting evidences for the modified facts $D_{\mathcal{M}}$. Such a collection of evidence is not necessarily from the training set; it can be constructed from the modified facts just to change the model's prediction. With $\theta_0$ as the initialization, we solve

$$\min_{\theta \in \Theta} \; \frac{1}{m} \sum_{x \in D_{\mathcal{M}}} L(x; \theta), \qquad (1)$$

where $m = |D_{\mathcal{M}}|$ denotes the number of supporting evidences corresponding to the facts to be modified, and $L(x; \theta)$ denotes the per-instance loss employed during the fine-tuning process. This approach indeed achieves high accuracy on the modified facts; a minimal sketch of this loop is given below.
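The following is a minimal PyTorch-style sketch of the unconstrained objective (1). It assumes a Hugging Face-style masked LM whose forward pass returns a `.loss`, and a dataloader `modified_loader` over the evidences $D_{\mathcal{M}}$; all names here are illustrative, not the authors' released code.

```python
# Minimal sketch of objective (1): fine-tune only on the supporting
# evidences of the modified facts. `model` and `modified_loader` are
# assumed names; batches carry input_ids, attention_mask, and labels
# where labels hold the new object token at each [MASK] position.
import torch

def finetune_on_modified(model, modified_loader, epochs=10, lr=1e-5):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in modified_loader:  # supporting evidences of D_M
            loss = model(**batch).loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```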
But due to overfitting and catastrophic forgetting, the model's knowledge about the unmodified facts $\mathcal{F} \setminus \mathcal{S}$ can significantly degrade, as we demonstrate in our experimental studies (cf. § 4.5.1).

Fine-tuning on a mixture of modified and unmodified batches. To obtain a higher-than-average accuracy on $\mathcal{M}$ while preserving the accuracy on $\mathcal{F} \setminus \mathcal{S}$, another natural baseline is to use evidences of both $\mathcal{M}$ and $\mathcal{F} \setminus \mathcal{S}$ in every iteration to fine-tune the model. As detailed in Appendix B, this biases the optimization trajectory towards the modified facts. Due to such imbalance, catastrophic forgetting still happens when using only mixed batches in our preliminary experiments. However, when used together with constrained fine-tuning (cf. § 3.3), this approach can improve the results (cf. Table 4)." }, { "heading": "3.3 CONSTRAINED FINE-TUNING ON SUPPORTING EVIDENCES FOR MODIFIED FACTS", "text": "We explore a simpler yet more effective approach for knowledge modification, where we fine-tune the original model only on the modified facts $D_{\mathcal{M}}$ while using explicit constraints on the weights $\theta$ to achieve minimum interference with the unmodified facts.¹ With a complexity that scales only with the number of modifications, this approach works surprisingly well at memorizing the new knowledge while preserving the unmodified facts.

¹We also extend constrained fine-tuning to the mixture of modified and unmodified batches (cf. Appendix B).

In the ideal scenario, instead of (1), the model should learn the new facts while keeping the loss small on the unmodified facts:

$$\min_{\theta \in \Theta} \; \frac{1}{m} \sum_{x \in D_{\mathcal{M}}} L(x; \theta) \quad \text{subject to} \quad \frac{1}{n} \sum_{x' \in D_{\mathcal{F} \setminus \mathcal{S}}} \big( L(x'; \theta) - L(x'; \theta_0) \big) \le \delta. \qquad (2)$$

With a small positive constant $\delta$, we aim to add a constraint on the model's performance on all $n = |D_{\mathcal{F} \setminus \mathcal{S}}|$ training samples that provide supporting evidences for the unmodified facts $\mathcal{F} \setminus \mathcal{S}$. However, it is expensive to enforce this constraint, so we approximate it using the local continuity of the loss around $\theta_0$ to obtain the following program:

$$\min_{\theta \in \Theta} \; \frac{1}{m} \sum_{x \in D_{\mathcal{M}}} L(x; \theta) \quad \text{subject to} \quad \|\theta - \theta_0\| \le \delta, \qquad (3)$$

where $\|\cdot\|$ denotes any suitable norm in the parameter space. We tried the $\ell_2$ and $\ell_\infty$ norms in our experiments, and the $\ell_\infty$ norm consistently leads to more stable results for knowledge modification. We solve this problem with projected gradient descent; see Appendix D for details. We also provide a potentially better yet more costly alternative using the Fisher information in Appendix C.

Note that if we use a very small $\delta$, the model will not change much, so the accuracy on the modified facts will be low while the accuracy on the unmodified facts will remain high. If $\delta$ is too large, we are essentially solving (1), which results in almost zero accuracy on the unmodified facts. Therefore, $\delta$ is an important design parameter that needs to be chosen carefully. A minimal sketch of the resulting constrained update is given below.
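The sketch below extends the previous loop with the $\ell_\infty$ projection of (3): after every optimizer step, each weight is clamped back into a box of radius $\delta$ around its pretrained value $\theta_0$. This is a hedged illustration of the procedure (cf. equation (9) in Appendix D), not the authors' released implementation.

```python
# Sketch of l_inf-constrained fine-tuning (3): optimize on D_M, then project
# every parameter into {theta : ||theta - theta0||_inf <= delta}.
import torch

def constrained_finetune(model, modified_loader, delta=1e-3, epochs=10, lr=1e-5):
    theta0 = {n: p.detach().clone() for n, p in model.named_parameters()}
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in modified_loader:
            loss = model(**batch).loss
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():  # projection step, cf. equation (9)
                for n, p in model.named_parameters():
                    p.copy_(theta0[n] + (p - theta0[n]).clamp(-delta, delta))
    return model
```

Restricting the projection-and-update loop over `named_parameters()` to a single Transformer block yields the layer-wise variant discussed next.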
Fine-tuning specific Transformer blocks. When fine-tuning large models on a small amount of data, a commonly used approach is to fine-tune only a small portion of the model (e.g., one layer) while keeping the rest of the model frozen. Note that, with an appropriately chosen $\delta$ to avoid overfitting, full-model fine-tuning and single-layer fine-tuning explore very different functional spaces, and the latter is not contained in the former.

We found that fine-tuning the initial and final Transformer blocks results in better adaptation to the modified facts and better preservation of performance on the unmodified facts (cf. § 4). This approach, interestingly, outperforms the case when the whole network is updated. This is partially consistent with Houlsby et al. (2019), who demonstrated that fine-tuning the top layers of BERT-Base is the best approach for certain tasks, except that we are also interested in retaining the memorization of the unmodified facts. For more work related to the roles of different layers on QA tasks, see, e.g., van Aken et al. (2019); Cao et al. (2020). Here, we found that sometimes the initial layers give better results.

4 EXPERIMENTS

We now conduct a systematic experimental evaluation of different approaches to modifying the knowledge implicitly stored in the parameters of the Transformer model. Similar to prior works on probing the knowledge of language models (Petroni et al., 2019; Roberts et al., 2020), we rely on factual knowledge-based datasets. From two such datasets, we create two new benchmarks for the knowledge modification task (cf. § 4.1). We compare the performance of the constrained finetuning approach against several baselines (cf. § 3.2) on models such as BERT (Devlin et al., 2018) and ALBERT (Lan et al., 2019). We also test the FaE model (Verga et al., 2020), modifying its implicit and explicit symbolic memory. A summary of the best results of each model is listed in Table 2." }, { "heading": "4.1 DATASETS AND BENCHMARKS", "text": "We construct the benchmark of modified facts from two datasets, T-REx (Elsahar et al., 2018) and Zero-shot Relation Extraction (zsRE) (Levy et al., 2017). Each fact, in the form of a (subject, relation, object) triple, is supported by multiple evidences. We modify a relatively small subset of facts by changing their objects and consistently updating all their evidences. For illustration, consider an example from the zsRE dataset:

Fact: (Della Pia Glacier, continent, Antarctica)

Masked evidence (training): What is the continent that Della Pia Glacier is located? [MASK]

Masked evidence (test): What continent is Della Pia Glacier found on? [MASK]

The masked word here is “Antarctica”. When we modify this fact, we consistently replace its object “Antarctica” with a similar entity, e.g., “Asia”, which is sampled from all objects that share the same relation, according to their frequency in the training set. Note that the training evidence is phrased differently from the test question, reducing the impact of over-fitting to spurious correlations. Please refer to Appendix A for more details of the benchmark construction process." }, { "heading": "4.2 PERFORMANCE MEASURE", "text": "As the model updates its memory with the modified facts, its memory of the unmodified facts may suffer undesirable changes. For example, finetuning a pretrained model on only the modified facts without constraints gives high accuracy on them, but almost zero accuracy on the other facts. Therefore, an ideal metric should take both of these accuracies into account. In this work, we use their average as the performance metric:

$$\bar{A} = \big( A_{\mathcal{M}} + A_{\mathcal{F} \setminus \mathcal{S}} \big) / 2, \qquad (4)$$

where $A_{\mathcal{M}}$ is the accuracy on the modified facts and $A_{\mathcal{F} \setminus \mathcal{S}}$ is the accuracy on the unmodified facts. The trade-off between $A_{\mathcal{M}}$ and $A_{\mathcal{F} \setminus \mathcal{S}}$ can be strongly affected by certain hyperparameters, such as the constraint radius $\delta$ (cf. (3)) in the constrained optimization approach. In such cases we select the hyperparameter that optimizes $\bar{A}$, as in the sketch below.
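A small sketch of how this selection criterion can be computed; the `eval_accuracy` helper is an assumed name, not part of the paper:

```python
# Sketch of the selection metric (4): the average of the accuracies on the
# modified and unmodified test facts. `eval_accuracy` is an assumed helper
# that scores exact-match predictions for the [MASK] tokens.
def average_accuracy(model, modified_test, unmodified_test, eval_accuracy):
    acc_mod = eval_accuracy(model, modified_test)        # A_M
    acc_unmod = eval_accuracy(model, unmodified_test)    # A_{F \ S}
    return 0.5 * (acc_mod + acc_unmod)                   # A-bar
```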
" }, { "heading": "4.3 MODEL ARCHITECTURES", "text": "We work with three Transformer-based language models for our experimental study:

BERT (Devlin et al., 2018). We evaluate both the uncased BERT-Base and BERT-Large models without whole-word-mask training, as released by the official repository.² The two models have 12/24 Transformer blocks with hidden dimension 768/1024 and 110M/340M parameters, respectively.

²https://github.com/google-research/bert.git

ALBERT (Lan et al., 2019). We only evaluate the ALBERT-XXLarge model, which is the largest ALBERT model from Lan et al. (2019). It has a total of 235M parameters. The weights are shared across its Transformer blocks, so the only option here is to finetune all of its blocks on the modified facts.

FaE (Verga et al., 2020). FaE adds symbolic memories to BERT-Base. It inherits the entity memory module from EaE (Févry et al., 2020) and adopts an additional fact memory to enhance the representation of the facts. The EaE part alone has 367M parameters, comparable to BERT-Large, so FaE is even larger than BERT-Large." }, { "heading": "4.4 NOTATIONS AND SETUPS", "text": "We start from an off-the-shelf language model pretrained on a large corpus by default. Afterward, we often finetune our model first on the unmodified T-REx or zsRE. This enables the model to achieve reasonable performance on all the original facts before modification. BERT-Base, BERT-Large, ALBERT-XXLarge, and FaE achieve accuracies of 50.50%, 51.39%, 47.96%, and 60.38% after this process. We use FT to denote such a finetuned model.

There are two natural ways to train a model to update specific memorized facts. The first approach is to train it only on the modified facts $D_{\mathcal{M}}$, which we denote by FTM. We can also train it with a mixture of modified and unmodified facts, sampled from $D_{\mathcal{F}'}$ in each minibatch. We denote this setting as FTA, since we have access to all facts." }, { "heading": "4.5 RESULTS", "text": "We now present the results for different approaches and models on our new knowledge modification benchmarks. The best results are summarized in Table 2. A major theme across this section is combating catastrophic forgetting of unmodified facts when we update the model on the modified facts. We compared multiple ways to alleviate this. Finetuning on the modified facts (FTM) with $\ell_\infty$ constraints (cf. (3)) on the model's weights seems to work better than other natural strategies, such as finetuning on a mixture of modified and unmodified facts (FTA). Furthermore, this strategy works even better when applied only to specific layers of the model rather than the full model. In this section we discuss various aspects of these findings with extensive ablation studies." }, { "heading": "4.5.1 FINETUNING ON MODIFIED FACTS WITHOUT CONSTRAINTS", "text": "For the T-REx benchmark and BERT-Base, Table 3 presents the results for finetuning on only the modified facts without any constraints, i.e., we employ (1), which is equivalent to constrained finetuning (3) with $\delta = \infty$. Note that these results are for a setting where we modify $|\mathcal{M}| = 32$ facts from the T-REx benchmark. We present results for modifying a randomly initialized model (RI+FTM), a pretrained model (FTM), and a finetuned pretrained model (FT+FTM) as defined in § 4.4.

The RI models are not pretrained, so they have no language understanding ability to begin with. Thus, with limited training data, they exhibit poor accuracy on both the modified and unmodified facts. In contrast, both FTM and FT+FTM models result in non-trivial accuracy on the modified facts. However, they forget unmodified facts.
Before FTM, the pretrained model had an accuracy of 28.85% on all the facts, and finetuning on the unmodified dataset (FT) improved it to 50.50%. Unconstrained FTM caused these numbers to degrade to the $A_{\mathcal{F} \setminus \mathcal{S}}$ values reported in Table 3.

Another takeaway from Table 3 is that training different layers in a Transformer leads to different outcomes for the knowledge modification task, which also depends on the state of the original model. In Appendix E, we present additional results on the role of different layers for knowledge modification with different numbers of modified facts." }, { "heading": "4.5.2 FINETUNING ON MODIFIED FACTS WITH CONSTRAINTS", "text": "As observed in § 4.5.1, unconstrained finetuning on the modified facts leads to catastrophic forgetting of the unmodified facts. This happens even when we modify a single layer of BERT-Base. As demonstrated in Figures 1 to 3, using a simple $\ell_\infty$ constraint (cf. (3)) on the model’s weights in the modification step (FTM) works surprisingly well in controlling this issue. Recall that we select the constraint strength $\delta$ to maximize the average accuracy (cf. § 4.2).

These results also demonstrate another interesting effect: the best performances may come from modifying specific layers of the Transformer, rather than the entire model.³ This conclusion comes from combining the results of Figures 1 and 2, as well as the results in Figure 3.

³This is not possible for ALBERT, as it employs parameter sharing across layers.

Applying a constrained FTM strategy to a single Transformer block ensures good accuracy for both modified and unmodified facts, as long as we modify a small number of facts. However, as the number of modified facts increases, performance degrades, with the accuracy on unmodified facts taking larger hits. In Figure 3, we observe similar results with BERT-Base on the zsRE-based benchmark. We believe this is due to the small model capacity resulting from modifying only one layer.

The best layer for modification also changes with the number of modified facts and the initial state of the model. From Figures 2 and 3, we can see that in the FT+FTM setting, as the number of modified facts increases, the block with the highest $\bar{A}$ changes from the last one (block 11 or 23) to the first one (block 0) for both BERT-Base and BERT-Large. From Table 2, we can see that the best block of BERT-Base for modifying 32 facts changed from block 11 to block 0 when starting constrained finetuning from a pretrained model instead of a finetuned model." }, { "heading": "4.5.3 FINETUNING ON BOTH MODIFIED AND UNMODIFIED FACTS WITH CONSTRAINTS", "text": "One obvious reason for forgetting the unmodified facts is that they are excluded from the modification training. Thus, we explore another natural baseline from § 3.2, where we perform constrained finetuning based on a mixture of modified and unmodified facts, i.e., FTA in § 4.4. In each minibatch, we use the same number of evidences for modified and unmodified facts. This process implicitly puts more weight on the modified facts, since they are usually the minority (cf. Appendix B).⁴

The results for applying FTA to different Transformer blocks of BERT-Base on the T-REx benchmark are shown in Table 4. This approach improves the best results, but only by a small margin. Moreover, it performs worse in terms of the weighted accuracy when finetuning the 0th or 5th block.

⁴Note that if we randomly sample minibatches from $D_{\mathcal{F}'}$, a finetuned pretrained BERT-Base achieves only ∼50% accuracy on the modified facts after training, similar to its accuracy on all facts before modification.
These results suggest that when we need to achieve high accuracy on the modified facts, due to the biased optimization trajectory, forgetting some of the unmodified facts might be inevitable even when the model can access them, at least when the weight changes are uniformly constrained." }, { "heading": "4.6 MODIFYING SYMBOLIC MEMORIES IN A FINETUNED FAE MODEL", "text": "An important advantage of models with symbolic memory modules, such as FaE (Verga et al., 2020), is that they can be easily updated by modifying the symbolic links. However, since these models rely on both the contextual representation and the symbolic links, inconsistency between the implicit memory (realized via the contextual representation) and the explicit symbolic memory can result in wrong predictions. In this section, we show that modifying the implicit knowledge is essential for successfully updating these models. We also give results with kNN-LM in Appendix F.

FaE has three key components: a BERT-style Transformer model, symbolic memory modules, and model weights connecting the Transformer model with the symbolic memory. We experiment with modifying various combinations of these components as a means to realize knowledge modification (cf. Table 5). Our results show that finetuning the model parameters of FaE, in addition to its symbolic memory module, is necessary for it to obtain high accuracy on the modified facts. Moreover, with constrained finetuning, FaE inevitably experiences a drop in accuracy on the unmodified facts $\mathcal{F} \setminus \mathcal{S}$, similar to the BERT models without explicit memory modules. After modifying the symbolic links stored in its symbolic memory modules, FaE achieves 46.88% accuracy on the modified facts, which is higher than the 30% reported by Verga et al. (2020), while its accuracy on the unmodified facts stays unchanged at 60.38%. We find that finetuning only the layers that directly map the symbolic memory to the predictions results in the best trade-off (denoted as AWT in Table 5). In particular, after finetuning (AWT), FaE reaches an $A_{\mathcal{M}}$ of 75.00% with a drop of 3.00% in $A_{\mathcal{F} \setminus \mathcal{S}}$, and an $A_{\mathcal{M}}$ of 85.00% with a drop of 6.5% in $A_{\mathcal{F} \setminus \mathcal{S}}$ using a slightly larger $\delta$. In contrast, BERT-Large can achieve an $A_{\mathcal{M}}$ of 77.50% with a drop of less than 4.00% in $A_{\mathcal{F} \setminus \mathcal{S}}$. This indicates that FaE with symbolic memory is not necessarily better than BERT-Large at the knowledge modification task." }, { "heading": "5 CONCLUSION", "text": "We propose a novel task of modifying the factual knowledge implicitly stored in the parameters of a Transformer model. For this task, we introduce two benchmarks based on the T-REx and zsRE datasets. We further establish the effectiveness of the constrained finetuning approach on the knowledge modification task. We provide comprehensive evaluations for models with and without explicit memory modules, revealing the effect of the initial parameters, the number of modified facts, and different Transformer blocks on the difficulty of modification.
Furthermore, we find that modifying the Transformer parameters is still necessary for networks with symbolic memory.

While we have explored knowledge modification for models with symbolic fact memory, a more comprehensive exploration of mechanisms to achieve reliable and consistent modification of both the implicit and explicit knowledge of such models is an interesting future direction. Another natural direction for future work is to understand the implications of modifying facts for multi-hop logical inference, i.e., whether the generalization aspect interacts well with modified facts." }, { "heading": "Appendix for “Modifying Memories in Transformer Models”", "text": "" }, { "heading": "A DATASET DETAILS", "text": "We aim to construct datasets with a collection of facts $\mathcal{F}$ along with modifications $\mathcal{M}$ for a subset of facts $\mathcal{S} \subset \mathcal{F}$. We take two fact-based datasets, namely T-REx (Elsahar et al., 2018) and Zero-shot Relation Extraction (zsRE) (Levy et al., 2017), as the source of the original facts $\mathcal{F}$. These datasets contain a large number of facts (cf. Table 1), with each fact supported by potentially multiple evidences in the form of natural-language masked sentences or cloze-type QA pairs, in which the object of the fact is masked out to serve as a cloze question. This allows a model to memorize a given set of facts when such supporting evidences are provided for training. In our experiments, the model learns to predict the masked-out object and acquires the fact either via memorization of facts from the pretraining datasets (Petroni et al., 2019) or via supervised learning on the training sets of T-REx or zsRE. The T-REx and zsRE datasets indeed provide different kinds of questions about the same fact. At test time, the model's understanding of a fact is assessed by presenting a cloze-type statement to the model. Note that it is important to test the model on a given fact using probes that differ from the supporting evidences for that fact in the training set. This is necessary because the model may respond with the correct answer simply by overfitting to spurious correlations present in the pretraining or fine-tuning dataset.

We develop two benchmarks for the knowledge modification task based on T-REx and zsRE. To enable better comparisons with existing works on probing the implicit memorization in language models (Petroni et al., 2019; Roberts et al., 2020), we use the versions of T-REx and zsRE from the LAMA (Petroni et al., 2019) and KILT (Petroni et al., 2020) benchmarks, respectively. To modify $m$ facts $\mathcal{S}$ from $\mathcal{F}$, we update the objects in all the cloze-type statements for those facts, i.e., the labels of the [MASK] tokens, in both the training and test sets of T-REx and zsRE. The modified object is sampled from the collection of all objects that are connected to the same relation, according to its frequency in the training set. For example, if the original supporting evidence appears in the form of a QA pair, with the question being “Which country was Charles Darwin born? [MASK]”, we modify the label for the [MASK] token into a random object that appears as someone's birthplace in the training set, other than United Kingdom. A sketch of this object-resampling step is given below.
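The following is a small, self-contained sketch of the resampling procedure just described; the data structures and function names are ours and only illustrate the described step.

```python
# Sketch of the object-resampling step: the new object is drawn from all
# objects of the same relation, proportionally to training-set frequency,
# excluding the original object. Data structures are illustrative.
import random
from collections import Counter

def resample_objects(facts, selected_ids, seed=0):
    """facts: list of (subject, relation, object) triples; selected_ids: facts to modify."""
    rng = random.Random(seed)
    per_relation = {}
    for _, rel, obj in facts:                  # object frequencies per relation
        per_relation.setdefault(rel, Counter())[obj] += 1
    modified = {}
    for i in selected_ids:
        subj, rel, obj = facts[i]
        pool = [(o, c) for o, c in per_relation[rel].items() if o != obj]
        objects, weights = zip(*pool)          # assumes >1 object per relation
        modified[i] = (subj, rel, rng.choices(objects, weights=weights, k=1)[0])
    return modified
```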
T-REx dataset. We consider 41 Wikipedia relations with a total of 34039 facts from Petroni et al. (2019). All the object labels in the dataset can be represented by a single token. In this version of the dataset, each fact has at least one supporting sentence (evidence) from Wikipedia, with the object replaced by a [MASK] token, plus a template for each relation to construct an additional cloze-type question. We use the masked sentences and the objects from Wikipedia as the training set, and the cloze-type questions constructed from the templates as the test set.

One example of the T-REx dataset:

Fact: (Natalie Lowe, place of birth, Sydney)

Masked evidence (training): Natalie Lowe (born 15 August 1980), is a professional dancer from [MASK] who has ballroom dancing expertise.

Masked evidence (test): Natalie Lowe was born in [MASK].

For modification, we replace the object Sydney with another random object that appears as the birthplace of another subject, e.g., London, according to the frequency of the birthplace objects in the training set.

Zero-shot Relation Extraction (zsRE) dataset. zsRE is a relation extraction dataset originally formulated as a reading comprehension problem of matching each question with a sentence from Wikipedia (Levy et al., 2017). We take the reformulated version of zsRE from KILT (Petroni et al., 2020), which includes multiple template questions for most of the facts. Since the relations in the different splits of KILT do not overlap, we construct the modification benchmark from only the training set of zsRE, and split the questions for each fact to obtain the training and test sets for modification. For each fact, we randomly put two of its questions into the test set if it has more than three questions, preserve the question in the training set if it has only one question, and put one question into the test set otherwise. When applying the uncased BERT tokenizer, we limit the length of the input sequence to be no longer than 512 tokens and the length of the answer to be no longer than 20 tokens. We treat a prediction as correct only when all the predicted tokens match the label. One example from the zsRE dataset:

Fact: (Della Pia Glacier, continent, Antarctica)

Masked evidence (training): What is the continent that Della Pia Glacier is located? [MASK]

Masked evidence (test): What continent is Della Pia Glacier found on? [MASK]" }, { "heading": "B FINE-TUNING ON A MIXTURE OF MODIFIED AND UNMODIFIED FACTS", "text": "We explore the constrained fine-tuning approach for the knowledge modification task on the T-REx benchmark. Recall that $D_{\mathcal{M}}$ and $D_{\mathcal{F} \setminus \mathcal{S}}$ denote the supporting evidences for the modified facts $\mathcal{M}$ and the unmodified facts $\mathcal{F} \setminus \mathcal{S}$, respectively. The constrained optimization problem becomes

$$\min_{\theta \in \Theta} \; \frac{1}{|D_{\mathcal{M}}|} \sum_{x \in D_{\mathcal{M}}} L(x; \theta) + \frac{1}{|D_{\mathcal{F} \setminus \mathcal{S}}|} \sum_{x' \in D_{\mathcal{F} \setminus \mathcal{S}}} L(x'; \theta) \quad \text{subject to} \quad \|\theta - \theta_0\| \le \delta. \qquad (5)$$

Table 4 presents the results for the setting where $|\mathcal{M}| = 512$. We train the model for 10 epochs with a minibatch size of 128, which results in a total of 112 iterations per epoch on $D_{\mathcal{M}}$. In each iteration, when using the unmodified training samples, we additionally sample 128 samples from $D_{\mathcal{F} \setminus \mathcal{S}}$ and compute the gradient of the averaged loss based on the 256 samples. This effectively uses around 10% of the samples of $D_{\mathcal{F} \setminus \mathcal{S}}$. Such a mixture of modified and unmodified supporting evidence in every iteration is supposed to achieve high accuracy on $\mathcal{M}$ while also preserving the accuracy on $\mathcal{F} \setminus \mathcal{S}$; a sketch of this batch construction is given below.
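One plausible implementation of these mixed minibatches, with assumed loader names and the batch size from the text:

```python
# Sketch of the mixed minibatches of Appendix B: each of the 112 iterations
# per epoch pairs 128 modified-fact samples with 128 unmodified samples.
# Loader names are illustrative.
import itertools
import torch

def mixed_batches(modified_loader, unmodified_loader):
    unmodified = itertools.cycle(unmodified_loader)   # stream of unmodified evidences
    for mod_batch in modified_loader:                 # one full pass over D_M per epoch
        unmod_batch = next(unmodified)
        yield {key: torch.cat([mod_batch[key], unmod_batch[key]], dim=0)
               for key in mod_batch}                  # 128 + 128 = 256 samples
```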
However, as we observe in Table 4, there is no significant improvement from using such mixed minibatches. Though 50% of the training samples in each iteration are unmodified evidences, the optimizer repeatedly loops over $D_{\mathcal{M}}$, which effectively makes the model about 10 times more biased towards minimizing the expected loss on $D_{\mathcal{M}}$ (as we train for 10 epochs) than on $D_{\mathcal{F} \setminus \mathcal{S}}$. Such a bias can be alleviated by increasing the ratio of unmodified data in each minibatch, but there would be no guarantee that the model achieves the same level of accuracy on $D_{\mathcal{M}}$, even if it is able to improve the accuracy on the unmodified facts." }, { "heading": "C THE SMALL MODIFICATION LIMIT", "text": "In this section we theoretically discuss the small modification limit of the loss constraint in (2), reproduced here:

$$\min_{\theta \in \Theta} \; \frac{1}{m} \sum_{x \in D_{\mathcal{M}}} L(x; \theta) \quad \text{subject to} \quad \frac{1}{n} \sum_{x' \in D_{\mathcal{F} \setminus \mathcal{S}}} \big( L(x'; \theta) - L(x'; \theta_0) \big) \le \delta. \qquad (6)$$

It is expensive to evaluate the constraint in (6) over the entire $D_{\mathcal{F} \setminus \mathcal{S}}$. But in the limit where only a small number of facts are modified and the changes to the weights are small, the constraint simplifies to

$$\sum_{ij} \Delta\theta_i \, \Delta\theta_j \, \frac{1}{2n} \Big( \frac{\partial}{\partial \theta_i} \frac{\partial}{\partial \theta_j} \sum_{x' \in D_{\mathcal{F} \setminus \mathcal{S}}} L(x'; \theta_0) \Big) + O(\Delta\theta^3) \le \delta, \qquad (7)$$

where $\Delta\theta \equiv \theta - \theta_0$. Here, because the number of modified facts is small, we can assume that we are still at a minimum of the loss function with respect to the unmodified facts. Thus, the linear term in $\Delta\theta$ vanishes and the second-order term dominates.

If we use the cross-entropy loss, then the quantity in the bracket in (7) is the Fisher metric. Even though the Fisher metric only needs to be computed once, it is still expensive, as it is difficult to parallelize this computation across samples. We experimented with an approximation of the Fisher information computed with batch size 128, and found that it did not outperform the $\ell_\infty$ norm constraint of (3). We leave the detailed exploration of the Fisher metric for the memory modification task to future work." }, { "heading": "D SOLVING CONSTRAINED OPTIMIZATION WITH PROJECTED GRADIENT DESCENT", "text": "Algorithm 1 Adam with norm constraint
1: Input: learning rates $\{\eta_t\}_{t=1}^{T}$, hyperparameters $0 < \beta_1 < 1$, $0 < \beta_2 < 1$, $\epsilon > 0$, $\delta > 0$, initial parameter $\theta_0$
2: Set $m_0 = v_0 = 0$
3: for $t = 1$ to $T$ do
4:   Draw samples $S_t$ from the training set
5:   Compute $g_t = \frac{1}{|S_t|} \sum_{x_k \in S_t} \nabla L(x_k; \theta_{t-1})$
6:   $m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t$
7:   $v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$
8:   $\tilde{\theta}_t = \theta_{t-1} - \eta_t \frac{\sqrt{1 - \beta_2^t}}{1 - \beta_1^t} \cdot \frac{m_t}{\sqrt{v_t} + \epsilon}$
9:   $\theta_t = \Pi_{\|\theta - \theta_0\| \le \delta}(\tilde{\theta}_t)$

Projected gradient descent projects the iterates into the constraint set after each gradient step. In particular, the projection step simply finds the nearest point within the constraint set to the current iterate. For the $\ell_2$ norm constraint, the constraint set is $\{\theta : \|\theta - \theta_0\|_2 \le \delta\}$, and the projection operation is

$$\Pi_{\|\theta - \theta_0\|_2 \le \delta}(\theta) = \theta_0 + (\theta - \theta_0) \min\Big\{ \frac{\delta}{\|\theta - \theta_0\|_2}, 1 \Big\}. \qquad (8)$$

For the $\ell_\infty$ norm constraint, the constraint set is $\{\theta : \|\theta - \theta_0\|_\infty \le \delta\}$, and the projection operation is

$$\Pi_{\|\theta - \theta_0\|_\infty \le \delta}(\theta) = \theta_0 + \min\big\{ \max\{\theta - \theta_0, -\delta\}, \delta \big\}, \qquad (9)$$

where the max and min operations are applied element-wise. In our implementation, we use Adam for the gradient step, as shown in Algorithm 1.
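For completeness, the following is a runnable NumPy counterpart of the two projection operators (8) and (9) used in line 9 of Algorithm 1; the function names are ours, not the paper's.

```python
# Projections onto the l2 ball (8) and the l_inf box (9) around theta0.
import numpy as np

def project_l2(theta, theta0, delta):
    diff = theta - theta0
    norm = np.linalg.norm(diff)          # ||theta - theta0||_2
    if norm <= delta or norm == 0.0:
        return theta.copy()              # already inside the ball
    return theta0 + diff * (delta / norm)  # scale back onto the ball surface

def project_linf(theta, theta0, delta):
    # element-wise clipping implements min{max{theta - theta0, -delta}, delta}
    return theta0 + np.clip(theta - theta0, -delta, delta)
```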
" }, { "heading": "E ADDITIONAL RESULTS FOR FINE-TUNING WITHOUT CONSTRAINTS", "text": "We present additional results for fine-tuning without constraints in Figure 4." }, { "heading": "F KNN-LM FOR MODIFICATION?", "text": "kNN-LM (Khandelwal et al., 2020) is originally designed to enhance autoregressive language models with a simple datastore. The datastore is a key-value database, where the keys are prefix embeddings and the values are the tokens that follow those prefixes. During inference, the distribution of the next word is defined as an interpolation between the language model's predictions and a term that decreases with the kNN distances. Without any further training of the language model, kNN-LM improves the results on several language generation datasets.

In this paper, we focus on masked language models like BERT. Since we are interested in predicting the [MASK] token, the datastore of the kNN-LM in our setting should be constructed with the keys being the contextual embeddings of the [MASK] tokens from the supporting evidences in the training set, denoted as $c(x; \theta_0)$, and the values being the labels of these [MASK] tokens, which are just the object tokens $y$. The datastore can be constructed on the entire training set, or only for the modified facts so as to change the model's predictions. Here we focus on the second approach. Specifically, let $f(x; \theta_0)$ be the prediction of the original model (e.g., a pretrained BERT-Base). For a given contextual embedding $c(x; \theta_0)$, we use the prediction from its nearest neighbor in the datastore only when the distance to the nearest neighbor is smaller than a threshold $\epsilon$ in the contextual embedding space. Therefore, the model's prediction is defined as

$$f_{\text{nn}}(x; \theta_0, \mathcal{M}) = \begin{cases} \arg\min_{\{y' \mid (z, y') \in D_{\mathcal{M}}\}} \|c(x; \theta_0) - c(z; \theta_0)\|_2 & \text{if } d(x; \theta_0, \mathcal{M}) < \epsilon, \\ f(x; \theta_0) & \text{otherwise,} \end{cases} \qquad (10)$$

where $d(x; \theta_0, \mathcal{M}) = \min_{(z, y') \in D_{\mathcal{M}}} \|c(x; \theta_0) - c(z; \theta_0)\|_2$. A sketch of this decision rule is given below.
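Equation (10) can be written out directly; below is a small NumPy sketch of this override rule, with assumed helpers `embed` (computing $c(\cdot; \theta_0)$) and `model_predict` (computing $f(\cdot; \theta_0)$):

```python
# Sketch of the nearest-neighbor override in (10): answer from the
# modified-fact datastore when the [MASK] embedding is within eps of a
# stored key, otherwise fall back to the original model.
import numpy as np

def knn_override(x, model_predict, embed, keys, values, eps):
    """keys: (N, d) array of stored [MASK] embeddings; values: stored object tokens."""
    q = embed(x)                                # c(x; theta_0)
    dists = np.linalg.norm(keys - q, axis=1)    # distances to all stored keys
    nearest = int(np.argmin(dists))
    if dists[nearest] < eps:                    # d(x; theta_0, M) < eps
        return values[nearest]                  # modified answer from datastore
    return model_predict(x)                     # f(x; theta_0)
```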
The results are listed in Table 6. We can see that even when we set $\epsilon$ to a very large value, the model does not achieve reasonable accuracy on the modified facts. This indicates that the nearest neighbor does not correspond to the correct fact most of the time, probably because of the discrepancy between the training and test questions for the same fact (see the example for the T-REx dataset in Appendix A).

Another fundamental limitation of this approach is that, if the datastore only contains the modified facts, it will potentially modify the answers for all facts sharing the same object. The masked language model is trained to maximize the score of the correct object, which is achieved by (implicitly) minimizing the distance between the contextual embedding of [MASK] and the embedding of the object's token while maximizing the distance to other tokens through the cross-entropy loss. Therefore, all the contextual embeddings of [MASK] corresponding to the same object should be close if the model makes correct predictions on these samples. If we modify one of the objects, it will conflict with, or even lead to wrong predictions on, other facts. For example, if we want to modify the birthplace of Charles Darwin from the UK to France, then the kNN-LM will tend to predict France as the birthplace of William Shakespeare as well. Therefore, the trade-off between the modified and unmodified accuracies is again inevitable in the setting where we only change the values of the datastore of kNN-LM, and it may lead to a worse trade-off by modifying the predictions on all facts sharing the same object.

If the datastore also contains unmodified facts, then during modification we need to identify all the training samples corresponding to the modified facts in the unstructured text, which adds to the difficulty. Even if we can find all the corresponding training samples, modifying only the value tokens will cause conflicts with the datastore entries of other facts sharing the same object. Thus, we can conclude that finetuning is essential for knowledge modification in the kNN-LM as well." } ]
2020
Modifying Memories in Transformer Models
SP:11d9e619756f936a241fb838a78157de03d22344
[ "The authors propose to leverage images to train an unsupervised machine translation (MT) model. Their main idea is that the similarity of images can be used as a proxy for the similarity of sentences describing the images. The sentences, in turn, can be in different languages, and knowledge about their similarity can be exploited as a training signal for an unsupervised MT model, i.e., training without parallel sentences. Their model consists of a sentence encoder and an image encoder. For training and evaluation of the model, they use translations (multi-way for the test set) of image captioning datasets." ]
Machine translation in a multi-language scenario requires large-scale parallel corpora for every language pair. Unsupervised translation is challenging because there is no explicit connection between languages, and the existing methods have to rely on topological properties of the language representations. We introduce a framework that leverages visual similarity to align multiple languages, using images as the bridge between them. We estimate the cross-modal alignment between language and images, and use this estimate to guide the learning of cross-lingual representations. Our language representations are trained jointly in one model with a single stage. Experiments with fifty-two languages show that our method outperforms prior work on unsupervised word-level and sentence-level translation using retrieval.
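One way to read the alignment step described in the abstract is as a contrastive objective that pulls each caption toward its paired image, so that captions of similar images, in any language, end up close together. The sketch below is our own illustration of such an objective; the encoders, data, and exact loss are assumptions, not the authors' released code.

```python
# Hedged sketch of image-pivoted alignment: captions in any language are
# pulled toward their paired image embeddings, so sentences that share an
# image (across languages) land nearby. Encoders and data are assumed.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(sent_emb, img_emb, temperature=0.07):
    """sent_emb, img_emb: (B, d) L2-normalized batches of paired embeddings."""
    logits = sent_emb @ img_emb.t() / temperature      # pairwise similarities
    targets = torch.arange(sent_emb.size(0), device=sent_emb.device)
    # symmetric InfoNCE: match sentences to images and images to sentences
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```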
[]
[ { "authors": [ "Mikel Artetxe", "Holger Schwenk" ], "title": "Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Mikel Artetxe", "Gorka Labaka", "Eneko Agirre", "Kyunghyun Cho" ], "title": "Unsupervised neural machine translation", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Emmanuel Azuh", "David Harwath", "James Glass" ], "title": "Towards bilingual lexicon discovery from visually grounded speech audio", "venue": "Proc. Interspeech 2019,", "year": 2019 }, { "authors": [ "Timothy Baldwin" ], "title": "Low-cost, high-performance translation retrieval: Dumber is better", "venue": "In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics,", "year": 2001 }, { "authors": [ "Timothy Baldwin", "Hozumi Tanaka" ], "title": "The effects of word order and segmentation on translation retrieval performance", "venue": "COLING", "year": 2000 }, { "authors": [ "R. Brown" ], "title": "Transfer-rule induction for example-based translation", "venue": "In Proceedings of the MT Summit VIII Workshop on Example-Based Machine Translation,", "year": 2001 }, { "authors": [ "Ralf D Brown" ], "title": "Example-based machine translation in the Pangloss system", "venue": "COLING", "year": 1996 }, { "authors": [ "Ralf D. Brown" ], "title": "Automated dictionary extraction for “knowledge-free” example-based translation", "venue": "Proceedings of the Seventh International Conference on Theoretical and Methodological Issues in Machine Translation,", "year": 1997 }, { "authors": [ "Andrea Burns", "Donghyun Kim", "Derry Wijaya", "Kate Saenko", "Bryan A Plummer" ], "title": "Learning to scale multilingual representations for vision-language tasks", "venue": "European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Iacer Calixto", "Qun Liu" ], "title": "Sentence-level multilingual multi-modal embedding for natural language processing", "venue": "In Proceedings of the International Conference Recent Advances in Natural Language Processing,", "year": 2017 }, { "authors": [ "Chris Callison-Burch", "Miles Osborne", "Philipp Koehn" ], "title": "Re-evaluating the role of Bleu in machine translation research", "venue": "In 11th Conference of the European Chapter of the Association for Computational Linguistics,", "year": 2006 }, { "authors": [ "Konstantinos Chatzitheodorou" ], "title": "Improving translation memory fuzzy matching by paraphrasing", "venue": "In Proceedings of the Workshop Natural Language Processing for Translation Memories,", "year": 2015 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Everest Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Yun Chen", "Yang Liu", "Victor OK Li" ], "title": "Zero-resource neural machine translation with multi-agent communication game", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Zewen Chi", "Li Dong", "Furu Wei", "Nan Yang", "Saksham Singhal", "Wenhui Wang", "Xia Song", "Xian-Ling Mao", "Heyan Huang", "Ming Zhou" ], "title": "InfoXLM: An information-theoretic framework for cross-lingual language model pre-training", "venue": null, "year": 2020 }, { "authors": [ "Alexis
Conneau", "Kartikay Khandelwal", "Naman Goyal", "Vishrav Chaudhary", "Guillaume Wenzek", "Francisco Guzmán", "Edouard Grave", "Myle Ott", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Unsupervised cross-lingual representation learning at scale", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Lambros Cranias", "Harris Papageorgiou", "Stelios Piperdis" ], "title": "A matching technique in example-based machine translation", "venue": "COLING", "year": 1994 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "In Proceedings of NAACL-HLT,", "year": 2019 }, { "authors": [ "Meiping Dong", "Yong Cheng", "Yang Liu", "Jia Xu", "Maosong Sun", "Tatsuya Izuha", "Jie Hao" ], "title": "Query lattice for translation retrieval", "venue": "In Proceedings of COLING", "year": 2014 }, { "authors": [ "T El-Shishtawy", "A El-Sammak" ], "title": "The best templates match technique for example based machine translation", "venue": "arXiv preprint arXiv:1406.1241,", "year": 2014 }, { "authors": [ "Yuwei Fang", "Shuohang Wang", "Zhe Gan", "Siqi Sun", "Jingjing Liu" ], "title": "FILTER: An enhanced fusion method for cross-lingual language understanding", "venue": null, "year": 2020 }, { "authors": [ "Orhan Firat", "Baskaran Sankaran", "Yaser Al-Onaizan", "Fatos T Yarman Vural", "Kyunghyun Cho" ], "title": "Zero-resource translation with multi-lingual neural machine translation", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Xavier Garcia", "Pierre Foret", "Thibault Sellam", "Ankur Parikh" ], "title": "A multilingual view of unsupervised machine translation", "venue": "In Findings of the Association for Computational Linguistics: EMNLP", "year": 2020 }, { "authors": [ "Spandana Gella", "Rico Sennrich", "Frank Keller", "Mirella Lapata" ], "title": "Image pivoting for learning multilingual multimodal representations", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Daniela Gerz", "Ivan Vulić", "Edoardo Maria Ponti", "Roi Reichart", "Anna Korhonen" ], "title": "On the relation between linguistic typology and (limitations of) multilingual language modeling", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "David Harwath", "Galen Chuang", "James Glass" ], "title": "Vision as an interlingua: Learning multilingual semantic embeddings of untranscribed speech", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Melvin Johnson", "Mike Schuster", "Quoc V Le", "Maxim Krikun", "Yonghui Wu", "Zhifeng Chen", "Nikhil Thorat", "Fernanda Viégas", "Martin Wattenberg", "Greg Corrado" ], "title": "Google’s
multilingual neural machine translation system: Enabling zero-shot translation", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Donghyun Kim", "Kuniaki Saito", "Kate Saenko", "Stan Sclaroff", "Bryan A Plummer" ], "title": "Mule: Multimodal universal language embedding", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "T. Kimura", "J. Matsuoka", "Y. Nishikawa", "Y. Lepage" ], "title": "Analogy-based machine translation using secability", "venue": "In 2014 International Conference on Computational Science and Computational Intelligence,", "year": 2014 }, { "authors": [ "Guillaume Lample", "Alexis Conneau" ], "title": "Cross-lingual language model pretraining", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Guillaume Lample", "Alexis Conneau", "Ludovic Denoyer", "Marc’Aurelio Ranzato" ], "title": "Unsupervised machine translation using monolingual corpora only", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Guillaume Lample", "Alexis Conneau", "Marc’Aurelio Ranzato", "Ludovic Denoyer", "Hervé Jégou" ], "title": "Word translation without parallel data", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Guillaume Lample", "Myle Ott", "Alexis Conneau", "Ludovic Denoyer", "Marc’Aurelio Ranzato" ], "title": "Phrase-based & neural unsupervised machine translation", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Chunyang Liu", "Qi Liu", "Yang Liu", "Maosong Sun" ], "title": "Thutr: A translation retrieval system", "venue": "In Proceedings of COLING 2012: Demonstration Papers,", "year": 2012 }, { "authors": [ "Yinhan Liu", "Jiatao Gu", "Naman Goyal", "Xian Li", "Sergey Edunov", "Marjan Ghazvininejad", "Mike Lewis", "Luke Zettlemoyer" ], "title": "Multilingual denoising pre-training for neural machine translation", "venue": null, "year": 2001 }, { "authors": [ "Makoto Nagao" ], "title": "A framework of a mechanical translation between japanese and english by analogy principle", "venue": "Artificial and human intelligence,", "year": 1984 }, { "authors": [ "Jason Phang", "Phu Mon Htut", "Yada Pruksachatkun", "Haokun Liu", "Clara Vania", "Iacer Calixto", "Katharina Kann", "Samuel R. 
Bowman" ], "title": "English intermediate-task training improves zero-shot crosslingual transfer too", "venue": "In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 9th International Joint Conference on Natural Language Processing,", "year": 2020 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI blog", "year": 2019 }, { "authors": [ "Peter H Schönemann" ], "title": "A generalized solution of the orthogonal procrustes problem", "venue": null, "year": 1966 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715–1725, Berlin, Germany, August 2016", "venue": "Association for Computational Linguistics. doi: 10.18653/v1/P16-1162. URL https://www.aclweb. org/anthology/P16-1162", "year": 2016 }, { "authors": [ "Piyush Sharma", "Nan Ding", "Sebastian Goodman", "Radu Soricut" ], "title": "Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning", "venue": "In Proceedings of ACL,", "year": 2018 }, { "authors": [ "Gunnar A. Sigurdsson", "Jean-Baptiste Alayrac", "Aida Nematzadeh", "Lucas Smaira", "Mateusz Malinowski", "João Carreira", "Phil Blunsom", "Andrew Zisserman" ], "title": "Visual grounding in video for unsupervised word translation", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Lucia Specia", "Stella Frank", "Khalil Sima’an", "Desmond Elliott" ], "title": "A shared task on multimodal machine translation and crosslingual image description", "venue": "In Proceedings of the First Conference on Machine Translation:", "year": 2016 }, { "authors": [ "Yuanhang Su", "Kai Fan", "Nguyen Bach", "C-C Jay Kuo", "Fei Huang" ], "title": "Unsupervised multi-modal neural machine translation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Hagai Taitelbaum", "Gal Chechik", "Jacob Goldberger" ], "title": "A multi-pairwise extension of procrustes analysis for multilingual word translation", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Katharina Wäschle", "Stefan Riezler" ], "title": "Integrating a large, monolingual corpus as translation memory into statistical machine translation", "venue": "In Proceedings of the 18th Annual Conference of the European Association for Machine Translation,", "year": 2015 }, { "authors": [ "Jonatas Wehrmann", "Douglas M Souza", "Mauricio A Lopes", "Rodrigo C Barros" ], "title": "Languageagnostic visual-semantic embeddings", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Peter Young", "Alice Lai", "Micah Hodosh", "Julia Hockenmaier" ], "title": "From image descriptions to visual denotations: New similarity metrics for semantic 
inference over event", "venue": "descriptions. TACL,", "year": 2014 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nMachine translation aims to learn a mapping between sentences of different languages while also maintaining the underlying semantics. In the last few years, sequenceto-sequence models have emerged as remarkably powerful methods for this task, leading to widespread applications in robust language translation. However, sequenceto-sequence models also require large data sets of parallel corpora for learning, which is expensive to collect and often impractical for rare language pairs.\nWe propose to leverage the synchronization between language and vision in order to learn models for machine translation without parallel training corpora. Instead of learning a direct mapping between languages, we present a model that aligns them by first mapping through a visual representation. We show how vision creates a transitive closure across modalities, which we use to establish positive and negative pairs of sentences without supervision. Since the visual appearance of scenes and objects\nwill remain relatively stable between different spoken languages, vision acts as a “bridge” between them. Our approach integrates these transitive relations into multi-modal contrastive learning.\nIn our experiments and visualizations we show that the transitive relations through vision provide excellent self-supervision for learning neural machine translation. Although we train our approach without paired language data, our approach is able to translate between 52 different languages better than several baselines. While vision is necessary for our approach during learning, there is no dependence on vision during inference. After learning the language representation, our approach can translate both individual words and full sentences using retrieval.\nThe contributions of this paper are three-fold. First, we propose a method that leverages crossmodal alignment between language and vision to train a multilingual translation system without any parallel corpora. Second, we show that our method outperforms previous work by a significant margin on both sentence and word translation, where we use retrieval to test translation. Finally, to evaluate and analyze our approach, we release a federated multi-modal dataset spanning 52 different\nlanguages. Overall, our work shows that grounding language in vision helps developing language processing tools that are robust across languages, even in cases where ground truth alignment across languages is not available. Code, data, and pre-trained models will be released." }, { "heading": "2 RELATED WORK", "text": "Our unsupervised joint visual and multilingual model builds on recent progress in both the natural language processing and computer vision communities. We briefly summarize the prior work.\nUnsupervised language translation has been studied as a word representation alignment problem in Lample et al. (2018b), where the distribution of word embeddings for two unpaired languages is aligned to minimize a statistical distance between them. Lample et al. (2018a); Artetxe et al. (2018); Lample et al. (2018c); Lample & Conneau (2019) build on top of this idea, and train an encoderdecoder structure to enforce cycle-consistency when translating from one language to another and back to the first one. This method achieves strong unsupervised word translation results, but does not scale beyond two languages. 
It also does not leverage visual information in learning.
Multi-language models are general language models that develop language-independent architectures that work equally well for any language (Gerz et al., 2018). Lample & Conneau (2019); Conneau et al. (2020); Artetxe & Schwenk (2019); Devlin et al. (2019); Liu et al. (2020); Phang et al. (2020) share the same token embeddings across different languages, showing that this improves language modeling both for general downstream single-language NLP tasks and also for supervised language translation across multiple languages. Lample & Conneau (2019); Conneau et al. (2020); Artetxe & Schwenk (2019) use a shared Byte Pair Encoding (BPE), which we use in our work. We loosely follow the architecture of Conneau et al. (2020) in that we train a transformer-based (Vaswani et al., 2017) masked language model with BPE.
Vision as multi-modal bridge implies using vision as an interlingua between all languages. Using a third language as a pivot to translate between pairs of languages without source-target paired corpora has been studied for the past few years (e.g. Firat et al., 2016; Johnson et al., 2017; Garcia et al., 2020). Harwath et al. (2018); Azuh et al. (2019) use vision for the same purpose, and they work directly on the speech signal instead of text. Chen et al. (2018) use images to help translate between languages in the text modality. Their model involves both generation and reinforcement learning, which makes optimization difficult, and it does not generalize to more than two languages. Sigurdsson et al. (2020) also use vision as a pivot for unsupervised translation. However, our approach works for multiple languages at once (instead of just two) and also obtains an explicit cross-lingual alignment. We share a single word embedding and language model for all languages, and use different training strategies. Our experiments quantitatively compare the two approaches, showing that our approach performs better both in word and sentence translation.
Other work views the input image as extra information for translation (e.g. Calixto & Liu, 2017; Su et al., 2019), and we refer readers to Specia et al. (2016) for an extensive overview on this topic. Instead of using images as a bridge, these works use paired data between languages. There has also been research on training multilingual language representations for downstream vision tasks, in general leveraging visual-language correspondence, but without translation as a goal. Unlike this paper, they make use of ground truth language pairs (Wehrmann et al., 2019; Gella et al., 2017; Kim et al., 2020; Burns et al., 2020).
Translation by retrieval. We evaluate the representations using retrieval-based machine translation (Baldwin & Tanaka, 2000; Liu et al., 2012), which is often used in the context of example-based machine translation (e.g. Brown, 1996; 1997; 2001; Cranias et al., 1994; El-Shishtawy & El-Sammak, 2014), analogy-based translation (e.g. Nagao, 1984; Kimura et al., 2014), or translation memories (e.g. Chatzitheodorou, 2015; Dong et al., 2014; Wäschle & Riezler, 2015; Baldwin, 2001). While there are also generative translation approaches, they are difficult to evaluate automatically: there is generally no well-defined metric for what constitutes a good generative translation (Callison-Burch et al., 2006). 
Instead, we evaluate our approach using translation-by-retrieval, allowing for rigorous experimental validation of the cross-lingual alignment in the representation.
State-of-the-art cross-lingual retrieval approaches rely on supervised language pairs, and range from training the models in a standard contrastive learning setting (Chi et al., 2020) to more complex combinations of the language pairs, such as using cross-attention (Anonymous, 2021) or introducing custom fusion layers (Fang et al., 2020). Our approach does not require supervised language pairs." }, { "heading": "3 METHOD", "text": "We present an approach that learns to map words and sentences from one language to semantically similar words and sentences from different languages, for a large number of languages simultaneously. Our approach does not require any paired data between languages, and instead only depends on image-language pairs. Fig. 2 provides an overview of our framework." }, { "heading": "3.1 SENTENCE EMBEDDING", "text": "Our approach learns an aligned embedding space for sentences across languages. Let $z^l_i \in \mathbb{R}^D$ be the learned embedding of sentence $i$, obtained by processing the text through a language network $\Theta_l$. Moreover, let $\beta_{ij}$ be the similarity between sentences $z^l_i$ and $z^l_j$, for example through the cosine similarity. Our goal is to learn the parameters of the embedding $z$ such that sentences with the same meaning are mapped to similar positions in the embedding space despite coming from different languages. After learning, we will have a sentence embedding $z^l_i$ that we can use for a variety of tasks, such as retrieving or generating sentences in different languages.
We learn the parameters of the embedding space $z$ by optimizing the contrastive learning problem:
$$\mathcal{L}_t = -\sum_i \sum_{j \neq i} \alpha_{ij} \log \frac{\exp(\beta_{ij}/\tau)}{\sum_{k \neq i} \exp(\beta_{ik}/\tau)} \quad \text{with} \quad \beta_{ij} = \mathrm{sim}\left(z^l_i, z^l_j\right) \qquad (1)$$
In contrastive learning, we need to define which pairs of examples should be close in the learned embedding space (the positives), and which pairs of examples should not (the negatives). In the above formulation, the scalar $\alpha_{ij} \in [0, 1]$ indicates this assignment. However, since we are in an unsupervised translation setting, we do not have ground truth pairs. Our main idea, which we introduce in the next section, is that we can use the visual modality to discover these pairs." }, { "heading": "3.2 TRANSITIVE RELATIONS", "text": "Estimating the similarity for sentences of different languages is challenging without labels. Unsupervised machine translation approaches typically rely on topological properties, such as distributional alignment or back-translation (Lample et al., 2018b; Lample & Conneau, 2019). However, these constraints provide a noisy gradient for learning, which makes large-scale optimization difficult.
We propose to take advantage of a transitive relation through the visual modality in order to estimate the similarity in language space $\alpha_{ij}$. Given a dataset of images and their corresponding captions, we estimate both a cross-modal (sentence-image) similarity as well as a cross-image (image-image) similarity. Let $\alpha^x_{ii}$ be the cross-modal similarity, which indicates the alignment between image $i$ and its corresponding caption $i$. We also let $\alpha^v_{ij}$ be the cross-image similarity, indicating the perceptual similarity between image $i$ and another image $j$. 
This provides the transitive relation as the product of similarities:
$$\alpha_{ij} = f(\alpha^x_{ii} \cdot \alpha^v_{ij} \cdot \alpha^x_{jj}), \quad \text{where} \quad f(x) = \max(0, x - m)/(1 - m), \qquad (2)$$
and $m$ is a margin that we set to $m = 0.4$, which prevents pairs with low similarity from being used as positives. Note that $\alpha_{ij} = \alpha_{ji}$. The transitive similarity causes two sentences from different languages to be similar if they appear in similar visual contexts.
Since both $\alpha^x_{ii} \in [0, 1]$ and $\alpha^v_{ij} \in [0, 1]$, the final similarity is in the same range, $\alpha_{ij} \in [0, 1]$. Only when there is a strong alignment between an image and its caption, and there is also another image with close perceptual similarity, will a transitive relation be formed. In realistic scenes, the correspondence for some image and caption pairs may be difficult to establish in the presence of noise, which our formulation handles by breaking the transitive relation. In other words, we only consider paths with high total similarity as positives for the contrastive objective, and discard those paths with low total similarity, since their sentences likely do not match." }, { "heading": "3.3 LEARNING", "text": "In order to optimize Equation 1, we need to estimate $\alpha^x_{ii}$ and $\alpha^v_{ij}$. We parameterize both with a neural network, and we train them to directly estimate the similarity, also with contrastive learning.
Visual Similarity: We jointly learn a visual feature space using contrastive learning (Chen et al., 2020) in order to estimate $\alpha^v_{ij}$. For every image, we perform two random augmentations, resulting in two different versions of the same image. These two transformed images are run through the image network, along with the other $N - 1$ pairs (in a batch of $N$ samples). This results in $2N$ feature maps. For every pair $(i, j)$ of images with representations $z^v_i$ and $z^v_j$, we compute a contrastive loss, where all the other $2(N - 1)$ images are the negatives. We use the loss function:
$$\mathcal{L}_v = -\sum_{ij} \log \frac{\exp(\alpha^v_{ij}/\tau)}{\sum_{k \neq i} \exp(\alpha^v_{ik}/\tau)} \quad \text{where} \quad \alpha^v_{ij} = \mathrm{sim}(z^v_i, z^v_j). \qquad (3)$$
$z^v_i$ represents the learned features for image $i$, obtained by processing the images through an image network $\Theta_v$. We augment images using random image cropping, random Gaussian blurring, and random color distortions, following Chen et al. (2020).
Cross-Modal Similarity: We also need to estimate the similarity between images and their corresponding captions $\alpha^x_{ii}$. The visual representation anchors inter-language alignment, and this similarity constrains the sentence embedding for each language to share the same space as the image embedding. We learn this similarity metric through the contrastive objective:
$$\mathcal{L}_x = -\sum_i \left( \log \frac{\exp(\alpha^x_{ii}/\tau)}{\sum_j \exp(\alpha^x_{ij}/\tau)} + \log \frac{\exp(\alpha^x_{ii}/\tau)}{\sum_j \exp(\alpha^x_{ji}/\tau)} \right) \quad \text{with} \quad \alpha^x_{ij} = \mathrm{sim}(z^v_i, z^l_j). \qquad (4)$$
Token Cloze: We finally also train the model with a token cloze task in order to make the language representation contextual. We follow the same loss and objective as BERT (Devlin et al., 2019) over the sentence input. We label this loss $\mathcal{L}_c$.
Full Objective: The final objective we optimize is the combination of all four losses defined above:
$$\min_\Theta \; \mathcal{L}_t + \lambda_1 \mathcal{L}_v + \lambda_2 \mathcal{L}_x + \lambda_3 \mathcal{L}_c \qquad (5)$$
where $\Theta$ are the neural network parameters, and the $\lambda$ are scalar hyper-parameters to balance the terms. Over the course of optimization, the model will be estimating an aligned multi-lingual representation $\beta$ jointly with the transitive similarity $\alpha$. As learning progresses, $\alpha_{ij}$ will form soft positive and negative pairs, which the model will use to learn the aligned multi-language representation. 
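To make the objective concrete, the following is a minimal PyTorch-style sketch of how the transitive weights of Eq. 2 and the cross-lingual loss of Eq. 1 could be computed on a batch of image-caption pairs. The function name, the temperature value, and the batch-level approximation of the sums are our own illustrative assumptions, not the paper's released implementation; the remaining terms $\mathcal{L}_v$ and $\mathcal{L}_x$ are standard InfoNCE losses and $\mathcal{L}_c$ is a BERT-style masked-token loss, combined as in Eq. 5.

```python
import torch

def transitive_loss(z_txt, z_img, tau=0.07, m=0.4):
    # z_txt, z_img: (N, D) L2-normalized caption/image embeddings for a batch
    # of N image-caption pairs (caption i belongs to image i). Similarities
    # are rescaled from [-1, 1] to [0, 1], as described in Sec. B.1.
    alpha_x = (1 + z_img @ z_txt.t()) / 2          # cross-modal alpha^x_ij
    alpha_v = (1 + z_img @ z_img.t()) / 2          # inter-image alpha^v_ij
    beta = z_txt @ z_txt.t()                       # inter-sentence beta_ij

    a = alpha_x.diag()                             # alpha^x_ii: image vs. its own caption
    alpha = a[:, None] * alpha_v * a[None, :]      # alpha^x_ii * alpha^v_ij * alpha^x_jj
    alpha = (alpha - m).clamp(min=0) / (1 - m)     # f(x) = max(0, x - m)/(1 - m), Eq. 2

    off_diag = ~torch.eye(len(beta), dtype=torch.bool, device=beta.device)
    logits = beta / tau
    log_denom = torch.logsumexp(logits.masked_fill(~off_diag, float('-inf')),
                                dim=1, keepdim=True)
    return -(alpha * (logits - log_denom) * off_diag).sum()   # Eq. 1 with soft alpha
```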
The quality of the multi-language representation will depend on the quality of the transitive alignments $\alpha_{ij}$ our model discovers. However, since the contrastive objective relies on statistical patterns over a large dataset, our approach is fairly robust to noise, which our experiments support." }, { "heading": "3.4 REFINING WORD-LEVEL ALIGNMENT", "text": "Our approach learns a common embedding space between vision and sentences in multiple languages, which our experiments will show provides a robust representation for unsupervised machine translation. This representation is aligned well at the sentence level. We can further refine the representation by aligning it along words as well.
To obtain word-level alignment, we use the Procrustes algorithm (Schönemann, 1966) on the learned word embeddings. We find a linear transformation from the word embeddings of one language to the word embeddings of another language. To estimate the linear transformation, we follow standard practice and identify the anchor points by finding the $k = 5$ mutual nearest neighbors between the word embeddings across languages. We then proceed with the Procrustes approach from Taitelbaum et al. (2019), which extends the original algorithm to more than two distributions. To translate words, we then directly use the transformed word embeddings (a small sketch of this refinement is given at the end of this section, just before Sec. 5)." }, { "heading": "3.5 ARCHITECTURE", "text": "Our method uses a two-branch architecture, which extracts text and image features that share the same semantic embedding space. We briefly describe the network architecture choices below. We refer readers to the supplemental material for complete details.
Image network: To extract visual features, we apply a convolutional network over the images, which we label $\Theta_v$. We use a ResNet-18, initialized with ImageNet features (He et al., 2016; Deng et al., 2009), and we add a prediction head after the last hidden layer of the ResNet.
Text network: We use a neural network to embed a sentence, which we label $\Theta_l$. We use a single encoder with shared word embeddings across all languages, which has been shown to scale well to the multilingual setting (Artetxe & Schwenk, 2019; Conneau et al., 2020). All languages share the same vocabulary created using Byte Pair Encoding (Sennrich et al., 2016), which improves the alignment of embedding spaces across languages that share the same alphabet (Lample et al., 2018a). We then use a transformer from Vaswani et al. (2017), shared by all the languages. To produce outputs, we add a prediction head, and normalize the outputs so that $\|z\|_2 = 1$." }, { "heading": "4 THE GLOBETROTTER DATASET", "text": "In order to train and evaluate our approach, we have collected a federated dataset of images and captions that spans 52 different languages. The full list of languages is in the footnote.1 We combined three captioning datasets and translated them using Amazon Translate from Amazon Web Services. We use captions and images from the Flickr30k (Young et al., 2014), MSCOCO (Lin et al., 2014) and Conceptual Captions (Sharma et al., 2018) datasets. The language in the federated dataset is diverse, covering both captions from human annotators and captions harvested from the web. The dataset contains a total of 4.1M image-caption pairs, with an English sentence mean length of 10.4 words. We will publicly release this dataset.
We split our dataset into a train, validation, and testing set. We make the partition ensuring that they each contain a disjoint set of images and sentences. We use 3.15M unique text-image pairs for training, 787k for validation, and 78.7k for testing. The training and validation splits contain samples corresponding to all languages, and each image only has one language associated with it. The testing set is translated to all languages (the same samples), to have ground truth alignment." },
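As promised in Sec. 3.4, here is a minimal sketch of the word-level Procrustes refinement for the two-language case; the paper itself uses the multi-distribution extension of Taitelbaum et al. (2019), so the function names and the restriction to a single language pair are our own simplifying assumptions.

```python
import numpy as np

def mutual_nn_anchors(E_a, E_b, k=5):
    # E_a, E_b: (V, d) word-embedding tables for languages A and B
    # (rows assumed L2-normalized). Returns mutual top-k nearest-neighbor pairs.
    sim = E_a @ E_b.T
    nn_ab = np.argsort(-sim, axis=1)[:, :k]      # top-k words of B for each word of A
    nn_ba = np.argsort(-sim.T, axis=1)[:, :k]    # top-k words of A for each word of B
    return np.array([(i, j) for i in range(len(E_a))
                     for j in nn_ab[i] if i in nn_ba[j]])

def procrustes_map(E_a, E_b, anchors):
    # Closed-form orthogonal W minimizing ||A W - B||_F (Schonemann, 1966).
    A, B = E_a[anchors[:, 0]], E_b[anchors[:, 1]]
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt   # translate a word of A via nearest neighbor in E_b of its row of E_a @ (U @ Vt)
```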
{ "heading": "5 EXPERIMENTAL EVALUATION", "text": "Our experiments analyze the language translation capabilities of our model, and quantify the impact of vision on the learning process. We call our model Globetrotter.
1 Afrikaans, Albanian, Amharic, Arabic, Azerbaijani, Bengali, Bosnian, Bulgarian, Chinese, Croatian, Czech, Danish, Dari, Dutch, English, Estonian, Finnish, French, Georgian, German, Greek, Hausa, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Latvian, Malay, Norwegian, Persian, Pashto, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Somali, Spanish, Swahili, Swedish, Tagalog, Tamil, Thai, Turkish, Ukrainian, Urdu, Vietnamese.
[Figure 3 (bar chart; x-axis: Percentage of Retrieved Positives; bars for Supervised, Chance, Text Only, Lample & Conneau (2019), Sigurdsson et al. (2020), the four loss ablations, and the full model): We evaluate our translations at the sentence-level. Our approach outperforms several unsupervised translation baselines. While unsupervised approaches are still no match for fully supervised methods, our approach uses significantly less supervision.]
[Figure 4 (bar chart; x-axis: Percentage of Retrieved Positives; panels: Human-generated test set / Machine-generated test set): We evaluate our translations at the sentence-level with a human-generated test set. Fluent speakers for 11 of the languages manually annotated translations in the test set. Our approach outperforms several unsupervised translation baselines on this test set as well.]" }, { "heading": "5.1 BASELINES", "text": "Sigurdsson et al. (2020): The closest approach to ours is Sigurdsson et al. (2020), which is a state-of-the-art approach for unsupervised word translation using cross-modal information. Their original model is trained to translate between just two languages, and our experiments work with fifty languages. We therefore extended their method to multiple languages by creating a different word embedding and adapting layer for each language, which we use as the baseline. 
We use the same vocabulary as in our method, but train separate word embeddings for different languages.
Lample & Conneau (2019): We also compare to the state-of-the-art unsupervised translation approach that does not use visual information. We experimented with several baselines, and chose the one that performs the best. This baseline uses a cycle-consistency (or back-translation) loss between pairs of languages. We train their method on our dataset, for all $M$ languages simultaneously. We originally experimented with adding cycle-consistency constraints for all $M^2$ language pairs, but this resulted in poor performance. We randomly select a total of $5M$ pairs, where each language appears five times as the source and five times as the target. We also experimented with Lample et al. (2018b), but this performed worse than Lample & Conneau (2019).
Text-only model: To quantify the impact of vision, we also train a version of our model where all images and image-related losses are removed, as in Devlin et al. (2019). This model is capable of learning some basic cross-lingual concepts by having different languages use the same tokens.
[Residue of a word-translation results table (All Vocab / Disjoint Vocab, each with Procrustes refinement) comparing Chance, Text Only, Lample & Conneau (2019), Sigurdsson et al. (2020), Globetrotter (Ours), and Supervised; see Fig. 5.]
Fully Supervised: To understand the gap between unsupervised and supervised approaches, we train our method with paired language corpora. We use our same framework, except we set the values of $\alpha$ to 1 for paired sentences, and 0 for unpaired sentences.
Common Evaluation Setup: Throughout our experiments, we adopt a common evaluation setup to evaluate all models. We train all models for 200 epochs and select the best model on the held-out validation set. In all cases, vision is not used during testing." }, { "heading": "5.2 SENTENCE-LEVEL TRANSLATION", "text": "We evaluate sentence translation using held-out data that contains a set of sentences translated to all languages. We produce translations by retrieving the nearest examples given a query. From the test set, we randomly select 200 captions, for all $M$ languages, with a total of $200M$ sentences. Each one of these sentences is used as a query during test, and it has $M - 1$ positives (the same sentence in different languages). The metric we report is the percentage of positives the model ranks in the top $M - 1$, among all the $200M - 1$ possible options. In order to rank target sentences, we compute the similarity between them and the query sentence, and rank them according to this value. We show results in Fig. 3. Our method outperforms all baselines by a significant margin, underscoring the utility of transitive relations across modalities.
Fig. 3 also reports ablations of our framework when not training with each one of the four losses in Eq. 5. Training without losses $\mathcal{L}_v$ (Eq. 3) or $\mathcal{L}_x$ (Eq. 4) implies breaking the transitive closure represented in Fig. 2, which results in a drastic decrease in performance. $\mathcal{L}_t$ (Eq. 1) is the loss that makes the cross-lingual alignment explicit, but importantly it is not required to close the transitive relation through the visual modality. Training without it represents a considerable drop in accuracy, but the results are still better than baselines. Finally, $\mathcal{L}_c$ also contributes to the final performance, consistent with prior work (Lample & Conneau, 2019; Liu et al., 2020).
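The retrieval metric just described can be computed directly from the sentence embeddings; the following is a small sketch of that evaluation under our own naming and data-layout assumptions (embeddings arranged as one row per sentence-language combination).

```python
import numpy as np

def sentence_retrieval_score(Z):
    # Z: (n, M, D) embeddings of the same n test captions in M languages,
    # flattened so that sentence s in language l sits at row s * M + l.
    n, M, D = Z.shape
    flat = Z.reshape(n * M, D)
    sim = flat @ flat.T
    np.fill_diagonal(sim, -np.inf)                      # a query may not retrieve itself
    hits = 0
    for q in range(n * M):
        s = q // M
        positives = {s * M + l for l in range(M)} - {q} # same sentence, other languages
        top = np.argsort(-sim[q])[: M - 1]              # top M-1 of the n*M - 1 candidates
        hits += len(positives.intersection(top.tolist()))
    return 100.0 * hits / (n * M * (M - 1))             # % of positives ranked in top M-1
```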
We show some examples of our sentence translations in Tab. 1. Our approach works on all language pairs, and we simply select a few for visualization purposes. These examples show how our method aligns languages following their visual semantics.
Our method does not rely on artifacts from machine-generated translations and generalizes to human-translated data. To verify this, we additionally collect a test set of 200 English captions translated by fluent speakers into 11 different languages, for a total of 2200 human-generated translations.2 We report results in Fig. 4, where we show the accuracy values both for human-translated and machine-translated texts. We use the same metric as before, now for $M = 11$. Our approach outperforms the unsupervised baselines on the human-generated test as well. While all methods experience a small decrease in performance when tested on human-translated data, the small difference between the results on the two test sets validates the quality of the evaluation.
2 The 11 languages with ground-truth human translations are: Dutch, French, Hebrew, Hindi, Italian, Korean, Polish, Portuguese, Russian, Spanish, Turkish." }, { "heading": "5.3 WORD-LEVEL TRANSLATION", "text": "Following the evaluation in Sigurdsson et al. (2020), we also evaluate word-level translation. Since we lack ground truth translation at this level, we obtain ground truth for evaluation by automatically matching words across languages. For every language pair, we find which words co-occur frequently in a sentence between the two languages. See Appendix B.2. Then we test each pair of languages separately. For every translation, we evaluate retrieval in both directions. Fig. 5 reports the average Recall@10 for all pairs of translations and all pairs of languages. In the right column, we exclude from the list of pairs those where the token is the same in the two languages. Even the model trained with text only – which performs poorly on sentence-level translation – obtains strong results, highlighting the importance of using a shared vocabulary. We show some examples of word translation in Tab. 2." }, { "heading": "5.4 ANALYSIS", "text": "Visualizing transitive matches: Fig. 6 shows examples of estimated transitive similarity values. We show predicted $\alpha^v$ (inter-image similarity), $\alpha^x$ (cross-modal similarity), and $\beta$ (inter-sentence similarity). Fig. 6a and 6b show examples where both the similarity between images and the cross-modal similarity are high, resulting in a large $\alpha$. If these pairs were to be used for training, they would be positives. The model correctly predicts a high $\beta$ value between the two texts. Fig. 6c demonstrates the importance of using $\alpha^x$ in addition to $\alpha^v$ to create language pairs. In this case, the visual content of the two images corresponds, and the model detects that correctly with a high $\alpha^v$ value. However, because web data is not always clean, the caption on the left does not correspond to the visual content. This is correctly captured in the small $\alpha^x$ value. If we were using this pair for training, it would be considered a negative example despite significant visual similarity. Thus, the misalignment noise is not propagated to the cross-lingual loss. Finally, Fig. 6d shows an example where both sentences accurately describe their corresponding image, but the images do not match. As expected, this would result in a negative pair.
Failure cases: We show three prototypical examples of failure cases in Tab. 3. In the first example, the caption is not related to any visual concept, causing our model to translate it incorrectly. The second example shows how some words are related to incorrect concepts due to spurious correlations in the training set. In this specific case, the phrase “new concept” is strongly associated with cars, since it appears in training in the context of “concept cars”, i.e. vehicles from car companies to explore new designs. Therefore, the model retrieves sentences referring to cars, even though they do not have any relation to the phrase “new concept”. Finally, the third failure case shows a sentence with a new word (“tabby”), where the model over-relies on context instead of the word itself to translate.
Translation difficulty by language: We itemize the performance of sentence-level translation by language in Fig. 7. Languages from the same family are often easier to translate between. The most difficult language is Tamil, the only Dravidian language in our dataset." }, { "heading": "6 CONCLUSION", "text": "Leveraging a transitive relation between language and vision, our experiments show that our framework learns a strong representation for both sentence-level and word-level machine translation without parallel corpora. We believe vision will continue to be valuable for learning robust language models." }, { "heading": "APPENDIX", "text": "We divide the appendix into two sections. In Section A we show more results, and in Section B we provide more information about the implementation of our method." }, { "heading": "A ADDITIONAL RESULTS", "text": "" }, { "heading": "A.1 FEATURE GENERALIZATION", "text": "Training a language model, as opposed to a text representation only designed for image retrieval, has the crucial advantage that it can be finetuned to perform downstream NLP tasks. In this work we are interested in evaluating how well the representations generalize across languages after training on a downstream task. We evaluate our model on sentence correspondence: we split sentences in two, and half of the time we swap the second half of a sentence with the second half of another sentence of the same language. The model has to determine whether or not a sentence is coherent, i.e. whether the beginning of the sentence corresponds to the end of the sentence. We control for uppercase, word breaks, length of sentences, etc., so that the model cannot find an easy shortcut (cheat), and has to rely on the semantic and syntactic structure of the sentence. We show examples of the test in Tab. 4 for English (a sketch of how such probe examples can be constructed follows below).
We train all the models for one epoch on half of the languages in the testing split (first half in alphabetical order), and test both on held-out samples from that half, and on the languages from the other half (new languages the sentence correspondence downstream task has not seen). We train a single transformer layer on top of our representation, with one head. For Sigurdsson et al. (2020), we do not apply the max-pooling over words in order to have a representation for each word. We show results in Tab. 5. The results show that methods trained with language models are much better at performing language tasks. It also shows that our method, trained with alignment, not only performs better on the languages the downstream task has been trained on, but also generalizes better to other languages the sentence correspondence task has never seen, indicating that the model has a very aligned representation across languages. The relative decrease in accuracy is computed as the percentage decrease of the difference between the accuracy and the chance accuracy.
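A minimal sketch of how the sentence-correspondence probe data could be generated; the function name and the coin-flip swapping scheme are our own assumptions, and the controls for case, word breaks, and sentence length mentioned above are omitted for brevity.

```python
import random

def make_correspondence_probe(sentences, p_swap=0.5, seed=0):
    # sentences: list of raw sentences in one language.
    rng = random.Random(seed)
    examples = []
    for s in sentences:
        words = s.split()
        head, tail = words[: len(words) // 2], words[len(words) // 2 :]
        if rng.random() < p_swap:
            donor = rng.choice(sentences).split()   # a stricter version would re-draw if donor == s
            tail = donor[len(donor) // 2 :]
            label = 0                               # halves do not correspond
        else:
            label = 1                               # coherent original sentence
        examples.append((" ".join(head + tail), label))
    return examples
```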
" }, { "heading": "A.2 ADAPTATION TO A NEW LANGUAGE", "text": "We test how well our framework can adapt to previously unseen languages. For this purpose, we test on English and Chinese (separately), which were held out during training. To do so, we precompute features for images and texts from the languages we used during training, and finetune the model for the new language using the same losses as before. We train for one epoch.
After finetuning for English and Chinese, we repeat the same experiments performed for the other languages, showing that our system is able to adapt to new languages without losing the multilingual alignment. See Tab. 6 for translation results, and Tab. 7 for sentence correspondence results. For the sentence correspondence test, we use the head we trained before (without finetuning on the new languages)." }, { "heading": "A.3 MORE RESULTS ON TRANSLATION DIFFICULTY PER LANGUAGE", "text": "Similarly to Fig. 7, we show in Fig. 8 the word translation accuracy matrix for every pair of languages. As expected, languages that share an important part of their vocabulary are the ones with the highest similarity scores. Specifically, there is a very high similarity between Bosnian, Croatian and Serbian, since the three of them are standardized varieties of the Serbo-Croatian language. Also, Indonesian is very close to Malay, as the former is a standardized variety of the latter. A final example is the Czech and Slovak pair: the two of them are languages from the Czech–Slovak group. This shows the importance of cognates across languages. We can find similar patterns for languages that are not as close, but that share the same family or alphabet.
We also show in Fig. 9 the sentence-level translation values from Fig. 7, but now we plot $A - A^T$. Instead of illustrating which language pairs are close, or are easier to work with, it shows which language pairs are asymmetric in the difficulty of the translation. Rarer languages (e.g. languages that are far from the others in the linguistic tree, such as Somali, Tamil or Hindi) are easier to translate from than to translate to." }, { "heading": "A.4 CLUSTERING IN THE REPRESENTATION SPACE", "text": "In this experiment, we show how differently the representation space is clustered when we train with and without visual alignment. We extract features for the test set examples both for the full model and the text-only model, and cluster these features using k-means, with $k = 50$ clusters. In Fig. 10 we show three sentences belonging to each one of the first three clusters (the selection of both the sentences and the clusters is arbitrary). When training with visual alignment the clusters have a semantic meaning, and when training without it the clusters are language-specific, showing that cross-modal alignment is necessary to obtain good semantic representations." }, { "heading": "A.5 GENERATED TRANSLATIONS", "text": "The learned representations are not only good for translation by retrieval, but also for generating translations. In order to do so, we use a GPT-2 decoder (small version) from Radford et al. 
(2019), pretrained on English. Next, we finetune it on English sentences from our dataset, and after that we finetune it again, conditioning it on feature vectors from the English-finetuned model from Appendix A.2. To do this we use an extra linear layer at the input, and we concatenate the results with the input word embeddings. After that, we obtain a GPT-2 model that generates sentences in English based on the input representation. We then test it for translation by inputting representations obtained from other languages, and generating English translations for them. The sentences we used in the test were not used for any of the GPT-2 finetuning stages. We show results in Fig. 11. We selected the first 10 translations that were generated, without any cherry-picking. Interestingly, while our framework is not able to do an accurate literal translation, it does base the translation on the contextual knowledge provided by vision.
B IMPLEMENTATION DETAILS" }, { "heading": "B.1 TRAINING AND ARCHITECTURE DETAILS", "text": "We train a transformer network with 4 attention heads and $M = 4$ hidden layers, with a hidden size of $d = 512$. The size of the embeddings at the output of the heads (where the contrastive losses are computed) is $D = 128$. We use a batch size of 800. We set all the $\lambda$ values in Eq. 5 to $\lambda = 0.2$. We train with an Adam optimizer and a learning rate of $10^{-4}$. As mentioned in Section 3.5, we normalize the feature values $z$ so that $\|z\|_2 = 1$. Then the similarity value is computed with a dot product, resulting in the cosine similarity. After that, we scale the value so that the range of the similarity is in $[0, 1]$, instead of $[-1, 1]$.
[Figure 10 (two panels: Clusters in full model / Clusters in text-only model; example clusters include "Savannah animals", "Wedding", and "Bicycle/Motorcycle", with sentences in Arabic, Croatian, Georgian, Bengali, Slovenian, Urdu, Swedish, Japanese, Tamil, French, Greek, and Hindi): Clustering in the representation space. When trained without visual alignment the clusters are language-specific, and when trained with visual correspondence the clusters have a semantic meaning.]" }, { "heading": "B.2 GROUND TRUTH FOR WORD TRANSLATION", "text": "In order to generate the ground truth translations at the token level, we use the split of the dataset that is translated to all the languages. We then create ground truth token translations for every language pair separately.
[Figure 11 residue: original sentences in Russian, German, and Croatian paired with generated English translations ("cat lying on the grass", "artist performs on stage at festival.", "some people skiing in the snow"); columns: Original sentence / Generated English translation.]
To create these translations, we follow the tf-idf algorithm. We exploit the fact that we have alignments of languages at the group-of-words (sentence) level. The idea is that if the word “car” appears in an English sentence every time that the word “voiture” (car in French) appears in its French translation, they probably mean the same. In the following explanation, assume we are looking for the translation of a specific token $t^A_i$ from language A into some token $t^B_j$ from language B. We just redefine the concept of “document” in the classical tf-idf algorithm to be the collection of all the words (with repetition) in language B that appear in the same (translated) sentence as $t^A_i$. We call this collection (document) $d$.
First, we create a count of tokens in language B that appear in the document $d$, and compute the term frequency (tf) using this count:
$$\mathrm{tf}_{j,d} = \frac{f_{j,d}}{\sum_{j' \in d} f_{j',d}}, \qquad (6)$$
where $f_{j,d}$ is the count of the token $t^B_j$ in a document $d$. Second, we compute the inverse document frequency, which takes into account how common a token is in general, over all $D$ documents:
$$\mathrm{idf}_j = \log \frac{|D|}{|\{d \in D : t^B_j \in d\}|}. \qquad (7)$$
Multiplying the tf and idf terms we get a value for each $(i, j)$ pair of tokens (the value is not symmetric). We store tokens $t^A_i$ and $t^B_j$ as a ground truth translation if and only if $t^B_j$ is in the top 5 for the tf-idf value of $(i, j)$, over all $j$, and $t^A_i$ is in the top 5 for the tf-idf value of $(j, i)$, over all $i$.
The following are some examples of translations we obtain between Spanish and English: (electr, electr), (fotograf, ograph), (ción, ction), (grande, lar), (atas, jam), (pare, couple), (decor, decor), (ventana, window), (deportivo, team), (1950, 1950), (form, form), (30, 30), (casa, hom), (lave, key), (1960, 1960), (del, the), (libro, ok), (kara, kara), (ola, surfer), (fan, fan), (viol, viol), (%, %), (dar, standard), (segundo, sec), (equipo, sports), (rojo, red), (árbol, tree), (hierba, gras), (durante, dur), (bron, ze), (mani, demonstr), (pequeño, sm), (tí, typ), (turística, attra), (corre, run), (mus, muse), (atrac, tour), (baño, bat), (mam, mom), (una, on), (element, element), (ijo, son), (ant, ol), (mural, mural), (chocola, chocola), (iste, sad), (cinta, bon), (carro, cart), (edif, bu), (planta, plant), (óc, broccoli), (prim, st), (camina, runway), (cerca, close), (pop, artist), (nacional, nation), (ustr, alian), (vest, dress), (motocic, motorc), (perro, dog), (largo, ong), (+, +), (ates, tom), (fram, rasp), (camina, wal), (inta, inta)." }, { "heading": "B.3 TEXT NETWORK DETAILS", "text": "The input to the text network is a sequence of tokens $\{[\mathrm{SEQ}], w_1, \ldots, w_i\}$ that represents a sentence in any language (Devlin et al., 2019). Before inputting tokens to the transformer, we encode them with a fixed-length vector representation. To embed input tokens, we use a $V \times d$ word embedding matrix $\phi_w$, where $V$ is the size of the vocabulary considered by the tokenizer. We use $V = 30{,}000$. We augment the input encoding with positional information (word index), translating the encoding by a learned vector: $\phi_{\mathrm{txt}}(w_i) = \phi_w^T w_i + \phi_{\mathrm{pos}}(w_i)$, where $\phi_{\mathrm{pos}}$ encodes the word position of $w_i$.
We then input the augmented tokens to the transformer. 
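A minimal sketch of this shared input encoding; the class name, the maximum sequence length, and the use of an embedding lookup for positions are our own illustrative assumptions, with $V$ and $d$ taken from the values quoted above.

```python
import torch
import torch.nn as nn

class SharedTextInput(nn.Module):
    def __init__(self, vocab=30000, d=512, max_len=128):
        super().__init__()
        self.tok = nn.Embedding(vocab, d)      # phi_w: V x d word embeddings, shared by all languages
        self.pos = nn.Embedding(max_len, d)    # phi_pos: learned positional encoding

    def forward(self, token_ids):
        # token_ids: (batch, seq) BPE token indices, with [SEQ] as the first token.
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        # phi_txt(w_i) = phi_w^T w_i + phi_pos(w_i); the result is fed to the transformer blocks.
        return self.tok(token_ids) + self.pos(positions)[None]
```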
A transformer block (Vaswani et al., 2017) consists of a multi-headed self-attention layer followed by a linear layer, which outputs a hidden representation for every token in the input sequence. These transformer blocks are concatenated in series to get deeper representations. Let $H^m \in \mathbb{R}^{d \times j}$ be the $d$-dimensional hidden vectors at layer $m$. The transformer first computes vectors for queries $Q = W^m_q H^m$, keys $K = W^m_k H^m$, and values $V = W^m_v H^m$, where each $W_* \in \mathbb{R}^{d \times d}$ is a matrix of learned parameters. Using these queries, keys, and values, the transformer computes the next layer representation by attending to all elements in the previous layer:
$$H^{m+1} = SV \quad \text{where} \quad S = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d}}\right). \qquad (8)$$
In practice, the transformer uses multi-head attention, which repeats Equation 8 once for each head, and concatenates the results. The network produces a final representation $\{h^M_{[\mathrm{SEQ}]}, h^M_1, \ldots, h^M_i\}$ for a stack of $M$ transformer blocks.
As mentioned in Section 3.5, we also add a prediction head. This head takes as input the final hidden representation for the $[\mathrm{SEQ}]$ token, $h^M_{[\mathrm{SEQ}]}$." }, { "heading": "B.4 DATASET DETAILS", "text": "To collect the dataset, we used captions from the Flickr30k (Young et al., 2014), MSCOCO (Lin et al., 2014) and Conceptual Captions (Sharma et al., 2018) datasets. Flickr30k and MSCOCO are image captioning datasets that have been carefully curated and annotated in a controlled setting, so the text descriptions are accurate and thorough. However, most of the images in our dataset come from Conceptual Captions, which consists of captions harvested from the web, so the visual-language alignment is noisier.
We randomly split each dataset into 52 equally sized parts, one for each language supported by the machine translation service we use. Each split is assigned a unique language, and splits with the same language across datasets are combined. The split which is assigned the English language is set aside and translated into all 51 other languages, and only used in testing. We also set aside the split translated into Chinese for fine-tuning experiments. The remaining 50 splits have their original English captions discarded, and are then split 80%-20% into training and validation data. All experiments shown in Section 5 are run on the reserved test data.
Note that there is no overlap at all (visual or linguistic) between the different splits, except for the test split. Please see Table 8 for more details about the dataset." } ]
2020
null
SP:1e43e2ad50364f396fa19a2e9d8e9f7244a40178
[ "This paper aims to tackle the matrix completion problem by drawing connection from prior work in image completion domain. It seems to be a combination of prior work: Multi-graph convolution combined with Dirichlet energy on row and column graph laplacian where the input rating matrix is corrupted with noise. The writing and presentation is significantly below par Iclr acceptance in the current form. Also, considering some of the work mentioned below, SOTA results is an overclaim." ]
In this work we present a fully convolutional end-to-end method to reconstruct corrupted sparse matrices of non-Euclidean data. The classic example of such matrices comes from recommender systems, where the rows/columns represent items/users and the entries are ratings. The method we present is inspired by the surprising and spectacular success of methods like "deep image prior" and "deep decoder" for corrupted image completion. In sharp contrast to previous matrix completion methods, wherein the latent matrix or its factors directly serve as the optimization variable, in the method we present the matrix is parametrized as the weights of a graph neural network acting on a random noisy input. We then tune the network parameters to produce a result as close as possible to the initial sparse matrix (using its factors), obtaining state-of-the-art matrix completion results in this way. In addition to the conceptual simplicity of our method, which is just a non-Euclidean generalization of deep image priors, it holds fewer parameters than previously presented methods, which makes the parameters more tractable, the method more computationally efficient, and more applicable to real-world tasks. The method also achieves state-of-the-art results for the matrix completion task on the classical benchmarks in the field. It also, surprisingly, shows that an untrained convolutional neural network can serve as a good prior not only for image completion but also for matrix completion when redefined for graphs.
[]
[ { "authors": [ "Sami Abu-El-Haija", "Bryan Perozzi", "Amol Kapoor", "Hrayr Harutyunyan", "Nazanin Alipourfard", "Kristina Lerman", "Greg Ver Steeg", "Aram Galstyan" ], "title": "Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing", "venue": "URL http://arxiv.org/abs/1905.00067", "year": 1905 }, { "authors": [ "M. Belkin", "P. Niyogi" ], "title": "Laplacian eigenmaps for dimensionality reduction and data representation", "venue": "Neural Computation,", "year": 2003 }, { "authors": [ "Mikhail Belkin", "Niyogi Partha" ], "title": "Laplacian eigenmaps and spectral techniques for embedding and clustering", "venue": "Advances in Neural Information Processing Systems", "year": 2002 }, { "authors": [ "R.M. Bell", "J. Bennett", "Y. Koren", "C. Volinsky" ], "title": "The million dollar programming prize", "venue": "IEEE Spectrum,", "year": 2009 }, { "authors": [ "Michael M. Bronstein", "Joan Bruna", "Yann LeCun", "Arthur Szlam", "Pierre Vandergheynst" ], "title": "Geometric deep learning: going beyond euclidean data", "venue": "CoRR, abs/1611.08097,", "year": 2016 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on", "venue": "graphs. international conference on learning representations,", "year": 2014 }, { "authors": [ "R. Cabral", "F. De la Torre", "J.P. Costeira", "A. Bernardino" ], "title": "Unifying nuclear norm and bilinear factorization approaches for low-rank matrix decomposition", "venue": "IEEE International Conference on Computer Vision, pp. 2488–2495,", "year": 2013 }, { "authors": [ "D. Cai", "X. He", "J. Han", "T.S. Huang" ], "title": "Graph regularized nonnegative matrix factorization for data representation", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2011 }, { "authors": [ "Emmanuel J. Candès", "Benjamin Recht" ], "title": "Exact matrix completion via convex optimization", "venue": "CoRR, abs/0805.4471,", "year": 2009 }, { "authors": [ "Jianfei Chen", "Jun Zhu", "Le Song" ], "title": "Stochastic training of graph convolutional networks with variance reduction", "venue": "ICML, pp", "year": 2018 }, { "authors": [ "Yuejie Chi", "Yue M. Lu", "Yuxin Chen" ], "title": "Nonconvex optimization meets low-rank matrix factorization: An overview", "venue": "CoRR, abs/1809.09573,", "year": 2018 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "CoRR, abs/1606.09375,", "year": 2016 }, { "authors": [ "Inderjit S. Dhillon", "Yuqiang Guan", "Brian J. Kulis" ], "title": "Weighted graph cuts without eigenvectors: A multilevel approach", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI),", "year": 2007 }, { "authors": [ "Hongyang Gao", "Zhengyang Wang", "Shuiwang Ji" ], "title": "Large-scale learnable graph convolutional networks", "venue": "CoRR, abs/1808.03965,", "year": 2018 }, { "authors": [ "M. Ghassemi", "A. Sarwate", "N. Goela" ], "title": "Global optimality in inductive matrix completion", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Justin Gilmer", "Samuel S. Schoenholz", "Patrick F. Riley", "Oriol Vinyals", "George E. Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "CoRR, abs/1704.01212,", "year": 2017 }, { "authors": [ "William L. 
Hamilton", "Rex Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "CoRR, abs/1706.02216,", "year": 2017 }, { "authors": [ "David K. Hammond", "Pierre Vandergheynst", "Rémi Gribonval" ], "title": "Wavelets on graphs via spectral graph theory", "venue": "Applied and Computational Harmonic Analysis,", "year": 2011 }, { "authors": [ "Reinhard Heckel", "Paul Hand" ], "title": "Deep decoder: Concise image representations from untrained nonconvolutional networks. CoRR, abs/1810.03982, 2018", "venue": null, "year": 2018 }, { "authors": [ "Mikael Henaff", "Joan Bruna", "Yann LeCun" ], "title": "Deep convolutional networks on graph-structured data", "venue": "CoRR, abs/1506.05163,", "year": 2015 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Wilfried Imrich", "S Klavzar" ], "title": "Product Graphs, Structure and Recognition", "venue": null, "year": 2000 }, { "authors": [ "Prateek Jain", "Inderjit S. Dhillon" ], "title": "Provable inductive matrix completion", "venue": "CoRR, abs/1306.0626,", "year": 2013 }, { "authors": [ "G. Karypis", "V. Kumar" ], "title": "A fast and high quality multilevel scheme for partitioning irregular graphs", "venue": "SIAM Journal on Scientific Computing,", "year": 1999 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "CoRR, abs/1609.02907,", "year": 2016 }, { "authors": [ "Johannes Klicpera", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Personalized embedding propagation: Combining neural networks on graphs with personalized pagerank", "venue": "CoRR, abs/1810.05997,", "year": 2018 }, { "authors": [ "Y. Lecun", "L. Bottou", "Y. Bengio", "P. Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Stamatios Lefkimmiatis" ], "title": "Non-local color image denoising with convolutional neural networks", "venue": "CoRR, abs/1611.06757,", "year": 2016 }, { "authors": [ "Wu-Jun Li", "Dit-Yan Yeung" ], "title": "Relation regularized matrix factorization", "venue": "pp. 1126–1131,", "year": 2009 }, { "authors": [ "Hsueh-Ti Derek Liu", "Alec Jacobson", "Maks Ovsjanikov" ], "title": "Spectral coarsening of geometric operators", "venue": "CoRR, abs/1905.05161,", "year": 2019 }, { "authors": [ "Hao Ma", "Denny Zhou", "Chao Liu", "Michael R. Lyu", "Irwin King" ], "title": "Recommender systems with social regularization", "venue": "Proceedings of the fourth ACM international conference on Web search and data mining,", "year": 2011 }, { "authors": [ "M. Mardani", "G. Mateos", "G.B. Giannakis" ], "title": "Distributed nuclear norm minimization for matrix completion", "venue": "IEEE 13th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC),", "year": 2012 }, { "authors": [ "Federico Monti", "Michael M. Bronstein", "Xavier Bresson" ], "title": "Geometric matrix completion with recurrent multi-graph neural networks. CoRR, abs/1704.06803, 2017", "venue": "URL http://arxiv. org/abs/1704.06803", "year": 2017 }, { "authors": [ "Mathias Niepert", "Mohamed Ahmed", "Konstantin Kutzkov" ], "title": "Learning convolutional neural networks for graphs. 
CoRR, abs/1605.05273, 2016", "venue": "URL http://arxiv.org/abs/1605", "year": 2016 }, { "authors": [ "Guillermo Ortiz-Jiménez", "Mario Coutino", "Sundeep Prabhakar Chepuri", "Geert Leus" ], "title": "Sampling and reconstruction of signals on product graphs, 2018", "venue": null, "year": 2018 }, { "authors": [ "Nikhil Rao", "Hsiang-Fu Yu", "Pradeep K Ravikumar", "Inderjit S Dhillon" ], "title": "Collaborative filtering with graph information: Consistency and scalable methods", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Benjamin Recht" ], "title": "A simpler approach to matrix completion", "venue": "CoRR, abs/0910.0651,", "year": 2009 }, { "authors": [ "Jasson D.M. Rennie", "Nathan Srebro" ], "title": "Fast maximum margin matrix factorization for collaborative prediction", "venue": "Proceedings of the 22nd international conference on Machine learning,", "year": 2005 }, { "authors": [ "Jianbo Shi", "J. Malik" ], "title": "Normalized cuts and image segmentation", "venue": "IEEE Trans. on Pattern Analysis and Machine Intelligence,", "year": 2000 }, { "authors": [ "David I. Shuman", "Sunil K. Narang", "Pascal Frossard", "Antonio Ortega", "Pierre Vandergheynst" ], "title": "Signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular data", "venue": "domains. CoRR,", "year": 2012 }, { "authors": [ "Si Si", "Kai-Yang Chiang", "Cho-Jui Hsieh", "Nikhil Rao", "Inderjit S. Dhillon" ], "title": "Goal-directed inductive matrix completion", "venue": "In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD", "year": 2016 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor S. Lempitsky" ], "title": "Deep image prior", "venue": "International Journal of Computer Vision (IJCV),", "year": 2020 }, { "authors": [ "Tong Zhang W. Huang", "Yu Rong", "Junzhou Huang" ], "title": "Adaptive sampling towards fast graph representation learning", "venue": "CoRR, abs/1809.05343,", "year": 2018 }, { "authors": [ "Miao Xu", "Rong Jin", "Zhi-Hua Zhou" ], "title": "Speedup matrix completion with side information: Application to multi-label learning", "venue": "URL http://dblp.uni-trier", "year": 2013 }, { "authors": [ "Kai-Lang Yao", "Wu-Jun Li" ], "title": "Convolutional geometric matrix completion", "venue": "CoRR, abs/1803.00754,", "year": 2018 }, { "authors": [ "Rex Ying", "Ruining He", "Kaifeng Chen", "Pong Eksombatchai", "William L. Hamilton", "Jure Leskovec" ], "title": "Graph convolutional neural networks for web-scale recommender systems", "venue": "CoRR, abs/1806.01973,", "year": 2018 }, { "authors": [ "Z. Zhu", "Q. Li", "G. Tang", "M.B. Wakin" ], "title": "Global optimality in low-rank matrix optimization", "venue": "IEEE Transactions on Signal Processing,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Matrix completion (MC) consists of estimating the missing entries of an n×m matrixX (usually, of very big dimensions) given its measurements M on a (usually, very sparse) support Ω. An example of such matrices are signals on graphs/manifolds which are Non-Euclidean domains. The classical example of such data are recommender (recommendation) systems, where the ratings are signals on (user item) couple. The most known Matrix Completion problem is the Netflix problem, where a 1M $ prize was offered for the algorithm that can best predict user ratings in a dataset that contained 480k movies × 18k users (8.5B entries), with 0.011% known entries (Bell et al., 2009). Many works focused on solutions for the MC problem. In brief, one wishes to obtain the matrix X given matrix M as the specified input on the support Ω. Then, formally the completion task amounts to the minimization problem\nX̂ = argmin X ‖AΩ ◦ (X −M)‖2F\nwhere AΩ is the observation mask matrix (filled with 1 where data exists in the original problem), ◦ is the Hadamard product and ‖.‖2F is the Frobenius norms (Rennie & Srebro, 2005). Different approaches where presented in order to fill in matrix X . Those approached included imposing different regularization (priors) on the matrix and its factors. The most prominent approach consists of imposing a low rank (Candès & Recht, 2009; Recht, 2009) on the matrix. Then, priors based on collaborative filtering (users/items rating patterns), content based filtering (user/items profile) (Ghassemi et al., 2018; Jain & Dhillon, 2013; Xu et al., 2013; Si et al., 2016) and their combinations. Then Geometric Matrix Completion approaches appeared (Li & Yeung, 2009; Rao et al., 2015; Cai et al., 2011) and proposed describing rows/column graphs which represent similarity, then encoding the structural (geometric) information of those graphs via graph Laplacian regularization (Belkin & Partha, 2002; Belkin & Niyogi, 2003) and imposing smoothness of the data in those graphs\n(Kalofolias et al., 2014; Rao et al., 2015; Ma et al., 2011; Mardani et al., 2012). Those approaches where generally related to the field of signal processing as entries signals on the rows/columns graphs (Shuman et al., 2012). Then Geometric Deep Learning Methods where introduced to learn the domains of geometric data structures (e.g. single graphs or manifolds)(Bronstein et al., 2016; Lefkimmiatis, 2016; Defferrard et al., 2016; Niepert et al., 2016; Gilmer et al., 2017; Hamilton et al., 2017; Velickovic et al., 2017; Chen et al., 2018; W. Huang et al., 2018; Klicpera et al., 2018; Abu-El-Haija et al., 2019; Ying et al., 2018; Gao et al., 2018; Hammond et al., 2011). The current state of the art solution for Matrix completion problem, relies on an extending classical harmonic analysis methods to non-Euclidean domains. When, the geometry of the column/row spaces and their graphs is utilised to provide a Geometric Deep Learning mechanism called the RMGCNN (Monti et al., 2017) that includes a complex combined CNN and RNN(Hochreiter & Schmidhuber, 1997) networks.\nIn this work we present a simplified method for the MC problem: the Matrix Data Deep Decoder that contains a classical end to end GRAPH convolutional neural network and inspired by the leading methods from the field of image completion - the Deep Image Prior (Ulyanov et al., 2020) and the Deep Decoder (Heckel & Hand, 2018). In our method, random noisy input matrix is acted upon by the weights of a neural network (parametrization). 
This method yields state-of-the-art results for the MC task. The contributions of our work are:

• A novel approach for solving the MC problem, using deep learning with an end-to-end, purely convolutional network for graphs.

• State-of-the-art performance for the MC problem in both prediction error (RMSE) and running time¹. Our method significantly outperforms the previous state-of-the-art method - the RMGCNN.

• We show that a pure graph convolutional neural network is a good prior for the MC problem. This establishes a correspondence between convolutional neural network methods and MC problems." }, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 MATRIX COMPLETION NOTATION", "text": "The most prominent prior for the MC problem is to assume the matrix $X$ is of low rank. Low rank is obtained by rank regularization using the nuclear (trace) norm $\|X\|_*$ - the sum of the singular values of $X$. The canonical optimization problem, with parameter $\lambda_*$, is stated as: $\hat{X} = \min_{X} \| A_\Omega \circ (X - M) \|_F^2 + \lambda_* \|X\|_*$." }, { "heading": "2.1.1 MATRIX FACTORIZATION", "text": "To alleviate the computational burden for big datasets, we factorize $X = W H^T$, where $W \in \mathbb{R}^{m \times k}$ and $H \in \mathbb{R}^{n \times k}$. Here, $k \ll m, n$ is an upper bound on the rank of $X$. With this factorization, the nuclear norm term can be replaced by the sum of squared Frobenius norms, leading to the following non-convex (but still very well-behaved) problem (Rao et al., 2015): $\hat{X} = \min_{W, H} \| A_\Omega \circ (W H^T - M) \|_F^2 + \frac{\lambda_*}{2} \left( \|W\|_F^2 + \|H\|_F^2 \right)$." }, { "heading": "2.2 GEOMETRIC MATRIX COMPLETION", "text": "We introduce the geometric matrix completion framework, using notation as in RMGCNN (Monti et al., 2017).

¹Evaluated on the existing classical benchmarks for MC problems." }, { "heading": "2.2.1 THE ROW/COLUMN GRAPHS", "text": "The matrix $X$ comprises signals on the non-Euclidean domains of its rows and columns. We represent those domains by undirected weighted graphs $G_r$ (e.g., items) and $G_c$ (e.g., users) respectively, where $G_{r/c} = (V, E, W)$. $G_{r/c}$ are built either directly from the ratings matrix $X$, or based on additional data about the rows/columns (if given). Their structure is encoded in Laplacian matrices, which are built from the adjacency matrices $W_{r/c}$ (definitions below). This procedure is sketched in Figure 1 below." }, { "heading": "2.2.2 THE ADJACENCY MATRIX", "text": "For a graph $G = (V, E, W)$, the elements of its adjacency matrix $(W)_{ij} = w_{ij}$ obey: $w_{ij} = w_{ji}$, $w_{ij} = 0$ if $(i, j) \notin E$ and $w_{ij} > 0$ if $(i, j) \in E$. The adjacency matrix represents the weights of the proximity between every two vertices, and can be built based on the signal patterns or on external features of the rows/columns, using methods such as the Euclidean distance of normalized features, chi-square distance, a Gaussian kernel, k-NN clustering, k-means clustering, etc." }, { "heading": "2.2.3 THE GRAPH LAPLACIANS", "text": "The Laplacian matrices $L_r$ and $L_c$ are based on the adjacency matrices $W$ and encode the internal graph structure. The most common construction of a Laplacian matrix is an $n \times n$ matrix defined as $L = D - W$, where $D$ is the degree matrix, an $n \times n$ diagonal matrix with $(D)_{ii} = \sum_{j \neq i} w_{ij}$. We adopt the normalized graph Laplacian definition $\tilde{L} = D^{-1/2} L D^{-1/2} = I - D^{-1/2} W D^{-1/2}$."
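To make the graph construction above concrete, the following minimal numpy sketch (ours, not the authors' code) builds a thresholded Euclidean-distance adjacency matrix from a feature matrix F (one row per user/item) and the corresponding normalized Laplacian. The threshold R follows the construction the authors later describe in Section 3.1, step 3; F and R are assumed inputs.

```python
import numpy as np

def threshold_adjacency(F, R):
    """Adjacency from feature rows: keep the Euclidean distance between two
    rows/columns as the edge weight only if it is below the threshold R."""
    dist = np.sqrt(((F[:, None, :] - F[None, :, :]) ** 2).sum(-1))
    return np.where((dist < R) & (dist > 0), dist, 0.0)

def normalized_laplacian(W):
    """L~ = I - D^{-1/2} W D^{-1/2}, with D the diagonal degree matrix."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
```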
}, { "heading": "2.2.4 THE OPTIMIZATION PROBLEM", "text": "We use the graph laplacians in the optimization function as an additional prior regularizing the matrix completion problem. We’d like more similar items/users get more similar predictions. Mathematically, regarding the columns x1, . . . ,xn for example as a vector-valued function defined on the vertices Vc, the smoothness assumption implies that xj ≈ xj′ if (j, j′) ∈ Ec. Stated differently, we want the following entity (Trace norm or Dirichlet semi-norm (Kalofolias et al., 2014)):∑\ni,j w c i,j ‖xi − xj‖ 2 2 = tr ( XLcX T )\nto be as small as possible, leading to the following optimization problem:\nX̂ = argmin x∈Rm×n\n‖PΩ ◦ (X −M)‖2F + λ∗ ‖X‖∗ + λr tr ( XT LrX ) + λc tr ( XLcX T ) ,\nwhich, if we will look at the factorized model will be equivalent to,\nX̂ = argmin W∈Rn×k,HT∈Rk×n ∥∥PΩ ◦ (WHT −M)∥∥2F +λ∗ ∥∥WHT∥∥∗+λr tr (W TLrW )+λc tr (HTLcH)) From this perspective, the estimation of the left and the right factors ofX is considered as diffusion of the input fields on the row and column graphs, respectively. This separable form allows no accommodation for the low rank constraint (which pertains to the product of the graphs)." }, { "heading": "2.3 DEEP NEURAL NETWORKS", "text": "In the recent years, deep neural networks and, in particular, convolutional neural networks (CNNs) (Lecun et al., 1998) based methods have been applied with great success to Image completion tasks. Such methods are based on one of the key properties of CNN architectures - the ability to extract the important local stationary patterns of Euclidean data. Image completion with untrained networks (when only the corrupted image is the input with no other training examples) can be seen as parallel to the \"Matrix Completion\" task. Two recent works, applying un-trained deep neural networks on corrupted images, showed state-of the art results for this task. We were inspired by those methods and our goal was to generalize them to the Non-Euclidean Domain." }, { "heading": "2.3.1 DIP – DEEP IMAGE PRIOR", "text": "The method suggests to feed the network with random input Z, forward pass the random input through the network and check how close the output is to the corrupted image, while tuning the network parameters weights. This operation surprisingly reconstructs the clean image (see Ulyanov et al. (2020))." }, { "heading": "2.3.2 DEEP DECODER", "text": "The Deep Decoder method showed results even better then the DIP (see Heckel & Hand (2018)). The method proposed to take a small sample of noise, and pass it through a network, while making some non-linear operations on it and up-sample, then check how far the result is from the corrupted image while fixing the network parameters. This method showed that a deep decoder network is a very concise image prior. The number of parameters it needs to completely specify that image is very small, providing a barrier for over-fitting (catching only the most important image features (natural structures) and ignore noise) and allowing the network to be amenable to theoretical analysis." }, { "heading": "2.4 GEOMETRIC DEEP LEARNING OR DEEP LEARNING ON GRAPHS", "text": "In contrast to image matrices, the notion of convolution and pooling for Non-Euclidean matrices needs to be re-defined to give the Non-Euclidean stracture the special meaning that convolutional networks are based on. 
}, { "heading": "2.4 GEOMETRIC DEEP LEARNING, OR DEEP LEARNING ON GRAPHS", "text": "In contrast to image matrices, the notions of convolution and pooling for non-Euclidean matrices need to be re-defined to give the non-Euclidean structure the special meaning that convolutional networks are based on. Once those operations are redefined, we can build a "graph convolutional neural network" that parallels a classical neural network and find the estimate for $X$." }, { "heading": "2.4.1 CONVOLUTION FOR GRAPHS", "text": "SINGLE-GRAPH CONVOLUTION: To perform a meaningful convolution operation that keeps the operation translation-invariant, we perform spectral graph convolution with spectral graph filters. Spectral graph theory suggests that spectral filters on a matrix $Z$ can be well approximated by smooth filters in the form of a truncated expansion in terms of Chebyshev polynomials $T_k$ of the row/column graph Laplacians $\tilde{\Delta}$, up to some pre-defined order $K$ (Defferrard et al., 2016). The coefficients $\theta_k$ of those polynomials are the network parameters that we learn. Using this filter approximation, we define the single-graph convolution of a signal matrix $Z$ with a filter $y$ as $Z \star y = \sum_{k=0}^{K} \theta_k T_k(\tilde{\Delta}) Z$.

FULL MULTI-GRAPH CONVOLUTION: The RMGCNN authors propose that when a matrix contains signals on a product of two graphs (the row and column graphs), multi-graph convolution is used. Full multi-graph convolution is suitable for small matrices. When we pass a signal matrix $Z$ through the layers of a graph convolutional network, the output of the convolution of matrix $Z_l$ in layer $l$ with each filter $q$, after adding the bias parameters $\beta_q$, is ($j, j'$ index the degrees of the Chebyshev polynomials of the rows and columns for filter $q$ of layer $l$): $\tilde{Z}_{lq} = Z_l \star y_q = \sum_{j, j'=0}^{p_l} \theta_{jj',lq} \, T_j(\tilde{\Delta}_{rl}) \, Z_l \, T_{j'}(\tilde{\Delta}_{cl}) + \beta_q$.

FACTORIZED MULTI-GRAPH CONVOLUTION: Factorized graph convolution is used to alleviate the computational burden for big signal matrices, using matrix factorization and then performing single-graph convolutions on each of the factors (proposed in Bruna et al. (2014); Monti et al. (2017); Kipf & Welling (2016); Henaff et al. (2015)). It can be done with different matrix factorization techniques (Cabral et al., 2013; Chi et al., 2018; Zhu et al., 2018). In this model, the matrix is decomposed into its factors with the SVD: $\hat{Z}_q = \hat{W}_q \hat{H}_q^T$. After factorizing, to get the full multi-dimensional signal we pass each factor through the network separately and only then multiply them back together. The convolution for each factor is a single-graph convolution on each of the $W$ and $H$ matrices: $\hat{W}_q = \sum_{j=0}^{P} \left( \theta_j T_j(\tilde{\Delta}_r) \hat{W}_q + \beta_{rq} \right), \quad \hat{H}_q = \sum_{j'=0}^{P} \left( \theta_{j'} T_{j'}(\tilde{\Delta}_c) \hat{H}_q + \beta_{cq} \right)$."
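A minimal numpy sketch of the single-graph Chebyshev convolution defined above, for one filter with scalar coefficients theta. In practice the Laplacian is usually rescaled so its spectrum lies in [-1, 1] before applying the recurrence (Defferrard et al., 2016); that rescaling is omitted here for brevity.

```python
import numpy as np

def cheb_polynomials(L_tilde, K):
    """Chebyshev polynomials T_0..T_K of the graph Laplacian, via the
    recurrence T_k(x) = 2 x T_{k-1}(x) - T_{k-2}(x)."""
    n = L_tilde.shape[0]
    T = [np.eye(n), L_tilde]
    for _ in range(2, K + 1):
        T.append(2 * L_tilde @ T[-1] - T[-2])
    return T[: K + 1]

def single_graph_conv(Z, L_tilde, theta):
    """Z * y = sum_k theta_k T_k(L~) Z  -- the single-graph convolution."""
    T = cheb_polynomials(L_tilde, len(theta) - 1)
    return sum(th * (Tk @ Z) for th, Tk in zip(theta, T))
```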
}, { "heading": "2.4.2 POOLING FOR GRAPHS", "text": "Graph pooling means reducing the resolution of the graph grid (a zoom-out operation). In our work, graph pooling is done in two steps, described below.

STEP 1: GRAPH COARSENING (Dhillon et al., 2007; Karypis & Kumar, 1999; Shi & Malik, 2000) - We bi-partition the graph $G$ to form clusters of vertex pairs. At each coarsening level, we pick an unmarked vertex $i$ and match it with one of its unmarked neighbors $j$ that maximizes the local normalized cut $W_{ij} = \frac{A_{ij}}{D_{ii}} + \frac{A_{ij}}{D_{jj}}$ ($A$ is the graph adjacency matrix and $D$ is the graph degree matrix). We then treat the clusters as the vertices of the next-level coarsened graph. The edge weights on this coarsened graph are set as the sums of the weights of the edges that connect the vertices between the clusters, as described in Figure 2.

STEP 2: GRAPH POOLING / DOWN-SAMPLING WITH STRUCTURE SAVING - Defferrard et al. (2016) suggest saving the coarsening structure of the previous step in a balanced binary tree for both the row and column graphs. Practically, for each row/column graph, on each layer $l$, we save in matrices $U_{rl}$, $U_{cl}$ which vertices were down-sampled from which parent vertices in the previous layer. We rearrange the vertices of each adjacency matrix on each level according to this tree order, and use those matrices to construct the graph Laplacians $\tilde{\Delta}_{rl}$ and $\tilde{\Delta}_{cl}$ for each network level." }, { "heading": "2.5 UP-SAMPLING FOR GRAPHS", "text": "The up-sampling operation is done by multiplying the signal matrix $\hat{Z}_l$ by the parent-indicator matrices $U_{rl}$, $U_{cl}$, resulting in a level-$(l+1)$ matrix in which the row/column parents of a specific "child" get a linear combination of their children's values. We denote this operation as $\hat{Z}_{l+1} = \mathrm{Upsample}(\hat{Z}_l) = U_{rl} \hat{Z}_l U_{cl}$. In the factorized case, the matrices $W$ and $H$ are up-sampled separately in the same way, $\hat{W}_{l+1} = \mathrm{Upsample}(\hat{W}_l)$, $\hat{H}_{l+1} = \mathrm{Upsample}(\hat{H}_l)$, and then multiplied to get $Z_{l+1} = W_{l+1} H_{l+1}^T$." }, { "heading": "3 THE MATRIX DATA DEEP DECODER METHOD", "text": "In this section we present our method, the Matrix Data Deep Decoder. The core idea of our approach is to take the state-of-the-art untrained learning model for image completion, the Deep Decoder, and "translate" its network so that it fits matrix completion." }, { "heading": "3.1 NETWORK PREPARATION", "text": "1. Input the corrupted $m \times n$ rating matrix $M$ and the observation mask $A_\Omega$.
2. Input the hyper-parameters: lr - learning rate; L - number of layers (layer 1 holding the smallest down-sampled $\tilde{\Delta}_{rl}$, $\tilde{\Delta}_{cl}$); $P_{rl}/P_{cl}$ - the degrees of the row/column Chebyshev polynomials on layer $l$ (i.e., the number of neighbours we would like to consider on each layer); and finally $q_l = q_1, \dots, q_L$ - the number of neurons on each layer $l$. Each neuron learns $P_{rl}/P_{cl}$ coefficients of the Chebyshev polynomials of the row/column Laplacians.
3. Build the initial row/column graph adjacency matrices $(A_{rL}, A_{cL})$, based on $M$, the row/column properties, and a selected distance function (we used threshold clustering: for every couple of rows/columns, keep the Euclidean distance between their attributes only if it is smaller than a threshold R, otherwise 0. Alternatively, for specific data, combinations of rating patterns and user/item attributes can be taken as features).
4. Build the initial row/column normalized graph Laplacians $(\tilde{\Delta}_{rL}, \tilde{\Delta}_{cL})$ based on $(A_{rL}, A_{cL})$.
5. For each $l = L-1, \dots, 1$, for each row/column graph adjacency matrix do:
5.1. Build the row/column coarsened edge-weight matrix $(W_l)_{ij} = \frac{(A_l)_{ij}}{(D_l)_{ii}} + \frac{(A_l)_{ij}}{(D_l)_{jj}}$, where $A_l$ is the graph adjacency matrix and $D_l$ is the graph degree matrix.
5.2. Cluster every couple of closest vertices according to the $W_l$ distances and enumerate the clusters as the new graph vertices in the pooling matrices $U_{rl}$, $U_{cl}$.
5.3. Build reduced adjacency matrices $A_{rl}$, $A_{cl}$ whose entries (edge weights) are the sums of the distances between the cluster members in matrix $W_l$, as described in Figure 2, and rearrange them in the order of $U_{rl}$, $U_{cl}$.
5.4. Build the row/column normalized graph Laplacians $(\tilde{\Delta}_{rl}, \tilde{\Delta}_{cl})$ based on $A_{rl}$, $A_{cl}$.
6. Take a matrix $\hat{Z}_1$ of size $\tilde{\Delta}_{r1} \times \tilde{\Delta}_{c1}$ and fill it with random noise." }, { "heading": "3.2 NETWORK LEARNING PROCESS ALGORITHMS", "text": "A minimal sketch of the learning loop is given after Section 3.3 below." }, { "heading": "3.3 STOPPING CRITERIA", "text": "In this work we run the algorithm on each set for T = 10000 iterations, as in RMGCNN. We use the iteration weights with which we obtained the best RMSE on the test set. Thus, the number of iterations is another hyper-parameter that should be tuned for best performance."
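Putting the pieces together, here is a sketch of the factorized MDDD forward pass and the objective it minimizes, reusing single_graph_conv from the sketch in Section 2.4.1. The helper names, the tanh nonlinearity and the per-level parameter lists are our illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def mddd_forward(W, H, laplacians_r, laplacians_c, U_r, U_c, thetas_r, thetas_c):
    """Factorized MDDD forward pass (sketch): per level, a single-graph
    Chebyshev convolution on each factor, a nonlinearity, then up-sampling
    through the saved parent-indicator matrices U_rl, U_cl."""
    for l, (Lr, Lc) in enumerate(zip(laplacians_r, laplacians_c)):
        W = np.tanh(single_graph_conv(W, Lr, thetas_r[l]))
        H = np.tanh(single_graph_conv(H, Lc, thetas_c[l]))
        if l < len(U_r):                      # no up-sampling after the last level
            W, H = U_r[l] @ W, U_c[l] @ H
    return W @ H.T                            # full candidate matrix X_hat

def masked_loss(X_hat, M, mask):
    """Objective minimized w.r.t. the Chebyshev coefficients (the thetas):
    || A_Omega o (X_hat - M) ||_F^2."""
    return ((mask * (X_hat - M)) ** 2).sum()
```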
}, { "heading": "4 EXPERIMENTAL RESULTS", "text": "Up to date, the state of art results for Matrix Completion where obtained by the RMGCNN method, hence, in the experimental results we compare our results to are the results of the RMGCNN. The information about the other methods was taken from the RMGCNN work and can be found in the work of [30]. Our tested datasets started with a small simple synthetic dataset – Synthetic Netflix. Then we have continued by evaluating our method on real banchmark datasets: MovieLens 100k, Flixster, Douban and YahooMusic following Monti et al. (2017). The tested datasets statistics described in table 1 below:" }, { "heading": "4.1 BENCHMARK RESULTS ON THE TESTED DATASETS", "text": "In this section we describe the data preparation parameters of the MDDD method for all tested datasets and then present in a summerizing tables the Results compared to other known methods.\nTHE NETFLIX CHALLANGE- On the Synthetic Netflix dataset we tested both, the Non-Factorised and the factosised algorithms. In MDDD we used row degree 1,column degree 5. The graphs where constructed using 10 nearest neighbours. Our Non-Fuctorised MDDD model achieved the best accuracy, followed by our Factorised MDDD model as described in table3 (a) below .\nTHE MOVIELENS, FLIXTER, DOUBAN AND YAHOOMUSIC CHALLENGE- For the real datasets, Movielens_100k, Flixter, Douban and YahooMusic the network was prepared as following: For all datasets only the MDDD Factorised model was used, due to the big datasets size. In all settings learning rate was set to 0.01. and fully connected layer was used to translate the rows and columns embedding to the Ratings space. For all datasets the training and tests sets where taken exactly as in Monti et al. (2017). For the Movielens_100k Dataset (the most familiar benchmark dataset), the rows/column graphs where built using the threshold method described in section 3.2.2. The network had 2-layers with 32 filters on each layer and filter size of 3x3. For all other datasets, we have used the same graphs that were used in the RMGCNN work [30]. As for the YahooMusic/Douban datasets only columns or rows features were available respectively (tracks fatures or users features), to build the missing Graphs for those databasets we used the training set of the corrupted matrix with 10-nn neighbors. For example for Douban, we have used 7.5% of the ratings that we’ve marked as the training set. In the Douban dataset, we used a 2-layer network, with 64 filters on each layer, of size 2x2. In the Flixter dataset, we used a deeper network of 4 layers (64,64,32,32) filters on each with filter sizes of (1x1,40x40,1x1,1x1) on each layer. Table 2 summarizes the performance of different methods compared to MDDD." }, { "heading": "4.2 RESULTS DISCUSSION", "text": "For the Synthetic Netflix dataset our Non-Fuctorised MDDD model achieves the best accuracy, followed by our Factorised MDDD model (table 2 (a)) . For the real datasets (Movielens, Flixter, Douban and YahooMusic) , MDDD (the Factorized Model) outperforms the competitors (Monti et al., 2017; Rao et al., 2015; Yao & Li, 2018) in all the experiments (table 2 (b),(c). Our algorithm also gets the result in much less running time.For example On the Movielens dataset our algorithm converges after 1800 Iterations, compared to 25,000 in RMGCNN algorithm ( 5 minutes compared to 30 minutes - (table 2 (b))). 
The algorithm improves the state-of-the-art result by about 7% and can be further improved (Appendices A and B); most importantly, the results are obtained very quickly. We attribute this to the model's simplicity and its small number of parameters, which make it easier for the algorithm to first reconstruct the natural graph structures and only then the noise." }, { "heading": "5 CONCLUSIONS", "text": "In this work we addressed the problem of matrix completion on non-Euclidean domains, where a sparse signal lying on a grid formed by two non-Euclidean domains (graphs or manifolds) should be completed. We introduced a new method for solving the matrix completion problem: the Matrix Data Deep Decoder - a simple, intuitive, under-parametrized yet powerful method for matrix completion, inspired by the Deep Decoder method for image completion. As far as we know, this is the first method for non-Euclidean matrix data completion that is based end-to-end on a fully convolutional network. Despite its simplicity, the method shows state-of-the-art results on the currently known benchmarks in both prediction error (about 7% improvement) and running time (about 6 times faster). Because of its simplicity, the method is applicable to a variety of fields and real-life problems, such as recommender systems (the Netflix problem), pattern recognition, community detection, biological applications on gene data or DNA structure, chemical reactions, physical applications, event prediction, traffic detection, stock prediction and many more. It can also be extended to higher-dimensional spaces, such as tensors instead of matrices, and to new research directions (Appendices A and B).

Our method suggests that when we look at the matrix completion problem from the geometric point of view (as a sparse signal that lies on underlying row/column graph structures, or their product), convolutional neural networks can serve as a very strong prior for its solution. For future research and applications, this implies continuous improvement in the field of non-Euclidean learning networks, in parallel to the improvement in the field of classical learning networks." }, { "heading": "A APPENDIX A", "text": "FUTURE WORK. In this work we presented the results of the described method, which formed the basis for obtaining new state-of-the-art results. We believe the results can be further improved by the following future research: tuning the hyper-parameters; using novel methods for graph convolution in the convolutional layers, for example different graph coarsening (Liu et al., 2019); using methods for learning the best metric for the initial adjacency matrices and the Laplacians, in the same or a separate network; using multi-graph convolution for the smaller network layers and factorized convolution on the bigger levels; using other methods for matrix factorization; and using product-graph Laplacians instead of the row/column ones (Imrich & Klavzar, 2000; Ortiz-Jiménez et al., 2018).

In future work we also believe the results can be expanded to a higher-dimensional space (from matrix to tensor completion).
One of our side experiments was testing the method on a real, live mobile-ads platform that shared terabytes of information with us. The data formed a tensor of User (gamer), Content (game) and Context (the page appearance and details with which the game was advertised), and we obtained prediction results for content downloads that were both better and faster than RMGCNN, and better than all of the company's own algorithms.

The shared data included the following files:
a. Apps - 183,199 applications and host applications, including different properties of each application such as genre, developer details, price, OS, etc.
b. Hosts - as with Apps, the same 183,199 applications and host applications, including different properties of each application such as genre, developer details, price, OS, etc.
c. Users - 12,160,088 users, including user properties such as their origin, country, locale, interests and other user-specific properties.
d. Events - 632,849,289 events, each including the user, the application, the context (application host), and the specific event performed by the user (impression, click, install, etc.).

We predicted the clicks of users on the apps - but the method could work with any other kind of event.

In addition, dynamic inference (in which the matrix itself describes a time process and the geometry of the row/column spaces has some non-trivial dynamics) can be a great extension of the algorithm's applications, but it requires separate future research in different settings and benchmarks.

Finally, in future work we intend - and strongly recommend other scientists - to explore whether new learning methods that are state-of-the-art for the single-image completion problem (in the 2-D Euclidean domain) may also yield new state-of-the-art results for the parallel non-Euclidean matrix completion problem, in the way we translated the Deep Decoder network to non-Euclidean domains in this article." }, { "heading": "B APPENDIX B", "text": "SIDE RESEARCH - SMALL-MATRIX COMPLETION IN THE PRODUCT SPACE. One of the experiments we did as part of our work was to test the idea of matrix completion when we input not only the ratings matrix, but the whole product space. To test this, we took the MovieLens 100k database and created random 100x100 cuts from the matrix X. We took 30% of the data as the observed data and 7.5% of the observed data as the training data. Our goal was to complete the whole 100x100 ratings matrix (see the example run in Figure 5). First we ran the RMGCNN algorithm as described in Monti et al. (2017). Then we ran the same algorithm on the product space instead, meaning that our input was the full product-space matrix (as described in Figure 4). In all experiments we constructed the Laplacian matrices using only the features of users/movies and a partial rating column. When we compared the RMSE over all cuts, the result was an RMSE higher by 20-40% when we inputted the whole product space (see Table 4 for all cuts). In addition, not only the ratings were completed, but also all other features (like age, gender, etc.). The drawback of this method is that the learning takes a long time, and the time grows exponentially with every user/item couple. The advantage of this method is that it is applicable in real-life settings where small, sparse matrices holding important data should be completed (e.g., small businesses, investigative purposes, etc.). It is also worthwhile to test different estimators for the product space.
Future research in this direction might yield promising results." } ]
2020
null
SP:b2f40913d778d27c888d81bec337aa81a1acb46c
[ "The proposed SLIM algorithm organizes graph neural networks around substructures surrounding \"landmarks\" in the graph. In addition to presenting the three steps of the SLIM algorithm (sub-structure embedding, sub-structure landmarking, and \"identity-preserving\" graph pooling), the authors compare to other approaches on a graph classification problem. A large part of the paper is also given over to a high-level discussion of \"resolution dilemmas.\"" ]
Graph neural networks are a promising architecture for learning and inference with graph-structured data. However, generating informative graph-level features has long been a challenge. The current practice of graph pooling typically summarizes a graph by squeezing it into a single vector. However, from a complex-systems point of view, the properties of a complex system are believed to arise largely from the interactions among its components. In this paper, we analyze the intrinsic difficulty in graph classification under the unified concept of "resolution dilemmas" and propose "SLIM", an inductive neural network model for Structural Landmarking and Interaction Modelling, to remedy the information loss in graph pooling. We show that, by projecting graphs onto end-to-end optimizable, well-aligned substructure landmarks (representatives), the resolution dilemmas can be resolved effectively, so that explicit interacting relations between the component parts of a graph can be leveraged directly in explaining its complexity and predicting its properties. Empirical evaluations, in comparison with the state of the art, demonstrate promising results of our approach on a number of benchmark datasets for graph classification.
[ { "affiliations": [], "name": "RESOLUTION DILEMMAS" } ]
[ { "authors": [ "Nesreen K. Ahmed", "Ryan Rossi", "John Boaz Lee", "Theodore L. Willke", "Rong Zhou", "Xiangnan Kong", "Hoda Eldardiry" ], "title": "Learning role-based graph embeddings", "venue": "In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Rami Al-Rfou", "Dustin Zelle", "Bryan Perozzi" ], "title": "Ddgk: Learning graph representations for deep divergence graph kernels", "venue": "In Proceedings of the 2019 World Wide Web Conference,", "year": 2019 }, { "authors": [ "Uri Alon" ], "title": "Network motifs: theory and experimental approaches", "venue": "Nature Reviews Genetics,", "year": 2007 }, { "authors": [ "Jure Leskovec Austin R. Benson", "David F. Gleich" ], "title": "Higher-order organization of complex", "venue": null, "year": 2016 }, { "authors": [ "Mikhail Belkin", "Partha Niyogi", "Vikas Sindhwani" ], "title": "Manifold regularization: A geometric framework for learning from labeled and unlabeled examples", "venue": "Journal of Machine Learning Research,", "year": 2006 }, { "authors": [ "Michael M. Bronstein", "Joan Bruna", "Yann LeCun", "Arthur Szlam", "Pierre Vandergheynst" ], "title": "Geometric deep learning: Going beyond euclidean data", "venue": "IEEE Signal Processing Magazine,", "year": 2017 }, { "authors": [ "D.M. Camacho", "K.M. Collins", "R.K. Powers", "J.C. Costello", "J.J. Collins" ], "title": "Next-generation machine learning for biological", "venue": "networks. Cell,", "year": 2018 }, { "authors": [ "C. Cangea", "P. Velicković", "N. Jovanović", "T. Kipf P", "Lió" ], "title": "Towards sparse hierarchical grap classifiers", "venue": "In preprint arXiv:1811.01287,", "year": 2018 }, { "authors": [ "Paul Cilliers" ], "title": "Complexity and Postmodernism: Understanding Complex Systems", "venue": null, "year": 1998 }, { "authors": [ "Adam Coates", "Andrew Y. Ng" ], "title": "Learning feature representations with k-means", "venue": "Neural Networks: Tricks of the Trade, pp", "year": 1993 }, { "authors": [ "Nicolas Debarsy", "Stéphane Cordier", "Cem Ertur", "Franc̨ois Nemo", "Déborah Nourrit-Lucas", "Gérard Poisson", "Christel Vrain" ], "title": "Understanding Interactions in Complex Systems, Toward a Science of Interaction", "venue": null, "year": 2017 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "D.L. Donoho", "X. Huo" ], "title": "Uncertainty principles and ideal atomic decomposition", "venue": "IEEE Trans. Inf. Theory,", "year": 2001 }, { "authors": [ "Simon S Du", "Kangcheng Hou", "Russ R Salakhutdinov", "Barnabas Poczos", "Ruosong Wang", "Keyulu Xu" ], "title": "Graph neural tangent kernel: Fusing graph neural networks with graph kernels", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "H. Gao", "S. Ji" ], "title": "Graph u-net", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "William L. Hamilton", "Rex Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Proceedings of the 31st International Conference on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "L.H. Hartwell", "J.J. Hopfield", "S. Leibler", "A.W. 
Murray" ], "title": "From molecular to modular cell biology", "venue": "Nature, 2:47–52,", "year": 1999 }, { "authors": [ "Amir Hosein Khasahmadi", "Kaveh Hassani", "Parsa Moradi", "Leo Lee", "Quaid Morris" ], "title": "Memorybased graph networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Nikolaus Kriegeskorte" ], "title": "Deep neural networks: A new framework for modeling biological vision and brain information processing", "venue": "Annual Review of Vision Science,", "year": 2015 }, { "authors": [ "J.B. Lee", "R.A. Rossi", "Xiangnan Kong", "X. Kong", "E. Koh S. Kim", "A. Rao" ], "title": "Graph convolutional networks with motif-based attention", "venue": "In Proceedings of the 28th ACM International Conference on Information and Knowledge Management,", "year": 2019 }, { "authors": [ "Junhyun Lee", "Inyeop Lee", "Jaewoo Kang" ], "title": "Self-attention graph pooling", "venue": "In International Conference of Machine Learning,", "year": 2019 }, { "authors": [ "Mahdi Marsousi", "Kaveh Abhari", "Paul S. Babyn", "Javad Alirezaie" ], "title": "An adaptive approach to learn overcomplete dictionaries with efficient numbers of elements", "venue": "IEEE Transactions on Signal Processing,", "year": 2014 }, { "authors": [ "Nishant A Mehta", "Alexander G. Gray" ], "title": "Sparsity-based generalization bounds for predictive sparse coding", "venue": "In Proceedings of the 30th International Conference on International Conference on Machine Learning, pp", "year": 2013 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Neural and Information Processing System,", "year": 2013 }, { "authors": [ "R. Milo", "S. Shen-Orr", "S. Itzkovitz", "N. Kashtan", "D. Chklovskii", "U. Alon" ], "title": "Network motifs: Simple building blocks of complex networks", "venue": "In Science,", "year": 2002 }, { "authors": [ "Christopher Morris", "Martin Ritzert", "Matthias Fey", "William L. Hamilton", "Jan Eric Lenssen", "Gaurav Rattan", "Martin Grohe" ], "title": "Weisfeiler and leman go neural: Higher-order graph neural networks", "venue": "In The Thirty-third AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "M. Neumann", "R. Garnett", "C. Bauckhage", "K. Kersting" ], "title": "Propagation kernels: efficient graph kernels from propagated information", "venue": "Machine Learning,", "year": 2016 }, { "authors": [ "Mathias Niepert", "Mohamed Ahmed", "Konstantin Kutzkov" ], "title": "Learning convolutional neural networks for graphs", "venue": "Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Vitaly Maiorov Ron Meir" ], "title": "Distortion bounds for vector quantizers with finite codebook size", "venue": "IEEE Transactions on Information Theory,", "year": 1999 }, { "authors": [ "Jonathan Schmidt", "Mário R.G. Marques", "Silvana Botti", "Miguel A.L. 
Marques" ], "title": "Recent advances and applications of machine learning in solid-state materials science", "venue": "NPJ Computational Material,", "year": 2019 }, { "authors": [ "Nino Shervashidze", "SVN Vishwanathan", "Tobias Petri", "Kurt Mehlhorn", "Karsten Borgwardt" ], "title": "Efficient graphlet kernels for large graph comparison", "venue": "In Proceedings of the Twelth International Conference on Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Nino Shervashidze", "SVN Vishwanathan", "Tobias Petri", "Kurt Mehlhorn", "Karsten Borgwardt" ], "title": "Efficient graphlet kernels for large graph comparison", "venue": "Proceedings of the Twelth International Conference on Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Nino Shervashidze", "Pascal Schweitzer", "Erik Jan van Leeuwen", "Kurt Mehlhorn", "Karsten M. Borgwardt" ], "title": "Weisfeiler-lehman graph kernels", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Jonathan M. Stokes", "Kyle Swanson Kevin Yang", "Wengong Jin", "Andres Cubillos-Ruiz", "Nina M. Donghia", "Craig R. MacNair", "Shawn French", "Lindsey A. Carfrae", "Zohar Bloom-Ackermann", "Victoria M. Tran", "Anush Chiappino-Pepe", "Ahmed H. Badran", "Ian W. Andrews", "Emma J. Chory", "George M. Church", "Eric D. Brown", "Tommi S. Jaakkola", "Regina Barzilay", "James J. Collins" ], "title": "A deep learning approach to antibiotic", "venue": "discovery. Cell,", "year": 2020 }, { "authors": [ "Petar Velickovic", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Lio", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Lichen Wang", "Bo Zong", "Qianqian Ma", "Wei Cheng", "Jingchao Ni", "Wenchao Yu", "Yanchi Liu", "Dongjin Song", "Haifeng Chen", "Yun Fu" ], "title": "Inductive and unsupervised representation learning on graph structured objects", "venue": "In International COnference on Learning Representations,", "year": 2019 }, { "authors": [ "S. Wernicke" ], "title": "Efficient detection of network motifs", "venue": "IEEE/ACM Transactions on Computational Biology and Bioinformatics,", "year": 2006 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Fengwen Chen", "Guodong Long", "Chengqi Zhang", "Philip S. Yu" ], "title": "A comprehensive survey on graph neural networks", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "J. Xie", "R. Girshick", "A. Farhadi" ], "title": "Unsupervised deep embedding for clustering analysis", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Pinar Yanardag", "SVN V N Vishwanathan" ], "title": "Deep graph kernels", "venue": "In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2015 }, { "authors": [ "Carl Yang", "Mengxiong Liu", "Vincent W. Zheng", "Jiawei Han" ], "title": "Node, motif and subgraph: Leveraging network functional blocks through structural convolution", "venue": "In IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining,", "year": 2018 }, { "authors": [ "Rex Ying", "Jiaxuan You", "Christopher Morris", "Xiang Ren", "William L. 
Hamilton", "Jure Leskovec" ], "title": "Hierarchical graph representation learning with differentiable pooling", "venue": "In Proceedings of the 32nd International Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Muhan Zhang", "Zhicheng Cui", "Marion Neumann", "Yixin Chen" ], "title": "An end-to-end deep learning architecture for graph classification", "venue": "In The Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Z. Zhang", "P. Cui", "W. Zhu" ], "title": "Deep learning on graphs: A survey", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2020 }, { "authors": [ "Jie Zhou", "Ganqu Cui", "Zhengyan Zhang", "Cheng Yang", "Zhiyuan Liu", "Maosong Sun" ], "title": "Graph neural networks: A review of methods and applications", "venue": "In ArXiv. Arixv,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Complex systems are ubiquitous in natural and scientific disciplines, and how the relation between component parts gives rise to global behaviour of a system is a central research topic in many areas such as system biology (Camacho et al., 2018), neural science (Kriegeskorte, 2015), and drug and material discoveries (Stokes et al., 2020; Schmidt et al., 2019). Recently, graph neural networks provide a promising architecture for representation learning on graphs – the structural abstraction of a complex system. State-of-the-art performances are observed in various graph mining tasks (Bronstein et al., 2017; Defferrard et al., 2016; Hamilton et al., 2017; Xu et al., 2019; Velickovic et al., 2017; Morris et al., 2019; Wu et al., 2020; Zhou et al., 2018; Zhang et al., 2020). However, due to the non-Euclidean nature, important challenges still exist in graph classification. For example, in order to generate a fixed-dimensional representation for a graph of arbitrary size, graph pooling is typically adopted to summarize the information from each each node. In the pooled form, the whole graph is squeezed into a “super-node”, in which the identities of the constituent sub-graphs and their interconnections are mixed together. Is this the best way to generate graph-level features? From a complex system’s view, mixing all parts together might make it difficult for interpreting the prediction results, because properties of a complex system arise largely from the interactions among its components (Hartwell et al., 1999; Debarsy et al., 2017; Cilliers, 1998).\nThe choice of the “collapsing”-style graph pooling roots deeply in the lack of natural alignment among graphs that are not isomorphic. Therefore pooling sacrifices structural details for feature (dimension) compatibility. Recent years, substructure patterns1 draw considerable attention in graph mining, such as motifs (Milo et al., 2002; Alon, 2007; Wernicke, 2006; Austin R. Benson, 2016) and graphlets (Shervashidze et al., 2009). It provides an intermediate scale for structure comparison or counting, and has been applied to node embedding (Lee et al., 2019; Ahmed et al., 2018), deep graph kernels (Yanardag & Vishwanathan, 2015) and graph convolution (Yang et al., 2018). However, due to combinatorial nature, only substructures of very small size (4 or 5 nodes) can be considered (Yanardag\n1Informally, substructure in this paper means a connected subgraph and will be used interchargeably with it.\n& Vishwanathan, 2015; Wernicke, 2006), greatly limiting the coverage of structural variations; also, handling substructures as discrete objects makes it difficult to compensate for their similarities, and so the risk of overfitting may rise in supervised learning scenarios (Yanardag & Vishwanathan, 2015).\nWe view these intrinsic difficulties as related to resolution dilemmas in graph-structured data processing. Resolution is the scale at which measurements can be made and/or information processing algorithms are conducted, and here we will discuss two types of resolution and related dilemmas: the spatial resolution (dilemma) and the structural resolution (dilemma).\nSpatial resolution relates to the geometrical scale of the “component” that can be identified from the final representation of a graph (based on which the prediction is performed). In GNN, since graph pooling compresses the whole graph into a single vector, node and edge identities are mixed together and the spatial resolution drops to the lowest. 
We call this the vanishing spatial resolution (dilemma). Structural resolution is the fineness level at which two substructures are differentiated. The current practice of exact matching makes it computationally intractable to handle the exponentially many sub-graph instances, and the risk of overfitting may also rise, as observed in deep graph kernels (Yanardag & Vishwanathan, 2015) and dictionary learning (Marsousi et al., 2014). We will call this over-delicate substructure profiling the exploding structural resolution (dilemma). In fact, these two resolution dilemmas are not isolated: they have a causal relation, and the origin is the way we perform identification and comparison of discrete substructures (more in Section 2.3).

Our contribution. Inspired by the well-studied science of complex systems, and in particular the importance of the interacting relations between the component parts of a system, we propose a simple neural architecture called "Structural Landmarking and Interaction Modelling" - or SLIM. It allows graphs to be projected onto a set of end-to-end optimizable, well-aligned structural landmarks (representatives), so that the identities of graph substructures and their interactions can be captured explicitly to explain the complexity and improve graph classification. We show that, by resolving the two resolution dilemmas, and subsequently respecting the structural organization of complex systems, SLIM is empirically very promising and offers new possibilities in graph representation learning.

In the rest of the paper, we first define the resolution dilemmas of graph classification in Section 2, together with a discussion of related works. We then cover in Sections 3, 4 and 5 the design, analysis, and performance of SLIM, respectively. Finally, the last section concludes the paper." }, { "heading": "2 RESOLUTION DILEMMAS IN GRAPH CLASSIFICATION", "text": "A complex system is often composed of many parts interacting with each other in a non-trivial way. Since graphs are structural abstractions of complex systems, accurate graph classification depends on how the global properties of a system relate to its structure. It is believed that the properties of a complex system arise from the interactions among its components (Debarsy et al., 2017; Cilliers, 1998). Consequently, accurate interaction modelling should benefit prediction. However, this is non-trivial due to the resolution dilemmas, as described in the following subsections." }, { "heading": "2.1 SPATIAL RESOLUTION DIMINISHES IN GRAPH POOLING", "text": "Graph neural networks (GNN) for graph classification typically involve two key blocks, graph convolution and graph pooling (Kipf & Welling, 2017; Hamilton et al., 2017; Xu et al., 2019), operating at significantly different spatial resolutions. The goal of convolution is to pass information among neighboring nodes in the general form $h_v = \mathrm{AGGREGATE}(\{h_u, u \in \mathcal{N}_v\})$, where $\mathcal{N}_v$ denotes the neighbors of $v$ (Hamilton et al., 2017; Xu et al., 2019). Here, the spatial resolution is controlled by the number of convolution layers: more layers capture larger substructures/sub-trees and can lead to improved discriminative power (Xu et al., 2019). In other words, the spatial resolution in the convolution stage can be controlled easily, and multiple resolutions may even be combined via a CONCATENATE function (Hamilton et al., 2017; Xu et al., 2019) for improved modelling. A minimal aggregation sketch is given below.
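As a concrete (and generic) instance of the aggregation above - not SLIM-specific, and with mean-aggregation as an assumed choice - here is a one-step sketch in numpy:

```python
import numpy as np

def aggregate(A, H):
    """One round of mean-aggregation over neighbors.

    A : (n, n) binary adjacency matrix
    H : (n, d) node features
    Each node's new vector is the average of its neighbors' vectors,
    a generic instance of h_v = AGGREGATE({h_u : u in N_v}).
    """
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    return (A @ H) / deg

# Stacking k such rounds lets each node summarize its k-hop rooted subtree,
# i.e., the spatial resolution of the convolution stage grows with depth.
```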
The goal of graph pooling is to generate compact, graph- or subgraph-level representations that are compatible across graphs. Due to the lack of natural alignment between non-isomorphic graphs, graph pooling typically "squeezes" a graph $G$ into a single vector (or "super-node") of the form $h_G = \mathrm{READOUT}(\{f(h_v), \forall v \in \mathcal{V}\})$, where $\mathcal{V}$ is the vertex set of $G$. Readout functions include max-pooling (Cangea et al., 2018), sum-pooling (Xu et al., 2019), other pooling functions (Hamilton et al., 2017), and deep sets (Zaheer et al., 2017); attention has been used to evaluate node importance in attention pooling (Lee et al., 2019) and gPool (Gao & Ji, 2019); hierarchical pooling has also been investigated (Ying et al., 2018). The spatial resolution drops significantly in graph pooling, as shown in Figure 1. Since all the nodes (and their representations) are mixed into one vector, the subsequent classifier can no longer identify any individual substructure, regardless of the spatial resolution used in the convolution stage. We call this "diminishing spatial resolution". A diminishing spatial resolution mixes the identities of sub-structures (e.g., the functional modules of a molecule), making it non-trivial to trace the behaviour of the classifier back to meaningful parts of the graph for interpretation." }, { "heading": "2.2 STRUCTURAL RESOLUTION EXPLODES IN SUBSTRUCTURE IDENTIFICATION", "text": "Substructures are the basic units that accommodate interacting relations. A global criterion for identifying and aligning substructures is the key to preserving substructure identities and comparing the inherent interactions across graphs. Again, the granularity in determining whether two substructures are "similar" or "different" is subject to a wide spectrum of choices, which we call the "structural resolution". We illustrate the concept in Figure 2. The right end of the spectrum denotes the finest resolution in differentiating substructures: exact matching, as used in motif/graphlet manipulation (Milo et al., 2002; Alon, 2007; Wernicke, 2006; Yang et al., 2018; Shervashidze et al., 2009). The exponential number of subgraph configurations finally leads to an "exploding" structural resolution, because maintaining a large number of unique substructures is computationally infeasible and easily overfits (Yanardag & Vishwanathan, 2015). The left end of the spectrum treats all substructures the same and underfits the data. We are interested in a medium structural resolution, where similar substructures are mapped to the same identity, which we believe can benefit generalization performance (empirical evidence in Fig. 5)." }, { "heading": "2.3 RELATION BETWEEN SPATIAL AND STRUCTURAL RESOLUTION DILEMMAS", "text": "The two resolution dilemmas have a causal relation, and the logic chain is as follows.

1. Due to the difficulty of characterizing discrete subgraphs, exact matching is typically adopted.
2. As a result, an exploding structural resolution (dilemma) is caused.
3. Such an over-delicate granularity makes it infeasible to compare substructures across graphs.
4. As a result, a collapsing-style graph pooling has to be adopted that summarizes the whole graph into a single vector, serving as a compatible graph-level feature.
5. As a result, a vanishing spatial resolution (dilemma) is finally caused.

Namely, the exploding structural resolution makes (collapsing-style) graph pooling an inevitable choice, which in turn leads to the diminishing spatial resolution. Since the root cause of the exploding structural resolution is how we typically manipulate discrete sub-structures, i.e., exact matching, we replace exact matching with structural landmarking in SLIM, so that both dilemmas are coordinately resolved."
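To make the information loss of collapsing-style pooling tangible, here is a tiny numpy illustration (ours, not from the paper): infinitely many different node-embedding matrices pool to exactly the same graph-level vector under a sum readout, so the per-substructure identities cannot be recovered from the pooled representation.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))          # node embeddings after convolution
h_G = H.sum(axis=0)                  # collapsing-style sum readout

# Perturb the embeddings, then re-center so the column sums still match h_G:
H_alt = H + rng.normal(size=H.shape)
H_alt -= (H_alt.sum(axis=0) - h_G) / len(H_alt)
print(np.allclose(H_alt.sum(axis=0), h_G))   # True: identities are mixed away
```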
}, { "heading": "3 STRUCTURAL LANDMARKING AND INTERACTION MODELLING (SLIM)", "text": "The key idea of SLIM is to compute landmarks (representatives) from the distribution of substructures (embedded in a continuous space) across different graphs. By doing this, identification and comparison of sub-structures become much easier, and an identity-preserving graph pooling becomes applicable that explicitly models the interactions between the component parts of a graph.

Problem Setting. Given a set of labeled graphs $\{G_i, y_i\}$ for $i = 1, 2, \dots, n$, each graph is defined on the node/edge sets $G_i = (V_i, E_i)$ with adjacency matrix $A_i \in \mathbb{R}^{n_i \times n_i}$, where $n_i = |V_i|$ and $y_i \in \{\pm 1\}$. Assume that the nodes are drawn from $c$ categories, and that the node attribute matrix for $G_i$ is $X_i \in \mathbb{R}^{n_i \times c}$. Our goal is to train an inductive model to predict the labels of the testing graphs. The SLIM network has three main steps: (1) substructure embedding, (2) substructure landmarking, and (3) identity-preserving graph pooling, as shown in Figure 3. A detailed discussion follows." }, { "heading": "3.1 SUBSTRUCTURE EMBEDDING", "text": "The goal of substructure embedding is to extract substructure instances and embed them in a metric space. One can employ multiple layers of graph convolution (Hamilton et al., 2017; Xu et al., 2019) to model substructures (in fact, rooted sub-trees growing from each node). In Figure 3, the sub-graph in each shaded circle represents a substructure instance associated with one atom.

More specifically, we extract one sub-graph instance from each node using an h-hop breadth-first search. Let $A_i^{(k)}$ be the $k$-th order adjacency matrix, i.e., its $pq$-th entry equals 1 only if nodes $p$ and $q$ are within $k$ hops of each other. Since each sub-graph is associated with one node, the sub-graphs extracted from $G_i$ can be represented as $Z_i = A_i^{(k)} X_i$, whose $j$-th row is a $c$-dimensional vector summarizing the counts of the $c$ node types in the sub-graph around the $j$-th node. Again, different variations of graph convolution (Kipf & Welling, 2017; Hamilton et al., 2017) can be adopted (see Appendix A).

Next we consider embedding the substructure instances (i.e., the rows of the $Z_i$'s) from each graph jointly into a latent space so that statistical manipulations become feasible. The embedding should preserve important proximity relations to facilitate subsequent landmarking: if two substructures are similar, or if they frequently inter-connect with each other, their embeddings should be close. In other words, the embedding should be smooth with regard to both structural similarity and geometric interaction.

A parametric transform of the $Z_i$'s with controlled complexity can guarantee the smoothness of the embedding w.r.t. structural similarity, e.g., an autoencoder (with one hidden layer as an example): $f(Z_i) = \sigma(Z_i T + b)$. (1) Here $T$ and $b$ are the transform matrix and bias term of the autoencoder. Let $H_i = f(Z_i) \in \mathbb{R}^{n_i \times d}$ be the embedding of the $n_i$ sub-graph instances extracted from $G_i$.
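A minimal numpy sketch of the two steps above: the k-hop substructure summary Z_i = A_i^(k) X_i, followed by the one-layer embedding of Eq. (1). The tanh nonlinearity and the assumption that T and b are given (learnable) parameters are ours.

```python
import numpy as np

def khop_adjacency(A, k):
    """Binary k-hop adjacency: entry (p, q) is 1 iff p and q are within k hops."""
    reach = np.linalg.matrix_power(np.eye(len(A)) + A, k)
    return (reach > 0).astype(float)

def substructure_embeddings(A, X, k, T, b):
    """Z_i = A^(k) X_i (node-type counts in each k-hop subgraph),
    then H_i = sigma(Z_i T + b) -- Eq. (1), with sigma = tanh here."""
    Z = khop_adjacency(A, k) @ X
    return np.tanh(Z @ T + b)
```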
To maintain the smoothness of the H_i's w.r.t. geometric interaction, we maximize the log-likelihood of the co-occurrence of substructure instances within each graph, similar to word2vec (Mikolov et al., 2013):
\max \sum_{i=1}^{n} \sum_{j=1}^{n_i} \sum_{l \in N_j^i} \log \frac{\exp\langle H_i(j,:), H_i(l,:)\rangle}{\sum_{l'} \exp\langle H_i(j,:), H_i(l',:)\rangle} (2)
Here H_i(j,:) is the j-th row of H_i, ⟨·,·⟩ is the inner product, and N_j^i are the neighbors of node j in graph G_i. This loss function tends to embed strongly inter-connecting substructures close to each other." }, { "heading": "3.2 SUBSTRUCTURE LANDMARKING", "text": "The goal of structural landmarking is to identify a set of informative structural landmarks in the continuous embedding space such that these landmarks have: (1) high statistical coverage, namely, they should faithfully recover the distribution of the substructures from the input graphs, so that they generalize to new substructure examples from the distribution; and (2) high discriminative power, namely, the landmarks should reflect discriminative interaction patterns for classification.
Let U = {μ_1, μ_2, ..., μ_K} be the set of structural landmarks. In order for them to be representative of the substructures from the input graphs, it is desirable that each sub-graph instance can be faithfully approximated by the closest landmark in U. Thus, we minimize the following distortion loss
\sum_{i=1}^{n} \sum_{j=1}^{n_i} \min_{k=1,2,...,K} \| H_i(j,:) - \mu_k \|^2. (3)
Here H_i(j,:) denotes the j-th row (substructure) of graph G_i. In practice, we implement a soft assignment by using a cluster indicator matrix W_i ∈ R^{n_i × K} for each graph G_i, whose (j,k)-th entry is the probability that the j-th substructure of G_i belongs to the k-th landmark μ_k. Inspired by deep embedding clustering (Xie et al., 2016), W_i is parameterized by a Student's t-distribution
W_i(j,k) = \frac{(1 + \|H_i(j,:) - \mu_k\|^2/\alpha)^{-\frac{\alpha+1}{2}}}{\sum_{k'} (1 + \|H_i(j,:) - \mu_{k'}\|^2/\alpha)^{-\frac{\alpha+1}{2}}},
and the loss function can be greatly simplified by minimizing the KL-divergence
\min_{U, H_i's} \sum_i \mathrm{KL}(W_i, \tilde{W}_i), \quad \text{s.t.} \quad \tilde{W}_i(j,k) = \frac{W_i^2(j,k)/\sum_l W_i(l,k)}{\sum_{k'} [W_i^2(j,k')/\sum_l W_i(l,k')]}. (4)
Here, \tilde{W}_i is a self-sharpening version of W_i, and minimizing the KL-distance forces each substructure instance to be assigned to only a small number of landmarks, similar to sparse dictionary learning. Besides the unsupervised regularization in (3) or (4), the learning of the structural landmarks is also driven by the classification loss, guaranteeing the discriminative power of the landmarks.
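The following sketch (Python/NumPy; α, K, and the data are illustrative assumptions) computes the Student's-t soft assignment W_i, its sharpened target \tilde{W}_i, and the KL term of Eq. (4).

import numpy as np

def soft_assign(H, mu, alpha=1.0):
    # W[j, k]: probability that substructure j belongs to landmark k (Student's t).
    d2 = ((H[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # (n, K) squared distances
    w = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return w / w.sum(axis=1, keepdims=True)

def sharpen(W):
    # Self-sharpening target of Eq. (4): square, normalize per landmark, renormalize per row.
    t = W ** 2 / W.sum(axis=0, keepdims=True)
    return t / t.sum(axis=1, keepdims=True)

def kl_loss(W, eps=1e-12):
    T = sharpen(W)
    return np.sum(W * (np.log(W + eps) - np.log(T + eps)))

rng = np.random.default_rng(0)
H = rng.normal(size=(30, 16))    # 30 substructure embeddings
mu = rng.normal(size=(5, 16))    # K = 5 landmarks
W = soft_assign(H, mu)
print(W.shape, kl_loss(W))       # (30, 5) and a scalar clustering loss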
" }, { "heading": "3.3 IDENTITY-AND-INTERACTION-PRESERVING GRAPH POOLING", "text": "The goal of identity-and-interaction-preserving graph pooling is to project the structural details of each graph onto the common space of landmarks, so that a compatible, graph-level feature can be obtained that simultaneously preserves the identity of the parts (substructures) and models their interactions.
The structural landmarking mechanism allows computing rich, compatible graph-level features.
1. Substructure distribution in each graph. The density of the K substructure landmarks in graph G_i can be computed as
p_i = W_i^\top \cdot \mathbf{1}_{n_i \times 1}. (5)
2. First-order moment of substructures in each graph. The means of the substructures belonging to each of the K landmarks in graph G_i are
M_i = X_i^\top \cdot W_i \cdot P_i^{-1} (6)
with P_i = diag(p_i): the k-th column of M_i is the mean of the sub-graphs assigned to the k-th landmark.
3. Sub-structure interaction in each graph. We can model how the K landmarks interact with each other in graph G_i. To do this, we project the adjacency matrices A_i onto the landmark sets and obtain a K × K interaction matrix
C_i = W_i^\top \cdot A_i \cdot W_i, (7)
which encodes the interacting relations (geometric connections) among the K structural landmarks. We can further normalize this interaction as \tilde{C}_i = P_i^{-1} C_i P_i^{-1}.
These features can be used together for the final classification. For example, they can be concatenated and fed into a fully-connected layer. One can also combine them and transform each graph G_i into a constant-sized “landmark” graph with node features M_i, node weights p_i, and edge weights C_i. Then standard graph convolution can be applied to the landmark graphs to generate graph-level features (without the pains of graph alignment anymore). In our experiments, we simply use the normalized sub-structure interaction matrix \tilde{C}_i (re-shaped into a vector) as the graph-level feature (more details in Appendix H).
We illustrate the architecture of SLIM in Figure 4. Here, the final graph features are fed into one fully-connected layer with dimension 64 for the final prediction. Globally, the objective function includes the supervised part, namely the cross-entropy loss for classification, and the unsupervised part, namely the node2vec loss (2) reflecting the geometric connections between substructures within each graph, and the clustering loss (4) reflecting the sub-structure similarity across different graphs. The weights of the unsupervised loss terms (2) and (4) are 1 and 10, respectively. More details are given in Section 5.
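Continuing the NumPy sketch from above (shapes and data are again illustrative assumptions, with a random stand-in for the learned soft assignment), the three graph-level features of Eqs. (5)-(7) can be computed as:

import numpy as np

def landmark_pooling(A, X, W):
    # A: (n, n) adjacency, X: (n, c) node attributes, W: (n, K) soft assignments.
    p = W.T @ np.ones(A.shape[0])             # Eq. (5): landmark densities, (K,)
    P_inv = np.diag(1.0 / np.maximum(p, 1e-12))
    M = X.T @ W @ P_inv                       # Eq. (6): per-landmark means, (c, K)
    C = W.T @ A @ W                           # Eq. (7): landmark interactions, (K, K)
    C_norm = P_inv @ C @ P_inv                # normalized interaction
    return p, M, C_norm

rng = np.random.default_rng(0)
n, c, K = 30, 7, 5
A = (rng.random((n, n)) < 0.2).astype(float); A = np.maximum(A, A.T)
X = np.eye(c)[rng.integers(0, c, n)]
W = rng.dirichlet(np.ones(K), size=n)         # stand-in for the learned assignment
p, M, C = landmark_pooling(A, X, W)
graph_feature = C.reshape(-1)                 # the feature used in the experiments
print(p.shape, M.shape, graph_feature.shape)  # (5,) (7, 5) (25,)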
" }, { "heading": "4 DISCUSSIONS", "text": "" }, { "heading": "4.1 STRUCTURAL RESOLUTION FROM CODING PERSPECTIVE", "text": "The structural resolution is controlled by the landmark size K, which can be deemed the number of basis vectors in dictionary learning. Note that graphs can be considered as being constructed from (inter-connected) substructure instances, with each instance z represented by the landmarks as z = \sum_k \alpha_k \mu_k. In other words, the structural landmarks can be deemed code vectors (or basis vectors) in a dictionary.
In dictionary learning, it is known that, in general, neither a too-small nor a too-large dictionary is desirable. Too few code vectors fail to recover basic data structures, whereas too many basis vectors may result in overfitting (Marsousi et al., 2014). In particular, when the redundancy of the basis vectors, as measured by the “mutual coherence”, exceeds a certain range, the probability of a faithful signal recovery diminishes due to the instability of sparse coding (Donoho & Huo., 2001; Mehta & Gray, 2013).
We have very similar observations on the choice of the structural resolution K. Note that exact sub-structure matching corresponds to a maximal K, due to the combinatorial nature of substructures; in this case the redundancy among the structural landmarks increases significantly, which violates the recovery condition and leads to inferior performance. This is why we prefer a reasonable control of the structural resolution, in contrast to exact sub-structure matching as in graphlets or motif discovery. In Appendix B (Theorem 1), we provide a more detailed theoretical analysis of how the mutual coherence of the structural landmarks depends on the choice of the structural resolution K." }, { "heading": "4.2 COMPARISON WITH RELATED METHODS", "text": "Graph Isomorphism. GNNs have great potential in graph isomorphism testing by generating injective node embeddings, thanks to the theoretic foundations in (Xu et al., 2019; Morris et al., 2019). SLIM provides new insight here: (1) it finds a tradeoff in handling similarity and distinctness; while the graph isomorphism network (Xu et al., 2019) tries to discern even the slightest difference between sub-structures, the SLIM network groups similar sub-structures (with tunable granularity), so that graphs can be projected onto structural landmarks; (2) it explores new ways of generating graph-level features: instead of aggregating all parts together, it taps into the vision of complex systems, so that interaction relations are leveraged to explain the complexity and improve the learning (more in Appendix I).
Graphlets, Graph Kernels, and Embedding Methods. Graphlets and graph kernels both exploit substructures to characterize graphs and their similarity. The SLIM network has some key differences. First, we consider sub-structure landmarks that are end-to-end optimizable for generating discriminative, graph-level interaction patterns, while graph kernels or graphlets typically enumerate sub-structures offline. Second, SLIM models the interaction between sub-structures, which is very different from graph kernels. Recently, the Deep Divergence Graph Kernel (Al-Rfou et al., 2019) was proposed for unsupervised graph similarity learning; the authors designed an isomorphism attention mechanism to compare nodes from source and target graphs, while SLIM uses global clustering across a batch of (training) graphs. Finally, embedding-type methods mostly focus on node- or edge-level representations (Mikolov et al., 2013; Ahmed et al., 2018) (see more in Appendix J).
Other Aggregation or Pooling Methods. Hierarchical pooling (Ying et al., 2018; Gao & Ji, 2019; Lee et al., 2019; Khasahmadi et al., 2020) can exploit non-flat graph organization, but the final output is still in the form of a single, aggregated node vector. Sortpooling re-arranges graph nodes into a linear chain and performs 1d-convolution (Zhang et al., 2018); SEED uses the distribution of multiple random walks to capture graph structures (Wang et al., 2019); the deep graph kernel evaluates graph similarity by subgraph counts (Yanardag & Vishwanathan, 2015). Explicit modelling of the interaction relations in graphs is still not considered in these approaches." }, { "heading": "5 EXPERIMENTS", "text": "Benchmark data. We used the following benchmark data sets. (1) MUTAG: chemical compounds, with 188 instances and two classes; there are 7 node/atom types. (2) PROTEINS: protein molecules, with 1,113 instances and three classes/node-types. (3) NCI1: chemical compounds for cancer cell lines, with 4,110 instances and two classes. (4) PTC: chemical compounds for toxicology prediction, with 417 instances and 8 classes. (5) D&D: enzyme classification, with 1,178 instances and two classes. (6) IMDB-B: a movie collaboration data set with 1,000 instances and two classes. (7) IMDB-M: a movie collaboration data set with 1,500 instances and three classes. (8) COLLAB: scientific collaboration networks from 3 physics fields (classes). We list the detailed statistics of the graph data sets in Table 1.
Competing methods. We considered (1) the Graph neural tangent kernel (GNTK) (Du et al., 2019); (2) the Graph Isomorphism Network (GIN) (Xu et al., 2019); (3) end-to-end graph classification (DCGNN) (Zhang et al., 2018); (4) hierarchical and differential pooling (DiffPool) (Ying et al., 2018); (5) Self-attention Pooling (SAG) (Lee et al., 2019); (6) a convolutional network for graphs (PATCHY-SAN) (Niepert et al., 2016); (7) the Graphlet kernel (GK) (Shervashidze et al., 2009); (8) Weisfeiler-Lehman Graph Kernels (WLGK) (Shervashidze et al., 2011); and (9) the Propagation kernel (PK) (Neumann et al., 2016). For methods (4), (6), (7), (8), and (9) we directly cite their reported results due to the unavailability of their code; for the other competing methods we ran their code and report the results.
Experimental setting. Following the setting in (Xu et al., 2019) and (Niepert et al., 2016), we evenly split the data into 10 folds and report the average and standard deviation of the accuracies over the 10 rounds. The spatial resolution is controlled by a BFS with 3 hops; the structural resolution is set to K = 100. Hyper-parameter selection is based on a small portion of the training splits used as a validation set, and includes: (1) the number of hidden units in the autoencoder of Eq. (1), chosen from {d, d/2, 2d, 8, 16} where d is the input dimension of the encoder; (2) the optimizer, chosen from SGD and Adagrad, with learning rate in {1e−2, 5e−2, 1e−3, 5e−3, 1e−4}; (3) the local graph representation, including the node distribution, the layer-wise distribution, and the weighted layer-wise summation (more in Appendix A).
Structural Resolution. In Figure 5, we examine the performance of SLIM with different structural resolutions (K). As can be seen, the accuracy curve is bell-shaped. When K is either too small (underfitting) or too large (coherent landmarks that overfit), the accuracy is inferior, and the best performance is typically obtained around a medium K value. This justifies our conjecture, as well as the usefulness of structural landmarking in improving graph classification.
Classification Performance. We compare the performance of the different methods in Table 2. We use three social network data sets (IMDB-B, IMDB-M, COLLAB) and five bioinformatics data sets (MUTAG, PTC, NCI1, PROTEINS, DD). Overall, as can be seen in Table 2, neural network-based approaches are more competitive than graph kernels, except that graph kernels have lower fluctuations, and the WL graph kernel performs best on the NCI1 dataset. For social network data, the SLIM network attains competitive scores on IMDB-B and IMDB-M, but is worse on COLLAB. We speculate that, since social network data do not have node features, the advantage of SLIM might be less significant.
From Table 3, SLIM yields the highest average ranking across all 8 benchmark datasets.
Algorithm Stability. In Figure 6 (more results in Appendix G.1, Figure 9), we plot the evolution of the testing accuracy versus the training epochs, so as to obtain a more comprehensive evaluation of algorithm stability. As can be seen, our approach has an accuracy curve that converges relatively faster and remains more stable with respect to the epochs. This signifies a small variance during the training process and makes it practically easy to determine when to stop training. Other GNN algorithms can also attain a high accuracy on some of the benchmark datasets, but their prediction performance fluctuates significantly across the training epochs (even when using very large mini-batch sizes).
In such cases, determining when to stop can be challenging. We speculate that the stability of the SLIM network arises from the explicit modelling of the sub-structure distributions. In Figure 6, it is also worth noting that on the MUTAG data the proposed method produces a classification with 100% accuracy on more than half of the runs across different folds, and converges to the perfect classification steadily. This demonstrates the power of the SLIM network in capturing important graph-level features.
Impact of Spatial Resolution and Unsupervised Loss. The impacts of the spatial resolution and the unsupervised losses are shown in Figure 7. In Figure 7(a), 3-hop sub-graphs tend to be a good choice of spatial resolution for bioinformatics data. This could be consistent with the scale of meaningful functional modules in a molecule. In Figures 7(b) and (c), both the node2vec loss (2) and the clustering loss (4) are shown to benefit prediction with properly chosen weights. As expected, the node2vec loss promotes smoothness of the embedding with regard to the geometric interactions between sub-structures, while the clustering loss tends to promote more “frequent” sub-structure representatives as the landmarks, thus making things more stable. Overall, the unsupervised losses serve as regularization that benefits the learning task, and we will perform more in-depth studies in future research." }, { "heading": "6 CONCLUSION", "text": "Graph neural networks provide a popular state-of-the-art computational architecture for graph mining. In this paper, we designed the SLIM network, which employs structural landmarking to resolve the resolution dilemmas in graph classification and to capture inherent interactions in graph-structured systems.
Encouraged by the promising experimental results, we expect this attempt to open up possibilities in designing GNNs with informative structural priors." }, { "heading": "A REPRESENTATION OF SUBSTRUCTURES", "text": "As discussed in Section 3.1, the simplest form for quantifying a sub-graph is
Z_i = A_i X_i. (8)
Here we list a few simple variations used for representing the sub-graph grown from each node, including (1) emphasizing the center node,
Z_i = [X_i; A_i X_i], (9)
as inspired by (Xu et al., 2019); (2) the layer-wise node-type distribution
Z_i = [\tilde{A}_i^{(1)} X_i; \tilde{A}_i^{(2)} X_i; ...; \tilde{A}_i^{(k)} X_i], (10)
where \tilde{A}_i^{(k)} specifies whether two nodes in G_i are exactly k hops apart; or (3) the weighted layer-wise summation
Z_i = \sum_k \alpha_k \tilde{A}_i^{(k)} X_i, (11)
where the α_k's are non-negative weights that decay with k, giving a more delicate summary of the node-type distributions on each layer of the BFS.
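A compact sketch of these three variants (Python/NumPy; the decay schedule and shortest-path helper are assumptions made for illustration):

import numpy as np

def exact_hop(A, k):
    # (p, q) entry is 1 iff p and q are exactly k hops apart (shortest path).
    n = A.shape[0]
    dist = np.full((n, n), np.inf); np.fill_diagonal(dist, 0)
    reach = np.eye(n)
    for h in range(1, n):
        reach = ((reach @ A) > 0).astype(float)
        dist[(reach > 0) & np.isinf(dist)] = h
    return (dist == k).astype(float)

def z_center(A, X):                  # Eq. (9): emphasize the center node
    return np.hstack([X, A @ X])

def z_layerwise(A, X, k):            # Eq. (10): layer-wise node-type distribution
    return np.hstack([exact_hop(A, h) @ X for h in range(1, k + 1)])

def z_weighted(A, X, k, decay=0.5):  # Eq. (11): weighted layer-wise summation
    return sum((decay ** h) * (exact_hop(A, h) @ X) for h in range(1, k + 1))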
" }, { "heading": "B MUTUAL COHERENCE OF STRUCTURAL LANDMARKS", "text": "In the following, we quantify a lower bound of the coherence as a function of the landmark size K in clustering-based basis selection, since sparse coding and the k-means algorithm generate very similar code vectors (Coates & Ng, 1993). Theorem 1. The lower bound of the squared mutual coherence of the landmark vectors increases monotonically with K, the number of landmarks, in clustering-based sparse dictionary learning:
\mu^2(U) \geq 1 - \frac{4 C_d C_p}{u_{\max}^2 K^{1/d}} \left( \lfloor (K/2)^{1/d} \rfloor^{-1} + 1 \right).
Here, d is the dimension, C_d = 32 (1 + \log(d)/d) \gamma_d V_d, where \gamma_d = 1 + d \log(d \log(d)) and V_d = 2\Gamma(\frac{1}{2})^d / (d\,\Gamma(\frac{d}{2})) is the volume of the d-dimensional unit ball; u_max is the maximum \ell_2-norm of (a subset of) the landmark vectors μ_k, and C_p is a factor depending on the data distribution p(·).
Proof. Suppose we have n spatial instances embedded in the d-dimensional latent space as {z_1, z_2, ..., z_n}, and the landmarks (or code vectors) are μ_1, μ_2, ..., μ_K. Let p(z) be the density function of the instances. Define the averaged distance between each instance and its closest landmark point as
s = \frac{1}{n} \sum_{i=1}^{n} \| z_i - \mu_{c(i)} \|_2, (12)
where c(i) is the index of the landmark closest to instance i. As expected, s decays with the number of landmarks at the following rate (Ron Meir, 1999):
s \leq C_d C_p \left( \lfloor (K/2)^{1/d} \rfloor^{-1} + 1 \right) K^{-1/d}, (13)
where C_d = 32 (1 + \log(d)/d) \gamma_d V_d is a dimension-dependent factor, with V_d = 2\Gamma(\frac{1}{2})^d / (d\,\Gamma(\frac{d}{2})) the volume of the unit ball in d-dimensional Euclidean space and \gamma_d = 1 + d \log(d \log(d)); C_p = \left( \int p(z)^{\frac{d}{d+1}} dz \right)^{\frac{d+1}{d}} is a factor depending on the distribution p.
Since s is the average distortion error, there exists a non-empty subset of instances Ω_z such that ‖z_i − μ_{c(i)}‖ ≤ s for i ∈ Ω_z. Next we only consider this subset of instances; the relevant set of landmarks is denoted by Ω_u. For the landmarks μ_p ∈ Ω_u, we make the realistic assumption that there are enough instances so that we can always find one instance z falling in the middle of μ_p and its closest landmark neighbor μ_q. In this case, we can bound the distance between the closest landmark pairs as
\| \mu_p - \mu_q \| \leq \| \mu_p - z \|_2 + \| \mu_q - z \|_2 \leq 2s.
For any such pair, assume that the angle spanned by the two vectors is θ_pq. We can bound this angle by
\sin(\theta_{pq}) \leq \frac{2s}{\| \mu_p \|}. (14)
Let u_{\max} = \max_{\mu_p \in \Omega_u} \| \mu_p \|_2. We can finally lower-bound the normalized correlation between close landmark pairs, and hence the coherence of the landmarks, as
\mu^2(U) \geq \max_{p,q \in \Omega_u} \cos^2(\theta_{pq}) = \max_{p,q \in \Omega_u} 1 - \sin^2(\theta_{pq}) \geq 1 - \frac{4s^2}{u_{\max}^2} \geq 1 - \frac{4 C_d K^{-1/d}}{u_{\max}^2} \left( \lfloor (K/2)^{1/d} \rfloor^{-1} + 1 \right).
This indicates that the squared mutual coherence of the landmarks has a lower bound that consistently increases as the number of landmark vectors, K, grows in a dictionary learning process.
This theorem provides important guidance on the choice of the structural resolution. It shows that when a clustering-based dictionary learning scheme is used to determine the structural landmarks, the size of the dictionary K cannot be chosen too large, or else the risk of overfitting can be huge. Note that exact sub-structure matching, as is often practiced in current graph mining tasks, corresponds to an extreme case where the number of landmarks K equals the number of unique sub-structures; it should therefore be avoided in practice. The structural landmarking scheme is a flexible framework for tuning the number of landmarks and avoiding overfitting.
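As an illustration of the theorem (not part of the paper's experiments; the data, K values, and coherence estimator are assumptions), one can observe this trend empirically by clustering random embeddings with k-means and measuring the maximum normalized correlation among the resulting landmarks:

import numpy as np

def kmeans(Z, K, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    mu = Z[rng.choice(len(Z), K, replace=False)]
    for _ in range(iters):
        assign = ((Z[:, None] - mu[None]) ** 2).sum(-1).argmin(1)
        for k in range(K):
            if (assign == k).any():
                mu[k] = Z[assign == k].mean(0)
    return mu

def coherence(mu):
    # Maximum absolute cosine similarity between distinct landmark vectors.
    U = mu / np.linalg.norm(mu, axis=1, keepdims=True)
    G = np.abs(U @ U.T); np.fill_diagonal(G, 0)
    return G.max()

Z = np.random.default_rng(1).normal(size=(2000, 8))
for K in [5, 20, 80, 320]:
    print(K, round(coherence(kmeans(Z, K)), 3))  # coherence grows with K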
" }, { "heading": "C CHOICE OF SPATIAL AND STRUCTURAL RESOLUTIONS", "text": "The spatial resolution determines the “size” of the local sub-structure (or sub-graph), such as functional modules in a molecule. Small sub-structures can be very limited in their representation power, while too-large sub-structures can mask the right scale of the local components crucial to the learning task. The optimal spatial resolution can be data-dependent. In practice, we restrict the size of the local sub-graphs to 3-hop BFS neighbors, considering that the “radius” of the graphs in the benchmark data sets is usually around 5-8. We then further fine-tune the spatial resolution by assigning non-negative weights to the nodes residing on different layers from the central node in the local subgraph. This weighting is shared across all the sub-graphs and can be used to adjust the importance of each layer of the BFS-based sub-graph. The weighting can be chosen as a monotonically decaying function, or optimized through learning.
The choice of the structural resolution has a similar flavor in that resolutions that are too small or too large are both undesirable. On the other hand, it can be adjusted conveniently by tuning the landmark set size K based on the validation data. In our experiments, K could be chosen by cross-validation; for simplicity, we fix K = 100.
Finally, note that geometrically larger substructures (or sub-graphs) are characterized by higher variation among instances, due to the exponential number of configurations. Therefore, the structural resolution should also be commensurate with the spatial resolution. For example, substructures constructed by a 1-hop BFS may use a smaller landmark size K than those constructed with a 3-hop BFS. In our experiments we do not yet consider such dependencies, but we will study them in future research." }, { "heading": "D HIERARCHICAL VERSION", "text": "D.1 SUBTLETY IN SPATIAL RESOLUTION DEFINITION
First we would like to clarify a subtlety in the definition of spatial resolutions. In physics, resolution is defined as the smallest distance (or interval) between two objects that can be separated; therefore it involves two scales: the scale of the object, and the scale of the interval. Usually these two scales are proportional. In other words, one cannot have large intervals and small objects, or the opposite (a small interval and large objects). For example, in the context of imaging, each object is a pixel and the size of a pixel is the same as the interval between two adjacent pixels.
In the context of graphs, each object is a sub-graph centered around one node, whose scale is manually determined by the order of the BFS search centered around that node. Therefore, the interval between two sub-graphs may be smaller than the size of the sub-graphs. For example, suppose two nodes i and j are direct neighbors, and each of them has a 3-hop sub-graph. Then the interval between these two sub-graphs, if defined by the distance between i and j, is 1 hop; this is smaller than the size of the two sub-graphs, which is 3 hops. In other words, the two objects/sub-graphs indeed overlap with each other, and the scale of the object and the scale of the interval between objects are no longer commensurate (large objects and small intervals in this scenario).
This scenario makes it less complete to define spatial resolutions just based on the size of the sub-graphs (as in the main text), since there are actually two scales to define. To avoid unnecessary confusion, we skip these details. In practice, one has two choices for dealing with the discrepancy: (1) requiring that the sub-graphs are non-overlapping, i.e., we do not have to grow one k-hop subgraph around each node; instead, we just explore a subset of the sub-graphs.
This can be implemented in the hierarchical version which we discuss in the next subsection; (2) we still allow each node to have a local sub-graph and study them all together, which helps cover the diversity of subgraphs, since, theoretically, the ideal choice of sub-graph is highly domain-specific and having more sub-graph examples gives a better chance of including those sub-graphs that are beneficial to the prediction task.
D.2 HIERARCHICAL SLIM
We can implement a hierarchical version of SLIM so that sub-graphs of different scales, together with the interacting relations between sub-graphs at each scale, can be captured for the final prediction. Note that in (Ying et al., 2018) a hierarchical clustering scheme is used to partition one graph, in a bottom-up manner, into fewer and fewer clusters. We can implement the same idea and construct a hierarchy of scales, each of which hosts a number of sub-structures. The structural landmarking scheme is then applied at each layer of the hierarchy to generate graph-level features specific to that scale. Finally, these features can be combined for graph classification." }, { "heading": "E SEMI-SUPERVISED SLIM NETWORK", "text": "The SLIM network is flexible and can be trained in both a fully supervised setting and a semi-supervised setting. This is because the SLIM model takes a parametric form, so it is inductive and can generalize to any new samples; on the other hand, the clustering-based loss term in (4) can be evaluated on both labeled and unlabeled samples, giving the extra flexibility to look into the distribution of the testing samples during the training phase, if they are available. This is in flavor very similar to the smoothness constraints widely used in semi-supervised learning, such as graph-regularized manifold learning (Belkin et al., 2006). Therefore, the SLIM network can be implemented in the following modes:
• Supervised version. Only the training graphs and their labels are available during the training phase, and the loss function (4) is computed only on the training samples.
• Semi-supervised version. Both labeled training graphs and unlabeled testing graphs are available. The loss function (4) is computed on both the training and testing graphs, while the classification loss function is evaluated only on the training graph labels.
F INTERPRETABILITY
The SLIM network not only generates accurate predictions in graph classification problems, but can also provide important clues for interpreting the prediction results, because the graph-level features in SLIM carry clear physical meaning. For example, assume that we use the interaction matrix C_i of the i-th graph G_i as its feature representation; its (p,q)-th entry then quantifies the connectivity strength between the p-th and the q-th structural landmarks. By checking the K^2-dimensional model coefficients of the fully-connected layer, one can then tell which subset of substructure connectivities (i.e., pairs of substructures that are directly connected in a graph) is important in making the prediction. To improve interpretability, one can further impose a sparsity constraint on the model coefficients.
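A small sketch of this reading (Python/NumPy; the trained weight vector and landmark count are hypothetical stand-ins, not learned values) that ranks landmark pairs by the magnitude of their classifier coefficients over the reshaped interaction matrix:

import numpy as np

def top_interactions(w, K, top=5):
    # w: coefficients of a linear classifier over the K*K interaction feature.
    coef = w.reshape(K, K)
    idx = np.dstack(np.unravel_index(np.argsort(-np.abs(coef), axis=None), (K, K)))[0]
    return [(int(p), int(q), float(coef[p, q])) for p, q in idx[:top]]

K = 10
w = np.random.default_rng(0).normal(size=K * K)  # stand-in for learned weights
for p, q, c in top_interactions(w, K):
    print(f"landmark pair ({p}, {q}) weight {c:+.3f}")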
In traditional graph neural networks such as GraphSAGE or GIN, node features are transformed through many layers and finally mingled altogether through graph pooling. The resultant graph-level representation, whose dimension is manually determined and whose every entry pools values across all nodes in the graph, can be difficult to interpret.
In Figure 8, we illustrate the sub-structure embedding and the learned landmarks of the SLIM network on the MUTAG dataset. Here, molecules belonging to the two classes are marked with red and blue edges. As can be observed, the sub-structure landmarks can generate highly discriminative features for graph classification; furthermore, by examining the substructure instances associated with each landmark, domain experts can acquire valuable clues for interpreting the underlying mechanism of the classification. In fact, the discriminative roles of the landmarks are two-fold. First, landmarks themselves can be discriminative by being associated with one class more often than the other. Second, even in scenarios where both classes share some common sub-structure landmarks (which is possible in molecules due to common functional modules), the interaction pattern defined among the landmarks can still serve as an effective discriminator. We believe this dual discriminating mechanism based on structural landmarks can be quite desirable in solving difficult graph classification problems, which we will investigate in more detail in our future studies." }, { "heading": "G ABLATION STUDY (TEST ACCURACY EVOLUTION)", "text": "G.1 ALGORITHM STABILITY
Here we report the evolution of the testing accuracies of a number of competing algorithms on all the benchmark training data, as shown in Figure 9.
G.2 TIME COMPLEXITY
We also report the relationship between accuracy and the time needed for SLIM and several other methods to complete 200 epochs on the MUTAG and PTC data sets, as shown in Figure 10. The accuracy of our method is better than that of most methods on the two data sets. When running for 200 epochs, our method is faster than most methods on the MUTAG data set. The time cost of our approach lies mainly in the computation of the spatial resolution, the structural resolution, and the back-propagation when updating model parameters. Assume that each mini-batch has size n and that the average number of nodes in each graph is m. Computing the spatial resolution for all nodes in the graphs of a batch requires O(nm·|N_k|) time, where |N_k| is the average k-hop-neighbor size. Computing the structural resolution requires DEC clustering, which minimizes the KL divergence between the assignment and the auxiliary target distribution; assuming k_0 is the number of landmarks, the time complexity is only O(nm·k_0)." }, { "heading": "H RICH GRAPH LEVEL FEATURES", "text": "The structural landmarking mechanism allows computing rich graph-level features by using different approaches to project the structural details of each graph onto the common space of landmarks.
• SLIM-C. Uses the (normalized) sub-structure interaction matrix of each graph as its feature, as discussed in Section 3.3.
• SLIM-Concat. Concatenates the density, mean, and interaction features discussed in Section 3.3 (after re-shaping them into vectors). One could also transform the interaction feature into a smaller matrix via bilateral dimension reduction before reshaping it into a vector. A fully connected layer then follows for the final prediction.
• SLIM-GNN (Landmark Graph).
Each graph G_i can be transformed into a landmark graph with a fixed number of K (landmark) nodes, with p_i and C_i quantifying the weight of each node and of the edge between every pair of nodes, and M_i providing the feature of each node (see the definitions in Section 3.3). This graph can then be subjected to a graph convolution such as D_i^{-1} C_i M_i to generate a fixed-dimensional graph-level feature, without having to take care of the varying graph sizes. We will study this in our future experiments. In Table 4, we compare the performance of using different features of the projected graph. Overall, the interaction matrix C_i = W_i^\top A_i W_i, which encodes the interacting relations (geometric connections) among the K structural landmarks, is slightly better than the GNN and concatenation features, except that SLIM-GNN has lower fluctuations. For most bioinformatics data, SLIM-C attains competitive scores, namely on the MUTAG, PTC, NCI1 and PROTEINS data sets.
I INTERACTION VERSUS INTEGRATION
The SLIM network and existing GNNs represent two different flavors of learning, namely, interaction modelling versus the integration approach. Interaction modelling is based on a mature understanding of complex systems and can provide physically meaningful interpretations or support for graph classification; integration-based approaches bypass the difficulty of preserving the identity of substructures and instead focus on whether the integrated representation is an injective mapping, as typically studied in graph isomorphism testing.
Note that an ideal classification is different from isomorphism testing and is not injective. In a good classifier, deciding which samples are similar and deciding which are distinct are equally important goals. Here lies the tradeoff between handling similarity and distinctness. Isomorphism-flavored GNNs aim at preserving the differences between local sub-structures (even very minute differences), and then map the resulting embedding to the class labels. Our approach, on the other hand, tries to absorb patterns that are sufficiently close into the same landmark, and then maps the landmark-based features to the class labels. In the latter case, the structural resolution can be tuned in a flexible way to explore different levels of fineness, thus tuning the balance between “similarity” and “distinctness”; in the meantime, the structural landmarks allow preserving sub-structure identities and exploiting their interactions." }, { "heading": "J COMPARISON WITH RELATED METHODS", "text": "Graph kernels are powerful methods for measuring the similarity between graphs. The key idea is to compare the sub-structure pairs from two graphs and compute the accumulated similarity, where examples of substructures include random walks, paths, sub-graphs, or sub-trees. Among them, paths/sub-graphs/sub-trees are deterministic sub-structures in a graph, while random walks are stochastic sequences (of nodes) in a graph.
Although the SLIM network considers sub-structures as its basic processing unit, it has a number of important differences. First, we consider sub-structure landmarks that are end-to-end optimizable, aimed both at reconstructing the substructure distribution and at generating discriminative interaction patterns for graph classification, while sub-structures in graph kernels or graphlets are typically identified offline.
Second, graph kernels measure the similarity between all possible pairs of sub-structures across two graphs, while the SLIM network models the interaction between sub-structures, i.e., the functional organization of a graph, which is very different from graph kernels. Third, interpretation is challenging in the alternatives, due to the nonlinearity of kernel methods and the exponential number of candidates in graphlets, while SLIM maintains a reasonable number of discriminative “landmark” sub-structures that are easier to interpret.
In recent years, embedding algorithms that transform nodes (or subgraphs) into a low-dimensional Euclidean space have drawn considerable attention, as pioneered by the word2vec work (Mikolov et al., 2013). It is worth noting that these algorithms focus on node- or edge-level embedding, while our target is graph-level classification. As a result, our approach emphasizes the innovation of modelling the interacting relations between the component parts of a graph, as inspired by views from complex systems. In fact, our most important contribution lies exactly in learning substructure landmarks jointly across graphs to enable identity-preserving graph pooling. This is rarely a consideration in algorithms whose main focus is just to embed the nodes or sub-graphs.
Here we use Role2Vec (Ahmed et al., 2018) as an example to illustrate the similarities and differences between our approach and embedding-type methods. Similarities: both methods embed nodes or subgraphs, and both consider high-order subgraph features. Differences: (1) the tasks are different; Role2Vec performs node-level embedding, while ours is graph-level classification; (2) the data are different; Role2Vec focuses on a single graph, while we simultaneously handle many graphs, i.e., we align sub-structures from different graphs and project each graph onto shared structural landmarks, which is a new framework for graph pooling; (3) the methods are different; Role2Vec designs attributed random walks, while we use a KL loss to fit the substructure distribution; Role2Vec finds subgraph motifs offline, as in graphlets, while we optimize discriminative substructure landmarks in an end-to-end fashion.
In recent years, various hierarchical pooling strategies (Ying et al., 2018; Lee et al., 2019; Khasahmadi et al., 2020) have been proposed to fully exploit non-flat graph organization, and they show promising results in graph classification. However, despite the hierarchical process that reduces the number of nodes layer by layer, the final representation is still in the form of a single, aggregated node vector for feature compatibility, leading to a potential loss of informative structural details. Note that hierarchical methods usually perform grouping inside each graph, while in our approach the substructure landmarks are identified by jointly clustering the substructure instances across all the graphs. Therefore, our landmark set size is typically larger than the number of clusters in hierarchical methods, in order to accommodate the diversity of the sub-structure distribution. Note also that some of the hierarchical methods need to sort the nodes of the graph as a pre-processing step (Ying et al., 2018; Khasahmadi et al., 2020).
We are also aware of some other types of aggregation strategies: Sortpooling re-arranges graph nodes into a linear chain and performs 1d-convolution (Zhang et al., 2018); SEED uses the distribution of multiple random walks to capture graph structures (Wang et al., 2019); the deep graph kernel evaluates graph similarity by subgraph counts (Yanardag & Vishwanathan, 2015).
Again, explicit modelling of the interaction between constituent parts of the graph is not considered in these approaches." } ]
2020
null
SP:9a8aa745df5a94d693dde585ca37765f9d657978
[ "The proposed method is a federated method allowing to have a certain amount of data shared between all the learners and some data specific to each learner. The targeted field of application is classification for problems where strong privacy is crucial. The method consists in learning a global classifier (with the shared data) as well as local classifiers (one per learner, using the local data). The inference, for each learner, is done with a local expert (another neural network) trained to combine inferences from the local and global models." ]
Federated learning has received attention for its efficiency and privacy benefits in settings where data is distributed among devices. Although federated learning shows significant promise as a key approach when data cannot be shared or centralized, current incarnations show limited privacy properties and have shortcomings when applied to common real-world scenarios. One such scenario is heterogeneous data among devices, where data may come from different generating distributions. In this paper, we propose a federated learning framework using a mixture of experts to balance the specialist nature of a locally trained model with the generalist knowledge of a global model in a federated learning setting. Our results show that the mixture of experts model is better suited as a personalized model for devices when data is heterogeneous, outperforming both global and local models. Furthermore, our framework gives strict privacy guarantees, which allow clients to select parts of their data that may be excluded from the federation. The evaluation shows that the proposed solution is robust in the setting where some users require a strict privacy setting and do not disclose their models to a central server at all, opting out of the federation partially or entirely. The proposed framework is general enough to include any kind of machine learning model, and can even use combinations of different kinds.
[]
[ { "authors": [ "Manoj Ghuhan Arivazhagan", "Vinay Aggarwal", "Aaditya Kumar Singh", "Sunav Choudhary" ], "title": "Federated learning with personalization layers", "venue": "arXiv preprint arXiv:1912.00818,", "year": 2019 }, { "authors": [ "Aurélien Bellet", "Rachid Guerraoui", "Mahsa Taziki", "Marc Tommasi" ], "title": "Personalized and private peer-to-peer machine learning", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2018 }, { "authors": [ "Yuyang Deng", "Mohammad Mahdi Kamani", "Mehrdad Mahdavi" ], "title": "Adaptive personalized federated learning", "venue": "arXiv preprint arXiv:2003.13461,", "year": 2020 }, { "authors": [ "Alireza Fallah", "Aryan Mokhtari", "Asuman Ozdaglar" ], "title": "Personalized federated learning: A metalearning approach", "venue": "arXiv preprint arXiv:2002.07948,", "year": 2020 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Filip Hanzely", "Peter Richtárik" ], "title": "Federated learning of a mixture of global and local models", "venue": "arXiv preprint arXiv:2002.05516,", "year": 2020 }, { "authors": [ "Andrew Hard", "Chloé M Kiddon", "Daniel Ramage", "Francoise Beaufays", "Hubert Eichner", "Kanishka Rao", "Rajiv Mathews", "Sean Augenstein" ], "title": "Federated learning for mobile keyboard prediction, 2018", "venue": "URL https://arxiv.org/abs/1811.03604", "year": 2018 }, { "authors": [ "Chaoyang He", "Salman Avestimehr", "Murali Annavaram" ], "title": "Group knowledge transfer: Collaborative training of large cnns on the edge", "venue": "arXiv preprint arXiv:2007.14513,", "year": 2020 }, { "authors": [ "Robert A Jacobs", "Michael I Jordan", "Steven J Nowlan", "Geoffrey E Hinton" ], "title": "Adaptive mixtures of local experts", "venue": "Neural computation,", "year": 1991 }, { "authors": [ "Eunjeong Jeong", "Seungeun Oh", "Hyesung Kim", "Jihong Park", "Mehdi Bennis", "Seong-Lyun Kim" ], "title": "Communication-efficient on-device machine learning: Federated distillation and augmentation under non-iid private data", "venue": "arXiv preprint arXiv:1811.11479,", "year": 2018 }, { "authors": [ "Yihan Jiang", "Jakub Konečnỳ", "Keith Rush", "Sreeram Kannan" ], "title": "Improving federated learning personalization via model agnostic meta learning", "venue": null, "year": 1909 }, { "authors": [ "Justin M Johnson", "Taghi M Khoshgoftaar" ], "title": "Survey on deep learning with class imbalance", "venue": "Journal of Big Data,", "year": 2019 }, { "authors": [ "Peter Kairouz", "H Brendan McMahan", "Brendan Avent", "Aurélien Bellet", "Mehdi Bennis", "Arjun Nitin Bhagoji", "Keith Bonawitz", "Zachary Charles", "Graham Cormode", "Rachel Cummings" ], "title": "Advances and open problems in federated learning", "venue": "arXiv preprint arXiv:1912.04977,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Jakub Konečnỳ", "H Brendan McMahan", "Felix X Yu", "Peter Richtárik", "Ananda Theertha Suresh", "Dave Bacon" ], "title": "Federated learning: Strategies for improving communication efficiency", "venue": "arXiv preprint arXiv:1610.05492,", "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Tao 
Lin", "Lingjing Kong", "Sebastian U Stich", "Martin Jaggi" ], "title": "Ensemble distillation for robust model fusion in federated learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Yishay Mansour", "Mehryar Mohri", "Afshin Rostamizadeh" ], "title": "Domain adaptation: Learning bounds and algorithms", "venue": "arXiv preprint arXiv:0902.3430,", "year": 2009 }, { "authors": [ "Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Aguera y Arcas" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Maxime Oquab", "Leon Bottou", "Ivan Laptev", "Josef Sivic" ], "title": "Learning and transferring mid-level image representations using convolutional neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2014 }, { "authors": [ "Reza Shokri", "Vitaly Shmatikov" ], "title": "Privacy-preserving deep learning", "venue": "In Proceedings of the 22nd ACM SIGSAC conference on computer and communications security,", "year": 2015 }, { "authors": [ "Paul Vanhaesebrouck", "Aurélien Bellet", "Marc Tommasi" ], "title": "Decentralized collaborative learning of personalized models over networks", "venue": "arXiv preprint arXiv:1610.05202,", "year": 2016 }, { "authors": [ "Kangkang Wang", "Rajiv Mathews", "Chloé Kiddon", "Hubert Eichner", "Françoise Beaufays", "Daniel Ramage" ], "title": "Federated evaluation of on-device personalization", "venue": null, "year": 1910 }, { "authors": [ "Z. Wang", "M. Song", "Z. Zhang", "Y. Song", "Q. Wang", "H. Qi" ], "title": "Beyond inferring class representatives: User-level privacy leakage from federated learning", "venue": "In IEEE INFOCOM 2019 - IEEE Conference on Computer Communications,", "year": 2019 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017", "venue": null, "year": 2017 }, { "authors": [ "Yue Zhao", "Meng Li", "Liangzhen Lai", "Naveen Suda", "Damon Civin", "Vikas Chandra" ], "title": "Federated learning with non-iid data", "venue": "arXiv preprint arXiv:1806.00582,", "year": 2018 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nIn many real-world scenarios, data is distributed over a large number of devices, due to privacy concerns or communication limitations. Federated learning is a framework that can leverage this data in a distributed learning setup. This allows for exploiting both the compute power of all participating clients, and to benefit from a large joint training data set. Furthermore, this is beneficial for privacy and data security. For example, in keyboard prediction for smartphones, thousands or even millions of users produce keyboard input that can be leveraged as training data. The training can ensue directly on the devices, doing away with the need for costly data transfer, storage, and immense compute on a central server (Hard et al., 2018). The medical field is another example area where data is extremely sensitive and may have to stay on premise, and a setting where analysis may require distributed and privacy-protecting approaches. In settings with such firm privacy\nrequirements, standard federated learning approaches may not be enough to guarantee the needed privacy.\nThe optimization problem that we solve in a federated learning setting is\nmin w∈Rd L(w) = min w∈Rd\n1\nn n∑ k=1 E(x,y)∼pk [`k(w; x, y)] (1)\nwhere `k is the loss for client k and (x, y) samples from the kth client’s data distribution pk. A central server is coordinating training between the K local clients. The most prevalent algorithm for solving this optimization is the federated averaging (FEDAVG) algorithm (McMahan et al., 2017). In this solution, each client has its own client model, parameterized bywk which is trained on a local dataset for E local epochs. When all clients have completed the training, their weights are sent to the central server where they are aggregated into a global model, parameterized by wg . In FEDAVG, the k client models are combined via layer-wise averaging of parameters, weighted by the size of their respective local datasets:\nwgt+1 ← ∑ k nk n wkt+1, (2)\nwhere nk is the size of the dataset of client k and n = ∑ k nk. Finally, the new global model is sent out to each client, where it constitutes the starting point for the next round of (local) training. This process is repeated for a defined number of global communication rounds.\nThe averaging of local models in parameter space generally works but requires some care to be taken in order to ensure convergence. McMahan et al. (2017) showed that all local models need to be initialized with the same random seed for FEDAVG to work. Extended phases of local training between communication rounds can similarly break training, indicating that the individual client models will over time diverge towards different local minima in the loss landscape. Similarly, different distributions between client datasets will also lead to divergence of client models (McMahan et al., 2017).\nDepending on the use case, however, the existence of local datasets and the option to train models locally can be advantageous: specialized local models, optimized for the data distribution at hand may yield higher performance in the local context than a single global model. Keyboard prediction, for example, based on a global model may represent a good approximation of the population average, but could provide a better experience at the hands of a user when biased towards their individual writing style and word choices. A natural question arises: when is a global FL-trained model better than a specialized local model? 
To address the issue of specialized local models within the federated learning setting, we propose a general framework based on a mixture of experts, combining a local and a global model on each client. Local expert models on each client are trained in parallel with the global model, followed by training local gating functions h^k(x) that aggregate the two models' outputs depending on the input. We show the advantages of this approach over fine-tuning the global model on local data in a variety of settings, and analyze the effect that different levels of variation between the local data distributions have on performance.
While standard federated learning already shows some privacy-enhancing properties, it has been shown that in some settings properties of the client and of the training data may be reconstructed from the weights communicated to the server (Wang et al., 2019). To this end, in this paper we work with a stronger notion of privacy. While existing solutions may be private enough for some settings, we assume that a client that requires privacy for some of its data needs this data not to influence the training of the global model at all. Instead, our framework allows any given client to opt out of the federation completely with all or some of its data. Clients with such preferences will still benefit from the global model and retain a high level of performance on their own, skewed data distribution. This is important when local datasets are particularly sensitive, as may be the case in medical applications. Our experimental evaluation demonstrates the robustness of our learning framework under different levels of skewness in the data and under varying fractions of opt-out clients." }, { "heading": "2 RELATED WORK", "text": "Distributed machine learning has been studied as a strategy to allow training data to remain with the clients, giving it some aspects of privacy, while leveraging the power of learning from bigger data and compute (Konečnỳ et al., 2016; Shokri & Shmatikov, 2015; McMahan et al., 2017; Vanhaesebrouck et al., 2016; Bellet et al., 2018). The federated averaging technique (McMahan et al., 2017) has been influential and demonstrated that layer-wise averaging of the weights of neural network models trained separately at the clients is successful in many settings, producing a federated model that shows some ability to generalize from limited subsets of data at the clients. However, it has been shown that federated averaging struggles when data is not independent and identically distributed among the clients (the non-iid setting), which demonstrates the need for personalization within federated learning (Kairouz et al., 2019).
In general, addressing class imbalance with deep learning is still a relatively understudied problem (Johnson & Khoshgoftaar, 2019). A common approach to personalization is to first train a generalist model and then fine-tune it using more specific data. This approach is used in meta-learning (Finn et al., 2017), domain adaptation (Mansour et al., 2009), and transfer learning (Oquab et al., 2014). It was proposed for the distributed setting by Wang et al. (2019), who used federated averaging to obtain a generalist model which was later fine-tuned locally on each client, using its specific training data.
Some work has been inspired by the meta-learning paradigm to learn models that are specialized at the clients (Jiang et al., 2019; Fallah et al., 2020). Arivazhagan et al. (2019) combined this strategy and ideas from transfer learning with deep neural networks, presenting a solution where the shallow layers are frozen and the deeper layers are retrained at every client.
Zhao et al. (2018) propose a strategy to improve training on non-iid client data by creating a subset of data that is globally shared between all clients. Recent strategies have also explored knowledge distillation techniques for federated learning (Jeong et al., 2018; He et al., 2020; Lin et al., 2020), which show promising results in non-iid settings.
Hanzely & Richtárik (2020) proposed a solution that provides an explicit trade-off between global and local models by introducing an alternative learning scheme that does not take the full federation step at every round, but instead takes a step in the direction of the federated average. Deng et al. (2020) proposed to combine a global model w, trained using federated averaging, with a local model v using a weight \alpha_i. To find the optimal \alpha_i, they optimize \alpha_i^* = \mathrm{argmin}_{\alpha_i \in [0,1]} f_i(\alpha_i v + (1 - \alpha_i) w) every communication round. While this weighting scheme balances the two models, it has no way of adapting to the strengths of the different members of the mix.
Mixture of experts (Jacobs et al., 1991) is the combination of several competing neural networks trained together with a gating network to solve a common task. It was presented as an ensemble method that can be trained end-to-end using gradient descent. In the current work, we apply the mixture to leverage the specific strengths of a global model trained with federated averaging and a local model trained locally on each client." }, { "heading": "3 FEDERATED LEARNING USING A MIXTURE OF EXPERTS", "text": "In this work, we present a framework for federated learning that builds on federated averaging and mixtures of experts. Our framework includes a personalized model for each client, which enters a mixture together with a globally trained model obtained via federated learning. The local models never leave the clients, which gives strong privacy properties, while the global model is trained using federated averaging and leverages larger compute and data. In our framework, as seen in Figure 1, some clients can choose to opt out of the federation, meaning that no information about their data leaves the client, ensuring the privacy of those clients.
Let f_g be the global model with parameters w_g. We denote the index of a client by k and the local models by f_l^k with parameters w_l^k. The gating function is denoted by h^k, parameterized by w_h^k. Training in the proposed framework is divided into three main parts. First, the global model f_g is trained with federated averaging using the opt-in data (see Section 3.1). Second, a local model f_l^k is trained using all the available data on a client. Third, f_g and f_l^k are further trained together with a gating model h^k on each client locally, using all the available data on the client. Steps one and two may be performed in parallel if allowed by the available resources.
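The per-client mixture can be sketched as follows (Python/NumPy; the linear "models" are stand-ins for the CNNs described in Section 3.3, and the blending rule anticipates Eq. (5) below):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mixture_predict(x, f_local, f_global, h_gate):
    # Eq. (5): a per-example gate g in (0, 1) blends local and global outputs.
    g = sigmoid(h_gate(x))                  # h^k(x), one scalar per example
    return g * f_local(x) + (1.0 - g) * f_global(x)

# Stand-in linear "models" for illustration only.
rng = np.random.default_rng(0)
Wl, Wg, Wh = rng.normal(size=(3, 10, 4))
f_local = lambda x: x @ Wl
f_global = lambda x: x @ Wg
h_gate = lambda x: (x @ Wh).mean(axis=-1, keepdims=True)

x = rng.normal(size=(2, 10))
print(mixture_predict(x, f_local, f_global, h_gate).shape)  # (2, 4)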
" }, { "heading": "3.1 PRIVACY GUARANTEES", "text": "The proposed framework allows for a strict form of privacy guarantee. Each client may choose an arbitrary part of its data that it considers too sensitive to use for federated learning, and no information from this data will ever leave the client. The system will still leverage this data by using it to train the local model f_l^k and the gating model h^k. This is a very flexible and useful property. For example, it allows a user to use the sensitive data in the training of the private local models, while transforming it using some privatization mechanism and using the censored version to train the federated model.
In general, each client dataset D_k is split into two non-overlapping datasets, D_O^k and D_I^k, at least one of which must be non-empty. The local model f_l^k and the gating model h^k are trained using the whole dataset D_k = D_O^k ∪ D_I^k, while the global model f_g is trained with FEDAVG using only the opt-in dataset D_I^k. This is visualized in Figure 1." }, { "heading": "3.2 OPTIMIZATION", "text": "Step 1: Federated averaging. We train the global model using FEDAVG. In other words, globally we optimize
\min_{w_g \in \mathbb{R}^d} L_{\text{global}}(w_g) = \min_{w_g \in \mathbb{R}^d} \frac{1}{|D_I|} \sum_{k \in D_I} \mathbb{E}_{(x,y) \sim D_I^k} [\ell_k(w_g; x, y, \hat{y}_g)] (3)
over the opt-in datasets D_I^k. Here \ell_k is the loss of the global model w_g on client k for the prediction \hat{y}_g = f_g(x), and D_I^k is the k-th client's opt-in data distribution.
Step 2: Train local models. The local models f_l^k are trained only locally, sharing no information between clients, minimizing the local loss over w_l^k ∈ R^d:
\min_{w_l^k \in \mathbb{R}^d} L(w_l^k) = \min_{w_l^k \in \mathbb{R}^d} \mathbb{E}_{(x,y) \sim D_k} [\ell_k(w_l^k; x, y, \hat{y}_l)] \quad \forall k = 1, \ldots, n. (4)
Here \ell_k is the loss for the prediction \hat{y}_l = f_l^k(w_l^k; x) of the local model on input x, and D_k is the k-th client's dataset.
Step 3: Train local mixtures. The local mixtures of experts are trained using the gating models h^k, with the prediction given by weighting the trained models f_g and f_l^k:
\hat{y}_h = h^k(x) f_l^k(x) + (1 - h^k(x)) f_g(x) \quad \forall k = 1, \ldots, n. (5)
In other words, at the end of a communication round, given f_l^k and f_g, we optimize the mixture of Eq. (5):
\min_{w_g, w_l^k, w_h^k} L(w_g, w_l^k, w_h^k) = \min_{w_g, w_l^k, w_h^k} \mathbb{E}_{(x,y) \sim D_k} [\ell_k(w_g, w_l^k, w_h^k; x, y, \hat{y}_h)], (6)
locally for every client k = 1, ..., n. Here \ell_k is the loss of predicting \hat{y}_h for the label y given the input x with the model of Eq. (5), over the data distribution D_k of client k. A summary of the method is given in Algorithm 1.
Algorithm 1
1: input: models participating in FEDAVG w_1, ..., w_k, local expert models w_l^k, local gates w_h^k, learning rate lr, decay rates β_1, β_2
2: Randomly initialize w_1, ..., w_k with the same seed.
3: Randomly initialize w_l^k and w_h^k.
4: w_g ← FEDAVG(w_1, ..., w_k) // Train for E local epochs and G communication rounds
5: for each client k do
6: w_l^k ← Adam(w_l^k, lr, β_1, β_2) // Train the local model on client k
7: w_g, w_l^k, w_h^k ← Adam(w_g, w_l^k, w_h^k, lr, β_1, β_2) // Train the mixture of experts on client k
8: end for" }, { "heading": "3.3 EXPERIMENTAL SETUP", "text": "Datasets. Our experiments are carried out on the datasets CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009) and Fashion-MNIST (Xiao et al., 2017). To simulate heterogeneous client data without labels overlapping between clients, we partition the data into 5 clients for CIFAR-10 and Fashion-MNIST, and 50 clients for CIFAR-100. For CIFAR-100 we used a client sampling strategy where, in each communication round of the federation, we randomly sampled a fraction of 0.1 of the clients to participate.
Sampling non-iid. The datasets are sampled in such a way that each client's dataset contains two majority classes, which together form a fraction p of the client data, while the remaining classes form the remaining fraction (1 − p) of the client data.
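A sketch of this majority-class partitioning (Python/NumPy; the label counts, client count, and exact bookkeeping are illustrative, and this is our reading of the described scheme rather than the authors' released code; minority samples may repeat across clients in this simplified version):

import numpy as np

def split_noniid(labels, n_clients, p, seed=0):
    # Each client gets two dedicated majority classes (fraction p of its data);
    # the remaining fraction (1 - p) is drawn from the other classes.
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    idx_by_class = {c: list(np.flatnonzero(labels == c)) for c in classes}
    n_per_client = len(labels) // n_clients
    out = []
    for k in range(n_clients):
        majors = classes[2 * k: 2 * k + 2]        # non-overlapping majority classes
        n_major, chosen = int(p * n_per_client), []
        for c in majors:
            take = min(n_major // 2, len(idx_by_class[c]))
            chosen += [idx_by_class[c].pop() for _ in range(take)]
        rest = [i for c in classes for i in idx_by_class[c] if c not in majors]
        chosen += list(rng.choice(rest, size=n_per_client - len(chosen), replace=False))
        out.append(np.array(chosen))
    return out

labels = np.repeat(np.arange(10), 600)            # e.g., a 10-class dataset
parts = split_noniid(labels, n_clients=5, p=0.8)
print([len(part) for part in parts])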
We perform experiments where we vary p to see what effect the degree of heterogeneity has on performance. In the extreme case p = 1.0, each client only has two labels in total. Note that this is an extreme partitioning where there is no overlap of majority class labels between clients, i.e. the local data distributions satisfy P_i ∩ P_j = ∅ for all pairs of clients i, j. Note also that we perform experiments with very little data in our experimental setup. A summary of the experimental setup can be seen in Table 1.
Opt-out factor. Some users might want to opt out from contributing to a global model due to privacy reasons. These users will still receive the global model. To simulate this scenario in the experimental evaluation, we introduce an opt-out factor denoted by q. This is the fraction deciding the number of clients participating in the FEDAVG optimization. The clients that participate in the federated learning optimization have all their data in D_I^k, while the clients that opt out have all their data in D_O^k. q = 0 means all clients are opt-in and participating. We perform experiments varying q to see how robust our algorithm is to different levels of client participation. In Figure 1 we visualize how the opt-out factor can be used.
Models. In our setup, both the local model f_l and the global model f_g are CNNs with the same architecture. However, they are not constrained to be the same model and could be implemented as any two differentiable models. The CNN has two convolutional layers with a kernel size of 5, and two fully-connected layers. All layers have ReLU activations. The gating function h^k has the same architecture as f_g and f_l, but with a sigmoid activation in the last layer. We use Adam (Kingma & Ba, 2014) to optimize all models, with a learning rate of 0.0001 in all experiments.
Baselines. We use three different models as baselines. First, the locally trained model f_l^k for each client. Second, FEDAVG. Third, the final model output from FEDAVG, fine-tuned for each client on its own local data. We train f_l^k, the fine-tuned model, and the mixture using early stopping for 200 epochs, monitoring validation loss on each client.
Evaluation. For evaluation we have a held-out validation set for each client. For both CIFAR-10 and CIFAR-100 we have n = 400 data points for evaluation per client, sampled with the same majority class fraction p. We report an average accuracy over all clients." }, { "heading": "4 RESULTS", "text": "For the sake of reproducibility, all code will be made available.
In Table 2 we report accuracies and standard deviations on CIFAR-10 for all models when data is highly non-iid, i.e. for p = {0.8, 0.9, 1.0}. In Figure 3 we report accuracies over all majority fractions p. By comparing Figures 3a, 3b and 3c, we see that FEDAVG performs substantially worse when the opt-out fraction increases, i.e. when more clients opt out from the federation. As a consequence of a weakened global model, the fine-tuned baseline decreases much in performance as well. However, our proposed mixture leverages both the global and local models and does not degrade as much in performance. In Figures 2 and 4 we see that the mixture performs just as well as, or better than, the fine-tuned baseline for CIFAR-100 and Fashion-MNIST, respectively.
A common client sampling strategy to mitigate scalability issues in federated learning is to sample a fraction of clients to participate in each communication round of FEDAVG, as sketched below.
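For reference, one such communication round with client subsampling can be outlined as follows; this is a hedged Python sketch in which fedavg_round, local_train, and the plain parameter average are our own simplifications, not the authors' exact implementation.

import random

def fedavg_round(global_weights, clients, client_fraction, local_train):
    # Sample a fraction of clients, train locally, then average their weights.
    m = max(1, int(client_fraction * len(clients)))
    participants = random.sample(clients, m)
    updates = [local_train(global_weights, c) for c in participants]
    return [sum(layer) / m for layer in zip(*updates)]  # uniform average

With client_fraction = 0.1 and 50 clients, as in the CIFAR-100 setup, five clients would be sampled per round.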
In Figure 2, we see experiments performed with two different sampling fractions on CIFAR-100. In Figure 2a a client fraction of 1.0 was used, and in Figure 2b a client fraction of 0.1 was used. The difference in validation accuracy between the two is small, showing that our proposed method is robust to client sampling.
Experiments were also carried out to see what effect training data size has on performance. This is summarized as a heatmap in Figure 5a for CIFAR-10 and in Figure 5b for Fashion-MNIST, where the difference in validation accuracy between the mixture and the fine-tuned baseline is shown for different majority class fractions p and different client train set sizes. We see here that the mixture model outperforms the baseline most of the time, a trend which gets stronger as the training set size increases." }, { "heading": "5 DISCUSSION", "text": "To address the problems of learning a personalized model in a federated setting when the client data is heterogeneous, we have proposed a novel framework for federated mixtures of experts where a global model is combined with local specialist models. We find that with skewed non-iid data on the clients, our approach outperforms all other baselines in most settings, including FEDAVG, a locally trained model, and models trained first with FEDAVG and then fine-tuned on each local client. The experimental evaluation for CIFAR-10 shows that our approach outperforms all other methods, including the strong fine-tuning baseline (see Figure 3). In Figure 5 we see that our proposed model outperforms the fine-tuned baseline in most settings for CIFAR-10 and Fashion-MNIST, especially when the number of data points per client increases. For CIFAR-100, the proposed framework outperforms all other methods, regardless of the level of skewness (see Figure 2). In this setting, a large part of the training data for each client comes from a very limited set of the available classes: two out of 100. As such, very few training examples will be available from the minority classes. This is a crucial result: the proposed framework is very robust to extremely skewed training data.
The framework also gives strong privacy guarantees, since clients are able to opt out from the federation, keeping their data private. The experiments show that our proposed solution is robust to a high opt-out fraction of users, as seen in Figures 3 and 4, whereas the fine-tuned baseline is not." }, { "heading": "6 CONCLUSIONS", "text": "In this work, we have presented a framework for federated learning that builds on mixtures of experts. This framework allows us to learn a model that balances the generalist nature of the global federated model and the specialist nature of the local models.
Our approach is not only intuitive for the generalist-versus-specialist balance, but also allows for varying participation of the different clients in the federation. Clients may either opt in and participate in the federation, or opt out entirely by training only a local model with all their local data while still receiving a global model from the opt-in participants. This gives a flexible solution for strong privacy guarantees in real-world settings.
Our experiments show that in the setting where many clients opt out from the federation, the fine-tuned baseline degrades in performance whereas our proposed mixture model does not.
The proposed framework is compatible with any gradient-based machine learning model and can incorporate combinations of such models, strengthening the potential of this direction of research and leveraging the beneficial properties of ensembles of various machine learning models.
The experimental evaluation conducted in this work showed the proposed solution to achieve state-of-the-art results on three different benchmark datasets when data is highly skewed and when parts of the clients in the federation opt out from the training." } ]
2020
FEDERATED LEARNING USING A MIXTURE OF EXPERTS
SP:235d680e5cfac85db6704ba1d79eb7b728da8d08
[ "The paper proposes two modifications to SELU activation function to improve it with regards to preserving forward-backward signal propagation in neural networks. The work builds on top of the mean-field theory literature and provides a modified self-normalization property (additional constraints compared to SELU). Further, it discusses some heuristics (mixup, weight centralization) to improve performance in practice." ]
The approaches that prevent gradient explosion and vanishing have boosted the performance of deep neural networks in recent years. A unique one among them is the self-normalizing neural network (SNN), which is generally more stable than initialization techniques without explicit normalization. The self-normalization property of SNN in previous studies comes from the Scaled Exponential Linear Unit (SELU) activation function. However, it has been shown that in deeper neural networks, SELU either leads to gradient explosion or loses its self-normalization property. Besides, its accuracy on large-scale benchmarks like ImageNet is less satisfying. In this paper, we analyze the forward and backward passes of SNN with mean-field theory and block dynamical isometry. A new definition for the self-normalization property is proposed that is easier to use both analytically and numerically. A proposition is also proposed which enables us to compare the strength of the self-normalization property between different activation functions. We further develop two new activation functions, leaky SELU (lSELU) and scaled SELU (sSELU), that have a stronger self-normalization property. The optimal parameters in them can be easily solved with a constrained optimization program. Besides, analysis of the activation's mean in the forward pass reveals that the self-normalization property on the mean gets weaker with larger fan-in, which explains the performance degradation on ImageNet. This can be solved with weight centralization, mixup data augmentation, and a centralized activation function. On moderate-scale datasets CIFAR-10, CIFAR-100, and Tiny ImageNet, the direct application of lSELU and sSELU achieves up to 2.13% higher accuracy. On Conv MobileNet V1 on ImageNet, sSELU with Mixup, trainable λ, and a centralized activation function reaches 71.95% accuracy, which is even better than Batch Normalization.
[]
[ { "authors": [ "Devansh Arpit", "Yoshua Bengio" ], "title": "The benefits of over-parameterization at initialization in deep re{lu} networks, 2020", "venue": "URL https://openreview.net/forum?id=rJggX0EKwS", "year": 2020 }, { "authors": [ "Devansh Arpit", "Yingbo Zhou", "Bhargava Kota", "Venu Govindaraju" ], "title": "Normalization propagation: A parametric technique for removing internal covariate shift in deep networks", "venue": "Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Rebekka Burkholz", "Alina Dubatovka" ], "title": "Exact information propagation through fully-connected feed forward neural networks", "venue": "arXiv preprint arXiv:1806.06362,", "year": 2018 }, { "authors": [ "Zhaodong Chen", "Lei Deng", "Guoqi Li", "Jiawei Sun", "Xing Hu", "Ling Liang", "Yufei Ding", "Yuan Xie" ], "title": "Effective and efficient batch normalization using a few uncorrelated data for statistics estimation", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Zhaodong Chen", "Lei Deng", "Bangyan Wang", "Guoqi Li", "Yuan Xie" ], "title": "A comprehensive and modularized statistical framework for gradient norm equality in deep neural networks", "venue": "arXiv preprint arXiv:2001.00254,", "year": 2020 }, { "authors": [ "Lei Deng", "Guoqi Li", "Song Han", "Luping Shi", "Yuan Xie" ], "title": "Model compression and hardware acceleration for neural networks: A comprehensive survey", "venue": "Proceedings of the IEEE,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Nicholas J Higham" ], "title": "The accuracy of floating point summation", "venue": "SIAM Journal on Scientific Computing,", "year": 1993 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Günter Klambauer", "Thomas Unterthiner", "Andreas Mayr", "Sepp Hochreiter" ], "title": "Self-normalizing neural networks", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ping Luo", "Xinjiang Wang", "Wenqi Shao", "Zhanglin Peng" ], "title": "Towards understanding regularization in batch normalization", "venue": "arXiv preprint arXiv:1809.00846,", "year": 2018 }, { "authors": [ "Vincent Michalski", "Vikram Voleti", "Samira Ebrahimi Kahou", "Anthony Ortiz", "Pascal Vincent", "Chris Pal", "Doina Precup" ], "title": "An empirical study of batch normalization and group normalization in conditional computation", "venue": null, "year": 1908 }, { "authors": [ "Paulius Micikevicius", "Sharan Narang", "Jonah 
Alben", "Gregory Diamos", "Erich Elsen", "David Garcia", "Boris Ginsburg", "Michael Houston", "Oleksii Kuchaiev", "Ganesh Venkatesh", "Hao Wu" ], "title": "Mixed precision training", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Ben Poole", "Subhaneil Lahiri", "Maithra Raghu", "Jascha Sohl-Dickstein", "Surya Ganguli" ], "title": "Exponential expressivity in deep neural networks through transient chaos", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Tim Salimans", "Durk P Kingma" ], "title": "Weight normalization: A simple reparameterization to accelerate training of deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Samuel S Schoenholz", "Justin Gilmer", "Surya Ganguli", "Jascha Sohl-Dickstein" ], "title": "Deep information propagation", "venue": "arXiv preprint arXiv:1611.01232,", "year": 2016 }, { "authors": [ "Hanie Sedghi", "Vineet Gupta", "Philip M. Long" ], "title": "The singular values of convolutional layers", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Instance normalization: The missing ingredient for fast stylization", "venue": "arXiv preprint arXiv:1607.08022,", "year": 2016 }, { "authors": [ "Shuang Wu", "Guoqi Li", "Lei Deng", "Liu Liu", "Dong Wu", "Yuan Xie", "Luping Shi" ], "title": "l1-norm batch normalization for efficient training of deep neural networks. IEEE transactions on neural networks and learning", "venue": null, "year": 2018 }, { "authors": [ "Lechao Xiao", "Yasaman Bahri", "Jascha Sohl-Dickstein", "Samuel S. Schoenholz", "Jeffrey Pennington" ], "title": "Dynamical isometry and a mean field theory of cnns: How to train 10, 000-layer vanilla convolutional neural networks", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N. Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Hongyi Zhang", "Yann N Dauphin", "Tengyu Ma" ], "title": "Fixup initialization: Residual learning without normalization", "venue": "arXiv preprint arXiv:1901.09321,", "year": 2019 }, { "authors": [ "Wenzhao Zheng", "Zhaodong Chen", "Jiwen Lu", "Jie Zhou" ], "title": "Hardness-aware deep metric learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Denny Zhou", "Mao Ye", "Chen Chen", "Tianjian Meng", "Mingxing Tan", "Xiaodan Song", "Quoc Le", "Qiang Liu", "Dale Schuurmans" ], "title": "Go wide, then narrow: Efficient training of deep thin networks", "venue": null, "year": 2007 }, { "authors": [ "Chen" ], "title": "2020b), its expectation can be computed with E", "venue": "∂hl", "year": 2020 }, { "authors": [ "Chen" ], "title": "2020b) proves the theorem as follows", "venue": null, "year": 2020 }, { "authors": [ "Chen" ], "title": "2020b) (Multiplication)", "venue": null, "year": 2020 }, { "authors": [ "C latency" ], "title": "EXPERIMENT SETUP FOR MODERATE-SCALE BENCHMARKS The experiments in Section 6.1 and 6.2 are based on a 56-layer Convolutional Neural Network shown in Table 4. The H and W are 32 for CIFAR-10, CIFAR-100, and 64 for Tiny ImageNet", "venue": "Following Klambauer et al", "year": 2020 }, { "authors": [ "explosion. 
Following Chen" ], "title": "2020b), we set = 0.017 for dSELU, lSELU, and sSELU", "venue": null, "year": 2020 }, { "authors": [ "Chen" ], "title": "2020b), we choose the Conv MobileNet V1", "venue": null, "year": 2020 }, { "authors": [ "Zhang" ], "title": "2018), the γ for interpolation is drawn from Beta", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years, deep neural networks (DNNs) have achieved state-of-the-art performance on different tasks like image classification (He et al., 2015; Zheng et al., 2019). This rapid development can be largely attributed to the initialization and normalization techniques that prevent the gradient explosion and vanishing. The initialization techniques (He et al., 2015; Xiao et al., 2018) initialize the parameters in networks to have good statistical property at beginning, and assume that this property can be more or less maintained throughout the training process. However, this assumption is likely to be violated when the network gets deeper or is trained with higher learning rate. Hence, normalization techniques are proposed to explicitly normalize the network parameters (Salimans & Kingma, 2016; Arpit et al., 2016) or the activations (Ioffe & Szegedy, 2015b; Ulyanov et al., 2016) during training. In particular, Batch Normalization (BN) (Ioffe & Szegedy, 2015a) has become a standard component in DNNs, as it not only effectively improves convergence rate and training stability, but also regularizes the model to improve generalization ability.\nHowever, BN still has several drawbacks. First, when calculating the mean and variance, the accumulation must be done under FP32 to avoid underflow (Micikevicius et al., 2018). This brings challenges when training neural networks in low bit width. Second, the performance degradation under micro batch size also makes it more difficult to design training accelerators, as the large batch size increases the memory size to store the intermediate results for backward pass (Deng et al., 2020). Besides, Chen et al. (2020a); Wu et al. (2018) show that BN introduces considerable overhead.\nThe self-normalizing neural network (SNN) provides a promising way to address this challenge. SNN initializes the neural network to have a good statistical property at the beginning just like\ninitialization techniques. However, the statistics deviation in forward and backward passes can be gradually fixed during propagation, thus it is more robust to the deviation from initial properties (Chen et al., 2020b). For instance, the mean and variance of output activations with SELU in Klambauer et al. (2017) automatically converge the fixed point (0, 1). Chen et al. (2020b) analyze the Frobenius norm of backward gradient in SNN activated with SELU. They reveal a trade-off between the self-normalization property and the speed of gradient explosion in the backward pass, and the hyper-parameters need to be configured according to the depth of the network. The resulting activation function, depth-aware SELU (dSELU), has achieved even higher accuracy than the original configuration on moderate-scale datasets like CIFAR-10, and makes the SNN trainable on ImageNet.\nHowever, in deeper neural networks, the dSELU gradually degenerates to ReLU and loses its selfnormalization property. Moreover, even with dSELU, the test accuracy on ImageNet with Conv MobileNet V1 (Howard et al., 2017) is still 1.79% lower than BN (Chen et al., 2020b). Therefore, we aim to answer the following three questions in this paper: 1). Is SELU the only activation function that has self-normalization property? 2). If it is not, is there a better choice? And how do we compare the strength of self-normalization property between different activation functions? 3). Why the performance of SNN on ImageNet is less satisfying? 
Is there any way to alleviate that?\nIn this paper, we analyze the signal propagation in both forward and backward passes in serial deep neural networks with mean-field theory (Poole et al., 2016) and block dynamical isometry (Chen et al., 2020b). Our main theoretical results are summarized as follows:\n• We illustrate that an activation function would demonstrate self-normalization property if the second moment of its Jacobian matrix’s singular values φ(q) is inversely proportional to the second moment of its input pre-activations q, and the property gets stronger when φ(q) gets closer to 1/q. A new definition of the self-normalization property is proposed that can be easily used both analytically and numerically.\n• We propose leaky SELU (lSELU) and scaled SELU (sSELU). Both of them have an additional parameter, β, that can be configured to achieve stronger self-normalization property. The hyper-parameters can be solved with a constrained optimization program, thus no additional hyper-parameter relative to dSELU is introduced.\n• We reveal that models with larger fan-in have weaker normalization effectiveness on the mean of the forward pass signal. This can be solved with explicit weight centralization, mixup data augmentation (Zhang et al., 2018), and centralized activation function.\nOn CIFAR-10, CIFAR-100, and Tiny ImageNet, lSELU and sSELU achieves up to 2.13% higher test accuracy than previous studies. On ImageNet - Conv MobileNet V1, sSELU with Mixup, trainable λ, and centralized activation function achieves comparable test accuracy (71.95%) with BN. Besides, we provide a CUDA kernel design for lSELU and sSELU that has only 2% overhead than SELU." }, { "heading": "2 RELATED WORK", "text": "In this section, we present an overview of existing studies on the self-normalizing neural networks (SNN) as well as statistical studies on forward and backward signals in deep neural networks.\nSelf-normalizing Neural Network. Scaled Exponential Linear Unit (SELU) (Klambauer et al., 2017) scales the Exponent Linear Unit (ELU) by a constant scalar λ. The λ and original parameter α in ELU are configured such that the mean and variance of output activation have a fixed point (0, 1). The authors further prove that this fixed point is still stable and attractive even when the input activations and the weights are unnormalized. Chen et al. (2020b) investigate the fixed point in backward gradient. They reveal that the gradient of SNN is exploding with the rate (1 + ) per layer, where is a small positive value. The self-normalizing property gets stronger when is larger, whereas the gradient will explode at a higher rate. Therefore, they propose the depth-aware SELU in which the ≈ 1/L is used to derive the optimal α and λ in SELU for a network with depth L. Statistical Analysis of Deep Neural Networks. Schoenholz et al. (2016); Poole et al. (2016); Burkholz & Dubatovka (2018) investigate the forward activations under the limit of large layer width with mean-field theory. They have identified an Order-to-Chaos phase transition characterized by the second moment of singular values of the network’s input-output Jacobian matrix. The neural network has good performance when it is on the border of the order and chaos phases. On the other\nhand, Chen et al. (2020b) develop a very handy framework for analyzing the Frobenius norm of gradient. 
They illustrate that the gradient norm equality is a universal philosophy behind various different initialization, normalization techniques, and even some neural network structures. The gradient norm equality means the Frobenius Norm of the gradient is more or less equal in different layers so that the information flow in the backward pass can be well preserved. (Arpit & Bengio, 2020)" }, { "heading": "3 SELF-NORMALIZATION PROPERTY", "text": "In this section, we formally define the self-normalization property under the problem formulation, notations, and assumptions as follows.\nProblem Formulation. Let’s consider a DNN with L layers. Each layer performs a linear transform followed by a non-linear element-wise activation function f , i.e.\nxl = f(hl), hl = Wlxl−1 + bl, l = 1, ..., L, (1)\nwhere xl ∈ RNl is the output feature vector of layer l, hl is the pre-activation vector, Wl is the weight of fully-connected layer or the expanded doubly block circulant matrix (Sedghi et al., 2019) of 2D convolution, bl is the vector of biases, and we denote the loss as L. Besides, without loss of generality, for f and x ∼ N(0, q), we have\n(1 + δq)E [ f2(x) ] = E[ ( df(x)/dx)2 ] E[x2], (2)\nwhere δq is a function of q. Following previous studies (Poole et al., 2016; Chen et al., 2020b), for ∀ l, we make the assumptions as follows:\nAssumption 1 The mean of entries inWl and bl are zero.\nAssumption 2 With central limit theory, the entries in hl follow i.i.d. N(0, ql), ql = 1Nlh T l hl.\nAssumption 3 The eigenvalues ofW Tl Wl are independent with entries in hl−1.\nKlambauer et al. (2017) first define the self-normalization property of a neural network as follows.\nDefinition 1 (Self-normalizing Neural Network) A neural network is self-normalizing if it possesses a mapping g : Ω→ Ω for each activation y that maps mean and variance from one layer to the next and has a stable and attracting fixed point depending on (ω, τ) in Ω. Furthermore, the mean and the variance remain in the domain Ω, that is g(Ω) ⊆ Ω, where Ω = {(µ, ν)|µ ∈ [µmin, µmax], ν ∈ [νmin, νmax]}. When iteratively applying the g, each point within Ω converges to this fixed point.\nThis definition imitates the explicit normalization techniques like BN, which ensures that the feedforward signal is normalized. Based on Definition 1, Klambauer et al. (2017) propose the SELU:\nf(x) = λ { x if x > 0 αex − α if x ≤ 0 . (3)\nBesides, Klambauer et al. (2017) initialize the entries in Wl with N(0, 1/Nl−1), so that the output pre-activation will have the same second moment of input activation. With the stable fixed points of mean and variance around (0, 1), the optimal choice for λ and α can be derived from∫ ∞\n−∞ f(z)\ne− z2 2 √ 2π dz = 0, ∫ ∞ −∞ f2(z) e− z2 2 √ 2π dz = 1. (4)\nFurthermore, the authors prove that the fixed points for mean and variance are still attractive even when the statistical properties of the parameters in the neural network deviate from the initial setup.\nHowever, the statistical fixed point in the forward pass doesn’t necessarily lead to good dynamics of gradient. Chen et al. (2020b) analyze the Frobenius norm of the gradient in neural networks activated by SELU. With the same activation function shown in equation 3, their analysis shows that the optimal λ and α can be configured by preserving the Frobenius norm of backward gradient and second moment of forward activations with equations as follows:∫ ∞\n−∞\n( df(z)\ndz\n)2 e− z2 2\n√ 2π dz=1 + , ∫ ∞ −∞ f2(z) e− z2 2 √ 2π dz = 1. 
(5)
where ε is a small positive constant, without which the only solution to equation 5 would be λ = √2 and α = 0, and the activation function degenerates back to ReLU with the initialization technique proposed in He et al. (2015). Thus it will lose the self-normalization property. Conversely, a relatively large ε will bring a stronger self-normalization property, but meanwhile make the Frobenius norm of the gradient explode with rate (1 + ε) per layer. Notably, the original configuration of SELU can be obtained by setting ε = 0.0716. Therefore, Chen et al. (2020b) assert that having ε ≈ 1/L could bring a good trade-off between gradient norm stability and self-normalization property. Experiments on CIFAR-10 and ImageNet show that the new configuration results in higher accuracy.
Inspired by Chen et al. (2020b), we formally redefine the self-normalization property as follows:
Definition 2 (Self-normalization Property) Given an activation function f, we define the operand φ as

\varphi(q) = \int_{-\infty}^{\infty} \left( \frac{df(\sqrt{q}z)}{d\sqrt{q}z} \right)^2 \frac{e^{-z^2/2}}{\sqrt{2\pi}} \, dz. \quad (6)" }, { "heading": "If f satisfies:", "text": "\varphi(1) = 1 + \epsilon, \quad \int_{-\infty}^{\infty} f^2(z) \frac{e^{-z^2/2}}{\sqrt{2\pi}} \, dz = 1, \quad \min\left(1, \frac{1}{q}\right) < \varphi(q) < \max\left(1, \frac{1}{q}\right), \quad (7)

then we say f has the self-normalization property.
While the first two equations in equation 7 are identical to equation 5, which constructs fixed points for both the second moment of activations and the Frobenius norm of the gradient, the third one makes these fixed points attractive, as shown by the following proposition.
Proposition 3.1 (Strength of Self-normalization Property) Under all three Assumptions and Definition 2, we represent φ(q) as a linear interpolation between 1 and 1/q as follows:

\varphi(q) = \begin{cases} 1 + (1 - \gamma_{q<1})(1/q - 1) & q < 1 \\ 1/q + \gamma_{q>1}(1 - 1/q) & q > 1 \end{cases} \quad (8)

where γ_q ∈ (0, 1) is a function of q. Then the following conclusions hold (Proof: Appendix A.2):
• The self-normalization property gets stronger when γ_{q<1} and γ_{q>1} get closer to 0. In particular, we have |γ_{q<1}| ≈ |γ_{q>1}| ≈ |dφ(q)/dq |_{q=1} + 1| when q is around 1.
• For layer l, the gradient explodes at the rate (1 + δ_{q_l}), i.e. \prod_{i=1}^{l}(1 + \delta_{q_{i-1}}) \mathbb{E}\left[ \|\partial L / \partial h_l\|_2^2 \right] = q_0 \, \mathbb{E}\left[ \|\partial L / \partial h_0\|_2^2 \right].
Proposition 3.1 is derived based on Assumption 1, whereas the mean of the weight matrices may shift during training. Fortunately, Proposition 3.2 shows that the deviation of the mean of forward activations can also be normalized by simply multiplying with the weight matrix.
Proposition 3.2 (Normalization of Mean) Assume that the entries w_{ij} in the weight matrix are independent of the input activations and that their expectation has an upper bound µ, i.e. ∀ i, j, E[w_{ij}] ≤ µ. Then we say multiplication with the weight matrix normalizes the mean if µ < 1/N_{l−1} holds, where N_{l−1} is the fan-in of the current layer l. Moreover, the mean is scaled down by a ratio smaller than µN_{l−1}. (Proof: Appendix A.3)" }, { "heading": "4 NOVEL SELF-NORMALIZING ACTIVATION FUNCTIONS", "text": "Proposition 3.1 reveals that an f with φ(q) closer to 1/q may have a stronger self-normalization property. Therefore, on the basis of SELU, we propose to add an additional hyper-parameter β that can be configured to bring φ(q) closer to 1/q and encode other interesting properties. As demonstrations, we find the following two activation functions quite promising.
Scaled Scaled Exponential Linear Unit (sSELU). The sSELU is defined as follows:

f(x) = \lambda \begin{cases} x & \text{if } x > 0 \\ \alpha e^{\beta x} - \alpha & \text{if } x \le 0 \end{cases} \quad (9)

The negative pre-activations are scaled by β before being fed into the exponential; a minimal implementation sketch is given below.
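For concreteness, here is a minimal NumPy sketch of sSELU as defined in equation 9; the parameter values shown are placeholders of our own choosing, since the actual λ, α, β must be obtained from the constrained optimization program introduced below (equation 11).

import numpy as np

def sselu(x, lam, alpha, beta):
    # Equation 9: identity on the positive side,
    # beta-scaled exponential on the negative side.
    return lam * np.where(x > 0, x, alpha * np.exp(beta * x) - alpha)

# Placeholder parameters for illustration only (not the fitted values).
y = sselu(np.linspace(-3.0, 3.0, 7), lam=1.05, alpha=1.7, beta=0.9)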
This design is motivated by the observation that without the curvature provided by the exponential term αe^x, the φ(q) of SELU would be a constant value without the self-normalization property.
Leaky Scaled Exponential Linear Unit (lSELU). The lSELU is defined as follows:

f(x) = \lambda \begin{cases} x & \text{if } x > 0 \\ \alpha e^{x} + \beta x - \alpha & \text{if } x \le 0 \end{cases} \quad (10)

which has an additional negative slope βx. This is inspired by the observation that leaky ReLU helps to avoid the saturation of negative activations. Besides, Chen et al. (2020b) show that leaky ReLU alone also improves the stability of the training process.
Determine the optimal λ, α, and β. Figure 1 shows that, given proper parameters λ, α, and β, our sSELU and lSELU can be configured so that φ(q) gets closer to 1/q, which indicates a stronger self-normalization property. With the first conclusion in Proposition 3.1 and equation 7, the λ, α, and β can be obtained by solving the optimization problem below when ε is provided:

\min_{\lambda, \alpha, \beta} \left| \left. \frac{d\varphi(q)}{dq} \right|_{q=1} + 1 \right|, \quad \text{s.t.} \quad \varphi(1) = 1 + \epsilon, \quad \int_{-\infty}^{\infty} f^2(z) \frac{e^{-z^2/2}}{\sqrt{2\pi}} \, dz = 1, \quad \lambda \ge 1. \quad (11)

In particular, the constraint λ ≥ 1 is inspired by the argument in Klambauer et al. (2017) that "a slope larger than one can increase the variance if it is too small in the lower layer". In this paper, we find that constraining λ ≥ 1 provides two other benefits. First, having λ ≈ 1 helps to maintain the mean of the output activations around 0. Second, having a larger λ slows down the gradient explosion in the backward pass. The detailed discussion can be found in Appendix A.4.
Determine the ε. While Chen et al. (2020b) propose to have ε < 1/L to avoid gradient explosion, where L is the depth of the network, their derivation is based on the assumption that δ_q ≈ 0 in equation 2. However, after taking the nonzero δ_q into consideration, our Proposition 3.1 shows that the rate is actually (1 + δ_q) rather than (1 + ε). We plot the relationship between (1 + δ_q) and q under different ε in Figure 2.
First of all, because of the first term in equation 7, we have δ_q = ε when q = 1; this illustrates the intuition behind using (1 + ε) to characterize the rate of gradient explosion. Therefore, ε ≈ 1/L is still a good choice to arbitrarily determine ε, especially for relatively shallow networks. As lSELU and sSELU have relatively higher δ_{q>1} in Figure 2, an ε relatively smaller than that of dSELU may yield better performance. Last but not least, in deeper neural networks, q has more chance to deviate from the fixed point q = 1, and δ_q gets larger when q gets larger. Therefore, the trade-off between the strength of the self-normalization property and the speed of gradient explosion may become too complex to be captured by ε ≈ 1/L, and it might be more promising to determine the proper ε on the validation set." }, { "heading": "5 LARGE-SCALE SELF-NORMALIZING NEURAL NETWORK", "text": "Chen et al. (2020b) evaluate SELU and dSELU on Conv MobileNet V1 (Howard et al., 2017) on ImageNet. While SELU suffers from gradient explosion, the accuracy of dSELU is 1.79% lower than the BN baseline. We observe that there are two major reasons behind this performance degradation and propose several tricks that improve the performance of large-scale SNNs.
Nonzero Mean in the Forward Pass. Proposition 3.2 reveals that the nonzero mean can be diminished by multiplying with the weight matrices when µ < 1/N_{l−1}. On small-scale SNNs, as N_{l−1} is relatively small, this condition is easy to satisfy, and we do not have to worry about the deviation of the mean from 0.
However, in large-scale SNNs for large datasets like ImageNet, larger fan-in is required to ensure the network has enough parameters to model the more complex problem. In Appendix A.5, we empirically show that models with larger fan-in tend to have larger µNl−1, which implies weaker self-normalization property on the mean. As our Proposition 3.1 is based on Assumption 1, a greatly biased mean may violate the assumption. As a result, for large-scale SNNs, we have to consider the influence of nonzero-mean.\nWhile the influence of the weight matrix on the mean is well captured by Proposition 3.2, the influence of the activation function is more complex. In particular, for layer l, we assume the preactivations follow i.i.d. N(E[hl], σ2), and the output mean can be computed with\nE[xl] = ∫ ∞ −∞ f(x) 1√ 2πσ2 e− (x−E[hl]) 2 2σ2 dx (12)\nWe plot the relationship in Figure 3, in which the solid line represents the theoretical value and the dash line is the value measured via numerical experiments. When the variance σ2 is large, there will be a positive bias on the mean of output. The explanation is quite intuitive: the saturated region in the negative axis has an asymmetric growth rate compared with the positive axis. Hence, when the variance is large, the positive part contributes more than the negative part, which increases the mean.\nLack of Regularization during Training. Luo et al. (2018) show that batch normalization also regularizes the training process. In particular, using the statistics of minibatch µB and σB for explicit normalization introduces additional Gaussian noise that regularizes the training process, and it also discourages the reliance on a single neuron and penalizes correlations among neurons. However, activation functions with the self-normalization property don’t have these features as they do not rely on the statistics from minibatch.\nBased on the analysis above, we find that three techniques can be used to improve the performance of large-scale SNNs: mixup data augmentation (Zhang et al., 2018), weight centralization, and centralized activation functions.\nMixup Data Augmentation. The mixup is a simple data augmentation routine that constructs virtual training examples via linear interpolation: x̃ = γxi+(1−γ)xj , ỹ = γyi+(1−γ)yj , where (xi, yi) and (xj , yj) are two training samples randomly drawn from the training set, γ ∈ (0, 1). In particular, we find that using Mixup with SNN brings two benefits.\nFirst, Mixup reduces the variance / second moment of the inputs. Under the assumption that the corresponding entries xi and xj in the two samples are independent and E[x2i ] = E[x 2 j ] := E[x\n2], E[xi] = E[xj ] = 0, we haveE [ (γxi + (1− γ)xj)2 ] = (γ2 +(1−γ2))2E[x2]. For instance, when γ = 0.7, the second moment of the sample entries is 0.58 of the original training samples, hence the variance of the input samples is implicitly decreased. With a smaller q in the first few layers, on\none hand, as shown in Figure 2, a smaller second moment q leads to smaller δq , which reduces the gradient explosion rate in the backward pass. On the other hand, as shown in Figure 3, a smaller variance also reduces the shift of output mean caused by the activation function.\nSecond, Mixup creates additional training samples from the dataset, which provides additional regularization that could further boost the accuracy. The same property is also used in Zhang et al. (2019). 
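As a concrete reference for the interpolation just described, below is a minimal PyTorch sketch of mixup; the batch-permutation pairing and one-hot targets are our own assumptions, and γ ~ Beta(0.7, 0.7) follows the setting reported in Appendix C.

import torch

def mixup_batch(x, y_onehot, a=0.7):
    # Draw one interpolation factor and pair each sample with a random partner.
    gamma = torch.distributions.Beta(a, a).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = gamma * x + (1.0 - gamma) * x[perm]          # x~ = gamma*xi + (1-gamma)*xj
    y_mix = gamma * y_onehot + (1.0 - gamma) * y_onehot[perm]
    return x_mix, y_mix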
Besides, we empirically find that making λ trainable is also helpful when applying lSELU and sSELU to large datasets like ImageNet. The trainable λ can be viewed as the scalar multiplier initialized at 1 used in Zhang et al. (2019). Together with the bias of each layer, they serve as the affine transform applied in batch normalization (Ioffe & Szegedy, 2015a), which increases the representational power of the network (Michalski et al., 2019).\nWeight Centralization. When µ < 1/Nl−1, multiplication with the weight can effectively normalize the mean of activations. Therefore, we can explicitly centralize the weights, i.e Ŵ = W − mean(W ). As the weights are usually much smaller than the feature maps, the overhead of Weight Centralization is usually quite small. Moreover, as it doesn’t rely on the batch, Weight Centralization can still be utilized under micro-batch scenarios.\nCentralized Activation Function. When the network with a large fan-in is relatively shallow, we can trade the strength of self-normalization property with the deviation of the mean caused by the activation function. While φ(1) = 1 + , E[f(x)] = 0, and E[f2(x)] = 1 can not simultaneously hold in SELU and dSELU as they only have two parameters, the λ, α, and β in our sSELU and lSELU can be solved with\nφ(1) = 1 + , E[f(x)] = ∫ ∞ −∞ f(z) e− z2 2 √ 2π dz = 0, E[f2(x)] = ∫ ∞ −∞ f2(z) e− z2 2 √ 2π dz = 1, (13)\nwhich ensures that the output activations still have zero-mean when the input is at the fixed point." }, { "heading": "6 EXPERIMENTS", "text": "In this section, we validate our activation functions on multiple image classification benchmarks. In Appendix B, we present an efficient CUDA kernel design, under which the overhead of lSELU and sSELU are only 2% higher than SELU. The experiment setup is in Appendix C and the value of the parameters λ, α, β, and the resulting γq=1 are summarized in Appendix D." }, { "heading": "6.1 NORMALIZATION EFFECTIVENESS", "text": "We empirically show that our new activation functions have better normalization effectiveness than existing studies, which is demonstrated by the second moment of the output pre-activation of each convolutional layer (E[h2]) and the Frobenius norm of the gradient of the weight (|| ∂L∂W ||F ).\nAs shown in Figure 4, in the forward pass, sSELU and lSELU normalize the second moment in the forward pass better than dSELU. In the backward pass, compared with SELU and dSELU, both sSELU and lSELU have much flatter and more concentrated distribution of the Frobenius norm in the backward pass. Notably, SELU has ≈ 0.0716, and higher lead to stronger self-normalization property. This explains why it also has good dynamics in the forward pass. However, = 0.0716\nalso increases the speed of gradient explosion, which explains why SELU has worse backward dynamics. Last but not least, the further the E[h2] deviates from 0 in the forward pass, the faster the || ∂L∂W ||F increases in the backward pass. As larger q = E[h\n2] will lead to larger δq , this observation justifies the second conclusion in Proposition 3.1." }, { "heading": "6.2 MODERATE-SCALE DATASETS", "text": "We summarize the results on CIFAR-10, CIFAR-100, and Tiny ImageNet in Table 1.\nFirst of all, under most of , our lSELU and sSELU are comparable or even better than dSELU. In particular, sSELU achieves consistent accuracy improvement on CIFAR-10 and CIFAR-100, while lSELU has better performance on Tiny ImageNet. 
Second, the results show that ≈ 1/L ≈ 0.017 is not always the best choice for dSELU, lSELU, and sSELU, but the best accuracy achieved in our sSELU and lSELU are under relatively smaller than dSELU (Chen et al., 2020b). These two observations accord with our arguments in Section 4 on the selection of proper .\n6.3 LARGE SCALE SNN\nIn this part, we evaluate our conclusions in Section 5. First, adding Weight Centralization or Mixup successfully solve the gradient explosion problem in SELU, lSELU, and sSELU. Second, in dSELU, lSELU, and sSELU, making λ trainable brings additional performance improvement than using Mixup alone. Moreover, after relaxing the constraint λ ≥ 1 to λ ≥ 0.5, the test accuracy “lSELU (λ ≥ 0.5)+Mixup” drops by 8.25%, this demonstrates the importance of constraining λ to be no less than 1. Last but not least, by combining centralized lSELU and sSELU with Mixup and trainable λ, we achieve 71.82% and 71.95% top-1 accuracy." }, { "heading": "7 CONCLUSION", "text": "In this paper, we analyze the forward and backward pass signals in SNNs and redefine the selfnormalization property. Two novel activation functions, lSELU and sSELU, are developed under this definition. A constrained optimization program is proposed to solve the optimal configurations. Moreover, we reveal the reason behind the performance degradation of SNN under large fan-in, and several solutions are proposed. With our novel methods, advanced results are achieved on multiple benchmarks. Our study demonstrates a new research direction for the design of activation functions." }, { "heading": "A PROOFS", "text": "A.1 SIGNAL PROPAGATION IN DEEP NEURAL NETWORKS\nFor convenience, we denote the Jacobian matrix ∂f(hl−1)hl−1 asDl−1, and tr(WlW T l )tr(Dl−1D T l−1) as χl, where tr is the normalized trace.\nProposition A.1 (Forward Signal under Mean Field Theory) Under the formulation, notations, and assumptions above, the evolution of the second moment of pre-activations ql∈[1,L] in the forward pass can be described with\nql = q0Π l i=1 χi 1 + δqi−1 , l = 1, ..., L. (14)\nProof. Under Assumption 1 & 2, the pre-activation vector of input in layer l can be characterized with a Gaussian random variable x = √ qlz, where z is a random variable following N(0, 1). With these definitions, we can investigate how q evolves between layer l − 1 and l:\nql = 1\nNl (Wlf(hl−1) + bl)\nT (Wlf(hl−1) + bl) = σ 2 b +\n1\nNl f(hl−1)\nTUTΛUf(hl−1), (15)\nwhere σ2b = 1 Nl bTl bl, U is an Orthogonal matrix, and Λ is a diagonal matrix of eigenvalues in W Tl Wl. We characterize the diagonal entries in Λ with random variable λ whose probability density function is p(λ). With Assumption 3, we have\nql=σ 2 b+ Nl−1 Nl\n∫ f2( √ ql−1z) e− z2 2\n√ 2π dz\n∫ λp(λ)dλ=σ2b+tr(WlW ,T l ) ∫ f2( √ ql−1z) e− z2 2\n√ 2π dz, (16)\nThen, we substitute equation 2 into equation 16 which yields\nql=σ 2 b+ ql−1 1+δq−1 tr(WlW T l )\n∫ [ f ′( √ ql−1z) ]2 e− z22√ 2π dz=σ2b+ ql−1 1+δq−1 tr(WlW T l )tr(Dl−1D T l−1).\n(17) As the bias vector is usually initialized with zero and shared among multiple feature entries, σb has a lower impact than the second term. Therefore, if we neglect the σ2b , with the notation χl = tr(WlW T l )tr(Dl−1D T l−1), we have\nql = q0Π l i=1 χi 1 + δqi−1 , l = 1, ..., L. (18)\nProposition A.2 (Backward Gradient under Block Dynamical Isometry) Under the formulation, notations, and assumptions above, the evolution of the Frobenius norm of the gradient in the backward pass can be described with\nE [ || ∂L ∂hl ||22 ] /E [ || ∂L ∂h0 ||22 ] ≈Πli=1 1 χi . 
(19)\nProof. Given the gradient ∂L∂hl , with the chain rule, we have ∂L ∂hl−1 = DTl W T l ∂L ∂hl and\n∂L ∂h0 = Πli=1D T i W T i ∂L ∂hl . (20)\nIn particular, we are interested in the Frobenius norm of ∂L∂hl represented as || ∂L ∂hl ||22. According to Chen et al. (2020b), its expectation can be computed with\nE [ || ∂L ∂h0 ||22 ] ≈ tr (( Πli=1D T i W T i )T ( Πli=1D T i W T i )) = E [ || ∂L ∂hl ||22 ] . (21)\nChen et al. (2020b) proves the theorem as follows.\nDefinition 3 Chen et al. (2020b) (kth Moment Unitarily Invariant) Let {Ai} := {A1,A2...,AL} be a series independent random matrices. Let {Ui} := {U1,U3...,UL} be a series independent haar unitary matrices independent of {A1,A2...,AL}. We say that (ΠiAi)(ΠiAi)T is the kth moment unitarily invariant if ∀0 < p ≤ k, we have\ntr (( (ΠiAi)(ΠiAi) T )p) = tr (( (ΠiUiAi)(ΠiUiAi) T )p) . (22)\nTheorem A.1 Chen et al. (2020b) (Multiplication). Given J := Π1i=LJi, where {Ji ∈ Rmi×mi−1} is a series of independent random matrices. If (Π1i=LJi)(Π 1 i=LJi)\nT is at least the 1st moment unitarily invariant (Definition 3), we have\ntr ( (Π1i=LJi)(Π 1 i=LJi) T ) Π1i=Ltr ( JiJi T ) . (23)\nTherefore, equation 21 can be further simplified with Theorem A.1 as follows.\nE [ || ∂L ∂hl ||22 ] /E [ || ∂L ∂h0 ||22 ] ≈Πli=1\n1\ntr(WiW Ti )tr(Di−1D T i−1)\n=Πli=1 1\nχi . (24)\nA.2 PROOF OF PROPOSITION 3.1\nProposition 3.1 (Strength of Self-normalization Property) Under Definition 2, we represent φ(q) as a linear interpolation between 1 and 1/q as follows.\nφ(q) = { 1 + (1− γq<1)(1/q − 1) if q < 1 1/q + γq>1(1− 1/q) if q > 1 . (25)\nwhere γq ∈ (0, 1) is a function of q. Then the following conclusions hold:\n• The self-normalization property gets stronger when γq<1 and γq>1 get closer to 0. In particular, |γq<1| ≈ |γq>1| ≈ |dφ(q)dq |q=1 + 1| when q is around 1.\n• For layer l, the gradient explodes under rate (1 + δql), i.e. Π l i=1(1 + δqi−1)E [ || ∂L∂hl || 2 2 ] =\nq0E [ || ∂L∂h0 || 2 2 ] .\nProof. When γq<1 and γq>1 are approaching 0, φ(q) gets closer to 1/q. With equation 6, we have φ(q) = tr(DlD T l ) where Dl is the Jacobian matrix of the activation function in layer l. As the weights are initialized with N(0, 1Nl ), we have tr(WlW T l ) = 1.\nIn the forward pass, with equation 2, we have ql+1 = 11+δql φ(ql)ql. We can substitute equation 8\nand get { 1− ql+1 =\nγql<1 1+δql (1− ql) + (1− 11+δql ) if ql < 1 ql+1 − 1 =\nγql>1 1+δql (ql − 1)− (1− 11+δql ) if ql > 1 . (26)\nIn the backward pass, with equation 14 and equation 19, we have E [ || ∂L∂hl || 2 2 ] q0E [ || ∂L∂h0 || 2 2 ] = 1/ql Πli=1(1 + δqi−1) . (27)\nBecause of\nE [ || ∂L ∂hl+1 ||22 ] = E [ || ∂L∂hl || 2 2 ] tr(DlDTl )tr(Wl+1W T l+1) = E [ || ∂L∂hl || 2 2 ] φ(ql) , (28)\nwe have E [ || ∂L∂hl+1 || 2 2 ] q0E [ || ∂L∂h0 || 2 2 ] = 1 Πli=1(1+δqi−1 ) 1 ql+(1−γql<1)(1−ql) = 1+γ′ql<1 (1/ql−1) Πli=1(1+δqi−1 ) , if ql < 1 E [ || ∂L∂hl+1 || 2 2 ] q0E [ || ∂L∂h0 || 2 2 ] = 1 Πli=1(1+δqi−1 ) 1 1+γql>1(ql−1) = 1/ql+(1−γ′ql>1)(1−1/ql) Πli=1(1+δqi−1 ) , if ql > 1 , (29) where { γ′ql<1 = γql<1ql ql+(1−γql<1)(1−ql)\n∈ (0, 1) γ′ql>1 = γql>1ql 1+γql>1(ql−1) ∈ (0, 1) (30)\nare the monotonically increasing functions of γql<1 and γql>1, respectively. Similarly, we can derive how the deviation from the fixed point evolves during the back propagation. 
E [ || ∂L∂hl+1 || 2 2 ] q0E [ || ∂L∂h0 || 2 2 ] − 1 = γ′ql<1 ( E [ || ∂L∂hl || 2 2 ] q0E [ || ∂L∂h0 || 2 2 ] − 1 ) + (1− γ′ql<1)( 1 Πli=1(1+δqi−1 ) − 1), if ql < 1 1− E [ || ∂L∂hl+1 || 2 2 ] q0E [ || ∂L∂h0 || 2 2 ] = γ′ql>1 ( 1− E [ || ∂L∂hl || 2 2 ] q0E [ || ∂L∂h0 || 2 2 ])+ (1− γ′ql>1)(1− 1Πli=1(1+δqi−1 ) ), if ql > 1 .\n(31)\nFirst of all, when δqi are neglectable, equation 26 and equation 31 can be simplified as 1− ql+1 = γql<1(1− ql), E [ || ∂L∂hl+1 || 2 2 ] q0E [ || ∂L∂h0 || 2 2 ] − 1 = γ′ql<1 ( E [ || ∂L∂hl || 2 2 ] q0E [ || ∂L∂h0 || 2 2 ] − 1 ) , if ql < 1 ql+1 − 1 = γql>1(ql − 1), 1− E [ || ∂L∂hl+1 || 2 2 ] q0E [ || ∂L∂h0 || 2 2 ] = γ′ql>1 ( 1− E [ || ∂L∂hl || 2 2 ] q0E [ || ∂L∂h0 || 2 2 ]) , if ql > 1 . (32) As γ′ql<1 and γ ′ ql>1\nare the monotonically increasing functions of γql<1 and γql>1, it is obvious that with smaller γql<1 and γql>1, the deviation from the fixed point in both forward and backward passes shrinks faster.\nIn particular, when q is around the fixed point q = 1 as ensured by the second term in equation 7, we can approximate φ(q) and 1/q with their first-order Taylor expansion around q = 1, with the definition of γql<1 and γql>1 in equation 8, we have γ′q<1 ≈ γq<1 = 1/(1+∆q)−φ(1+∆q) 1/(1+∆q)−1 ≈ 1 + dφ(q) dq ∣∣∣∣ q=1 + ∆q , if ∆q < 0 γ′q>1 ≈ γq>1 = φ(1+∆q)−1/(1+∆q) 1/(1−1+∆q) ≈ 1 + dφ(q) dq ∣∣∣∣ q=1 + ∆q , if ∆q > 0 . (33)\nAs a result, we can reduce the number of layers required to diminish the deviation by minimizing |dφ(q)dq |q=1 + 1|.\nThen, we discuss the influence of δq . The fixed point of the two recursive functions in equation 26 and equation 31 can be computed as 1− q = δql<11+δql<1−γq<1 , E [ || ∂L∂hl || 2 2 ] q0E [ || ∂L∂h0 || 2 2 ] − 1 = 1 Πli=1(1+δqi−1 ) − 1, if q < 1 q − 1 = −δql<11+δql<1−γq>1 , 1− E [ || ∂L∂hl || 2 2 ] q0E [ || ∂L∂h0 || 2 2 ] = 1− 1 Πli=1(1+δqi−1 ) , if q > 1 . (34)\nWhile the fixed point of deviation slightly deviates from 0, in the backward pass, we have Πli=1(1 + δqi−1)E [ || ∂L∂hl || 2 2 ] = q0E [ || ∂L∂h0 || 2 2 ] , which suggests that the gradient explodes with rate (1 + δql) at layer l.\nA.3 PROOF OF PROPOSITION 3.2\nProposition 3.2 (Normalization of Mean) Under the assumption that the entries in the weight matrix wij are independent with the input activations, and their expectation has an upper bound µ,\ni.e. ∀ i, j, E[wij ] ≤ µ. Then we say multiplication with the weight matrix normalizes the mean if µ < 1Nl−1 holds, where Nl−1 is the fan-in of the current layer l. Moreover, the mean is scaled down by ratio smaller than µNl−1.\nProof. With equation 1, the jth entry in the output pre-activation hl can be computed with\nhl,j = Nl−1∑ i=1 wj,ixl−1,i. (35)\nTherefore, with the assumption on independence between weight and input activations, we have\nE[hl,j ] = Nl−1∑ i=1 E[wj,ixl−1,i] ≤ Nl−1µE [ 1 Nl−1 1Txl−1 ] . (36)\nAs the term E [\n1 Nl−1 1Txl−1 ] can be viewed as the mean of the input activations, when µ < 1Nl ,\nequation 36 reveals that the mean is reduced after multiplying with the weight matrix.\nA.4 BENEFITS OF HAVING λ ≥ 1\nFirst of all, we show that having λ ≈ 1 helps to maintain the mean of the output activations around 0. As we normalize the mean by multiplying with the weights, we don’t require the E[f(x)] = 0 when x ∼ N(0, 1) like Klambauer et al. (2017). However, according to Proposition 3.2, the speed of the mean converging to 0 gets slower when the expectation of entries in the weight matrix deviates from 0 and when the fan-in gets larger. 
Therefore, it’s still ideal to avoid shifting the mean too much when the activations flowing through the activation functions. As shown in Figure 5, we simulate the forward pass in a 64-layer fully-connected neural network, and plot the distribution of output activations in layer 1, 4, 16, and 64.\nIt is obvious that when λ < 1, a spike around 0 is observed for both sSELU and lSELU, and this leads to a large negative mean of the output activations. On the other hand, for instance, by solving\nφ(1) = 1 + , ∫ ∞ −∞ f2(z) e− z2 2 √ 2π dz = 1, ∫ ∞ −∞ f(z) e− z2 2 √ 2π dz = 0 (37)\nunder = 0.03, we have λ ≈ 1.0360 and 1.0362 for sSELU and lSELU, respectively. Therefore, λ = 1 is a good starting point for the optimization.\nSecond, we show that having larger λ slows down the gradient explosion in the backward pass. According to the second conclusion in Proposition 3.1, the backward gradient explodes under rate (1 + δq), thus keeping δq low is critical for avoiding gradient explosion. According to the definition in equation 2, (1+ δq) can be computed with\n1 + δq = φ(q) q ∫∞ −∞ f 2(z) e − z2 2√ 2π dz . (38)\nWe plot the relationship between the maximum (1 + δq) under q ∈ (0, 2] and λ in Figure 6. Obviously, the maximum (1 + δq) decreases when λ gets larger. This observation is quite intuitive. The 1 + δi\ncharacterizes the relative deviation between E[f2(x)] and E[(df(x)/dx)2]E[x2]. For the positive pre-activations, we have\nE[f2(x+)] = ∫ ∞ 0 λ √ qz e− z2 2 √ 2π dz = λ ∫ ∞ 0 √ qz e− z2 2 √ 2π dz = E[(df(x+)/dx+) 2]E[x2+]. (39)\nHence, the deviation is contributed only by the negative part. With a larger λ, the positive activations are scaled up, thus the negative activations have to be scaled down to preserve the overall second moment. Therefore, the negative part contributes less to the overall second moment, and the relative deviation between E[f2(x)] and E[(df(x)/dx)2]E[x2] gets smaller. All in all, a larger λ leads to smaller δq , and a smaller δq reduces the gradient explosion rate (1 + δq).\nA.5 THE µNl−1 WITH INCREASING FAN-IN\nHere we empirically illustrate that µNl−1 increases when the fan-in Nl−1 gets larger. The experiment is performed on a 32-layer CNN activated with dSELU. We collect the µNl−1 of each convolutional layer and the final fully-connected layer after each epoch among total 10 epochs. The learning rate is set to 0.005. Let the number of input channels of layer l be cl, the k in each title dSELU×k indicates that the number of input channel is scaled to k × cl.\nAs shown in Figure 7, with larger fan-in, the layers tend to have larger µNl−1. According to Proposition 3.2, larger µNl−1 will lead to weaker self-normalization property on the mean. Notably, if we further scaling the number of input channels with k greater than 4, gradient explosion happens. These two observations justify our conclusion that the shift of mean is more influential in networks with larger fan-in." }, { "heading": "B EFFICIENT IMPLEMENTATION OF LSELU AND SSELU ON GPU", "text": "In this section, we present an efficient implementation for lSELU and sSELU on GPU. In particular, we take lSELU as an example, as the same strategy can be directly applied to sSELU.\nAlgorithm 1: Forward Kernel of lSELU. Data: Input Feature: X ∈ RN , Output Feature: Y ∈ RN , λ, α, β ∈ R; ThreadIdx: t;\nBlockIdx: b; Thread Block Size: T ; Number of Thread Blocks: B; 1 begin 2 for i = b× T + t to N step B × T do 3 ifX[i] > 0 then 4 Y [i] = λ×X[i] 5 else 6 Y [i] = λ× (α× eX[i] + β ×X[i]− α)\nForward Pass. 
The forward pass kernel for lSELU is shown in Algorithm 1. In the forward pass, we have\ny = λ { 1 if x > 0 αex + βx− α if x ≤ 0 . (40)\nThe implementation is quite straightforward, all the threads stride across the feature map and element-wisely compute the output activations. The input and output feature maps are treated as 1D array, therefore the kernel achieves good coalescing in both read and write. While we take T = 1024, the number of thread blocks B is computed by B = b(N + T − 1)/T c, so the number of thread blocks is large enough to achieve a high utilization rate.\nAlgorithm 2: Backward Kernel of lSELU. Data: Gradient of Input Feature: GX ∈ RN , Gradient of Output Feature: GY ∈ RN ; Input\nFeature: X; λ, α, β ∈ R; Gradient of λ: Gλ ∈ R; ThreadIdx: t; BlockIdx: b; Thread Block Size: T ; Number of Thread Blocks: B;\n1 begin 2 float p = 0, float c = 0 3 for i = b× T + t to N step B × T do 4 float x = X[i], float dydx , float y, float gy = Gy[i] 5 ifX[i] > 0 then 6\ndy dx = λ, y = gy × x− c\n7 else 8\ndy dx = λ× (α× e x + β), y = gy × (α× ex + β × x− α)− c 9 float t = p+ y, c = (t− p)− y, p = t\n10 GX [i] = gy × dydx 11 syncthreads() 12 sum = BlockReduce(p) 13 if t = 0 then 14 atomicAdd(&Gλ, sum)\nBackward Pass. The backward pass kernel is shown in Algorithm 2. When the λ is trainable, the backward pass of lSELU is shown as follows:\n∂L ∂x = ∂L ∂y ∂y ∂x = ∂L ∂y × λ { 1 if x > 0 αex + β if x ≤ 0 , ∂L ∂λ = ∑ y λ ∂L ∂y . (41)\nAs the ∂L∂y is used to compute both ∂L ∂x and ∂L ∂λ , it can be cached in registers for data reuse (line 4). When the threads stride through the whole feature map, each thread holds a partial sum in a private register p (line 2). In order to avoid the underflow of floating point accumulation, Khan summation algorithm (Higham, 1993) is applied (line 9). At last, we use the block reduction in the CUB library to get the partial sum of the whole thread block, and the final result is atomically added to the ∂L∂λ .\nDifferent from the forward pass, we choose B to be a few thousands (usually much smaller than B = b(N + T − 1)/T c). The motivation behind this is that while it is large enough to keep all the\nstreaming multiprocessors busy, it is also small enough to keep most of the reduction on chip and reduce the atomic transactions.\nWe evaluate our new CUDA kernels on NVIDIA V100 GPU. We randomly generate an input feature map with size of [512, 64, 56, 56] as the input of the activation function, and we compare the kernel latency of forward and backward passes with the native SELU in PyTorch. The results are summarized in the table below:\nCompared with the original SELU, our new implementation with the trainable λ only increases the latency by around 2%, which is neglectable. The reason behind this is that the latency of activation functions are bounded by the DRAM bandwidth of GPU (Chen et al., 2020b), and the computation units are underutilized. As our CUDA kernels don’t introduce additional DRAM access, it has low impact on the latency." }, { "heading": "C EXPERIMENT SETUP FOR MODERATE-SCALE BENCHMARKS", "text": "The experiments in Section 6.1 and 6.2 are based on a 56-layer Convolutional Neural Network shown in Table 4. The H and W are 32 for CIFAR-10, CIFAR-100, and 64 for Tiny ImageNet. Following Klambauer et al. (2017); Chen et al. (2020b), the weights in the convolving kernels are initialized with i.i.d. N(0, 1khkwcin ), where kh and kw are the height and width of the filters and cin is the input channels. 
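In PyTorch, this fan-in-scaled Gaussian initialization, N(0, 1/(k_h · k_w · c_in)), can be written as the following hedged sketch; the module-walking helper is our own scaffolding, not code from the paper.

import math
import torch.nn as nn

def init_snn_weights(model):
    # i.i.d. N(0, 1/(kh * kw * c_in)) on convolving kernels, zero biases.
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            fan_in = m.in_channels * m.kernel_size[0] * m.kernel_size[1]
            nn.init.normal_(m.weight, mean=0.0, std=math.sqrt(1.0 / fan_in))
            if m.bias is not None:
                nn.init.zeros_(m.bias)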
}, { "heading": "C EXPERIMENT SETUP FOR MODERATE-SCALE BENCHMARKS", "text": "The experiments in Sections 6.1 and 6.2 are based on a 56-layer Convolutional Neural Network shown in Table 4. H and W are 32 for CIFAR-10 and CIFAR-100, and 64 for Tiny ImageNet. Following Klambauer et al. (2017) and Chen et al. (2020b), the weights in the convolving kernels are initialized with i.i.d. N(0, 1/(k_h k_w c_in)), where k_h and k_w are the height and width of the filters and c_in is the number of input channels. The models are optimized with SGD with momentum = 0.9 and weight decay = 0.0005.

For Section 6.1, we train the model from scratch for 3190 iterations (10 epochs) on CIFAR-10 under learning rate 0.015. The choice of learning rate is based on the observation that it is large enough to simulate the fierce update of parameters but also small enough to avoid gradient explosion. Following Chen et al. (2020b), we set ε = 0.017 for dSELU, lSELU, and sSELU. We collect the second moment of the output pre-activations of each convolutional layer as well as the Frobenius norm of the backward gradient on the convolving kernels in each iteration.

For Section 6.2, as the model has a relatively small fan-in, we directly apply sSELU and lSELU without the techniques mentioned in Section 5. Besides, we clip the gradient to [−2, 2] in all the experiments to increase stability. All the results are averaged over 4 independent runs to reduce fluctuation. For CIFAR-10 and CIFAR-100, the models are trained with batch size 128 for 130 epochs with the initial learning rate set to 0.01, decayed to 0.001 at epoch 80. For Tiny ImageNet, the models are trained with batch size 64 for 200 epochs. The initial learning rate is set to 0.001 and decayed by 10× at epochs 130 and 180.

For Section 6.3, following Chen et al. (2020b), we choose the Conv MobileNet V1 (Howard et al., 2017). The "Conv" indicates that traditional convolutions rather than depthwise separable convolutions are used, as the latter require more epochs to converge (Zhou et al., 2020). The model is trained for 90 epochs with batch size 512 under learning rate 0.02 (decayed by 10× at epochs 60 and 75). Following Zhang et al. (2018), the γ for interpolation is drawn from the Beta distribution Beta(0.7, 0.7). For all the experiments with dSELU, lSELU, and sSELU, we follow Chen et al. (2020b) and set ε = 0.06." }, { "heading": "D COMPARISON OF PARAMETERS BETWEEN DIFFERENT ACTIVATION FUNCTIONS", "text": "We summarize the values of the parameters λ, α, β, and the corresponding γ_q under different configurations in Table 5. According to Proposition 3.1, a smaller γ_{q=1} leads to a stronger self-normalization property. As shown in Table 5, under the same ε, our sSELU and lSELU have lower γ_{q=1} than dSELU, which justifies our intuition that lSELU and sSELU can be configured to have a stronger self-normalization property. Second, the results show that for each activation function, γ_{q=1} increases when ε gets larger. However, as shown in Figure 2, a larger ε also leads to a larger δ_q, which increases the speed of gradient explosion in the backward pass. Last but not least, for the experiments on MobileNet V1, our lSELU and sSELU under ε = 0.06 achieve approximately the same γ_{q=1} as SELU with ε ≈ 0.0716, whereas the latter has a higher gradient explosion rate." }, { "heading": "E EXPERIMENTS ON FULLY-CONNECTED NEURAL NETWORKS", "text": "The performance of SELU proposed in Klambauer et al. (2017) was first demonstrated on fully-connected neural networks. In this section, we compare our sSELU and lSELU against the original SELU in a 64-layer fully-connected neural network on three typical datasets: UCI miniboone, UCI adult, and HTRU. The results are summarized in Table 6. As the neural network has 64 layers, we only evaluate sSELU and lSELU at ε ∈ {0.01, 0.017, 0.03}. The results show that with all three ε values, our sSELU and lSELU achieve consistent improvement over SELU, which further justifies the effectiveness of our activation functions." } ]
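The fan-in initialization described in Section C above can be sketched as follows. This is a hypothetical helper assuming standard PyTorch Conv2d modules, not the authors' training script:

```python
# Sketch of the i.i.d. N(0, 1/(kh*kw*cin)) weight initialization from Sec. C,
# applied to every Conv2d in a model via model.apply(init_selu_conv).
import torch.nn as nn

def init_selu_conv(module: nn.Module):
    if isinstance(module, nn.Conv2d):
        kh, kw = module.kernel_size
        cin = module.in_channels
        std = (1.0 / (kh * kw * cin)) ** 0.5   # variance 1/(kh*kw*cin)
        nn.init.normal_(module.weight, mean=0.0, std=std)
        if module.bias is not None:
            nn.init.zeros_(module.bias)
```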
2020
null
SP:7d6dc558590032eefb2033e9a2c784124bac8ac1
[ "The paper provides a number of adversarial attacks on hybrid neural-symbolic systems. The systems are recommender and QA systems which use an underlying knowledge-graph (KG) such as ConceptNet. Previous work has suggested that the KGs are important for good performance, and moreover that the use of KGs lends the system a degree of interpretability. The attacks are successful - maintaining performance whilst seriously degrading the KG - throwing doubt on these claims." ]
Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks, like question answering and item recommendation. By using attention over the KG, such KG-augmented models can also “explain” which KG information was most relevant for making a given prediction. In this paper, we question whether these models are really behaving as we expect. We show that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs, which maintain the downstream performance of the original KG while significantly deviating from the original KG’s semantics and structure. Our findings raise doubts about KG-augmented models’ ability to reason about KG information and give sensible explanations.
[ { "affiliations": [], "name": "TARGETED PERTURBATION" }, { "affiliations": [], "name": "Mrigank Raman" }, { "affiliations": [], "name": "Aaron Chan" }, { "affiliations": [], "name": "Siddhant Agarwal" }, { "affiliations": [], "name": "Peifeng Wang" }, { "affiliations": [], "name": "Hansen Wang" }, { "affiliations": [], "name": "Sungchul Kim" }, { "affiliations": [], "name": "Ryan Rossi" }, { "affiliations": [], "name": "Handong Zhao" }, { "affiliations": [], "name": "Nedim Lipka" }, { "affiliations": [], "name": "Xiang Ren" } ]
[ { "authors": [ "Qingyao Ai", "Vahid Azizi", "Xu Chen", "Yongfeng Zhang" ], "title": "Learning heterogeneous knowledge base embeddings for explainable", "venue": "recommendation. Algorithms,", "year": 2018 }, { "authors": [ "Siddhant Bhambri", "Sumanyu Muku", "Avinash Tulasi", "Arun Balaji Buduru" ], "title": "A study of black box adversarial attacks in computer vision", "venue": null, "year": 1912 }, { "authors": [ "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Adversarial attacks on node embeddings via graph poisoning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Antoine Bordes", "Nicolas Usunier", "Alberto Garcia-Duran", "Jason Weston", "Oksana Yakhnenko" ], "title": "Translating embeddings for modeling multi-relational data", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Yixin Cao", "Xiang Wang", "Xiangnan He", "Zikun Hu", "Tat-Seng Chua" ], "title": "Unifying knowledge graph learning and recommendation: Towards a better understanding of user preferences", "venue": "In The world wide web conference,", "year": 2019 }, { "authors": [ "Daoyuan Chen", "Yaliang Li", "Min Yang", "Hai-Tao Zheng", "Ying Shen" ], "title": "Knowledge-aware textual entailment with graph attention network", "venue": "Proceedings of the 28th ACM International Conference on Information and Knowledge Management,", "year": 2019 }, { "authors": [ "Jinyin Chen", "Yangyang Wu", "Xuanheng Xu", "Yixian Chen", "Haibin Zheng", "Qi Xuan" ], "title": "Fast gradient attack on network embedding", "venue": "arXiv preprint arXiv:1809.02797,", "year": 2018 }, { "authors": [ "Liang Chen", "Jintang Li", "Jiaying Peng", "Tao Xie", "Zengxu Cao", "Kun Xu", "Xiangnan He", "Zibin Zheng" ], "title": "A survey of adversarial learning on graphs", "venue": null, "year": 2003 }, { "authors": [ "Qian Chen", "Xiaodan Zhu", "Zhen-Hua Ling", "Diana Inkpen", "Si Wei" ], "title": "Neural natural language inference models enhanced with external knowledge", "venue": "arXiv preprint arXiv:1711.04289,", "year": 2017 }, { "authors": [ "Hanjun Dai", "Hui Li", "Tian Tian", "Xin Huang", "Lin Wang", "Jun Zhu", "Le Song" ], "title": "Adversarial attack on graph structured data", "venue": "arXiv preprint arXiv:1806.02371,", "year": 2018 }, { "authors": [ "Joe Davison", "Joshua Feldman", "Alexander M Rush" ], "title": "Commonsense knowledge mining from pretrained models", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Giorgio Fagiolo" ], "title": "Clustering in complex directed networks", "venue": "Physical Review E,", "year": 2007 }, { "authors": [ "Yanlin Feng", "Xinyue Chen", "Bill Yuchen Lin", "Peifeng Wang", "Jun Yan", "Xiang Ren" ], "title": "Scalable multi-hop relational reasoning for knowledge-aware question answering", "venue": "arXiv preprint arXiv:2005.00646,", "year": 2020 }, { "authors": [ "Jingyue Gao", "Xiting Wang", "Yasha Wang", "Xing Xie" ], "title": "Explainable recommendation through attentive multi-view learning", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "F. Maxwell Harper", "Joseph A. Konstan" ], "title": "The movielens datasets: History and context", "venue": "ACM Trans. Interact. Intell. Syst.,", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Xiang Li", "Aynaz Taheri", "Lifu Tu", "Kevin Gimpel" ], "title": "Commonsense knowledge base completion. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1445–1455, Berlin, Germany, August 2016", "venue": "Association for Computational Linguistics. doi: 10.18653/v1/P16-1137. URL https://www.aclweb.org/anthology/ P16-1137", "year": 2016 }, { "authors": [ "Bill Yuchen Lin", "Xinyue Chen", "Jamin Chen", "Xiang Ren" ], "title": "Kagnet: Knowledge-aware graph networks for commonsense reasoning", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Shangwen Lv", "Daya Guo", "Jingjing Xu", "Duyu Tang", "Nan Duan", "Ming Gong", "Linjun Shou", "Daxin Jiang", "Guihong Cao", "Songlin Hu" ], "title": "Graph-based reasoning over heterogeneous external knowledge for commonsense question answering", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Kaixin Ma", "Jonathan Francis", "Quanyang Lu", "Eric Nyberg", "Alessandro Oltramari" ], "title": "Towards generalizable neuro-symbolic systems for commonsense question answering", "venue": "arXiv preprint arXiv:1910.14087,", "year": 2019 }, { "authors": [ "Yao Ma", "Suhang Wang", "Tyler Derr", "Lingfei Wu", "Jiliang Tang" ], "title": "Attacking graph convolutional networks via rewiring", "venue": "arXiv preprint arXiv:1906.03750,", "year": 2019 }, { "authors": [ "Todor Mihaylov", "Peter Clark", "Tushar Khot", "Ashish Sabharwal" ], "title": "Can a suit of armor conduct electricity? 
a new dataset for open book question answering", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Ryan Musa", "Xiaoyan Wang", "Achille Fokoue", "Nicholas Mattei", "Maria Chang", "Pavan Kapanipathi", "Bassem Makni", "Kartik Talamadupula", "Michael Witbrock" ], "title": "Answering science exam questions using query reformulation with background knowledge", "venue": "In Automated Knowledge Base Construction (AKBC),", "year": 2019 }, { "authors": [ "Jukka-Pekka Onnela", "Jari Saramäki", "János Kertész", "Kimmo Kaski" ], "title": "Intensity and coherence of motifs in weighted complex networks", "venue": "Physical Review E,", "year": 2005 }, { "authors": [ "Fabio Petroni", "Tim Rocktäschel", "Patrick Lewis", "Anton Bakhtin", "Yuxiang Wu", "Alexander H Miller", "Sebastian Riedel" ], "title": "Language models as knowledge bases", "venue": null, "year": 1909 }, { "authors": [ "Steffen Rendle" ], "title": "Factorization machines with libfm", "venue": "ACM Transactions on Intelligent Systems and Technology,", "year": 2012 }, { "authors": [ "Adam Santoro", "David Raposo", "David G Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter Battaglia", "Timothy Lillicrap" ], "title": "A simple neural network module for relational reasoning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Jari Saramäki", "Mikko Kivelä", "Jukka-Pekka Onnela", "Kimmo Kaski", "Janos Kertesz" ], "title": "Generalizations of the clustering coefficient to weighted complex networks", "venue": "Physical Review E,", "year": 2007 }, { "authors": [ "Michael Schlichtkrull", "Thomas N Kipf", "Peter Bloem", "Rianne Van Den Berg", "Ivan Titov", "Max Welling" ], "title": "Modeling relational data with graph convolutional networks", "venue": "In European Semantic Web Conference,", "year": 2018 }, { "authors": [ "Tao Shen", "Yi Mao", "Pengcheng He", "Guodong Long", "Adam Trischler", "Weizhu Chen" ], "title": "Exploiting structured knowledge in text via graph-guided representation learning", "venue": null, "year": 2004 }, { "authors": [ "Weiping Song", "Zhijian Duan", "Ziqing Yang", "Hao Zhu", "Ming Zhang", "Jian Tang" ], "title": "Explainable knowledge graph-based recommendation via deep reinforcement", "venue": "learning. 
ArXiv,", "year": 2019 }, { "authors": [ "Robyn Speer", "Joshua Chin", "Catherine Havasi" ], "title": "Conceptnet 5.5: An open multilingual graph of general knowledge", "venue": "arXiv preprint arXiv:1612.03975,", "year": 2016 }, { "authors": [ "Yiwei Sun", "Suhang Wang", "Xianfeng Tang", "Tsung-Yu Hsieh", "Vasant Honavar" ], "title": "Adversarial attacks on graph neural networks via node injections: A hierarchical reinforcement learning approach", "venue": "In Proceedings of The Web Conference", "year": 2020 }, { "authors": [ "Alon Talmor", "Jonathan Herzig", "Nicholas Lourie", "Jonathan Berant" ], "title": "Commonsenseqa: A question answering challenge targeting commonsense knowledge", "venue": "arXiv preprint arXiv:1811.00937,", "year": 2018 }, { "authors": [ "Hongwei Wang", "Fuzheng Zhang", "Min Hou", "Xing Xie", "Minyi Guo", "Qi Liu" ], "title": "Shine: Signed heterogeneous information network embedding for sentiment link prediction", "venue": "In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining,", "year": 2018 }, { "authors": [ "Hongwei Wang", "Fuzheng Zhang", "Jialin Wang", "Miao Zhao", "Wenjie Li", "Xing Xie", "Minyi Guo" ], "title": "Ripplenet: Propagating user preferences on the knowledge graph for recommender systems", "venue": "In Proceedings of the 27th ACM International Conference on Information and Knowledge Management,", "year": 2018 }, { "authors": [ "Hongwei Wang", "Fuzheng Zhang", "Mengdi Zhang", "Jure Leskovec", "Miao Zhao", "Wenjie Li", "Zhongyuan Wang" ], "title": "Knowledge-aware graph neural networks with label smoothness regularization for recommender systems", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Hongwei Wang", "Miao Zhao", "Xing Xie", "Wenjie Li", "Minyi Guo" ], "title": "Knowledge graph convolutional networks for recommender systems", "venue": "In The world wide web conference,", "year": 2019 }, { "authors": [ "Xiang Wang", "Yao-Kun Xu", "Xiangnan He", "Yixin Cao", "Meng Wang", "Tat-Seng Chua" ], "title": "Reinforced negative sampling over knowledge graph for recommendation", "venue": "Proceedings of The Web Conference", "year": 2020 }, { "authors": [ "Xiaoyan Wang", "Pavan Kapanipathi", "Ryan Musa", "Mo Yu", "Kartik Talamadupula", "Ibrahim Abdelaziz", "Maria Chang", "Achille Fokoue", "Bassem Makni", "Nicholas Mattei" ], "title": "Improving natural language inference using external knowledge in the science questions domain", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Huijun Wu", "Chen Wang", "Yuriy Tyshetskiy", "Andrew Docherty", "Kai Lu", "Liming Zhu" ], "title": "Adversarial examples on graph data: Deep insights into attack and defense", "venue": null, "year": 1903 }, { "authors": [ "Bishan Yang", "Wen tau Yih", "Xiaodong He", "Jianfeng Gao", "Li Deng" ], "title": "Embedding entities and relations for learning and inference in knowledge", "venue": null, "year": 2015 }, { "authors": [ "Wei Emma Zhang", "Quan Z Sheng", "Ahoud Alhazmi", "Chenliang Li" ], "title": "Adversarial attacks on deep-learning models in natural language processing: A survey", "venue": "ACM Transactions on Intelligent Systems and Technology (TIST),", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recently, neural reasoning over knowledge graphs (KGs) has emerged as a popular paradigm in machine learning and natural language processing (NLP). KG-augmented models have improved performance on a number of knowledge-intensive downstream tasks: for question answering (QA), the KG provides context about how a given answer choice is related to the question (Lin et al., 2019; Feng et al., 2020; Lv et al., 2020; Talmor et al., 2018); for item recommendation, the KG mitigates data sparsity and cold start issues (Wang et al., 2018b; 2019a;b; 2018a). Furthermore, by using attention over the KG, such models aim to explain which KG information was most relevant for making a given prediction (Lin et al., 2019; Feng et al., 2020; Wang et al., 2018b; 2019b; Cao et al., 2019; Gao et al., 2019). Nonetheless, the process in which KG-augmented models reason about KG information is still not well understood. It is assumed that, like humans, KG-augmented models base their predictions on meaningful KG paths and that this process is responsible for their performance gains (Lin et al., 2019; Feng et al., 2020; Gao et al., 2019; Song et al., 2019).\nIn this paper, we question if existing KG-augmented models actually use KGs in this human-like manner. We study this question primarily by measuring model performance when the KG’s semantics and structure have been perturbed to hinder human comprehension. To perturb the KG, we propose four perturbation heuristics and a reinforcement learning (RL) based perturbation algorithm. Surprisingly, for KG-augmented models on both commonsense QA and item recommendation, we find that the KG can be extensively perturbed with little to no effect on performance. This raises doubts about KG-augmented models’ use of KGs and the plausibility of their explanations." }, { "heading": "2 PROBLEM SETTING", "text": "Our goal is to investigate whether KG-augmented models and humans use KGs similarly. Since KGs are human-labeled, we assume that they are generally accurate and meaningful to humans. Thus, across different perturbation methods, we measure model performance when every edge in the KG has been perturbed to make less sense to humans. To quantify the extent to which the KG has been perturbed, we also measure both semantic and structural similarity between the original ∗Work done while MR, SA and HW interned remotely at USC. Code and data are available at https: //github.com/INK-USC/deceive-KG-models. †Equal contribution.\nKG and perturbed KG. If original-perturbed KG similarity is low, then a human-like KG-augmented model should achieve worse performance with the perturbed KG than with the original KG. Furthermore, we evaluate the plausibility of KG-augmented models’ explanations when using original and perturbed KGs, by asking humans to rate these explanations’ readability and usability.\nNotation Let Fθ be an KGaugmented model, and let (Xtrain, Xdev, Xtest) be a dataset for some downstream task. We denote a KG as G = (E ,R, T ), where E is the set of entities (nodes), R is the set of relation types, and T = {(e1, r, e2) | e1, e2 ∈ E , r ∈ R} is the set of facts (edges) composed from existing entities and relations (Zheng et al., 2018). Let G ′ = (E ,R′, T ′ ) be the KG obtained after perturbing G, where R ′ ⊆ R and T ′ 6= T . Let f(G,G ′ ) be a function that measures similarity between G and G ′ . Let g(G) be the downstream performance when evaluating Fθ on Xtest and G. 
High-Level Procedure First, we train Fθ on Xtrain and G, then evaluate Fθ on Xtest and G to get the original performance g(G). Second, we freeze Fθ, then perturb G to obtain G′. Third, we evaluate Fθ on Xtest and G′ to get the perturbed performance g(G′). Finally, we measure g(G) − g(G′) and f(G, G′) to assess how human-like Fθ's reasoning process is. This procedure is illustrated in Fig. 1. In this paper, we consider two downstream tasks: commonsense QA and item recommendation.

Commonsense QA Given a question x and a set of k possible answers A = {y1, ..., yk}, the task is to predict a compatibility score for each (x, y) pair, such that the highest score is predicted for the correct answer. In commonsense QA, the questions are designed to require commonsense knowledge which is typically unstated in natural language but more likely to be found in KGs (Talmor et al., 2018). Let F^text_φ be a text encoder (Devlin et al., 2018), F^graph_ψ be a graph encoder, and F^cls_ξ be an MLP classifier, where φ, ψ, ξ ⊂ θ. Let G_(x,y) denote a subgraph of G consisting of entities mentioned in the text sequence x ⊕ y, plus their corresponding edges. We start by computing a text embedding h_text = F^text_φ(x ⊕ y) and a graph embedding h_graph = F^graph_ψ(G_(x,y)). After that, we compute the score for (x, y) as S_(x,y) = F^cls_ξ(h_text ⊕ h_graph). Finally, we select the highest-scoring answer: y_pred = argmax_{y∈A} S_(x,y). KG-augmented commonsense QA models vary primarily in their design of F^graph_ψ. In particular, path-based models compute the graph embedding by using attention to selectively aggregate paths in the subgraph. The attention scores can help explain which paths the model focused on most for a given prediction (Lin et al., 2019; Feng et al., 2020; Santoro et al., 2017).

Item Recommendation We consider a set of users U = {u1, u2, ..., um}, a set of items V = {v1, v2, ..., vn}, and a user-item interaction matrix Y ∈ R^{m×n} with entries y_uv. If user u has been observed to engage with item v, then y_uv = 1; otherwise, y_uv = 0. Additionally, we consider a KG G, in which R is the set of relation types in G. In G, nodes are items v ∈ V, and edges are facts of the form (v, r, v′), where r ∈ R is a relation. For the zero entries in Y (i.e., y_uv = 0), our task is to predict a compatibility score for the user-item pair (u, v), indicating how likely user u is to want to engage with item v. We represent each user u, item v, and relation r as embeddings u, v, and r, respectively. Given a user-item pair (u, v), its compatibility score is computed as ⟨u, v⟩, the inner product between u and v. KG-augmented recommender systems differ mainly in how they use G to compute u and v. Generally, these models do so by using attention to selectively aggregate items/relations in G. The attention scores can help explain which items/relations the model found most relevant for a given prediction (Wang et al., 2018b; 2019b)."
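A minimal sketch of the QA scoring scheme just described, with the encoders abstracted as callables; the dimensions, module names, and 128-unit hidden layer are illustrative assumptions, not the authors' implementation:

```python
# Sketch of S_(x,y) = F_cls(h_text (+) h_graph) with argmax answer selection.
import torch
import torch.nn as nn

class KGAugmentedScorer(nn.Module):
    def __init__(self, text_enc, graph_enc, d_text, d_graph):
        super().__init__()
        self.text_enc, self.graph_enc = text_enc, graph_enc
        self.cls = nn.Sequential(nn.Linear(d_text + d_graph, 128),
                                 nn.ReLU(), nn.Linear(128, 1))

    def forward(self, question, answers, subgraphs):
        scores = []
        for y, g_xy in zip(answers, subgraphs):
            h_text = self.text_enc(question + " " + y)   # F^text(x (+) y)
            h_graph = self.graph_enc(g_xy)               # F^graph(G_(x,y))
            scores.append(self.cls(torch.cat([h_text, h_graph], dim=-1)))
        return torch.stack(scores).argmax().item()       # index of y_pred
```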
}, { "heading": "3 KG SIMILARITY METRICS", "text": "To measure how much the perturbed KG has deviated from the original KG, we propose several metrics for capturing semantic (ATS) and structural (SC2D, SD2) similarity between KGs.

Aggregated Triple Score (ATS) ATS measures semantic similarity between two KGs. Let s_G be an edge (triple) scoring function, such that s_G(e1, r, e2) measures how likely edge (e1, r, e2) is to exist in G. Also, assume s_G has been pre-trained on G for link prediction. Then, ATS is defined as

$$f_{\text{ATS}}(G, G') = \frac{1}{|T'|} \sum_{(e_1, r, e_2) \in T'} s_G(e_1, r, e_2) \in [0, 1],$$

which denotes the mean s_G score across all edges in G′. Intuitively, if a high percentage of edges in G′ are also likely to exist in G (i.e., high ATS), then we say that G′ and G have high semantic similarity. s_G is task-specific, as KGs from different tasks may differ greatly in semantics. For commonsense QA, we use the s_G from Li et al. (2016); for item recommendation, we use the s_G from Yang et al. (2015). While ATS captures semantic KG differences, it is not sensitive to KG connectivity structure. Note that f_ATS(G, G) may not equal 1, since s_G may not perfectly generalize to KGs beyond those it was trained on.

Similarity in Clustering Coefficient Distribution (SC2D) SC2D measures structural similarity between two KGs and is derived from the local clustering coefficient (Saramäki et al., 2007; Onnela et al., 2005; Fagiolo, 2007). For a given entity in G (treated here as undirected), the local clustering coefficient is the fraction of possible triangles through the entity that exist (i.e., how tightly the entity's neighbors cluster around it). For entity e_i ∈ E, the local clustering coefficient is defined as c_i = 2 Tri(e_i) / (deg(e_i)(deg(e_i) − 1)), where Tri(e_i) is the number of triangles through e_i, and deg(e_i) is the degree of e_i. For each relation r ∈ R, let G_r be the subgraph of G consisting of all edges in T with relation r. That is, G_r = (E, r, T′), where T′ = {(e, r, e′) | e, e′ ∈ E}. Let c^r denote the |E|-dimensional clustering coefficient vector for G_r, where the i-th element of c^r is c_i. Then, the mean clustering coefficient vectors for G and G′ are c_o = (1/|R|) Σ_{r∈R} c^r and c_p = (1/|R′|) Σ_{r∈R′} c^r, respectively. SC2D is defined as

$$f_{\text{SC2D}}(G, G') = 1 - \frac{\lVert c_o - c_p \rVert_2}{\lVert c_o - c_p \rVert_2 + 1} \in [0, 1],$$

with a higher value indicating higher similarity.

Similarity in Degree Distribution (SD2) SD2 also measures structural similarity between two KGs, while addressing SC2D's ineffectiveness when the KGs' entities have tiny local clustering coefficients (e.g., the item KG used by recommender systems is roughly bipartite). In such cases, SC2D is always close to one regardless of perturbation method, rendering SC2D useless. Let d^r denote the |E|-dimensional degree vector for G_r, where the i-th element of d^r is deg(e_i). Then, the mean degree vectors for G and G′ are d_o = (1/|R|) Σ_{r∈R} d^r and d_p = (1/|R′|) Σ_{r∈R′} d^r, respectively. SD2 is defined as

$$f_{\text{SD2}}(G, G') = 1 - \frac{\lVert d_o - d_p \rVert_2}{\lVert d_o - d_p \rVert_2 + 1} \in [0, 1],$$

with a higher value indicating higher similarity."
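To make the metric definitions concrete, here is a small sketch of ATS and SD2 over a triple-list KG representation. `score_fn` stands in for the pre-trained scorer s_G, and degrees are computed on the undirected view, both simplifying assumptions:

```python
# Sketch of f_ATS and f_SD2 for KGs stored as lists of (head, relation, tail).
import numpy as np
from collections import Counter

def ats(perturbed_triples, score_fn):
    # f_ATS(G, G') = mean s_G score over all edges of G'
    return float(np.mean([score_fn(h, r, t) for (h, r, t) in perturbed_triples]))

def mean_degree_vector(triples, entities, relations):
    # (1/|R|) * sum over r of the degree vector of the single-relation subgraph G_r
    idx = {e: i for i, e in enumerate(entities)}
    d = np.zeros(len(entities))
    for r in relations:
        deg = Counter()
        for (h, rel, t) in triples:
            if rel == r:
                deg[h] += 1
                deg[t] += 1
        for e, k in deg.items():
            d[idx[e]] += k
    return d / max(len(relations), 1)

def sd2(orig_triples, pert_triples, entities, rel_o, rel_p):
    do = mean_degree_vector(orig_triples, entities, rel_o)
    dp = mean_degree_vector(pert_triples, entities, rel_p)
    gap = np.linalg.norm(do - dp)
    return 1.0 - gap / (gap + 1.0)   # in [0, 1]; higher means more similar
```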
}, { "heading": "4 METHODS FOR TARGETED KG PERTURBATION", "text": "We aim to study how a KG's semantics and structure impact KG-augmented models' downstream performance. To do so, we measure model performance in response to various forms of targeted KG perturbation. While a KG's semantics can be perturbed via its relation types, its structure can be perturbed via its edge connections. Therefore, we design five methods — four heuristic, one RL — for perturbing KG relation types, edge connections, or both (Fig. 2)." }, { "heading": "4.1 HEURISTIC-BASED KG PERTURBATION", "text": "Our four KG perturbation heuristics are as follows: Relation Swapping (RS) randomly chooses two edges from T and swaps their relations. Relation Replacement (RR) randomly chooses an edge (e1, r1, e2) ∈ T, then replaces r1 with another relation r2 = argmin_{r∈R} s_G(e1, r, e2). Edge Rewiring (ER) randomly chooses an edge (e1, r, e2) ∈ T, then replaces e2 with another entity e3 ∈ E \ N_1(e1). Edge Deletion (ED) randomly chooses an edge (e1, r, e2) ∈ T and deletes it. For ED, perturbing all edges means deleting all but 10 edges." }, { "heading": "4.2 RL-BASED KG PERTURBATION", "text": "We introduce an RL-based approach for perturbing the KG. Given a KG G, we train a policy to output a perturbed KG G′, such that the ATS, f_ATS(G, G′), is minimized, while the downstream performance, g(G′), is maximized. Specifically, the RL agent is trained to perturb G via relation replacement, so we call our algorithm RL-RR. Because the agent is limited to applying N = |T| perturbations to G, our RL problem is framed as a finite-horizon Markov decision process. In the rest of this section, we define the actions, states, and reward in our RL problem, then explain how RL-RR is implemented.

Actions The action space consists of all possible relation replacements in G, i.e., replacing (e1, r, e2) ∈ T with (e1, r′, e2). Since having such a large action space poses computational issues, we decouple each action into a sequence of three subactions and operate instead in this smaller subaction space. Hence, a perturbation action at time step t is a_t = (a_t^(0), a_t^(1), a_t^(2)). Namely, a_t^(0) samples an entity e1 ∈ E; a_t^(1) selects an edge (e1, r, e2) ∈ T; and a_t^(2) selects a relation r′ ∈ R to replace r in (e1, r, e2). To make the policy choose low-ATS perturbations, we further restrict the a_t^(2) subaction space to the K subactions resulting in the lowest ATS. Note that each a_t^(i) is represented by its corresponding pre-trained TransE (Bordes et al., 2013) entity, relation, or edge embedding in G. Since these TransE embeddings are not updated by the perturbation policy, we use a_t^(i) to refer to both the subaction and the subaction embedding. Meanwhile, a_t does not have any representation besides its constituent subaction embeddings.

States The state space is the set of all G′ with the same entities and connectivity structure as G. Here, we make a distinction between state and state embedding. The state at t is the actual KG after t perturbation steps and is denoted G_t. The state embedding at t is a vector representation of G_t and is denoted s_t. To match a_t, we also decouple s_t into substate embeddings: s_t = (s_t^(0), s_t^(1), s_t^(2)).

Reward The reward function pushes the policy to maximize downstream performance. For commonsense QA, higher reward corresponds to lower KL divergence between the predicted and true answer distributions. For item recommendation, we use validation AUC as the reward function."
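Before the DQN details, the four heuristics of Sec. 4.1 above can be sketched compactly as follows. This is a toy in-place implementation over a list of (head, relation, tail) triples; N_1(e1) is approximated with the outgoing 1-hop neighborhood for brevity, and `score_fn` again stands in for s_G:

```python
# Toy sketches of the four perturbation heuristics (RS, RR, ER, ED).
import random

def relation_swap(triples):                       # RS
    i, j = random.sample(range(len(triples)), 2)
    (h1, r1, t1), (h2, r2, t2) = triples[i], triples[j]
    triples[i], triples[j] = (h1, r2, t1), (h2, r1, t2)

def relation_replace(triples, relations, score_fn):  # RR
    i = random.randrange(len(triples))
    h, _, t = triples[i]
    worst = min(relations, key=lambda r: score_fn(h, r, t))  # argmin s_G
    triples[i] = (h, worst, t)

def edge_rewire(triples, entities):               # ER
    i = random.randrange(len(triples))
    h, r, _ = triples[i]
    hop1 = {b for (a, _, b) in triples if a == h} | {h}
    candidates = [e for e in entities if e not in hop1]      # E \ N_1(h)
    if candidates:
        triples[i] = (h, r, random.choice(candidates))

def edge_delete(triples, min_edges=10):           # ED (keep at least 10 edges)
    if len(triples) > min_edges:
        triples.pop(random.randrange(len(triples)))
```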
}, { "heading": "4.2.1 DQN ARCHITECTURE AND TRAINING", "text": "As described above, RL-RR is modeled as an action-subaction hierarchy. At the action (top) level, for time step t, the policy selects an action a_t given state s_t, then performs a_t on G_t to obtain G_{t+1}. At the subaction (bottom) level, for index i ∈ [0, 1, 2] within time step t, the policy selects a subaction a_t^(i+1) given s_t^(i) and, if any, previous subactions.

At t, the policy takes as input the substate embedding s_t^(0). One approach for computing s_t^(0) would be to directly encode G_t with a graph encoder F^graph, such that s_t^(0) = F^graph(G_t) (Dai et al., 2018; Sun et al., 2020; Ma et al., 2019b). However, since we aim to assess graph encoders' ability to capture KG information, it would not make sense to use a graph encoder for KG perturbation. Instead, we use an LSTM (Hochreiter & Schmidhuber, 1997) to update substate embeddings both within and across time steps, while jointly encoding substate and subaction embeddings. Observe that this means s_t^(i) only implicitly captures KG state information via a_t^(i−1), since the choice of each subaction is constrained precisely by which entities, relations, or edges are available in G_t.

To train RL-RR, we use the DQN algorithm (Mnih et al., 2015). Abstractly, the goal of DQN is to learn a Q-function Q(s_t, a_t), which outputs the expected reward for taking action a_t in state s_t. In our implementation, Q(s_t, a_t) is decomposed into a sequential pair of sub-Q-functions:

$$Q_1(a_t^{(1)} \mid s_t^{(0)}, a_t^{(0)}) = \langle \mathrm{MLP}(a_t^{(1)}), \mathrm{MLP}(h_t^{(0)}) \rangle \quad \text{and} \quad Q_2(a_t^{(2)} \mid s_t^{(1)}, a_t^{(0)}, a_t^{(1)}) = \langle \mathrm{MLP}(a_t^{(2)}), \mathrm{MLP}(h_t^{(1)}) \rangle.$$

MLP denotes the vector representation computed by a multi-layer perceptron, while h_t^(0) and h_t^(1) denote the respective LSTM encodings of (s_t^(0), a_t^(0)) and (s_t^(1), [a_t^(0) ⊕ a_t^(1)]).

Figure 3: DQN Architecture for RL-RR

Fig. 3 depicts the perturbation procedure at t. First, we either initialize s_t^(0) with a trained embedding weight vector if t = 0, or set it to s_{t−1}^(2) otherwise. Second, we uniformly sample a_t^(0), which is encoded as h_t^(0) = LSTMCell_1(s_t^(0), a_t^(0)). LSTMCell_1 also updates s_t^(0) to s_t^(1). Third, we compute Q_1(a_t^(1) | s_t^(0), a_t^(0)), which takes h_t^(0) as input and outputs a_t^(1). Fourth, we encode a_t^(1) as h_t^(1) = LSTMCell_2(s_t^(1), [a_t^(0) ⊕ a_t^(1)]). LSTMCell_2 also updates s_t^(1) to s_t^(2). Fifth, we compute Q_2(a_t^(2) | s_t^(1), a_t^(0), a_t^(1)), which takes h_t^(1) as input and outputs a_t^(2). Note that a_t^(1) and a_t^(2) are selected ε-greedily during training and greedily during evaluation. Finally, using a_t = (a_t^(0), a_t^(1), a_t^(2)), we perturb G_t to get G_{t+1}.

Ideally, for each t, we would evaluate G_{t+1} on the downstream task to obtain the reward. However, downstream evaluation is expensive, so we only compute the reward every T time steps. Moreover, for the policy to generalize well, the state embeddings (s_{t−T+1}, ..., s_{t−1}, s_t) should not correlate with the order of the actions (a_{t−T+1}, ..., a_{t−1}, a_t). Thus, for every T time steps during training, we shuffle the last T actions after computing the reward, then update the LSTM and sub-Q-functions with respect to the shuffled actions. Doing so encourages the state embeddings to be invariant to action order."
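A schematic of one greedy RL-RR perturbation step, following the Figure 3 walkthrough above. The shapes, module names, and the greedy (rather than ε-greedy) selection are illustrative simplifications, and the candidate embeddings are assumed to be pre-trained TransE vectors:

```python
# Sketch of one RL-RR step: LSTM cells carry the substate; two sub-Q-functions
# score candidate edges (a1) and candidate relations (a2) via inner products.
import torch
import torch.nn as nn

class RLRRStep(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.cell1 = nn.LSTMCell(d, d)        # encodes (s0, a0)
        self.cell2 = nn.LSTMCell(2 * d, d)    # encodes (s1, a0 (+) a1)
        self.q1_a, self.q1_h = nn.Linear(d, d), nn.Linear(d, d)
        self.q2_a, self.q2_h = nn.Linear(d, d), nn.Linear(d, d)

    def forward(self, s0, a0, edge_cands, rel_cands):
        # s0: tuple of [1, d] LSTM states; a0: [d] sampled entity embedding;
        # edge_cands: [K1, d] edges of e1; rel_cands: [K2, d] K-lowest-ATS relations.
        h0, c0 = self.cell1(a0.unsqueeze(0), s0)
        q1 = self.q1_a(edge_cands) @ self.q1_h(h0).squeeze(0)   # Q1 scores
        a1 = edge_cands[q1.argmax()]                            # greedy a1
        h1, c1 = self.cell2(torch.cat([a0, a1]).unsqueeze(0), (h0, c0))
        q2 = self.q2_a(rel_cands) @ self.q2_h(h1).squeeze(0)    # Q2 scores
        a2 = rel_cands[q2.argmax()]                             # greedy a2
        return (a1, a2), (h1, c1)   # the next substate is carried via the LSTM state
```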
}, { "heading": "5.1 COMMONSENSE QA", "text": "For commonsense QA, the KG-augmented models we experiment with are RN (with attentional path aggregation) (Lin et al., 2019; Santoro et al., 2017) and MHGRN (Feng et al., 2020), which have been shown to outperform non-KG models (Devlin et al., 2018; Liu et al., 2019) and a number of KG-augmented models (Lin et al., 2019; Ma et al., 2019a; Wang et al., 2019c; Schlichtkrull et al., 2018) on this task. For both RN and MHGRN, we use a BERT-Base (Devlin et al., 2018) text encoder. We evaluate on the CommonsenseQA (CSQA) (Talmor et al., 2018) and OpenBookQA (OBQA) (Mihaylov et al., 2018) datasets, using ConceptNet (Speer et al., 2016) as the KG. Performance is measured using accuracy (Acc), which is the standard metric for commonsense QA (Lin et al., 2019; Feng et al., 2020).\nCSQA Results for CSQA are given in Table 1. For RN and MHGRN, we see that RL-RR achieves slightly worse accuracy than Original KG, while RS, RR, and ER perform on par with No KG. For both models, ED performs noticeably worse than No KG.\nOBQA Results for OBQA are shown in Table 2. For RN, we see that RL-RR actually obtains better accuracy than Original KG. For MHGRN, RL-RR yields marginally worse accuracy than Original KG. Meanwhile, for both RN and MHGRN, all heuristics uniformly achieve similar accuracy as Original KG, which itself significantly outperforms No KG.\nAnalysis Tables 1-2 demonstrate that perturbing a KG does not necessarily imply decreased performance, nor does it guarantee the creation of invalid or novel facts. As shown by the KG similarity scores, some perturbation methods cause greater semantic or structural KG changes than others. Perturbed KGs produced by RS and ED have high ATS (i.e., semantic similarity to original KG), while RR, ER, and RL-RR achieve relatively low ATS. Meanwhile, SC2D and SD2 are quite low for all perturbation methods, indicating consistently low structural similarity between original and perturbed KG. RL-RR and RR collectively have the lowest SC2D and SD2 for CSQA, while RLRR has the lowest SC2D and SD2 for OBQA. Notably, across all perturbation methods and models, RL-RR attains the highest accuracy while also having the lowest KG similarity scores overall. The results of a T-test (three runs for both models) show that RL-RR achieves a statistically significant improvement over its heuristic counterpart, RR. Still, even RR has a fairly high accuracy to KG similarity ratio. This suggests that our KG-augmented models are not using the KG in a human-like way, since RL-RR and RR can both achieve high performance despite extensively corrupting the original KG’s semantic and structural information." }, { "heading": "5.2 ITEM RECOMMENDATION", "text": "The KG-augmented recommender systems we consider are KGCN (Wang et al., 2019b) and RippleNet (Wang et al., 2018b). We evaluate these models on the Last.FM (Rendle, 2012) and MovieLens-20M (Harper & Konstan, 2016) datasets, using the item KG from Wang et al. (2019a). As mentioned in Sec. 1, item KGs have been shown to benefit recommender systems in cold start scenarios (Wang et al., 2018b). Therefore, following Wang et al. (2018b), we simulate a cold start scenario by using only 20% and 40% of the train set for Last.FM and Movie Lens-20M, respectively. Performance is measured using AUC, which is the standard metric for item recommendation (Wang et al., 2019b; 2018b). 
Since the item KG is almost bipartite, the local clustering coefficient of each item in the KG is extremely small, so SC2D is not meaningful here (Sec. 3). Thus, for item recommendation, we do not report SC2D.

Last.FM Results for Last.FM are shown in Table 3. For KGCN and RippleNet, we see that RS, RR, and RL-RR achieve about the same AUC as Original KG, with RL-RR slightly outperforming Original KG. ER performs similarly to Original KG for KGCN, but considerably worse for RippleNet. ED's AUC is on par with No KG's for KGCN and much lower than No KG's for RippleNet.

MovieLens-20M Results for MovieLens-20M are displayed in Table 4. For both KGCN and RippleNet, we find that relation-based perturbation methods tend to perform on par with Original KG. Here, ER is the better of the two edge-based perturbation methods, performing about the same as Original KG for KGCN, but noticeably worse for RippleNet. Somehow, for both KGCN and RippleNet, ED achieves even worse AUC than No KG. On the other hand, we see that ED achieves very high ATS, while RS, RR, ER, and RL-RR achieve more modest ATS scores.

Analysis Like in commonsense QA, Tables 3-4 show that KG-augmented models can perform well even when the KG has been drastically perturbed. Using the T-test with three runs, for almost all perturbation methods, we find a statistically insignificant difference between the perturbed KG's AUC and the original KG's AUC. The perturbed KGs produced by ED have high ATS, while RS, RR, ER, and RL-RR achieve modest ATS scores. However, all perturbation methods have fairly low SD2 (except RR on Last.FM). In particular, across both datasets and models, RL-RR has the highest AUC overall, while also having the lowest KG similarity scores overall. This serves as additional evidence that the model is not using the KG in a human-like manner, since RL-RR achieves high performance despite significantly perturbing the original KG's semantic and structural information." }, { "heading": "5.3 AUXILIARY EXPERIMENTS AND ANALYSIS", "text": "Varying Perturbation Level For a subset of model-dataset-perturbation settings, we measure the performance and ATS of various perturbation methods as a function of the percentage of KG edges perturbed. For MHGRN on CSQA, Fig. 4a shows that, across all levels of perturbation, RL-RR maintains higher accuracy than No KG. Meanwhile, RS's accuracy reaches No KG's accuracy at 100% perturbation, and RR's does so at 60% perturbation. In Fig. 4b, we see that RL-RR's and RR's ATS drop significantly as the perturbation percentage increases, whereas RS's ATS remains quite high even at 100% perturbation. For RippleNet on MovieLens-20M, Fig. 4c shows a flat performance curve for all perturbation methods. Meanwhile, for all perturbation methods in Fig. 4d, ATS decreases steadily as the perturbation level increases, with RL-RR's ATS dropping most.

These findings support the hypothesis that KG perturbation does not imply performance decrease or KG corruption. Building on the results of previous experiments, in both model-dataset settings, RL-RR largely maintains the model's performance despite also heavily perturbing the KG's semantics. Interestingly, for RippleNet on MovieLens-20M, performance is completely unaffected by KG perturbation, even though the KG's semantic information is apparently being corrupted.

Table 5: Noisy Baselines for Commonsense QA. Noisy baseline accuracy on CSQA and OBQA.

Method               | CSQA (RN) | CSQA (MHGRN) | OBQA (RN) | OBQA (MHGRN)
No KG                | 53.41     | 53.41        | 62.00     | 62.00
Original KG          | 56.87     | 57.21        | 66.80     | 68.00
Zero Subgraph Emb.   | 53.10     | 53.99        | 64.80     | 66.40
Rand. Subgraph Emb.  | 52.60     | 52.48        | 64.75     | 65.90
Rand. Ent./Rel. Emb. | 53.02     | 54.03        | 64.45     | 64.85
Table 6: Noisy Baselines for Item Recommendation. Noisy baseline AUC on Last.FM and MovieLens-20M.

Method      | Last.FM (KGCN) | Last.FM (RippleNet) | MovieLens-20M (KGCN) | MovieLens-20M (RippleNet)
No KG       | 50.75          | 50.75               | 91.30                | 91.30
Original KG | 55.99          | 56.23               | 96.62                | 97.46
Rand. Ngbd. | 55.91          | 51.04               | 96.21                | 92.11

Noisy Baselines To see if the KGs yielded by our perturbation methods capture more than just random noise, we compare them to several noisy baselines. Table 5 gives results for three noisy baselines on commonsense QA: (1) replace the subgraph embedding with the zero vector, (2) replace the subgraph embedding with a random vector, and (3) replace the entity/relation embeddings with random vectors. For CSQA, the noisy baselines perform noticeably worse than both Original KG and RL-RR, while being on par with No KG (Table 1). For OBQA, the noisy baselines perform slightly better than No KG, but considerably worse than Original KG and all of the perturbation methods (Table 2). Table 6 displays results for our noisy baseline in item recommendation, which entails randomizing each entity's neighborhood. We find that KGCN performs about the same for this noisy baseline as for Original KG and our best perturbation methods, whereas RippleNet performs much worse (Tables 3-4). RippleNet may be more sensitive than KGCN to entity neighbor randomization because RippleNet considers directed edges. This is supported by RippleNet's performance dropping when we perturb edge connections (Tables 3-4). In both tasks, the noisy baselines show that our perturbation methods yield KGs that capture measurably useful information beyond just noise. For KGCN, the unexpected discovery that noisy baselines perform similarly to Original KG suggests that even noisy KGs can contain useful information for KG-augmented models.

Human Evaluation of KG Explanations We conduct a user study to measure the plausibility of KG-augmented models' path-based explanations. For both the original KG and the RL-RR perturbed KG, we sample 30 questions from the CSQA and OBQA test sets which were correctly answered by MHGRN. For each question, we retrieve the top-scoring path for each answer choice via MHGRN's path decoder attention. We then ask three human subjects to rate each path for readability and usability, with ratings aggregated via majority voting. Readability (Read) is whether the path makes sense; usability (Use) is whether the path is relevant to the given question-answer pair; both are measured on a [0, 1] scale. We obtain a Fleiss' κ of 0.1891, indicating slight agreement between raters. To illustrate, we provide examples of explanation paths and their consensus ratings. Given the question "James chose not to print the cards, because he wanted to be more personal. What type of cards did he choose instead?", the Original KG path is PRINT —[ANTONYM]→ HANDWRITTEN (Read=1.0; Use=2.0), and the RL-RR path is PRINT —[NOTDESIRES]→ HANDWRITTEN (Read=0.0; Use=0.0). Here, the Original KG path seems plausible, but the RL-RR path does not.

In Table 7, we see that Original KG and RL-RR received relatively low ratings for both readability and usability. Whereas MHGRN successfully utilizes all KG paths in this user study, humans largely struggle to read or use them.
This suggests that KG-augmented models and humans process KG information quite differently, thus challenging the role of KG paths as plausible explanations. Also, Original KG beats RL-RR in readability and usability overall, signaling RL-RR's ability to corrupt the KG. CSQA's lower sensitivity to perturbation can be explained by the fact that CSQA is constructed from ConceptNet. Every CSQA question-answer is based on ConceptNet entities/relations, so a random ConceptNet subgraph is more likely to have semantic overlap with a CSQA question-answer than with an OBQA question-answer. Hence, a perturbed ConceptNet subgraph may also be more likely to overlap with a CSQA question-answer, which means perturbing the KG might have a smaller impact on human judgments of CSQA paths. Note that this result concerns explainability and does not say anything about the model's performance on CSQA and OBQA.

Validation of KG Similarity Metrics Using our human evaluation results, we validate our three proposed KG similarity metrics: ATS, SC2D, and SD2. We find that the Pearson correlation coefficients between the human evaluation scores in Table 7 and the three KG similarity scores in Tables 1-2 are 0.845, 0.932, and 0.932, respectively. This indicates high correlation and that our metrics aptly capture a perturbed KG's preservation of semantic/structural information from its original KG.

Why do perturbed KGs sometimes perform better than the original KG? In our experiments, relation-based perturbations (RS, RR, RL-RR) generally outperform edge-based perturbations (ER, ED). Also, we find that the original KG can contain noisy relation annotations which are sometimes "corrected" by relation-based perturbations. In certain cases, this may result in the perturbed KG achieving slightly higher performance than the original KG (RR and RL-RR for RN-CSQA; RL-RR for Last.FM). Similarly, in our user study, despite all questions being correctly answered by the model, some RL-RR explanations received higher readability/usability ratings than their original KG counterparts. Although the original KG achieved higher human ratings than the RL-RR KG did overall, both KGs still achieved relatively low ratings with respect to our scales. While our main argument centers on KG-augmented models' flaws, this counterintuitive finding suggests that KGs themselves are flawed too, but in a way that can be systematically corrected." }, { "heading": "6 RELATED WORK", "text": "KG-Augmented Neural Models Although neural models may already capture some semantic knowledge (Petroni et al., 2019; Davison et al., 2019), augmenting them with external KGs has improved performance on various downstream tasks: commonsense QA (Lin et al., 2019; Shen et al., 2020; Lv et al., 2020; Musa et al., 2019), item recommendation (Wang et al., 2019b; 2020; Song et al., 2019; Cao et al., 2019), natural language inference (Chen et al., 2017; Wang et al., 2019c), and others (Chen et al., 2019; Kapanipathi et al.). KG-augmented models have also been designed to explain the model's predictions via attention over the KG (Lin et al., 2019; Zhang et al., 2019; Song et al., 2019; Cao et al., 2019; Gao et al., 2019; Ai et al., 2018).

Adversarial Perturbation of Graphs Inspired by adversarial learning in computer vision (Bhambri et al., 2019) and NLP (Zhang et al., 2020), some recent works have addressed adversarial perturbation in graph learning (Chen et al., 2020).
Multiple paradigms have been proposed for graph perturbation, including gradient-based methods (Chen et al., 2018; Bojchevski & Günnemann, 2019; Wu et al., 2019), RL-based methods (Ma et al., 2019b; Dai et al., 2018), and autoencoder-based methods (Chen et al., 2018). Whereas such works aim to minimally perturb the graph while maximally impacting the graph's performance, our purpose for graph perturbation is to see whether KG-augmented models use KGs in a human-like way and provide plausible explanations." }, { "heading": "7 CONCLUSION", "text": "In this paper, we analyze the effects of strategically perturbed KGs on KG-augmented model predictions. Using four heuristics and an RL policy, we show that KGs can be perturbed in a way that drastically changes their semantics and structure, while preserving the model's downstream performance. Apparently, KG-augmented models can process KG information in a way that does not align with human priors about KGs, although the nature of this process still requires further investigation. Moreover, we conduct a user study to demonstrate that both perturbed and unperturbed KGs struggle to facilitate plausible explanations of the model's predictions. Note that our proposed KG perturbation methods merely serve as analytical tools and are not intended to directly improve model performance or explainability. Nonetheless, we believe our findings can guide future work on designing KG-augmented models that are better in these aspects. Additionally, our results suggest that KG-augmented models can be robust to noisy KG data. Even when the KG contains a fairly small amount of signal, these models are somehow able to leverage it. This could be useful in situations where it is impractical to obtain fully clean KG annotations." }, { "heading": "8 ACKNOWLEDGMENTS", "text": "This research is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007, the DARPA MCS program under Contract No. N660011924033 with the United States Office Of Naval Research, the Defense Advanced Research Projects Agency with award W911NF-19-20271, and NSF SMA 18-29268. We would like to thank all collaborators at the USC INK Research Lab for their constructive feedback on the work." } ]
2021
null